Public Cloud Repatriation
As organizations move their workloads, applications, and data to public clouds, it is vital to consider not only the advantages but also the drawbacks. One trend I have seen gaining momentum recently is public cloud repatriation: the process of migrating workloads, applications, and data from a public cloud environment back to an in-house (private cloud or on-premises) environment. Repatriation can be driven by multiple factors, such as cost, compliance, and security.
In many cases, public cloud repatriation is an acknowledgment that someone in the past made a mistake moving or deploying workloads to the public cloud without properly mapping the organization's vision to an implementation strategy and its challenges. It can also be a revision of the organization's strategy driven by changed business and technology priorities. Organizations are choosing public cloud repatriation for numerous reasons. The most common include:
- Performance: Some applications have strict transaction-processing time requirements and therefore need low-latency, high-bandwidth connectivity that is difficult to achieve with public cloud services.
- Compliance: Regulatory changes around data sovereignty can affect an organization's use of public clouds, and some organizations have regulatory requirements that simply cannot be met in a public cloud.
- Security: Some organizations consider public cloud services less secure than private cloud or on-premises environments and have concluded that keeping sensitive workloads and data in-house protects them from cyber threats better than keeping or moving them to a public cloud.
- Control: Some organizations need to run specialized software, or need a level of control over their infrastructure, that public cloud services cannot provide.
- Cost: Escalating public cloud cost is the most common reason for repatriation. Public cloud services can be costly, particularly for organizations whose resource or data usage keeps growing. By moving workloads back to in-house environments, organizations can reduce their costs and control their IT expenditures more effectively (see the break-even sketch after this list).
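To make the cost argument concrete, here is a minimal break-even sketch in Python. Every rate and figure in it is a hypothetical placeholder, not a benchmark; the point is only that cloud spend is variable (it scales with usage) while on-premises spend is largely fixed (amortized hardware plus operations), so there is a usage level past which in-house becomes cheaper.

```python
# Minimal break-even sketch: variable cloud spend vs. largely fixed
# on-premises spend. Every figure below is a hypothetical placeholder.

def monthly_cloud_cost(instances: int, hourly_rate: float,
                       egress_gb: float, egress_rate: float,
                       hours_per_month: float = 730.0) -> float:
    """Variable cost: compute hours plus data-egress charges."""
    return instances * hourly_rate * hours_per_month + egress_gb * egress_rate

def monthly_onprem_cost(hardware_capex: float, amortization_months: int,
                        fixed_opex: float) -> float:
    """Fixed cost: amortized hardware plus power, cooling, and staff."""
    return hardware_capex / amortization_months + fixed_opex

cloud = monthly_cloud_cost(instances=60, hourly_rate=0.40,
                           egress_gb=50_000, egress_rate=0.09)
onprem = monthly_onprem_cost(hardware_capex=600_000,
                             amortization_months=48, fixed_opex=9_000)

print(f"cloud:   ${cloud:,.0f}/month")   # 60 * 0.40 * 730 + 4,500 = 22,020
print(f"on-prem: ${onprem:,.0f}/month")  # 12,500 + 9,000        = 21,500
print("repatriation may pay off" if onprem < cloud else "cloud is cheaper")
```

With these assumed numbers the cloud bill edges past the fixed in-house cost; with lower usage the comparison flips, which is exactly why the decision has to be modeled per workload.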
A high cloud bill is rarely the fault of the cloud provider, whose pricing has been transparent for a long time now. The bills are generally self-inflicted by organizations that do not re-factor data and applications to improve their cost-efficiency on cloud platforms. Yes, the applications run the same as they did on the in-house platform, but then the organization pays for the same inefficiencies it faced in-house and chose not to deal with when migrating to the public cloud. Public cloud bills exceed expectations because "lifted-and-shifted" applications cannot take advantage of the public cloud's native capabilities, such as storage management, auto-scaling, and security, which allow data and workloads to run efficiently.
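The overpayment pattern of a lifted-and-shifted application is easy to illustrate. The sketch below, using a hypothetical 24-hour demand profile and an assumed per-instance-hour rate, compares capacity provisioned statically for peak load (the way the app ran in-house) against capacity that follows the load curve the way native auto-scaling would.

```python
# Why a lifted-and-shifted app overpays: static capacity is sized for peak
# demand, while auto-scaling follows the load curve. The demand profile and
# the $0.40 per instance-hour rate are assumed for illustration.

HOURLY_RATE = 0.40  # assumed cost per instance-hour

# Assumed instances needed for each hour of a day (00:00 through 23:00).
demand = [4, 3, 3, 2, 2, 3, 6, 10, 14, 16, 18, 18,
          17, 16, 15, 14, 12, 10, 9, 8, 7, 6, 5, 4]

static_capacity = max(demand)                 # lift-and-shift: size for peak
static_cost = static_capacity * len(demand) * HOURLY_RATE
scaled_cost = sum(demand) * HOURLY_RATE       # auto-scaled: pay for actual load

print(f"static (peak-sized): ${static_cost:,.2f}/day")
print(f"auto-scaled:         ${scaled_cost:,.2f}/day")
print(f"idle-capacity waste: {1 - scaled_cost / static_cost:.0%}")
```

In this toy profile, roughly half of the static spend is idle capacity, which is exactly the kind of inefficiency the public cloud's native capabilities are designed to eliminate.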
But it is easy to point to "not re-factoring applications and data" as the fault during migration to public cloud platforms. The truth is that re-factoring is complex, expensive, and time-consuming, and typically involves several technical, business, and operational considerations, such as human resources, re-architecture, data migration, and modifications to network and security infrastructure. For organizations that did not optimize applications and data for the public cloud and have decided to repatriate, it makes little financial sense to re-factor them now. Overall, you cannot ignore the fact that it makes economic sense to move some workloads back to a traditional data center.
Also, many organizations' IT teams are unaware of hidden cloud costs, wasted resources, and the unforeseen circumstances that can arise. They need to understand where the costs are, automate and optimize routine tasks, establish accountability, and build the discipline to manage cloud costs by balancing affordability and reliability. A sketch of programmatic cost visibility follows.
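Gaining that visibility is usually the first step. As one illustration, here is a minimal sketch that pulls one month of spend per service from AWS's Cost Explorer API via boto3; it assumes an AWS account with Cost Explorer enabled and credentials already configured, and the date range is just an example. Other providers expose comparable billing APIs.

```python
# Minimal cost-visibility sketch: one month of spend per AWS service via the
# Cost Explorer API. Assumes boto3 is installed, AWS credentials are
# configured, and Cost Explorer is enabled; the date range is an example.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Rank services by spend so the biggest cost drivers surface first.
for period in response["ResultsByTime"]:
    groups = sorted(
        period["Groups"],
        key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
        reverse=True,
    )
    for group in groups[:10]:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{group['Keys'][0]:<45} ${amount:,.2f}")
```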
In today's business environment, repatriation is increasingly being chosen as the more cost-effective alternative, even after accounting for the expense and trouble of operating in-house.
However, don't forget that after a few years on the public cloud, many applications and workloads have become dependent on specialized cloud-based services. These typically cannot be repatriated, because inexpensive equivalents are unlikely to run on traditional platforms. When advanced IT services are involved (deep analytics, massive scaling, AI, etc.), public clouds are generally more cost-effective.
The aim is to identify the most optimized architecture to sustain and strengthen the business. Sometimes that is a public cloud; sometimes it is in-house; sometimes it is hybrid; sometimes it is SaaS. Like any technology, the public cloud suits some use cases better than others. This will change over time, and organizations will adapt again. No embarrassment in that.
The key to minimizing the chance of cloud repatriation is to ensure that cloud technologies, or any innovations, are not introduced without expert advice, with KPIs and metrics grounded in the organization's long-term roadmap and strategy. It is also important to weigh the pros and cons before making a final decision. By carefully planning and executing the process, organizations can ensure that they get the most out of their cloud investments. One of the major challenges in public cloud repatriation is choosing the right technology platform and managing the complexity of the process. Moving applications, workloads, and data to a private cloud or traditional platform requires a substantial amount of planning and coordination, and organizations must ensure that they have the right expertise, experience, and tools to make the right decisions and transition smoothly.