Five years ago, I attended a keynote featuring the then Chief Operating Officer of the Google Cloud Platform (GCP), Diane Bryant, where she explained the importance for organizations of avoiding vendor lock-in when it comes to public cloud consumption. Back in 2018, GCP was far off the pace set by AWS and Azure, so the sceptic in me thought: well, of course she would say that in an effort to entice Amazon and Microsoft’s clientele to also consume GCP resources. Pitching multi-cloud was an opportunistic way to convince people they could get immense value from investing time and money in another cloud – namely GCP – without forgoing the years of effort already invested in other cloud platforms.
With that said, seeing is believing. I have worked in multiple environments where we consumed services and resources hosted in Azure and Azure only. Reality came crashing down one day when Virtual Machines (VMs) inexplicably became inaccessible. It turned out there was an API issue causing machines to fail to boot, and it coincided with our scheduled shift change, which generates a login storm.
When employees tried to log in to start the day, they were met with errors. Naturally, they frantically called our Service Desk and flooded our poor support team. As we were completely dependent on Azure, our only course of action was to ask people to sit tight until we emailed an all-clear notification.
If we had adequate resources in our own private data center to direct the load to during that outage, we could have prevented hours of lost productivity, but we did not. Hardware had become scarce due to supply chain issues, and as demand far outstripped supply, the cost of hardware also increased. Regardless of cost, the simple fact of the matter was that we could not accommodate that scale of workload, even if we wanted to. As many organizations discovered around the same time, you could quickly and elastically scale as your business needs dictated – a major selling point of public cloud services – but as the virtual machine accessibility issue I mentioned earlier illustrates, the old adage holds: do not put all your eggs in one basket. Multi-cloud is not just a clever sales pitch from executives of major cloud platforms. It is a clear and sensible corporate strategy.
The Challenges of Adopting a Multi-Cloud Strategy
Unfortunately, there are several challenges to adopting a multi-cloud approach. From an infrastructure perspective, the tooling has been rather good for several years. For example, Terraform provides a fairly standard method for spinning up, configuring, and managing cloud resources across several clouds. This works well when your infrastructure and services are supported in any public cloud. In some cases, however, organizations may choose – or be forced – to consume first-party proprietary services in one cloud, which can make spreading that workload across multiple clouds difficult and often impossible. The unfortunate truth is that this is sometimes unavoidable. An example might be Microsoft Entra ID (formerly Azure Active Directory) or Microsoft MFA. Given how Windows-centric enterprise IT currently is, you are unlikely to have a secondary directory service in operation that is supported by all the applications and services in your estate that require a domain.
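As a minimal sketch of what that cross-cloud tooling looks like in practice, a single Terraform configuration can declare providers for more than one cloud and manage comparable resources in each from one place. The bucket name, storage account name, resource group, and regions below are illustrative placeholders, not recommendations:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
}

# Comparable object storage in each cloud, managed from one configuration
# with one workflow (plan, apply, destroy).
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets-bucket"
}

resource "azurerm_storage_account" "assets" {
  name                     = "exampleassets"
  resource_group_name      = "example-rg"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

The value here is the shared workflow and state model, not identical resources: each cloud still has its own resource types and arguments, which is exactly where first-party proprietary services start to resist this pattern.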
Applications Are Another Challenge for Multi-Cloud Strategies
During the same event where Diane delivered her keynote, I attended a lunchtime table talk where most of the people I spoke with said they installed all their applications into their virtual machine images. Some of those I talked to were not even aware there was an alternative to doing this! The inefficiency of putting applications into images was bad in traditional on-premises environments, but it becomes much worse and much more taxing when you try to go to a hybrid and/or multi-cloud model.
If you attempt to install all your applications into virtual machine images, you will eventually encounter problems such as application conflicts and potential image corruption.
The old way of handling this was simply to remove conflicting applications and put them in their own separate images. Maintaining multiple images requires time and effort, and it drastically inflates costs in the cloud. Because on-premises storage is relatively cheap in comparison, you may not realize the full implications of such inefficiencies until you make the move. In the cloud, you may incur additional storage consumption costs for hosting those images in your tenant, and you may require additional cloud resources to publish one or more of your conflicting applications via session hosts. The result is often more work at greater cost, and sometimes an even worse digital employee experience.
Those images also become more complicated to manage when working across multiple clouds. Some of your standard images, optimizations, and processes in something like Citrix DaaS may not be the same as those required in Azure Virtual Desktop, Amazon WorkSpaces, and so on. There are products that can help bridge that gap and enable you to deploy single images across multiple clouds more seamlessly, but they fall short when it comes to your applications.
Effectively Managing Your Applications Across Multiple Clouds
The best approach for your applications is to deliver as many of them as possible outside of your virtual machine images. This streamlines virtual machine provisioning and maintenance. Building and maintaining images can be as simple as using something like Terraform to build a base machine that includes only critical agents such as antivirus, a monitoring solution, and an application management agent – with the option to include certain Windows features and common Microsoft runtimes in the image – and then delivering all other applications outside of the image dynamically.
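A sketch of that lean base machine, assuming Azure as the target cloud: the resource names, image SKU, network interface, and the `install-agents.ps1` bootstrap script are all hypothetical placeholders for your own standards. Only the critical agents are baked in; everything else arrives dynamically after provisioning.

```hcl
# Illustrative base machine: OS, critical agents, nothing else.
resource "azurerm_windows_virtual_machine" "base" {
  name                = "base-image-vm"
  resource_group_name = "example-rg"
  location            = "eastus"
  size                = "Standard_D2s_v3"
  admin_username      = "azureadmin"
  admin_password      = var.admin_password

  network_interface_ids = [azurerm_network_interface.base.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsDesktop"
    offer     = "windows-11"
    sku       = "win11-23h2-ent"
    version   = "latest"
  }
}

# Bootstrap only the critical agents (antivirus, monitoring, application
# management); the script name is a placeholder for your own installer.
resource "azurerm_virtual_machine_extension" "agents" {
  name                 = "install-base-agents"
  virtual_machine_id   = azurerm_windows_virtual_machine.base.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.10"

  settings = jsonencode({
    commandToExecute = "powershell -ExecutionPolicy Bypass -File install-agents.ps1"
  })
}
```

Because the image carries so little, the same pattern ports to other clouds by swapping the provider and resource types while the application layer stays out of the image entirely.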
To dynamically deliver applications outside of your images, you will need modern provisioning solutions that account for a multi-cloud or hybrid approach. It is best to avoid requiring site-to-site VPNs for delivering applications across sites; a cloud-based management platform is a better fit. These platforms are designed to manage applications across devices anywhere and, importantly, on any network. They typically operate over HTTPS and require minimal firewall configuration to implement, which means they can deliver to virtual machines running on any network, including virtual machines in AWS, Azure, GCP, and so on. Being cloud-based, there is no hefty infrastructure for your IT team to manage and maintain, and no sprawling cloud resources that can result in unpredictable pricing. When consumed as a service, the price is fixed and predictable.
Dynamically delivering most of your applications outside of your virtual machine images will reduce the size of those images, cutting down on storage consumption costs. It also helps you avoid image sprawl: fewer images means further reductions in storage costs and in the time and effort required to manage and maintain them, since application updates no longer require an image update.
The need for organizations to avoid vendor lock-in for their cloud infrastructure makes multi-cloud approaches a necessity. It is one thing to choose a single product for monitoring your storage or digitally signing documents; it is something completely different when talking about an organization’s entire infrastructure. If you run all your critical infrastructure on a single cloud platform, you are at the mercy of that platform for all your systems and productivity. If that single platform has an outage that lasts hours and you do not have an adequate alternative on a different cloud platform (and/or in your own private data centers), you could potentially lose millions of dollars. Once you have infrastructure that spans clouds for resiliency, you need to worry about your applications. What good are virtual machines without applications?
To manage applications across multiple clouds, the best approach is to use a cloud-native solution that delivers applications over standard protocols such as HTTPS. This enables you to manage your applications on any Windows machine, regardless of where it is located and what network it is connected to, which is perfect for public cloud. Dynamically delivering applications outside of your virtual machines leads to a simplified and standardized approach to image management across any public cloud. This approach will not only make multi-cloud strategies more manageable by reducing the complexity of image management, it will also save your organization money on storage costs and lost IT team productivity.