There is not much doubt that when it comes to cloud, for most organizations, multi-cloud is already a reality. According to Flexera's 2020 State of the Cloud report, 93% of companies reported having multi-cloud strategies. It has been well covered in these pages and elsewhere. In his assessment of Google Cloud's strategy published in August, ZDNet colleague Dion Hinchcliffe noted that multi-cloud is at the core of Google Cloud's strategy. In our 2020 assessment of hybrid cloud infrastructure platforms, we noted that Google is hardly alone among cloud providers in expanding its footprint into foreign territory.
So why are so many organizations formally or informally adopting multi-cloud strategies? There could be many answers. The most commonly cited is fear of cloud vendor lock-in, but from what we've seen, it is actually inertia that tops the list.
It is nothing new for companies to have one of everything in their technology portfolios. Corporate IT often sets a standard, but rarely are these standards followed religiously throughout the organization. Shadow IT goes back many years, to the days when PCs came in through the back door via departmental purchase orders, often to circumvent IT backlogs or bottlenecks in centralized purchasing. And of course, there are organizations that are the product of M&A; it sometimes takes years for acquired units to migrate off their old systems.
So it should come as no surprise that cloud adoption is just the latest manifestation of organizations accumulating varied technology portfolios. Why should cloud adoption be any different? In many organizations, cloud adoption began with AppDev teams tactically running dev/test workloads because it was far more practical than procuring dedicated hardware. Then came the Cambrian explosion of mobile apps following the launch of the iPhone App Store, which were often built by product marketing rather than enterprise IT teams. Since then has come the growing adoption of SaaS and of AutoML services available only in the cloud, not to mention operational applications and analytics running on data that lives only in the cloud.
In some cases, different cloud choices can be attributed to application affinities, such as Azure alliances like SAP with its next-generation S/4HANA business applications, SAS with Viya analytics, or databases sold via OEM agreements. Or consider the open source database providers that Google has made first-class citizens through its open source database partnership program. In other cases, the driver may be performance or data sovereignty, if a particular cloud provider has a region that is physically much closer to, or within the same country as, the data. However, this distinction is likely to be fleeting as each of the major cloud providers expands its global footprint to become ubiquitous across geographies.
In fact, in some heavily regulated sectors, such as financial services, there may be requirements to avoid reliance on a single cloud provider, meaning that a different cloud provider must be used for disaster recovery purposes.
There is little certainty in today's economy, but it is clear that the current pandemic is accelerating existing trends toward greater cloud adoption. With the economy in turmoil, most companies are reassessing their core products and services given the shift toward digital business. That dictates more focus on the core business, minus the distraction of keeping the lights on; this is where the cloud comes in. It means a fresh look at whether the back-end systems that have resisted cloud migration will finally move. It also means leveraging cloud services, such as machine learning, customer engagement, and analytics, that organizations can use to open up new lines of business.
In most cases, companies are likely to run specific systems in specific clouds. For example, they might run some mobile apps in AWS and the CRM system in Azure, while looking to Google Cloud for some of its AI capabilities. Or the division of clouds may fall along business-unit lines.
But what about another scenario? Could companies plausibly run a single application or database across multiple cloud providers? In most cases, we are pretty dubious. Challenges include egress costs (fees for moving data out of a cloud); network latency; cloud vendor-specific APIs; and security and management silos that undermine one of the cloud's strongest features: the possibility of a simplified, unified control plane. Even if egress costs were addressed (we could imagine multi-cloud service providers getting creative here), you still have the security, management, and integration overhead of running across multiple clouds.
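To make the egress point concrete, here is a back-of-the-envelope sketch. The per-GB rates below are hypothetical placeholders, not any provider's published pricing; the point is simply that continuous cross-cloud replication turns data movement into a recurring line item.

```python
# Hypothetical egress pricing in USD per GB -- illustrative placeholders,
# NOT actual published rates for any cloud provider.
EGRESS_USD_PER_GB = {"cloud_a": 0.09, "cloud_b": 0.087}

def monthly_egress_cost(gb_per_month: float, provider: str) -> float:
    """Estimate the monthly data-transfer-out cost of continuously
    replicating a database from one cloud to another."""
    return gb_per_month * EGRESS_USD_PER_GB[provider]

# Replicating 5 TB per month out of "cloud_a":
print(round(monthly_egress_cost(5 * 1024, "cloud_a"), 2))  # 460.8
```

At these illustrative rates, even a modest 5 TB of monthly replication traffic costs hundreds of dollars per month before any compute, management, or integration overhead is counted.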
In essence, running a single logical instance of a database across multiple cloud providers is akin to running a single logical instance of a transactional application across multiple databases. Maybe not impossible, but would it be advisable?
For most organizations that stick to running each system in a single cloud, there will still be variation in control planes, but at least each cloud can be handled individually, just as you would handle different database platforms across your business. You do not necessarily run them together, so you avoid that layer of management complexity.
But then there is Kubernetes (K8s). Could its emergence deliver the elusive control-plane uniformity across cloud providers, so that everything you run in the cloud looks and works the same regardless of provider?
The design of K8s is clearly aimed at portability of applications rather than databases. Nevertheless, databases can be packaged into containers, and/or vendors can offer operators that invoke helper processes, such as authentication or metrics collection, that are containerized and run as microservices marshaled in a K8s environment. And with K8s, the APIs are harmonized.
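As a minimal sketch of what "packaged into a container" looks like in practice, the manifest below runs a stock PostgreSQL container as a Kubernetes StatefulSet. All names and sizes are illustrative, and a production database would typically be managed by a purpose-built operator rather than a bare StatefulSet; the point is that this same manifest applies unchanged on GKE, EKS, or AKS, which is the "harmonized APIs" argument.

```yaml
# Illustrative only: a single-replica PostgreSQL StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 1
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: postgres
          image: postgres:13
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              value: example   # demo only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```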
While K8s can enable cloud databases to interoperate across clouds, it does not address the control and security complexity of running across discrete environments, each with its own authentication and authorization; logging, monitoring, and metrics; and encryption regimes. Yes, security can be declarative with K8s, and K8s provides interoperability across clouds. But it does not necessarily harmonize the underlying control planes unless you mix and match operators under the hood. If you want the simplicity of single-pane-of-glass management across multiple clouds, the choice will inevitably be layering on a third-party tool or framework.
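To illustrate what "security can be declarative" means in K8s terms, the NetworkPolicy below (labels hypothetical) restricts database ingress to one application's pods. Note its limits, which are the article's point: it governs traffic inside a single cluster, and does nothing to unify IAM, logging, or encryption regimes across clouds.

```yaml
# Illustrative only: allow TCP 5432 to the database pods
# solely from pods labeled app=demo-app, within one cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-app-only
spec:
  podSelector:
    matchLabels:
      app: demo-db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: demo-app
      ports:
        - protocol: TCP
          port: 5432
```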
So what should we make of Google's positioning as the most multi-cloud-friendly cloud provider? Google is promoting Anthos as the pillar of this strategy. Beyond Google Cloud, Anthos now runs in AWS, with Azure support on the way. Anthos repackages Google Kubernetes Engine (GKE) and related components for cluster and multi-cluster management, configuration management, service mesh, logging, and other functions required to run a cloud-native environment. It should allow your applications and databases to run either in your own private cloud or as an instance in a competing cloud. Its latest extensions include support for existing identity and access management systems, making it more independent of Google Cloud, and a beta "bare metal" option that allows Anthos customers to move off VMware.
Google is not the only one in this game: IBM is also aggressively pushing the portability of Red Hat OpenShift (on which its Cloud Paks are built), and although nothing has been announced, we might well expect Microsoft to make Azure Arc, if implemented with a Kubernetes control plane, portable as well. And Confluent, with its 6.0 platform, introduced cluster linking so you can connect Kafka clusters across multiple data centers and geographies, while its Confluent Cloud service allows you to operate a virtual Kafka cloud spanning multiple clouds.
Doubling down on multi-cloud, Google has just released BigQuery Omni, which lets you run Google's data warehouse anywhere Anthos runs: in your own private or hybrid cloud, in the data center or at the edge, or in another public cloud. The core concept of BigQuery Omni is that you can run it locally, where the data is, without having to move data back to Google Cloud. But it is also conceivable that it could let you run a single federated deployment of the data warehouse across all the clouds Anthos supports. It is Google's approach to pushing analytics down to where the data is located, with the useful assumption that if any data is sent back to the Google Cloud home base, it would only be result sets.
On a recent analyst call, a BigQuery customer considering adding Omni saw it more as an edge analytics device running in an on-premises hybrid cloud, returning results to the core deployment running on GCP.
For us, the myth of multi-cloud is the expectation that multiple clouds can look and run as a single logical entity. The truth is that clouds are platforms, and even with standards they will still have their differences, just like SQL databases. The reality of multi-cloud is that it will be a way for companies to spread their bets.
So in the overwhelming majority of cases, multi-cloud is not about running unified databases or applications across two or more clouds. Instead, multi-cloud strategy is about freedom of choice of cloud: what runs where. While there is always the unicorn that issues announcements about choosing a single strategic cloud provider (and these are usually reference customers who get celebrity treatment), our take is that they are the exception.
Unless your IT organization performs the technological equivalent of living off the grid, building everything homegrown above the operating system layer, you will need to make a critical roadmap choice at some level in the stack. Eventually, that choice will include one or, more likely, multiple clouds.