Vendor Lock-In and the Cloud

I’ve been thinking a lot lately about vendor lock-in and how easy getting into that situation has become with the rise of cloud services. Coming from an AWS shop in particular, it became painfully evident early on that trying to remain cloud agnostic is not only not always cost effective, it can also make what you’re trying to build significantly more complicated. Sure, there are plenty of cloud-agnostic tools out there that you can piece together, but man do these providers make it so much easier to use their services.

From an integration and architecture standpoint it’s so tempting to just go with the native services in your provider, but I’m compelled to think about the risk associated with that. AWS isn’t going anywhere, and Azure has its sights set on enterprise world domination; but it still just seems so risky. My past experience has very much been starting with a multi-cloud lens and then caving in to vendor-specific services, ultimately leading to lock-in in some form or fashion, so I’m interested in the other side of that coin.

I think I have a personal preference for a multi-cloud approach, but this seemed like a good topic to gauge what everyone else thinks. Do you prefer going all in on one cloud or a multi-cloud approach? What are some of the challenges?


That is a very interesting discussion @bnwoods :thinking:

I believe the biggest issue here is that cloud providers (especially AWS, Azure, and GCP) are not “making money” by selling individual services, just like Apple with iOS, macOS, etc. It’s all about “The Ecosystem”. Just like you, I also prefer the multi-cloud approach, but in some cases it’s really hard or even impossible.

One perspective that I can offer from my region (the Middle East): in most cases, there is no option not to be locked in, because the number of available regions is very limited. In Dubai, where I live today, only one cloud provider has an established region: Azure. So basically all customers are locked in because there is no other option in most cases; we have data privacy laws and many other things that complicate using anything else, and these days the local data center has become less than an option, for obvious reasons… I believe everyone is afraid of being locked in, but sometimes there is no easy way to avoid it…

Cloud providers are doing a very good job of making it incredibly easy to subscribe to services that are very difficult to replace, especially compared to open-source solutions. TBH, I’ve never seen an AWS architecture that does not take advantage of key native services like SNS or SQS, even very small ones; in some cases it’s impractical and unproductive to change to a more open solution. Sure, big enterprises like banks or government agencies can afford to be a little “less productive” by not taking advantage of these key services, but I don’t believe the same is true for a startup with limited resources. For startups, this could be the difference between succeeding and failing; that’s where lock-in plays a big role, when productivity is so important that lock-in becomes almost mandatory.

What I see eventually “breaking” this, and making multi-cloud a little easier to adopt, is even more open-source projects and more enterprise adoption of these OSS tools, so they become market standards, just like K8s today is the de facto way to orchestrate containers: fully open source and available on all major clouds. Today it’s easy to see that any cloud that does not support K8s in some way will be out very soon. A couple of years ago, if someone had said that, I just wouldn’t have believed it, at all.


What a great topic, @bnwoods.

My engineer brain tells me that I should never, ever go for a lock-in situation. I should always be able to build with whatever the best tool is, no matter which vendor offers it. I want to build great, fast, reliable, cheap, and secure platforms, and no vendor should stop me from achieving this nirvana. And there is one more reason to go the multi-cloud route that I’ve heard before: the cloud vendor getting into a space that makes it a competitor of the business you work for.

The reality is, like you both mentioned, hard.

Kubernetes shows up as a clear contender here. It is the multi-cloud dream coming true: deploy the infrastructure once, just add more nodes/servers wherever you see fit, and make sure they join the pool. But what if I need to deploy a latency-sensitive application and its microservices end up split between AWS and GCP? How do I do networking on such a complex monster? Who will maintain the cluster itself? And this is just the computing layer…

What about communication? Sure, I can ignore Amazon SQS and go the open-source route with RabbitMQ. But if I do, I need to install it. To install it, I need servers. If I run servers, I need to maintain them. It’s getting so expensive! I want to be at the top/right of the Shared Responsibility Model, not at the bottom/left.
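One common way to hedge the SQS-vs-RabbitMQ choice, without committing to either up front, is to keep application code behind a provider-neutral queue interface so only a thin adapter names the vendor. This is just a minimal sketch of that idea (the interface and the in-memory stand-in backend are hypothetical names, not from any library); a real system would add an SQS- or RabbitMQ-backed adapter implementing the same interface.

```python
from abc import ABC, abstractmethod
from collections import deque
from typing import Optional


class MessageQueue(ABC):
    """Provider-neutral queue interface; app code depends only on this."""

    @abstractmethod
    def send(self, body: str) -> None: ...

    @abstractmethod
    def receive(self) -> Optional[str]: ...


class InMemoryQueue(MessageQueue):
    """Stand-in backend for local testing. In production, a hypothetical
    SqsQueue or RabbitQueue adapter would implement the same interface."""

    def __init__(self) -> None:
        self._messages: deque = deque()

    def send(self, body: str) -> None:
        self._messages.append(body)

    def receive(self) -> Optional[str]:
        # Return the oldest message, or None when the queue is empty.
        return self._messages.popleft() if self._messages else None


# Producers and consumers never name the vendor, so swapping SQS for
# RabbitMQ later means writing one new adapter, not rewriting every caller.
q: MessageQueue = InMemoryQueue()
q.send("order-created")
print(q.receive())  # order-created
```

The trade-off the thread describes still applies: the abstraction buys portability, but it also forbids leaning on vendor-specific features (SQS FIFO groups, delay queues, etc.), which is exactly the productivity cost being debated.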

All this reminds me of a conversation I had with a friend a few years ago, where he said something that, in all honesty, is easier said than done: as cloud engineers, we need to do the math for both the lock-in route and the multi-cloud route for a project, and then pick the winner. But how, when there are so many hidden costs on both routes?

I can think of one way of achieving multi-cloud, though. It’s not easy or free, but it seems more feasible than my previous examples, and I’ve seen different organizations adopt it: different projects go to different vendors, so you don’t have all your eggs in the same basket. Why try to break a project/service down between different providers when you can have entirely different projects running across them? This way the lock-in risk doesn’t disappear, but it is minimized, since part of the business is running on a different vendor.

What do you think of that approach?


Corey Quinn wrote a great article recently on the lock-in you don’t see and how no matter how hard you try to avoid it, you’re going to have a tough time if you decide you need to jump from one provider to another.

@felipecosta raised some great points about data sovereignty and how some providers have an advantage in certain countries and regions.

I like @raphabot’s idea of spreading independent services across vendors – that’s something where you might have some chance of success. When those services have to talk to each other, though, I think there are definitely opportunities for friction and heavy lifting that wouldn’t exist if you were using a single provider.

For me, the cognitive load that comes from juggling multiple cloud vendors in a single solution seems prohibitive. Even if you’re operating at the “it’s just an instance” / “it’s just a container” level, there’s networking, monitoring, deployment, … literally all the hard stuff. I’ve been down the Kubernetes path and I’m very happy to leave it behind.


I’ve actually seen this approach as well. I think the real problem is when organizations are “all in” on a given cloud and then have one-off projects in other clouds, because then things get messy from a visibility and costing perspective.

Also, I completely agree, @glb – there’s so much underlying infra and architecture that it’s prohibitive. But it’s just terrifying for me to think of getting locked in to some AWS service or model out there and then SURPRISE, you’re stuck and now its cost is stupid. And we’ve all been witness to Killed by Google. Not that I think GCP is going anywhere, but that doesn’t mean its services won’t at some point. Each major cloud just kind of has, for lack of better phrasing, dark orbs surrounding it where anything goes right now.


@bnwoods from an AWS cost perspective the one thing that I’ve consistently seen is that costs go down. It’s pretty cool, and is one of their core flywheel behaviours: they take the profits and advantages they get, then use them to drive costs down, rinse and repeat. My negatives with them have been more on the “oh, Cool New Thing doesn’t do what I need it to, missed it by this much” side. :slightly_frowning_face:

I totally hear you on the Killed by Google front, though. I would be extremely shy of building there, though I know some super-smart folks who’ve built their entire business on GCP because it was the right thing for them. :man_shrugging:


That actually reminds me of my experience with AWS in a past role as well. There were so many times where three AWS native tools did similar things. All three had different bits of the functionality that was needed, but none had it all. :joy:


But we shouldn’t fear lock-in so much that it impedes progress either. The company I used to work for was considering moving to the cloud (this was many years ago), but they were scared of vendor lock-in. So they evaluated AWS and decided against it because they couldn’t easily move their servers elsewhere if needed. As a result, we ended up building our own VMware-based private cloud in our data centers. It was costly, and it quickly became evident that it was a bad decision; they ended up going with AWS a couple of years later.

If we embrace the cattle-not-pets mentality, then it’s easier to move from one cloud to another if needed. But I agree, it’s all the ancillary stuff that will be the hard part to break free of (i.e., all those lovely services we’ve grown to love like Route53, Lambda, SSM, etc.).

I’m not a huge fan of the multi-cloud approach, honestly. I prefer concentrating on one thing and fully leveraging it, not spreading my resources too thin. But I’ll admit that I have cloud bias and tend to favor one cloud solution over the others.


Did anyone see this article yet from Last Week in AWS?

I find I kind of agree here. Sure, we wanna be cloud agnostic, but from a practicality standpoint that ability seems to be getting further and further away.

And they’re very right about lock-in too. We are all locked in. It’s how I felt when I initially authored the post – that lock-in is unavoidable – but man, I was so interested to hear if the inverse is possible. What I’ve garnered here from all of you is that this is kind of just the reality we’re dealing with.

Also, one point I failed to consider is the negotiating-leverage argument in the linked article. That’s actually a really good point: concentrating higher spend with one vendor yields better negotiating power than spreading that spend out.

I’ve learned a lot from this discussion.


@bnwoods very true. I’ve spoken to many enterprise companies, and a large majority have some sort of multi-cloud presence. This is because of diverse teams as well as business units. It also comes down to licensing: sometimes a company has a volume license with one cloud vendor and then uses another. Or there’s a belief that one cloud vendor specializes in something versus another (ML works better on cloud X versus cloud Y).


Shadow IT comes to mind here. One group goes off and does its own thing, while another goes a totally different direction, so you end up with multiple clouds as a result. Then you have other companies that sign on with a particular cloud, partner up with them, and you get one cloud to rule them all. It’s fascinating, actually.

Having multiple cloud vendors can lead to increased staffing costs, because to do it right you need experts in that tech to pull it off, and if you have multiple clouds, you need multiple sets of experts. Thinking back over my past, we had a huge database team just because we had Oracle specialists, DB2 guys, etc. When we decided to go all Oracle, we could reduce the staff. The same could apply to cloud vendors, depending on how invested you are in their tech.


I once heard that Shadow IT was the business unit pioneering cloud computing in the organization (before that role was taken over by Covid).