4 factors to ruin your serverless migration

As companies embark on their digital transformation journeys, organisations need to upgrade and evolve their infrastructure technology to maintain a competitive edge. The promise of serverless is that organisations can focus on their application rather than on infrastructure: buying, scaling, and securing the systems that run their applications.

It’s critical that serverless is chosen for the right kind of task. It’s perfect for short, task-based jobs that execute quickly, but it’s less useful for long-running or complex tasks. However, even with a well-considered strategy and a solid budget, a serverless migration can go awry.

1. Managing complexity

Serverless can introduce more complexity, especially around inter-task communication. You end up with significantly more jobs that are smaller in size: each task is simpler, but because the tasks are interconnected, overall system complexity increases. The extra complexity shows up as configurations, deployment scripts, and tooling artifacts.

While building software gets easier, monitoring it can become harder. You have multiple languages, many services, and a myriad of paths through the system.

As a result, the architecture of the application as a whole becomes significantly more complex.

2. Vendor lock-in

There’s currently no single interface standard for serverless. Selecting a serverless technology like Lambda, which is tied to AWS, makes it difficult to move to Microsoft Azure Functions or Google Cloud Platform (GCP) functions. The same problem exists with Azure and GCP and their equivalent services: if you start using GCP’s big data services and tying your application to GCP, it quickly becomes very difficult to move off of GCP. This is an important challenge that anyone using serverless technology has to deal with.

I’ve also seen a couple of cases of smaller companies that decided to go “all in” with Lambda, and even prided themselves on having built their entire applications on it. The problem is that at some point they will start seeing the worst of Lambda and wish they had infrastructure in place to run some of their application on traditional computing. That will not be easy, because they’ve structured and defined everything to run within Lambda.
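One way to limit this kind of lock-in is to keep business logic provider-agnostic and confine Lambda specifics to a thin adapter. A minimal sketch of that idea follows; the function names and event shapes are illustrative assumptions, not prescribed APIs:

```python
# Provider-agnostic core: plain Python, no AWS imports.
# This is the code you'd want to keep portable.
def resize_image(payload: dict) -> dict:
    # ... business logic lives here, testable anywhere ...
    return {"status": "resized", "key": payload["key"]}


# Thin AWS-specific adapter: the only code that knows about Lambda.
# Unpacks the (hypothetical) S3 event shape, then delegates.
def lambda_handler(event, context):
    payload = {"key": event["Records"][0]["s3"]["object"]["key"]}
    return resize_image(payload)


# Moving providers means rewriting only the adapter, for example
# a hypothetical Azure Functions entry point:
def azure_handler(req):
    payload = {"key": req.params["key"]}
    return resize_image(payload)
```

The point is not the specific helpers but the boundary: if everything lives inside the handler, there is no seam at which to cut the application free of the vendor.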

3. Understanding performance capabilities

The advantage of serverless from a performance standpoint is scale. Whether you have one hundred requests or a million requests that have to be processed simultaneously, it doesn’t matter: the system will scale to handle them.

However, performance is a lot more random and less deterministic with serverless technologies. For example, one request might take three milliseconds, another five milliseconds, another three hundred milliseconds, and the next one or two milliseconds. It’s much harder to predict what the performance will be when you’re using serverless technologies.
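Because latencies are this variable, averages are misleading; tail percentiles describe serverless performance far better. A small sketch of the idea, using a nearest-rank percentile and made-up sample latencies:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = round(pct / 100 * len(ordered)) - 1
    return ordered[max(0, min(len(ordered) - 1, rank))]

# Illustrative latencies (ms), like those described above:
# mostly a few milliseconds, with occasional large spikes.
latencies_ms = [3, 5, 300, 2, 4, 6, 3, 250, 5, 4]

p50 = percentile(latencies_ms, 50)  # the typical request
p99 = percentile(latencies_ms, 99)  # the spike your users notice
```

Here the median is a few milliseconds while the 99th percentile is dominated by the spikes, which is exactly why a single “average latency” number hides the problem.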

Additionally, an interesting side-effect occurs here: generally, the higher you scale your application, the more predictable your performance becomes. This seems backwards, but it ends up being true. You tend to see random performance spikes more often with lightly used functions than with heavily used ones. One reason for this is data caching: the more you cache data and the more often you access it, the better the performance you get. Caching falls apart when you are accessing data that you haven’t accessed very often.

The same thing happens with Lambda: if you aren’t using a Lambda function, it will be put into cold storage. When you try to use it again, it has to come out of cold storage and get back up and running. When managing applications on serverless, it is key to understand the model the infrastructure vendor uses for scheduling your functions. Things like warming up a function become important to consider.
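A common warming approach is to have a scheduled event ping the function every few minutes so its container stays hot, and to have the handler recognise that ping and return early. A minimal sketch, assuming you add a `warmup` marker to the scheduled event yourself (the marker is a convention, not part of any Lambda API):

```python
def lambda_handler(event, context):
    # Scheduled keep-warm pings carry a marker we set ourselves in the
    # schedule rule's event payload; real requests never include it.
    if isinstance(event, dict) and event.get("warmup"):
        # Nothing to do: invoking the function at all keeps it warm.
        return {"warmed": True}

    # ... normal request handling below ...
    return {"statusCode": 200, "body": "handled"}
```

Note that this only keeps already-provisioned containers warm; it does not help when a traffic burst forces the platform to start many new containers at once.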

4. Unexpected cost scaling

Cost can often scale differently with serverless than with traditional server-based application infrastructure, making it harder to predict the cost of running an application.

Lambda can be extremely cost-effective, depending on what baseline it is compared to. It can also be cost-prohibitive. It all depends on what your alternatives are. Rather than saying that cost is higher or lower with Lambda, I believe the best way to say it is that cost is another variable that is not as predictable as many would like it to be.
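It helps to write the Lambda-style billing model out explicitly: you pay per request plus per GB-second of execution time, so cost tracks traffic and duration rather than server count. A sketch of that arithmetic follows; the rates are placeholders for illustration, not current AWS prices:

```python
def lambda_monthly_cost(requests, avg_duration_s, memory_gb,
                        price_per_million_requests=0.20,    # placeholder rate
                        price_per_gb_second=0.0000166667):  # placeholder rate
    """Estimate monthly cost under a per-request + per-GB-second model."""
    request_cost = requests / 1_000_000 * price_per_million_requests
    compute_cost = requests * avg_duration_s * memory_gb * price_per_gb_second
    return request_cost + compute_cost

# A light, short-duration workload can cost well under a dollar a month...
light = lambda_monthly_cost(requests=1_000_000,
                            avg_duration_s=0.05, memory_gb=0.128)

# ...while a heavy, long-running workload on the same model runs to thousands.
heavy = lambda_monthly_cost(requests=500_000_000,
                            avg_duration_s=1.0, memory_gb=1.0)
```

The same formula that makes the first workload almost free makes the second expensive, which is why comparisons against a fixed-size server baseline can swing either way.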

I have seen customers spend widely varying amounts on Lambda, from hundreds of dollars to millions. Organisations need to consider serverless technologies carefully, because they are not a guaranteed benefit for every team.

Lee Atchison is senior director, strategic architecture at New Relic. He has designed and led the building of the New Relic platform and infrastructure products, and helped architect a solid service-based architecture that scales as New Relic has grown from a simple SaaS startup to a high-traffic public enterprise.

This article was first published by Computerworld
