@adlrocha — To serverless or not to serverless?

PS: In the end there is always a server involved.

Alfonso de la Rocha
5 min read · Oct 13, 2019

Originally published at: https://adlrocha.substack.com

In a project I have been involved in lately, I've been considering migrating the whole system to a serverless infrastructure. I was looking for a way to save costs at the infrastructure layer and, at least for now, to make the infrastructure a 100% variable cost. The project is at such an early stage that I don't want the infrastructure to be a fixed cost draining my resources, i.e. I want near-zero costs while no one is using the system.

Of course, a serverless infrastructure has many other advantages apart from the cost model, such as simple scaling, minimal infrastructure operation, flexibility, etc. I was achieving most of this with my current container-based approach, but the fact that at least one container in the system had to be running 24x7 made me embark on the serverless adventure.

So, what is serverless?

Serverless refers to a software architecture where code is executed in response to events using short-lived containers, rather than containers with daemonized entrypoints (which live as long as the daemon doesn't exit, such as a Node.js or Golang server).
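To make the contrast concrete, here is a minimal sketch in TypeScript. The handler follows an AWS Lambda-style signature, but the exact shape varies per platform, so treat it as an illustrative assumption rather than a universal API:

```typescript
// The "classic" approach: a daemonized entrypoint. The process starts once
// and keeps serving requests until it is stopped, so its container must
// stay alive the whole time.
import { createServer } from "node:http";

createServer((req, res) => {
  res.end("hello from a long-running server\n");
}).listen(8080);

// The serverless approach: no listener and no process lifecycle to manage.
// The platform spins up a short-lived container, calls the exported handler
// once per event, and may tear the container down afterwards.
export const handler = async (event: unknown) => {
  return { statusCode: 200, body: "hello from a function\n" };
};
```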

The kinds of events that can trigger a serverless function include HTTP requests, time-based rules (similar to OS cron jobs), messages from a publish/subscribe mechanism (e.g. Kafka/SNS), and records from a stream (e.g. NATS/Kinesis).
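The shape of the event your function receives depends on the trigger. Below is a simplified sketch loosely based on the kind of payloads AWS Lambda delivers; the field names and shapes here are assumptions for illustration and differ between providers:

```typescript
// Simplified event shapes -- real payloads carry many more fields and
// differ between providers; these types are illustrative assumptions.
type HttpEvent = { httpMethod: string; path: string; body?: string };
type ScheduledEvent = { source: string; time: string };
type StreamEvent = { Records: { eventSource: string; body: string }[] };

export const handler = async (
  event: HttpEvent | ScheduledEvent | StreamEvent
) => {
  if ("httpMethod" in event) {
    // Triggered by an HTTP request (e.g. through an API gateway).
    return { statusCode: 200, body: `handled ${event.httpMethod} ${event.path}` };
  }
  if ("Records" in event) {
    // Triggered by a batch of messages from a queue or stream.
    for (const record of event.Records) {
      console.log(`record from ${record.eventSource}: ${record.body}`);
    }
    return { batchSize: event.Records.length };
  }
  // Otherwise, assume a time-based (cron-like) rule fired the function.
  console.log(`scheduled run triggered at ${event.time}`);
  return { ok: true };
};
```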

Serverless applications are built as functions, which are usually packaged and shipped either as a zip file containing the code and its dependencies, or as a Docker image, depending on the serverless framework/platform.

What are the advantages?

Serverless architectures have some additional advantages apart from the cost savings (or, better said, a different cost model; you'll see why later):

  • Outsourcing complexity. No server management necessary.
  • Extreme pay-as-you-go cost model, which can lead to cost savings. The more traffic you have, the more functions you trigger, the more computing power you require, the more you will pay, period. No hidden costs if no one uses the system.
  • It can be inherently scalable. You no longer have to worry about defining the scaling policy of your containers; in a serverless approach, the infrastructure scales along with your load.
  • Quick deployments and updates. Fixing a bug can be as easy as uploading new code for the specific buggy function. No additional management required.
  • Code can run close to the end user, decreasing latency. As the code is not tied to a specific server, it can run anywhere, including the edge of the network, as near as possible to end users (see the sketch after this list). The combination of edge computing and serverless could become a big thing in a few years.
  • Event-driven development.
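As a taste of the edge-computing point above, here is a minimal sketch in the style of Cloudflare Workers (module syntax). The cf-ipcountry header is Cloudflare-specific and other edge platforms expose different APIs, so treat the details as assumptions:

```typescript
// A minimal edge handler: the same code is deployed to many points of
// presence, so each request is served close to the user who sent it.
export default {
  async fetch(request: Request): Promise<Response> {
    // cf-ipcountry is a Cloudflare-specific request header (assumption here).
    const country = request.headers.get("cf-ipcountry") ?? "somewhere";
    return new Response(`Hello from an edge location near ${country}\n`);
  },
};
```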

But be careful! All that glitters is not gold…

Just like when I buy a book I prefer to read the bad reviews first, to understand its weak points from a disappointed reader's perspective, I like to do the same with new technologies and ideas in order to avoid confirmation bias. And it hasn't been any different in the serverless case. Actually, I found the perfect reasoned statement against serverless architectures in the following article: "Serverless: 15 percent slower and eight times more expensive". It empirically demonstrates some of the flaws of a serverless-based system: in short, performance and a different cost model. This doesn't mean that a serverless infrastructure is always more expensive, or performs worse, than its alternatives. What it means is that you have to consider where, and how, you use this approach to be certain that you get all of the benefits and none of the harms from it.

The other big disadvantage of a serverless architecture is potential vendor lock-in. With microservices, Kubernetes and Docker have become the de facto open-source standard: regardless of where you deploy your system (Azure, AWS, on-premise, etc.), as long as it runs over a Kubernetes infrastructure it is ready for action. Serverless has no such de facto open-source solution yet. Every cloud provider has its own Function-as-a-Service alternative (FaaS is the non-commercial name for serverless orchestrators): AWS Lambda, Azure Functions, OpenWhisk, Kubeless, etc. Developing for each of them is not the same, so migrating your serverless system from one host, or one framework, to another may not be straightforward… and for me THIS IS A HUGE THING. I don't want to be forced to marry any of them.
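To see why migrating is not trivial, compare how the "same" function would be written for two different platforms. This is a hedged sketch: the first follows AWS Lambda's Node.js handler convention, the second Apache OpenWhisk's action convention; the exact response shapes and module wiring vary with how the function is exposed.

```typescript
// AWS Lambda style: an exported async `handler`; behind an API gateway the
// response is an object with statusCode/body fields.
export const handler = async (event: { name?: string }) => {
  return { statusCode: 200, body: `Hello, ${event.name ?? "world"}!` };
};

// Apache OpenWhisk style: a `main` function that receives a params object
// and returns a plain JSON-serializable result.
export function main(params: { name?: string }) {
  return { greeting: `Hello, ${params.name ?? "world"}!` };
}
```

Multiply this difference by every trigger binding, configuration file and deployment tool involved, and the migration cost becomes clear.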

What are the alternatives?

If you want to deploy a serverless system, you have two alternatives. The first is to use a hosted serverless infrastructure (FaaS) from one of the big cloud providers:

Or to use one of the self-hosted open-source serverless frameworks out there. These solutions can be deployed over Kubernetes, so you could have a hybrid microservices/serverless architecture in your system. The catch? You won't benefit from the efficient pay-as-you-go model of the aforementioned hosted serverless alternatives, as you may need to keep your Kubernetes cluster up and running, ready for traffic. Some of the most popular self-hosted serverless frameworks are:

And if you want a comparison of the pros and cons of all these self-hosted frameworks over Kubernetes, have a look at this article.
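As one concrete illustration (using Knative, one of these Kubernetes-based frameworks, purely as an example): a function deployed this way is essentially a plain container that listens on the port the platform injects, which is why these frameworks slot naturally into an existing cluster. A minimal sketch, assuming a Node.js runtime:

```typescript
// A function deployable on a Knative-style platform is just a small HTTP
// server listening on the PORT environment variable the platform injects;
// the platform handles routing and scaling it (down to zero) with traffic.
import { createServer } from "node:http";

const port = Number(process.env.PORT ?? 8080);

createServer((req, res) => {
  res.end(`Hello from a self-hosted serverless function on port ${port}\n`);
}).listen(port);
```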

The Amazon and Netflix Case

And you may be wondering: is anyone already using serverless and benefiting from it? What better way of learning about the "goodness" of serverless than from two of the big guns: AMAZON…

… AND NETFLIX!

Hands-on

I couldn't leave without giving you some homework. If you want to start going serverless and understand how it works in practice, go for any of the AWS Lambda or Azure tutorials. But if you are like me and prefer open-source software, and want to avoid vendor lock-in, here are some simple tutorials for two of the self-hosted serverless frameworks I like the most (for now):

Stay tuned!

Stay tuned if you are curious to know more about the project I am involved in, and the software architecture I end up choosing for it. See you next week!


