Beginning Serverless Architectures with Microsoft Azure


Understanding the Real Benefits of Serverless Computing

While serverless computing has a compelling set of benefits, it is no silver bullet. To architect effective solutions, you need to be aware of its strengths and weaknesses, which are discussed in the following sections:

Benefits

We will discuss each of its main benefits in detail here.

Speed of Development

Firstly, a major strength of serverless computing is its speed of development. The developer doesn't have to concern themselves with, or write any of the code for, the underlying architecture. The process of listening for HTTP messages (or other data inputs/events) and routing them is done entirely by the cloud provider, with only some configuration provided by the developer. This allows a developer to simply write code that implements business logic and deploy it straight to Azure. Each function can then be tested independently.
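To make this concrete, here is a minimal sketch of what "just the business logic" can look like, using the TypeScript programming model for Azure Functions. The route name and greeting logic are illustrative choices, not taken from this book; the point is that the HTTP listening and routing are handled by the platform, and the developer supplies only a handler plus a little configuration.

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// The platform listens for HTTP requests and routes them to this handler;
// the developer writes only the business logic below.
app.http("greet", {
  methods: ["GET"],
  authLevel: "anonymous", // configuration, not infrastructure code
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    const name = request.query.get("name") ?? "world";
    context.log(`Greeting requested for ${name}`);
    return { status: 200, body: `Hello, ${name}!` };
  },
});
```

Because each function is a small, self-contained handler like this, it can be deployed and tested on its own, which is what keeps the development loop short.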

Automatic Scaling

Secondly, serverless computing scales automatically by default. The routing system in a serverless service like Azure Functions detects how many requests are coming in and deploys the serverless code to more servers when necessary. It will also reduce the number of servers all the way down to zero, if necessary. This is particularly useful if you are running an advertisement or a publicity stunt of some kind (half-time advertisements in the Premier League final, for example). Your backend can instantly scale to handle the massive influx of new users for this brief period, before returning to normal very quickly. It's also useful for separating out what would usually be a large, monolithic backend. Some parts of the backend will probably be used far more than others, but usually you have to scale the whole thing manually to keep the most heavily used functions responsive. If each function is separated and scales on its own, the developer can identify and optimize the most heavily used functions independently.
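The same behavior applies to any trigger type, not just HTTP. As an illustrative sketch (the queue name and processing logic are hypothetical), a queue-triggered function makes the scaling signal easy to picture: the platform watches the backlog and adds or removes instances accordingly, down to zero when the queue is empty, with no scaling code written by the developer.

```typescript
import { app, InvocationContext } from "@azure/functions";

// The platform monitors the "orders" queue and adds instances as the backlog
// grows, then scales back down to zero when the queue is empty. No scaling
// logic is written by the developer.
app.storageQueue("processOrder", {
  queueName: "orders",               // hypothetical queue name
  connection: "AzureWebJobsStorage", // app setting holding the storage connection string
  handler: async (message: unknown, context: InvocationContext): Promise<void> => {
    context.log(`Processing order message: ${JSON.stringify(message)}`);
    // ...business logic for a single order goes here...
  },
});
```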

Flexible Costs

Continuing on the subject of being able to scale down to zero servers, the cost of serverless code is very flexible. Because of auto-scaling, providers can charge according to actual resource usage, so you only pay for what you use; this benefits all developers, but small businesses most of all. If your business has lots of customers one month, you can scale accordingly and pay the higher bill with the higher revenue you have made. If you have a quieter month, your serverless computing bill will be proportionally lower for that lower-revenue period.
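A rough back-of-the-envelope calculation shows the shape of a consumption-based bill: a per-execution charge plus a charge per GB-second of compute. The rates and traffic figures below are placeholder assumptions chosen only to illustrate the arithmetic; real prices vary by provider, region, and plan.

```typescript
// Illustrative consumption-plan cost estimate. The rates are assumptions made
// for the sake of the arithmetic, not published prices.
const executionsPerMonth = 3_000_000;   // a busy month
const averageDurationSeconds = 0.2;     // per execution
const memoryGb = 0.125;                 // 128 MB

const pricePerMillionExecutions = 0.20; // assumed rate (USD)
const pricePerGbSecond = 0.000016;      // assumed rate (USD)

const gbSeconds = executionsPerMonth * averageDurationSeconds * memoryGb; // 75,000 GB-s
const estimatedBill =
  (executionsPerMonth / 1_000_000) * pricePerMillionExecutions +
  gbSeconds * pricePerGbSecond;

console.log(`Estimated monthly bill: $${estimatedBill.toFixed(2)}`); // ≈ $1.80
// A quiet month with a tenth of the traffic would cost roughly a tenth as much.
```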

Reduced Management Overhead

Finally, serverless code requires very little active management. An application running on a shared server will usually require a significant amount of monitoring and management, for both the server itself and its resource allocation at any given moment. Containerized code, or code running on virtual machines, is better, but still requires either container management software or a person to monitor the usage and then scale down or scale up appropriately, even if the server itself no longer requires active management. Serverless code has no server visible to the developer and will scale according to demand, meaning that it requires monitoring only for exceptions or security issues, with no real active management to keep a normal service running.

Drawbacks

Serverless code does have some weaknesses, which are described here.

Warmup Latency

The first weakness is warmup latency. Although the traffic managers of various platforms (AWS, Azure, Google Cloud, and so on) are very good at responding to demand when there are already instances of serverless code running, it's a very different problem when there are no instances running at all. The traffic managers need to detect the message, allocate a server, and deploy the code to it before running it. This is necessarily slower than having a constantly running container or server. One way to combat this is to keep your code small and simple, as the biggest slowdown in this process can be transferring large code files.
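Beyond keeping deployments small, one commonly used workaround, not covered in this chapter and shown here only as a hedged sketch, is a cheap timer-triggered function that fires periodically so the function app is less likely to be fully deallocated between real requests. The schedule and function name below are arbitrary choices.

```typescript
import { app, InvocationContext, Timer } from "@azure/functions";

// A "keep warm" sketch: a timer trigger that fires every five minutes so the
// function app is less likely to be deallocated between real requests.
app.timer("keepWarm", {
  schedule: "0 */5 * * * *", // NCRONTAB expression: every five minutes
  handler: async (timer: Timer, context: InvocationContext): Promise<void> => {
    context.log(`Keep-warm tick at ${new Date().toISOString()}`);
  },
});
```

The trade-off is a small amount of extra execution cost in exchange for a lower worst-case latency, and it does not guarantee that every instance stays warm under a sudden spike.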

Vendor Lock-in

Secondly, there is the issue of vendor lock-in. Each cloud provider implements its serverless service differently, so serverless code written for Azure is difficult to port over to AWS. If prices spike heavily in the future, then you will be locked in to that provider for your serverless architecture.

There's also the issue of languages. JavaScript is the only language that is universally available, with patchy support across providers for other languages, like C# and Java. There is a solution to this, however: the Serverless Framework. This is a framework that you can use to write simple HTTP-triggered functions, which can then be deployed to all of the major cloud providers. Unfortunately, this means that you will miss out on a lot of the best features of Azure Functions, because their real power comes from deep integration with other Azure services.
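Alongside (or instead of) the Serverless Framework, another common way to soften lock-in, sketched here as an assumption rather than a recommendation from this book, is to keep business logic in plain, provider-agnostic modules and wrap it in a thin provider-specific adapter, so only the adapter has to change when moving clouds. The function names below are hypothetical.

```typescript
import { app, HttpRequest, HttpResponseInit } from "@azure/functions";

// Provider-agnostic business logic: no cloud SDK types appear here, so this
// function could be reused behind an AWS or Google Cloud adapter as well.
export function priceWithTax(net: number, taxRate: number): number {
  return Math.round(net * (1 + taxRate) * 100) / 100;
}

// Thin Azure-specific adapter: the only part that would need rewriting if the
// function were moved to a different provider.
app.http("price", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (request: HttpRequest): Promise<HttpResponseInit> => {
    const net = Number(request.query.get("net") ?? "0");
    return { jsonBody: { gross: priceWithTax(net, 0.2) } };
  },
});
```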

Lack of Low-Level Control

Finally, there is the issue of a lack of low-level control. If you are writing a low-latency trading platform, then you may be used to accessing networking ports directly, manually allocating memory, and executing some commands using processor-specific code. Your own application might require similar low-level access, and this isn't possible in serverless computing. One thing to bear in mind, however, is that it's possible to have part of the application running on a server that you have low-level access to, and background parts of it running in a serverless function.

If an Azure Function isn't executed for a while, the function stops being deployed and the server gets reallocated to other work. When the next request comes in, the platform has to deploy the code, warm up the server, and then execute the code, so the response is slower. Inevitably, this leads to latency when the functions are triggered again, making serverless computing somewhat unsuitable for use cases that demand continuously low latency. Also, by its very nature, serverless computing prevents you from accessing low-level commands and the performance benefits that they can give. It's important to emphasize that this doesn't mean serverless computing is unbearably slow; it just means that applications that demand the utmost performance are unlikely to be suitable for serverless computing.

Overall, there is a clear benefit to using serverless computing, and Azure's serverless services in particular, especially if you use some of the tips detailed in the weaknesses section. The benefits are strong for both the developer and the business.

The Serverless Framework (https://serverless.com/) can help with vendor lock-in by making your serverless functions cross-cloud.
