Serverless computing in Azure with .NET: Build, test, and automate deployment

Sasha Rosenbaum

Book | Aug 2017 | 468 pages | 1st Edition

Serverless computing in Azure with .NET

Understanding Serverless Architecture

This chapter provides a theoretical introduction to serverless computing and the types of workloads it is best suited for.

In this chapter, we will cover the following topics:

  • The features of serverless computing
  • Serverless compute best practices
  • Serverless computing advantages and disadvantages
  • The types of services and applications that are a good fit for serverless

Being a technical person, you might be tempted to skip the theory and dive into practice. It is highly advised, however, that you read the next few pages before diving into implementation details.

What is serverless?

Being an emerging trend in the technology world, serverless computing is rapidly gaining popularity. The most widespread definition of serverless at this point is driven by the arrival of technologies such as AWS Lambda, Azure Functions, IBM OpenWhisk, and Google Cloud Functions:

Serverless computing is a code execution model where server-side logic is run in stateless, event-triggered, ephemeral compute containers that are fully managed by a third-party.

This definition of serverless is synonymous with Functions as a Service (FaaS). We will use these terms interchangeably in this book.

In different programming languages, we may encounter the terms “function”, “procedure”, and “method” referring to different types of routines performing a task. In this context, the term function is not programming language specific, but rather conceptual:

In programming, a function is a named section of a program that performs a specific task.
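
To make this concrete, here is a trivial, hypothetical C# example of such a named unit of work (the class and method names are illustrative only, not taken from this book):

    using System;

    static class Geometry
    {
        // A named section of a program that performs one specific task:
        // computing the area of a circle from its radius.
        public static double CircleArea(double radius)
        {
            return Math.PI * radius * radius;
        }
    }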

Ironically, serverless computing does not actually run without servers. Rather, it involves outsourcing the server provisioning and management to a third-party.

Nearly all existing serverless computing technologies are provided by major public cloud vendors. The sheer scale of today's public cloud vendors allows for the following two things that make serverless more attractive than ever before:

  • Realizing the cost benefits of the economy of scale: For any specific development team, or even organization, it would be difficult to reach the scale at which outsourcing parts of the application to separately managed compute containers provides worthwhile cost benefits. At public cloud vendors' scale, serverless compute becomes inexpensive because the compute power allocation is balanced across thousands of servers and billions of executions, with each specific client application peaking at different times. The nature of software-defined data centers also allows for more efficient server allocation.
  • Minimizing the adverse effects of vendor lock-in: The modern IT world is rapidly coming to a consensus that the benefits of public cloud outweigh the disadvantages of any vendor lock-in that comes with it. With many IT services moving to public cloud, it becomes easier and more beneficial to leverage a cloud provider for hosting serverless applications.

With the arrival of the Azure Functions Runtime, you can truly run your functions on any server, whether in the cloud or in an on-premises data center, eliminating vendor lock-in concerns.

By now, you are probably familiar with some variation of a "shared responsibility" diagram outlining the differences between IaaS, PaaS, and SaaS. Let's add a visual to show where Functions as a Service (FaaS) fits in:

As you can see from the diagram, FaaS takes vendor responsibility one step further, abstracting away the application context along with the physical hardware and virtual servers.

For this reason, despite the book title, I, personally, think that the term serverless is not completely accurate, and the actual architectural approach we are working with would be better described by the term Applicationless.

Azure serverless

The Azure serverless offering is called Azure Functions.

The implementation details of serverless computing differ by vendor, and it is difficult to give an overview of serverless computing features without being vendor-specific. This book is dedicated to Azure Functions, and will therefore focus on Azure-specific features whenever there is a difference between vendors.

Architecture

To illustrate where serverless computing would come into your application, let's take a look at a classic three-tier architecture. In this commonly used approach, the application is broken down into the following tiers:

  • Presentation Tier: The presentation tier handles the user interface and typically operates as a thin client on a web or mobile device.
  • Logic Tier: The logic tier, also known as the application tier, handles the functional process logic and the business rules of the application. This tier can serve one or more presentation tier clients and scale independently.
  • Data Tier: The data tier persists the application data in databases or file shares and handles the data access layer.

Any of these tiers can be further expanded and broken into separate services. For a deeper dive into three-tier architecture, please visit the following link:

https://en.wikipedia.org/wiki/Multitier_architecture#Three-tier_architecture

A basic three-tier architecture can be presented as the diagram below:

With the introduction of serverless computing, all or part of your application's logic tier can be replaced by serverless computing containers, or FaaS.

Depending on the application, functions can handle all of the business logic, or work jointly with other types of services to comprise the logic tier.

A basic three-tier architecture with the logic tier fully handled by functions can be presented as the following diagram:

It is crucial to note that not all types of functionality typically handled by the business logic tier are well suited for FaaS. To see which functionality can be replaced by FaaS, let us discuss the inherent features of serverless computing.

Inherent features

The following list outlines the inherent features of serverless computing, which also dictate the implementation best practices. In some cases, the best practices are imposed by the serverless provider, while in others they remain a developer responsibility.

Asynchronous

Serverless computing is event-triggered and asynchronous by nature. It is therefore important to use non-blocking, awaitable calls in functions.
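
As a minimal, hedged sketch (not code from this book): a precompiled C# function that awaits an outbound HTTP call instead of blocking on it. The queue name and class are illustrative assumptions.

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class FetchStatus
    {
        // Reuse a single HttpClient across executions instead of creating one per call.
        private static readonly HttpClient Client = new HttpClient();

        [FunctionName("FetchStatus")]
        public static async Task Run(
            [QueueTrigger("status-requests")] string url,
            TraceWriter log)
        {
            // Await the call rather than blocking with .Result, so the host
            // thread is released while the request is in flight.
            string body = await Client.GetStringAsync(url);
            log.Info($"Fetched {body.Length} characters from {url}");
        }
    }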

Stateless

Serverless computing is inherently stateless, meaning that no state should be maintained on the host machine. This also means not sharing state between any parallel or sequential function executions. Any required state needs to be persisted to a database, a file server, or a cache.
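
For illustration, a hedged sketch of keeping a function stateless by writing data to a backing store (an Azure Table output binding here) rather than to a static field on the host. The queue and table names and the Visit type are assumptions, not taken from this book:

    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class RecordVisit
    {
        // Anti-pattern: a static counter would live on one host instance only
        // and vanish whenever the compute container is recycled.
        // private static int processedCount = 0;

        [FunctionName("RecordVisit")]
        public static async Task Run(
            [QueueTrigger("visits")] string visitorId,
            [Table("Visits")] IAsyncCollector<Visit> visits,
            TraceWriter log)
        {
            // Persist the state to a durable backing service instead.
            await visits.AddAsync(new Visit { PartitionKey = "visits", RowKey = visitorId });
            log.Info($"Recorded visit for {visitorId}");
        }
    }

    public class Visit
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
    }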

In recent years, the stateless approach was made popular by the Twelve-Factor methodology, and many applications have already been refactored to use stateless web and logic tiers. The following quote is from the Twelve Factor App Methodology, factor 6:

VI. Processes: Execute the app as one or more stateless processes.
Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.

To learn more about the Twelve-Factor Methodology, please visit https://12factor.net.

While the Twelve-Factor Methodology is increasingly popular, and makes applications easy to deploy and scale, the restriction on local state is not always a good thing. The main benefit of local state is the low latency of access, and some applications cannot attain optimal performance without it. As an example, when building an application used to trade in a financial market, persisting state to a database or even a cache can become extremely costly. Applications that require local state are not a good fit for serverless computing. To learn more about stateful alternatives, please look into Azure Service Fabric stateful services:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-introduction

Note that some serverless computing vendors completely prevent you from accessing the host machine. With Azure Functions, you do have read/write access to the host machine's virtual D drive; however, it is highly recommended that you do not use it to persist state.

Idempotent and defensive

To ensure consistency, serverless computing functions should be idempotent.

Mathematically, a function is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once, that is, ƒ(ƒ(x)) ≡ ƒ(x).

To give a simple example of a non-idempotent function, imagine a function whose task is to calculate the square root of its input number. If the function is run a second time on an input value that has already been processed, it will produce an incorrect output, as √(√(x)) ≠ √(x). Thus, the only way to keep such a function effectively idempotent is to make sure that the same input isn't processed twice.
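
A hedged sketch of one way to achieve this, assuming (hypothetically) that each record carries a Processed flag stored alongside the data; the type, queue name, and elided persistence step are illustrative only:

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public class NumberRecord
    {
        public string Id { get; set; }
        public double Value { get; set; }
        public bool Processed { get; set; }
    }

    public static class SquareRootProcessor
    {
        [FunctionName("SquareRootProcessor")]
        public static void Run(
            [QueueTrigger("numbers")] NumberRecord record,
            TraceWriter log)
        {
            // Exit gracefully if this record was already handled, so a retried
            // or duplicated message does not apply the square root twice.
            if (record.Processed)
            {
                log.Info($"Record {record.Id} already processed; skipping.");
                return;
            }

            record.Value = Math.Sqrt(record.Value);
            record.Processed = true;
            // ... persist the updated record back to its data store here ...
        }
    }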

In an asynchronous, highly parallelized environment run by ephemeral compute containers, we need to work extra hard to ensure that execution errors do not impact subsequent events. What happens when a function crashes midway through encoding a large media file? What happens if a function tasked with processing 100 rows in a database crashes before finishing? Will the remainder of the input remain unprocessed, or will the already processed part be re-processed?

To ensure consistency, we need to store the required state information with our data, allowing a function to exit gracefully if no more processing is required. In addition, we need to implement a circuit-breaker pattern to ensure that a failing function will not retry infinitely. To learn more about the circuit-breaker pattern, please visit the following link:

https://docs.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker

Azure Functions in particular have some built-in defensive mechanisms that you can leverage. For instance, for a storage-queue-triggered function, processing of a queue message will be retried five times in case of failure, after which the message is moved to a poison-message queue.
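
For illustration, a hedged sketch of a queue-triggered function that logs the built-in dequeueCount binding value, which reports how many times the current message has been picked up; the queue name and logic are assumptions:

    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class ProcessOrder
    {
        [FunctionName("ProcessOrder")]
        public static void Run(
            [QueueTrigger("orders")] string orderJson,
            int dequeueCount,
            TraceWriter log)
        {
            // After five failed attempts the runtime moves the message
            // to the orders-poison queue instead of retrying forever.
            log.Info($"Processing attempt {dequeueCount} for message: {orderJson}");

            // ... order processing logic goes here ...
        }
    }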

Execution restrictions

In comparison to a traditional application, a FaaS environment has two very important execution restrictions: the length of time the function can run and the time it takes to start the first function execution after a period of inactivity.

Limited execution time

In a FaaS environment, the runtime of each particular function execution should be as short as possible.

Some vendors impose hard limits on function execution time, restricting the runtime to a few minutes. These limits encourage a certain style of programming, but can become cumbersome to deal with.

Azure Functions are offered under two different hosting plans: a Consumption plan and an App Service plan. The Consumption plan scales dynamically on-demand, while an App Service plan always has at least one VM instance provisioned. Because of the different approaches to resource provisioning, these plans have different execution constraints.

Under the App Service plan there is no limit on the function execution time.

Under the Consumption plan there is a default limit of 5 minutes, which can be increased up to 10 minutes by making a change in the function configuration.
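
As a hedged illustration (assuming the current configuration schema): the Consumption plan timeout is controlled by the functionTimeout setting in the Function App's host.json file, so raising the limit to the 10-minute maximum would look roughly like this:

    {
      "functionTimeout": "00:10:00"
    }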

Even under the App Service plan, however, it is highly recommended to keep the function execution time as short as possible. A long-running function can be broken down into shorter functions that each perform a particular task.

For very long-running and/or compute-intensive work, consider a different type of Compute as a Service: Azure Batch. You can refer to the following link for more information on Azure Batch:

https://docs.microsoft.com/en-us/azure/batch/batch-technical-overview

Startup latency

In a FaaS environment, the functions should be kept as light as possible. Loading many explicit or implicit external dependencies (when a library you reference loads many additional modules it relies on) can increase the function load time and even cause timeouts. Thus, functions should keep their external dependencies to a minimum.

In addition, in most FaaS environments, functions face a significantly increased cold start latency. After a period of inactivity an unused function goes idle. The next time the function is loaded, compute and memory will need to be allocated to it, external dependencies will need to be loaded, and, in the case of compiled languages like C#, the code needs to be re-compiled. All of these factors can cause a significant delay in function startup time.

In C#-based Azure Functions specifically, the cold start problem has been alleviated with the release of .NET class library based functions, since these functions are precompiled and can be loaded more quickly. In addition, when running under the App Service plan (rather than a Consumption plan), the cold start problem is eliminated.

Advantages of serverless computing

The advantages of FaaS can be grouped into a few categories.

Some of the advantages exist in most PaaS environments; however, they may be more pronounced in a FaaS environment.

Some of the advantages are similar to the advantages of the Microservices architecture, in which the application is structured as a collection of loosely coupled services, each of which handles a particular task. To learn more about Microservices architecture, please visit http://microservices.io/patterns/microservices.html.

Lastly, some of the advantages are specific to the FaaS environment only.

Scalability

Serverless computing makes it very easy to scale the application out by provisioning more compute power as required, and deallocating it when the demand is low. This allows developers to avoid the risk of failing their users during peak demand, while also avoiding the cost of allocating massive standby infrastructure.

This makes serverless computing particularly useful for applications experiencing inconsistent traffic. Let's take a look at the following examples:

  • An application used during sporting events: In this case, your application is likely to experience highly variable traffic loads, with a significant difference between high and low traffic. Serverless can help mitigate the complexity and cost of providing adequate service.
  • A retail application: It is common for retail applications to experience extremely high loads during holiday seasons or during marketing campaigns. While these loads are predictable, they often differ so significantly from the day-to-day load, that maintaining the required standby infrastructure can get very costly. Serverless can eliminate the need for standby infrastructure.
  • A periodic social media update application: Imagine an application which posts an update to a Twitter feed once every hour. This application requires very little compute power. In the traditional IT world, such an application would typically run on two servers to ensure resiliency, which is extremely wasteful from the compute power standpoint. Deploying multiple applications to the same server can often become problematic for operational and organizational reasons, and in most organizations, on-premises compute power is heavily underutilized (on-premises, teams tend to significantly over-provision hardware because it is difficult to add more compute power later). Serverless computing is a very good fit for this problem.

It is important to note that the scalability advantage exists in every PaaS service; with serverless computing, however, the scaling is typically completely dynamic and handled by the vendor. In a typical PaaS service, you need to define scaling metrics (such as high CPU or memory utilization) and, to an extent, the scaling procedure (such as the number of additional nodes to provision, or whether the application should scale back down after demand decreases). With serverless computing, the vendor simply allocates additional compute to your function based on the number of incoming requests.

Pay-As-You-Go

In serverless computing, you only pay for what you use. The Pay-As-You-Go model is likely to result in cost savings in most cases (remember the underutilized infrastructure), and becomes particularly beneficial in the inconsistent traffic scenarios described in the previous section. The model also means that any speed optimization of your service translates directly into cost savings.

Pay-As-You-Go is also an advantage of any PaaS service; however, most PaaS services do not get as granular in allocating compute power.

While the translation of execution time into cost is a lot more direct in a FaaS environment, it is wise to calculate whether or not dynamic compute allocation is actually the best pricing model for your application. We will discuss cost-effective service design in more detail in Chapter 13, Designing for High Availability, Disaster Recovery, and Scale.

Reduced operational costs

In a serverless computing environment, you do not need to provision, manage, patch, or secure servers. You are outsourcing the management of the physical hardware, virtual servers, operating systems, networking, and security to the serverless computing vendor. This provides cost savings in the following two ways:

  • Direct infrastructure cost
  • IT operations cost

This advantage also exists in any PaaS service, and for a FaaS service it may actually not be as straightforward as it seems. While there are very clear cost benefits to not managing servers, it is important to remember that operations typically cover a lot more than server management, including tasks such as application deployment, monitoring, and security. More on this in the next section.

Speed of deployment

Serverless computing makes it incredibly easy to go from an idea to execution. Whether you are proving the business value of an idea or need a sandbox to test a scenario, the ease of creating a new business logic layer with serverless computing makes it simple to test-drive your minimum viable product.

Independent technology stack and updates

Similar to the Microservices architecture, FaaS forces a pattern of breaking the logic layer into smaller, task-specific services. This provides the following tangible benefits:

  • Versioning the services independently of one another: In a monolithic application, changing even a small part of business logic will trigger a redeployment of the entire monolith. In a FaaS environment, each function handles a particular task, and thus the implementation of each function can be changed independently, as long as the contract with the services upstream and downstream of the function is maintained. This can have a tremendous effect on the agility and flexibility of the application update process.
  • Freedom to use a different technology stack for each service: In a monolithic application, the developer is committed to a particular technology stack, whether or not it is well suited for the task at hand. In a FaaS environment, the developer is free to implement each task in the way best suited for the job, and most serverless computing vendors provide a number of different languages/platforms to choose from. If part of your application can benefit from Python's powerful tooling for processing regular expressions, you can easily deploy a Python-based Azure Function along with your C#-based functions, either packaged in the same Function App or separately. This freedom can greatly improve code efficiency and simplicity.

In Azure Functions specifically, the continuous delivery setup deploys the entire Function App, not a single function, so in cases where a function often needs to change independently, it is best to deploy it as a separate Function App. We will discuss the Function versus Function App topic in more detail in the next chapter.

Integration with the cloud provider

Existing serverless frameworks are closely integrated with other services offered by the same public cloud vendor. They make it easy to trigger the functions based on events in other cloud services and store the outputs in cloud data stores. They are hosted on the same infrastructure, which makes for minimal latency. As such, serverless functions are ideal for augmentation of other cloud services with bits of custom code performing tasks that aren't offered as a fully managed service.
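
As a hedged illustration of this kind of augmentation (the container names and the elided transformation step are assumptions, not from this book): a function triggered by a new blob in one storage container that writes its output to another container via bindings.

    using System.IO;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class CopyUpload
    {
        [FunctionName("CopyUpload")]
        public static void Run(
            [BlobTrigger("uploads/{name}")] Stream original,
            [Blob("processed/{name}", FileAccess.Write)] Stream processed,
            string name,
            TraceWriter log)
        {
            // Triggered by a new blob in the 'uploads' container of the connected
            // storage account; the output binding writes to the 'processed' container.
            log.Info($"Handling uploaded blob: {name}");
            original.CopyTo(processed); // real transformation logic elided
        }
    }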

Open source

While they are fully managed by Microsoft engineering, Azure Functions are an open source offering based on the Azure WebJobs SDK, which means that as a developer you can contribute code and help develop required features or resolve issues.

To learn more about Azure Functions and the Azure WebJobs SDK, visit https://github.com/Azure/Azure-Functions.

Disadvantages of serverless computing

The following section outlines the current disadvantages of leveraging serverless computing.

Some of these disadvantages arise from the additional complexity of the application architecture. Others stem from the lack of maturity of current serverless tooling and the problems that come with outsourcing parts of your system.

Distributed system complexity

Similar to the Microservices architecture, serverless introduces increased system complexity and a requirement for network communication between application layers. The added complexity centers around the following two main aspects:

  • Implicit interfaces between services: As discussed earlier, functions make application changes easier by allowing for separate versioning of services. This, however, introduces an implicit contract between different parts of the system that can be broken by either side. In a monolithic application, breaking changes are easily caught by the compiler or by integration testing. In a FaaS environment, a developer can make a breaking change without being aware of its impact.
  • Network and queueing: In a FaaS environment, parts of the application communicate with each other using HTTP requests or queueing mechanisms. This introduces additional latency, adds a dependency on queueing services, and makes handling errors and retries significantly more complex.

Potential load on downstream components

When relying on the inherent dynamic scalability of serverless computing for the business logic layer, it is easy to miss the potential overload on downstream components such as databases and file stores. During the design and testing phases of application development, it is crucial to verify that downstream components are able to handle the high load potentially created by the dynamic scaling of the serverless computing tier.

Potential for repetitive code

The assumption of the three-tier architecture is that the business logic tier can serve multiple different clients, such as various web and mobile devices, different consumer APIs, and so on. When the entire business logic tier is moved into serverless computing, certain functionality is likely to be moved upstream to client applications. This can introduce a situation in which each client application is implementing the same functionality.

Different operations

As we've discussed, server administration and scaling out are fully handled by the serverless computing vendor. However, this benefit comes with a trade-off. You are still fully responsible for testing, deploying, and monitoring your application. You are also responsible for the application's security, as well as for ensuring that it performs correctly and consistently at scale. With serverless computing, you may be presented with a new set of tools for managing all of the preceding tasks, and these tools may not integrate well with your current ops stack. Needing to train your team on the new tool stack can be a drawback.

Security and monitoring

Because serverless computing offerings are new, their security and monitoring tools are also new and often very specific to the serverless environment and the particular vendor. This introduces new complexity into the process of managing operations for the application overall, adding a new type of service to manage. Security and monitoring of Azure Functions will be discussed in depth in Chapter 10, Securing Your Application and Chapter 11, Monitoring Your Application.

Testing

Testing can become more difficult in a serverless environment due to the following few aspects:

  • For the purposes of integration testing, it is sometimes difficult to replicate the full cloud-based flow on a testing machine.
  • The more distributed the system becomes, the more dependencies and points of failure are introduced, and the harder it becomes to test every possible variation of the flow.
  • Load testing becomes an even more crucial aspect of testing the application, as some issues may only arise at scale.

We will discuss testing of serverless applications in more detail in Chapter 8, Testing Your Azure Functions.

Vendor control

Unlike vendor lock-in, vendor control implies that by outsourcing a big part of your operations management to a third party, you also relinquish control over how these operations are handled. This includes the service limitations, the scaling mechanism, and the potential optimization of hosting your application.

In addition, the vendor has the ultimate control over the environment and tooling, deciding when to roll out features and fix issues (although in the case of Azure Functions, you can help fix issues by contributing to the open source project).

Vendor lock-in

Despite the theoretical portability of implementation code used in functions, the surrounding features and tooling make it relatively difficult to deploy the application with another vendor.

For Azure Functions, specifically, Microsoft has recently released the Azure Functions Runtime, which allows you to run functions on your own server. With Azure Functions Runtime, you can run functions on-premises or even in a different public cloud, which allows you to avoid vendor lock-in. For more information, visit https://docs.microsoft.com/en-us/azure/azure-functions/functions-runtime-overview.

Multitenancy

Just a few years ago, multitenancy was at the top of the list of concerns for organizations considering leveraging the public cloud. However, multitenancy is also what enables public clouds to be more cost-effective and more innovative than private data centers. In particular, the cost benefits of dynamically allocated serverless computing arise from the economy of scale, which is made possible by utilizing the same infrastructure to serve many different client applications at different times.

At present, most organizations have accepted that public cloud vendors are committed to ensuring that as a customer, you get the same security isolation and dedicated resource allocation in a public cloud as you would in a single-tenant environment.

Vendor-specific limitations

Some disadvantages of serverless computing are vendor-specific and luckily do not apply to Azure Functions. We will review them briefly here, as you may see references to them online:

  • Environment configuration: In some serverless computing environments, it is difficult to set environment-specific variables (for instance, dev/test/prod settings) for each function. In Azure Functions, each Function App has a local.settings.json file that defines configuration settings for local development in a manner similar to traditional .NET applications; in an Azure environment, these settings are located in the Function App's application settings (see the sample settings file after this list). This also means that the recommended approach is to deploy dev, test, prod, or other environments as separate Function Apps. More on the concept of Function Apps will be covered in Chapter 2, Getting Started with the Azure Environment.
  • Local development tools and debugging: With most serverless computing vendors, there is a noticeable lack of local development tools for functions. The lack of local tools can make it significantly harder to debug or troubleshoot the application. With C# precompiled Azure Functions, the Visual Studio development tools are on par with the rich development environment of traditional .NET applications.
  • Service grouping: With some serverless computing vendors, it is not possible to deploy functions that are part of the same application as a group, which places more load on the deployment team. Azure Functions allow you to deploy functions, even those written in different languages, together as part of the same Function App, or separately as parts of different Function Apps.
  • Execution time limit: Under the Consumption plan, the default function execution time limit is currently 5 minutes, and can be extended to 10 minutes. Under the App Service plan, there is no hard execution time limit on a function execution. Thus, when you need a longer-running function, you can choose the App Service plan (although this has cost and dynamic scaling implications).
  • Cold start issues: Under the Consumption plan, after a period of idleness, functions may experience cold start issues while the infrastructure is being provisioned. Under the App Service plan, functions get at least one dedicated instance at all times, and hence there are no cold start issues. Thus, when you need the function to start up quickly after a period of idleness, you can choose an App Service plan.
  • Separate API Gateway configuration: With some serverless computing vendors, an API Gateway may be required to gain access to your functions via HTTP requests. With Azure Functions, the function endpoint URL is automatically provisioned for every HTTP-triggered function, making the configuration significantly simpler. Traffic to the endpoint is encrypted with TLS, and the endpoint can also be easily configured to use different types of authentication. More on endpoint configuration and security will be covered in later chapters of this book.
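
For illustration, a hedged sketch of what a local.settings.json file for local development might look like; the custom MyServiceApiKey entry is a hypothetical example, and in Azure the equivalent values live in the Function App's application settings:

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "MyServiceApiKey": "<local-only value>"
      }
    }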

To read more on some of the serverless computing advantages and disadvantages, please review this excellent post by Mike Roberts at https://martinfowler.com/articles/serverless.html.

Applications

Given everything outlined in this chapter, it becomes clear that for most applications, it is easier to view FaaS as a component augmenting your overall application architecture, rather than as an environment that completely handles the business logic layer.

The following types of workloads are best suited for a serverless environment:

  • Asynchronous
  • Event-driven
  • Stateless
  • Fast

The following types of workloads will benefit the most from a serverless environment:

  • Characterized by variable load
  • Requiring massive horizontal scale
  • Event-driven
  • Augmenting other cloud services
  • Standalone (needing a different toolset or version than the rest of the application)

For the right type of workload, serverless computing provides excellent benefits. Serverless computing can bring significant cost savings (discussed in more detail in Chapter 13, Designing for High Availability, Disaster Recovery, and Scale) and the advantages of completely dynamic scaling based on load.

Summary

In this chapter, we discussed the features of serverless computing and the type of workloads that are best suited to be hosted in it. In the following two chapters, we will dive into an implementation of our first serverless function.

We will start in the next chapter with an overview of the Azure cloud and tooling, and proceed to deploy our first "Hello world" function using the Azure Functions portal.

 


Key benefits

  • Take advantage of the agility, scale, and cost-effectiveness of the cloud using Azure Serverless compute
  • Build scalable, reliable, and cost-efficient applications with Serverless architecture and .NET
  • Learn to use Azure Functions to their fullest potential in .NET

Description

Serverless architecture allows you to build and run applications and services without having to manage the infrastructure. Many companies have started adopting serverless architecture for their applications to save costs and improve scalability. This book will be your companion in designing Serverless architecture for your applications using the .NET runtime, with Microsoft Azure as the cloud service provider. You will begin by understanding the concepts of Serverless architecture and its advantages and disadvantages. You will then set up the Azure environment and build a basic application using a sample text sentiment evaluation function. From there, you will be shown how to run services in a Serverless environment. We will cover integration with other Azure and third-party services, such as Azure Service Bus, as well as configuring dependencies on NuGet libraries, among other topics. After this, you will learn about debugging and testing your Azure Functions, and then about automating deployment from source control. Securing your application and monitoring its health will follow, and in the final part of the book you will learn how to design for high availability, disaster recovery, and scale, as well as how to take advantage of the cloud pay-as-you-go model to design cost-effective services. We will finish off by explaining how Azure Functions compare to other types of compute-on-demand services, such as AWS Lambda, Azure WebJobs, and Azure Batch. Whether you've been working with Azure for a while, or you're just getting started, by the end of the book you will have all the information you need to set up and deploy applications to the Azure Serverless Computing environment.

What you will learn

  • Understand the best practices of Serverless architecture
  • Learn how to deploy a Text Sentiment Evaluation application in an Azure Serverless environment
  • Implement security, identity, and access control
  • Take advantage of the speed of deployment in the cloud
  • Configure application health monitoring, logging, and alerts
  • Design your application to ensure cost effectiveness, high availability, and scale

Product Details

Publication date: Aug 17, 2017
Length: 468 pages
Edition: 1st
Language: English
ISBN-13: 9781787288393
Vendor: Microsoft




Table of Contents

16 Chapters
Preface
1. Understanding Serverless Architecture
2. Getting Started with the Azure Environment
3. Setting Up the Development Environment
4. Configuring Endpoints, Triggers, Bindings, and Scheduling
5. Integrations and Dependencies
6. Integrating Azure Functions with Cognitive Services API
7. Debugging Your Azure Functions
8. Testing Your Azure Functions
9. Configuring Continuous Delivery
10. Securing Your Application
11. Monitoring Your Application
12. Designing for High Availability, Disaster Recovery, and Scale
13. Designing Cost-Effective Services
14. C# Script-Based Functions
15. Azure Compute On-Demand Options
