Developing Solutions for Microsoft Azure AZ-204 Exam Guide - Second Edition

By Paul Ivey, Alex Ivanov

Book May 2024 428 pages 2nd Edition

Implementing Azure App Service Web Apps

Now it’s time to build on some of the fundamentals covered in the previous chapter, focusing on one of Azure’s most popular Platform as a Service (PaaS) offerings, Azure App Service. Many developers who traditionally hosted web apps on an Internet Information Services (IIS) server (even a cloud-based VM) are moving their applications to App Service, which brings even more benefits to this scenario than IaaS.

It’s important to understand that Azure App Service hosts more than just web apps (web APIs, for example), so we’ll start with an overview of App Service as a whole before turning our focus to web apps specifically.

By the end of this chapter, you’ll have a solid understanding of Azure App Service. You’ll also understand how you can manage your web applications throughout their life cycle in the cloud, including configuring, scaling, and deploying changes in a controlled and non-disruptive way.

This chapter addresses the Implement Azure App Service Web Apps skills measured within the Develop Azure compute solutions area of the exam, which forms 25-30% of the overall exam points. This chapter will cover the following main topics:

  • Exploring Azure App Service
  • Configuring app settings and logging
  • Scaling App Service apps
  • Leveraging deployment slots

Technical Requirements

The code files for this chapter can be downloaded from:

In addition to any technical requirements from Chapter 1, Azure and Cloud Fundamentals, you will require the following resources to follow along with the exercises in this chapter:

Exploring Azure App Service

Azure App Service is an HTTP-based PaaS service on which you can host web applications, RESTful APIs, and mobile backends, as well as automate business processes with WebJobs. With App Service, you can code in some of the most common languages, including .NET, Java, Node.js, and Python. With WebJobs, you can run background automation tasks using PowerShell scripts, Bash scripts, and more. With App Service being a PaaS service, you get a fully managed service, with infrastructure maintenance and patching managed by Azure, so you can focus on development activities.

If your app runs in a Docker container, you can host the container on App Service as well. You can even run multi-container applications with Docker Compose. We’ll cover containers and Docker in Chapter 3, Implementing Containerized Solutions. The previous chapter, Chapter 1, Azure and Cloud Fundamentals, mentioned that App Service allows you to scale, automatically or manually, with your application being hosted anywhere within the global Azure infrastructure while providing high-availability options.

In addition to the features covered in this chapter, App Service also provides App Service Environments (ASEs), which offer a fully isolated environment for securely running apps when you need very high-scale, secure, isolated network access and high compute utilization.

From a compliance perspective, App Service is International Organization for Standardization (ISO), Payment Card Industry (PCI), and System and Organization Control (SOC) compliant. A good resource on compliance and privacy is the Microsoft Trust Center (

App Service also provides continuous integration and continuous deployment (CI/CD) capabilities by allowing you to connect your app to Azure DevOps, GitHub, Bitbucket, FTP, or a local Git repository. App Service can then automatically sync with code changes you make, based on the source control repository and branch you specify.
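As a sketch of wiring that up from the CLI, the following connects an existing app to a repository branch so that pushes are synced to the app; the app, group, and repository values are placeholders, not names from this book:

```shell
# Connect an existing web app to a GitHub repository branch so that
# pushes to that branch are synced to the app. All values below are
# placeholders -- substitute your own.
az webapp deployment source config \
  --name "<app name>" \
  --resource-group "<resource group>" \
  --repo-url "https://github.com/<org>/<repo>" \
  --branch main
```

You can review or change the same source later from the app's Deployment Center in the portal.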

App Service is charged according to the compute resources (the VMs behind the scenes on which your apps run) you allocate for your apps. Those resources are determined by the App Service plan on which you run your applications. App Service apps always run in an App Service plan, so this seems like the logical point at which to introduce App Service plans.

App Service Plans

If you’re familiar with the concept of a server farm or cluster, where a collection of powerful servers provide functionality beyond that of a single machine, App Service plans should make sense (in fact, the resource type for App Service plans is Microsoft.Web/serverfarms). As briefly mentioned before, an App Service plan defines the compute resources allocated for your web apps. The plural context is used because, just like in a server farm, you can have multiple apps using the same pool of compute resources, which is defined by the App Service plan. App Service plans define the operating system of the underlying VM(s), the region in which the resources are created, the number of VM instances, and the pricing tier (which also defines the size of those VMs).

As you might be used to by now, some pricing tiers provide access to features that aren’t available in others. For example, the Free and Shared tiers run on the same VM(s) as other App Service apps (including other customers’ apps) and are intended for testing and development scenarios. These tiers also allocate resource quotas for the VM(s), meaning you can’t scale out and you only get a certain allowance of minutes each day during which your app can run and use those resources. The remaining tiers (other than Isolated and Isolated v2) run your apps on dedicated VMs that aren’t shared with other customers’ apps, although your own apps will share those VMs if you place multiple apps within the same App Service plan. The Isolated and Isolated v2 tiers also run on dedicated VMs, but within dedicated Azure virtual networks (VNets), providing network and compute isolation as well as the maximum scale-out capabilities. Azure Function apps also have the option to run in an App Service plan, which will be explored in Chapter 4, Implementing Azure Functions.

A common misunderstanding is that you need one App Service plan per App Service app. This isn’t the case (although you can’t mix Windows and Linux apps within the same App Service plan, so you’d need multiple plans in that situation). Remember that an App Service plan defines a set of resources that can be used by one or more applications. If you have multiple applications that aren’t resource intensive and you have compute to spare within an App Service plan, by all means consider adding those applications to the same plan.

One way to think of an App Service plan is as the unit of scale for App Service apps. If your App Service plan has five VM instances, your application(s) will run across all five of those instances. If you configure your App Service plan with autoscaling, all the applications within that App Service plan will scale together based on those autoscale settings.
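To make that concrete, scaling out is a property of the plan, not of any one app; a hedged CLI sketch with placeholder names:

```shell
# Scale an App Service plan out to three instances; every app in the
# plan then runs across all three. Names are placeholders.
az appservice plan update \
  --name "<plan name>" \
  --resource-group "<resource group>" \
  --number-of-workers 3
```

Because the instance count belongs to the plan, there is no per-app equivalent of this command.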

One final point to note is that once you create an App Service plan, you are allocating those VM instances and therefore paying for those resources regardless of how many App Service apps are running on the plan. Even if no apps are running on the plan, you still pay for the allocated resources. If you scale the App Service plan out (horizontally), you increase the number of VM instances, which costs more because you are allocating more VMs.

Within the Azure portal, App Service plans are described as representing the collection of physical resources that are used to host your apps:

Figure 2.1: The Azure portal description of App Service plans

Exercise 1: Creating an App Service Plan

In this exercise, you will explore the Azure portal by creating an App Service plan. This will make it easier to understand the configuration options:

  1. Either navigate to Create a resource and select App Service plan or use the following URL to jump straight to it:
  2. Select your subscription from the Subscription dropdown and select an existing resource group from the Resource Group dropdown, if you have one that you’d like to use. Alternatively, select the option to create a new one.
  3. Enter the desired name for your App Service plan, select Windows for the Operating System option, and select your region, as shown in the following screenshot:
Figure 2.2: App Service plan details within the Azure portal

  4. Click on the Explore pricing plans link to be taken to a different kind of specification picker than you might be used to from other resource types. You’ll be able to see the different pricing tiers available and their respective VM compute resources using Hardware view, as shown in Figure 2.3:
Figure 2.3: App Service plan hardware view

From within Feature view, you can see the different features that are available for the various tiers, as shown in Figure 2.4:

Figure 2.4: App Service plan feature view

  5. You are going to be making use of deployment slots, also known as staging slots, and autoscale in this chapter, so select the least expensive Production tier that provides these features. For this example, that’s Standard S1, as shown in the following figure:
Figure 2.5: Standard S1 production tier App Service plan feature view

  6. After selecting an appropriate App Service plan tier that includes staging slots and autoscale, click on Select.
  7. Notice that, depending on which tier you selected, the option to enable Zone redundancy is disabled. This is because that’s only available in higher tiers. Make a note of the SKU code, not the name. In this example, the SKU code is S1, not just Standard:
Figure 2.6: Pricing tier SKU code and zone redundancy options

  8. Click on Review + Create and select Create to provision the new App Service plan. Once completed, go into your new App Service plan and look through the available settings. You will be able to see any apps running within the plan, storage use, networking settings, as well as horizontal and vertical scaling options.
  9. Open a session of your preferred terminal, make sure you’re logged in, and set it to the right subscription.
  10. Create a Linux App Service plan using the following CLI command:
    az appservice plan create -n "<plan name>" -g "<resource group>" --sku "<SKU code>" --is-linux

    Alternatively, use the following PowerShell command:

    New-AzAppServicePlan -Name "<plan name>" -ResourceGroupName "<resource group>" -Tier "<SKU code>" -Location "<region>" -Linux

    While the CLI accepts but doesn’t require a location, because it will inherit from the resource group, PowerShell requires the location to be specified.

Now that you have explored the App Service plans that provide the underlying compute resources for your apps, you can move on to App Service web apps and put these App Service plans to good use.

App Service Web Apps

Originally, App Service was only able to host web apps on Windows, but since 2017, App Service has been able to natively host web apps on Linux for supported application stacks. You can get a list of the available runtimes for Linux by using the following CLI command:

az webapp list-runtimes --os-type linux -o tsv

The versions you see relate to the built-in container images that App Service uses behind the scenes when you use a Linux App Service plan. If your application requires a runtime that isn’t supported, you can deploy the web app with a custom container image, specifying the image from various container image repository sources. As you won’t see any container examples in this chapter, you won’t need to do that here.
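For completeness, creating a web app from a custom image is a one-line variation of the usual create command; a sketch where the plan, app, and image names are all placeholders (depending on your CLI version, the image flag may be named --container-image-name instead):

```shell
# Create a Linux web app backed by a custom container image rather
# than a built-in runtime. All names below are placeholders.
az webapp create \
  --name "<globally unique app name>" \
  --resource-group "<resource group>" \
  --plan "<Linux App Service plan>" \
  --deployment-container-image-name "myregistry.azurecr.io/myapp:latest"
```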

Exercise 2: Creating a Basic Web App Using the Azure Portal

In this exercise, you will create a basic web app using the Azure portal. This will give you a good overview of all the steps and elements needed for this process:

  1. Navigate to Create a resource in the portal and select Web App or directly use this URL:
  2. Make sure you have the correct subscription and resource group selected (or create a new one). Enter a globally unique web app name and select the Code radio button next to Publish.
  3. Select .NET 6.0 (LTS) for Runtime stack and Linux for the Operating System option and select the appropriate region from the Region dropdown.

Notice that the Linux App Service plan has already been selected for you in the Linux Plan field and that you can’t select the Windows one, despite it being in the same subscription and region (the resource group doesn’t matter).

Although we’re using pre-created App Service plans, notice that you can create a new one at this point. If you were to use the az webapp up (don’t do it right now) CLI command, it would automatically create a new resource group, App Service plan, and web app.

  4. Progress to the Deployment screen and notice the settings available. At the time of writing, the only option available on this screen is GitHub Actions, but you do get more options within the Deployment Center area of the app once created.
  5. Continue through the wizard and create the web app. Once completed, go to the resource.
  6. From the Overview blade, notice that App Service Plan is listed under the Essentials section.
  7. Navigate to the Deployment Center area and view the Continuous Deployment (CI/CD) options that are available in addition to GitHub under the Source dropdown.
  8. Back on the Overview blade, select Browse to open the web app in your browser. You will be presented with the generic starter page:
Figure 2.7: Web app starter page content

  9. Create a Windows web app with the following CLI command:
    az webapp create -n "<globally unique name>" -g "<resource group>" --plan "<name of the Windows App Service plan previously created>"

    Alternatively, use the following PowerShell command:

    New-AzWebApp -Name "<globally unique app name>" -ResourceGroupName "<resource group>" -AppServicePlan "<name of the Windows App Service plan previously created>"

    A location isn’t required here since it will inherit from the App Service plan (App Service plans will only be available within the same subscription and region).

With that, you’ve created some App Service plans and web apps. Now, you can deploy some very basic code to one of your web apps.

  10. If you haven’t already cloned the code repository for this book, do so now from an appropriate directory by using the following command:
    git clone https://github.com/PacktPublishing/Developing-Solutions-for-Microsoft-Azure-AZ-204-Exam-Guide-2nd-Edition

    Feel free to either work from the Chapter02\01-hello-world directory or create a new folder and copy the contents to it.

  11. Change the terminal directory to the correct directory.
  12. Deploy this basic static HTML application to the Windows web app and launch it in the default browser using the following CLI command:
    az webapp up -n "<name of the Windows web app>" --html -b

    Here, you added the -b (the short version of --launch-browser) argument to open the app in the default browser after launching, but you don’t need to. It just saves time because you should browse to it now anyway. Using the --html argument ignores any app detection and just deploys the code as a static HTML app.

  13. Make an arbitrary change to some of the contents of the index.html file and run the same CLI command to update and browse to your updated application.
  14. Optionally, to save on costs and keep things simple, go to the Azure portal and delete the Windows web app and the Windows App Service plan with it.

You will only be using the Linux App Service for the rest of this chapter, so the Windows one is no longer required unless you want to compare the experience between Windows and Linux apps as you go along.

That was about as simple as it gets. We’re not going to run through every different type of deployment (using a Git repository, for example), but feel free to check out the Microsoft documentation on that.

At the moment, anybody with a browser and an internet connection can access your web app if they have the URL. Now, let’s explore how authentication and authorization work with App Service so that you can require users to authenticate before being able to view your new web app.

Authentication and Authorization

Many web frameworks bundle authentication (signing users in) and authorization (granting access to those who should have it) features, which could be used to handle our application’s authentication and authorization. You could even write your own tooling if you’d like the most control. As you may imagine, though, the more you handle yourself, the more maintenance you take on; you need to keep your security solution up to date with the latest patches, for example.

With App Service, you can make use of its built-in authentication and authorization capabilities so that users can sign in and use your app by writing minimal code (or none at all if the out-of-the-box features give you what you need). App Service uses federated identity, which means that a third-party identity provider (Google, for example) manages the user accounts and authentication flow, and App Service gets the resulting token for authorization. This built-in no-code authentication is often referred to as easy auth.
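Easy auth can also be switched on from the CLI; a sketch assuming an existing app registration (the app, group, and client ID values are placeholders):

```shell
# Enable the built-in authentication module and reject unauthenticated
# requests by redirecting them to Microsoft sign-in. <client id> is a
# placeholder for an existing app registration's application ID.
az webapp auth update \
  --name "<app name>" \
  --resource-group "<resource group>" \
  --enabled true \
  --action LoginWithAzureActiveDirectory \
  --aad-client-id "<client id>"
```

The portal exercise later in this chapter walks through the same configuration interactively.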

Rest assured that more detail and context on authentication and authorization will be covered in Chapter 7, Implementing User Authentication and Authorization. This chapter only scratches the surface of the topic in the context of App Service.

Authentication and Authorization Module

Once you enable the authentication and authorization module (which you will shortly), all incoming HTTP requests will pass through it before being handled by your application. The module does several things for you:

  • Authenticates users with the identity provider
  • Validates, stores, and refreshes the tokens
  • Manages the authenticated sessions
  • Injects identity information into the request headers
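Beyond the injected request headers, the module also exposes what it knows about a signed-in session at the built-in /.auth/me endpoint; a hedged sketch, where the app URL and cookie value are placeholders:

```shell
# Query the built-in token store for the current signed-in session.
# The URL and cookie value are placeholders; the AppServiceAuthSession
# cookie comes from an authenticated browser session.
curl --cookie "AppServiceAuthSession=<cookie value>" \
  "https://<app name>.azurewebsites.net/.auth/me"
```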

On Windows App Service apps, the module runs as a native IIS module in the same sandbox as your application code. On Linux and container apps, the module runs in a separate container, isolated from your code. Because the module doesn’t run in-process, there’s no direct integration with specific language frameworks, although the relevant information your app may need is passed through using request headers. This is a good time for the authentication flow to be explained.

Authentication Flow

It’s useful to understand, at least to some extent, what the authentication flow looks like with App Service. The flow is the same regardless of the identity provider, but it differs depending on whether or not you sign in with the identity provider’s SDK. With the provider’s SDK, your code handles the sign-in process (often referred to as client flow); without it, App Service handles the sign-in process (often referred to as server flow). We’ll discuss some of the theory first, before checking it out in practice.

The first thing to note is that the different identity providers will have different sign-in endpoints. The format of those endpoints is as follows: <AppServiceURL>/.auth/login/<provider>. Here are a few examples of some identity providers:

  • Microsoft identity platform: <AppServiceURL>/.auth/login/aad
  • Facebook: <AppServiceURL>/.auth/login/facebook
  • Google: <AppServiceURL>/.auth/login/google

The following diagram illustrates the different steps of the authentication flow, both using and not using the provider SDKs:

Figure 2.8: Authentication flow steps

To summarize this flow: once the identity provider successfully authenticates you, it redirects your browser session to what is called the redirect URI (also referred to as the callback address). When the provider redirects your browser to the redirect URI, it also sends any relevant tokens so that your app can obtain information about your identity.

Once your browser session has been redirected with the token as a payload, App Service will provide you with an authenticated cookie, which your browser will then use in any subsequent requests to the app (we’ll look at this step by step shortly).

You can configure the behavior of App Service when incoming requests aren’t authenticated. If you allow unauthenticated requests, unauthenticated traffic gets deferred to your application, and authenticated traffic gets passed along by App Service with the authentication information in the HTTP headers. If you set App Service to require authentication, any unauthenticated traffic gets rejected without being passed to your application. The rejection can be a redirect to /.auth/login/<provider> for whichever provider you choose. You can also select the rejection response for all requests. For web apps, that will often be a 302 Found redirect.

Exercise 3: Configuring App Service

In this exercise, you will configure App Service to make use of the authentication and authorization module.

Seeing the authentication flow and authorization behavior in action will help cement your understanding of the topic. We’re going to use the Azure portal for this exercise, as that will be easier to illustrate and understand. The exam does not require you to know all the commands to set this up programmatically; you just need to have some understanding of the setup and behavior:

  1. Open your App Service within the Azure portal and navigate to the Authentication blade.
  2. Select Add identity provider and notice the providers available in the Identity provider dropdown.
  3. From the provider list, select Microsoft.
  4. Leave the default settings for the App registration section:
Figure 2.9: Default App Service app registration settings

Notice the settings available under the App Service authentication settings section and how they relate to what has been covered so far:

Figure 2.10: App Service authentication settings

  5. Select Add. You will see that your App Service has a new identity provider configured and that authentication is required to be able to access the app. Notice App (client) ID? You’ll see that referenced again shortly.
  6. Within the Configuration blade, notice there’s a new application setting for the provider authentication secret of the app registration just created.

Note on UI changes

At the time of writing, the Azure App Service UI started changing for some users. Rather than environment variables being configured within the Configuration blade, some users will see a separate Environment variables blade. While writing this book, the UI changed several times. Whenever you see environment variables being configured within the Configuration blade, note that this may have changed to the Environment variables blade, or Microsoft may have further changed the UI.

  7. Open a new InPrivate/Incognito browser session, open the built-in developer tools (F12 opens them in most browsers by default), and navigate to the Network tab. This example uses Microsoft Edge, so references here will relate to the Edge browser:
Figure 2.11: In-browser developer tools Network tab

You’re more than welcome to use tools other than the in-browser developer tools if you wish.

  8. In this new browser session, browse to the URL of your web app (copy it from the Azure portal if you need to). You will be faced with the familiar Microsoft sign-in screen. Within the developer tools, select the entry that just lists the URL of your app, and you’ll see a 302 Found status code:
Figure 2.12: In-browser developer tools showing a 302 Found status code

If you haven’t connected the dots yet, you can review the authentication settings for your app and see that you have configured unauthenticated requests to receive an HTTP 302 Found response and redirect to the identity provider (Microsoft, in this example):

Figure 2.13: Authentication settings summary showing the 302 Found configuration

  9. Select one of the entries with authorize? in the name. Notice that, on the Payload tab, redirect_uri and client_id relate to the redirect URI/callback address and the app (client) ID mentioned previously, telling the provider where to redirect (with the token) once authentication is completed. Also, notice that the response it expects includes an ID token:
Figure 2.14: In-browser developer tools showing the redirect URI and client ID

At this point, you may want to clear the network log when you’re about to finish the sign-in process to start from a clean log when you sign in. You don’t have to, but it may make it easier to select entries when there are fewer of them.

  10. Sign in with your account. Note that you have to consent to the permissions the app registration has configured, so accept them to proceed:
Figure 2.15: Permissions requested by the app registration

  11. Select the callback entry and, from the Payload tab, copy the value of id_token (copy only the value, not the id_token key or any other properties; the payload is easier to read in the parsed view than in the source view). The value should begin with ey. The format of this token is JSON Web Token (JWT).
  12. With the id_token value copied, open a new tab and browse to, then paste the id_token value you just copied.

    On both the Decoded token and Claims tabs, you can see various properties related to your account, as well as any app-specific roles your account has been assigned (none have been configured in this example).

  13. Go to the Cookies tab to see the AppServiceAuthSession cookie that was provided in the server’s response. Going to the Cookies tab for all subsequent network log entries will show that same authenticated cookie as a request cookie, which is in line with the authentication flow previously illustrated. Feel free to test it out; refresh the page and confirm that the cookie is indeed being used in all requests.
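If you'd rather inspect a token offline, the payload (second segment) of a JWT is just base64url-encoded JSON, so it can be decoded with standard tools; the token below is a fabricated, unsigned example, not a real App Service token:

```shell
# Decode the payload segment of a JWT. This does NOT verify the
# signature -- it's for inspection only. The token is a made-up example.
token="eyJhbGciOiJub25lIn0.eyJuYW1lIjoiUGF1bCJ9."
# Take the second dot-separated segment and map base64url to base64
payload="$(printf '%s' "$token" | cut -d '.' -f 2 | tr '_-' '/+')"
# Restore the padding that base64url encoding strips
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
printf '%s' "$payload" | base64 -d   # prints {"name":"Paul"}
```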

Going into that extra bit of detail and showing the authentication flow in action should help your understanding more than simply telling you the steps of the authentication flow. We’ll now move on to the final topic of our App Service exploration by briefly looking at some of the available networking features.

Networking Features

Unless you’re using an ASE, which is the network-isolated SKU mentioned earlier in this chapter, App Service deployments exist in a multitenant network. Because of this, you can’t connect your App Service directly to your corporate or personal networks. Instead, there are networking features available to control inbound and outbound traffic and allow your App Service to connect to your network.

Outbound Flows

First, let’s talk about outbound communication—that is, controlling communication coming from your application.

If you want your application to be able to make outbound calls to a specific TCP endpoint in a private network (such as your on-premises network, for example), you can leverage the Hybrid Connection feature. At a very high level, you would install a relay agent called Hybrid Connection Manager (HCM) on a Windows Server 2012 or newer machine within the network to which you want your app to connect.

HCM is a lightweight software application that will communicate outbound from your network to Azure over port 443. Both App Service and HCM make outbound calls to a relay in Azure, providing your app with a TCP tunnel to a fixed host and port on the other side of the HCM. When a DNS request from your app matches that of a configured Hybrid Connection endpoint, the outbound TCP traffic is redirected through the hybrid connection. This is one way to allow your Azure-hosted app to make outbound calls to a server or API within your on-premises network, without having to open a bunch of inbound ports in your firewall.

The other networking feature for outbound traffic is VNet integration. VNet integration allows your app to securely make outbound calls to resources in or through your Azure VNet, but it doesn’t grant inbound access. If you integrate with a VNet in the same region, you need a dedicated subnet in the VNet you’re integrating with. If you connect to VNets in other regions (or a classic VNet within the same region), you need a VNet gateway in the target VNet.

VNet integration can be useful when your app needs to make outbound calls to resources within a VNet and public access to those resources is blocked.
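Regional VNet integration can be added in a single command; a sketch with placeholder names, assuming the subnet is dedicated to App Service:

```shell
# Attach an app to a subnet in the same region so it can make outbound
# calls into the VNet. All names are placeholders.
az webapp vnet-integration add \
  --name "<app name>" \
  --resource-group "<resource group>" \
  --vnet "<vnet name>" \
  --subnet "<subnet name>"
```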

Inbound Flows

There are several features for handling inbound traffic, just as there are for outbound. If you configure your app with SSL, you can make use of the app-assigned address feature, which allows you to support any IP-based SSL needs you may have, as well as set up a dedicated IP address for your app that isn’t shared (if you delete the SSL binding, a new inbound IP address is assigned).

Access restrictions allow you to filter inbound requests using a list of allow and deny rules, similar to how you would with a network security group (NSG). Finally, there is the private endpoint feature, which allows private and secure inbound connections to your app via Azure Private Link. This feature uses a private IP address from your VNet, which effectively brings the app into your VNet. This is popular when you only want inbound traffic to come from within your VNet and not from any public source.
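An access restriction rule can likewise be added from the CLI; a sketch where the app and group names are placeholders and 203.0.113.0/24 is a documentation address range:

```shell
# Allow inbound traffic only from one address range; once at least one
# Allow rule exists, everything else is implicitly denied. All values
# are placeholders.
az webapp config access-restriction add \
  --name "<app name>" \
  --resource-group "<resource group>" \
  --rule-name "office-only" \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 100
```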

There’s so much more to Azure networking, but these are the headlines specific to Azure App Service. As you may imagine, there’s a lot more to learn about the features we’ve just discussed here. A link to more information on App Service networking can be found in the Further Reading section of this chapter, should you wish to dig deeper.

This ends our exploration of Azure App Service. Armed with this understanding, the remainder of this chapter should be a breeze in comparison. Now that we’ve gone into some depth regarding web apps, let’s look at some additional configuration options, as well as making use of logging with App Service.

Configuring App Settings and Logging

It’s important to understand how to configure application settings and how your app makes use of them, which you will build on in the last section of this chapter. There are also various types of logging available with App Service, some of which are only available on Windows, and logs can be stored and generated in different ways. So, let’s take a look.

Application Settings

In the previous authentication exercise, you navigated to the Configuration blade of your App Service to view an application configuration setting. You will get some more context on this now.

In App Service, application settings are exposed as environment variables to your application at runtime. If you’re familiar with ASP.NET or ASP.NET Core and the appsettings.json or web.config file, these work in a similar way, but the App Service settings override variables defined in the appsettings.json and web.config files. You could keep development settings in these files for connecting to local resources, such as a local MySQL database, while storing production settings safely in App Service. App settings are always encrypted at rest and transmitted over an encrypted channel.
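This override behavior can be pictured with a short sketch (written in Python purely for brevity — the mechanism itself is language-agnostic): file-based settings act as defaults, and the environment variables that App Service injects take precedence.

```python
import os

def resolve_setting(name, file_settings):
    """Return the effective value of a setting: environment variables
    (how App Service exposes app settings) override file-based defaults."""
    return os.environ.get(name, file_settings.get(name))

# Local defaults, as you might keep them in appsettings.json
file_settings = {"CURRENT_SLOT": "My computer"}

# Without the environment variable set, the file value wins
print(resolve_setting("CURRENT_SLOT", file_settings))  # My computer

# Simulate App Service injecting the setting as an environment variable
os.environ["CURRENT_SLOT"] = "Production"
print(resolve_setting("CURRENT_SLOT", file_settings))  # Production
```

The same precedence is what you will observe later in this exercise when the deployed app displays the App Service values instead of the appsettings.json ones.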

Exercise 4: Configuring Applications in App Service

For Linux apps and custom containers, App Service uses the --env flag to pass the application settings as environment variables to that container. In this exercise, you will check these settings out:

  1. Within the Azure portal, find and open your App Service app and navigate to the Configuration blade once more. Here, you will see the existing application setting previously mentioned.
  2. Click on the Advanced edit button above the settings. This will bring up a JSON representation of the current application settings. This is where you can make additions or amendments in bulk, rather than making changes one by one.
  3. Add two new settings (don’t forget to add a comma after the existing entry, before the closing square bracket): one named ORIGINAL_SLOT with the value Production, and the other named CURRENT_SLOT, also with the value Production but with slotSetting set to true:
        {
            "name": "ORIGINAL_SLOT",
            "value": "Production",
            "slotSetting": false
        },
        {
            "name": "CURRENT_SLOT",
            "value": "Production",
            "slotSetting": true
        }

    Don’t worry about what slotSetting is for now; we’ll discuss this soon.

  4. Click OK and then Save at the top of the page, followed by Continue when prompted.
  5. Check out the General settings tab and then the Path mappings tab to see what configuration settings are available.

    If you were in this same area with a Windows App Service app, you would also have a Default documents tab, which would allow you to define a prioritized list of documents to display when navigating to the root URL for the website. The first file in the list that matches is used to display content to the user.

  6. Browse to the URL of the web app to confirm nothing has changed. The app now has new app settings, but you’re not doing anything with them yet.
  7. From the previously downloaded repository for this book, open the 02-appsettings-logging folder and then open the Pages\Index.cshtml file:
Figure 2.16: The Index.cshtml file within VS Code

You can see within this file that this app outputs the values for configuration settings with the names ORIGINAL_SLOT and CURRENT_SLOT:

    <h3>Original slot: @Configuration["ORIGINAL_SLOT"].</h3>
    <h3>Current slot: @Configuration["CURRENT_SLOT"].</h3>
  8. Open a terminal session from the 02-appsettings-logging directory and run the app with the following command:
    dotnet run
  9. Open a browser window and navigate to the URL of the local app: http://localhost:5000.

    You’ll notice that only Original slot: . and Current slot: . are displayed. This is because the settings don’t exist on the local machine yet.

  10. Open the appsettings.json file and add the relevant settings and whatever values you’d like for them, as per the following example:
      "ORIGINAL_SLOT": "My computer",
      "CURRENT_SLOT": "Also my computer"
  11. Save the file and run the app again with the following command:
    dotnet run

If you browse to the app again, you’ll see the values being displayed. So, what you’ve just done is configure some default settings that your web app can use. As mentioned previously, App Service app settings will override the values defined in an appsettings.json file. Deploy this app to App Service to see this in action.

  12. Publish your files, ready for deployment, by running the following command:
    dotnet publish -c Release -o out
  13. If you have the Azure App Service VS Code extension installed (listed in the Technical Requirements section of this chapter), right-click on the newly created out folder and select Deploy to Web App.
  14. When prompted, select your App Service and confirm by clicking on the Deploy button in the pop-up window that appears:
Figure 2.17: App Service deployment confirmation window

  15. Once deployment has completed, browse to your App Service and you’ll see that the settings configured earlier have indeed taken priority over those configured in the appsettings.json file and are now being displayed.

One final configuration you should be aware of is cross-origin resource sharing (CORS), which App Service supports for RESTful APIs. At a high level, browsers that implement CORS prevent web pages from making requests for restricted resources to a domain other than the one that served the web page. By default, cross-domain requests (AJAX requests, for example) are forbidden by the same-origin policy, which prevents malicious code from accessing sensitive data on another site. There may be times when you want sites from other domains to access your app (such as when your App Service hosts an API). In this case, you can configure CORS to allow requests from one or more (or all) domains.

CORS can be configured from the CORS blade under the API section of your App Service.
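At its core, the server side of the CORS handshake is a comparison of the request’s Origin header against the allowed list. The following is a hypothetical sketch of that check (in Python for brevity; the function and names are illustrative, not an App Service API):

```python
def cors_allow_origin(request_origin, allowed_origins):
    """Decide which Access-Control-Allow-Origin header value (if any)
    to return. '*' allows any origin; otherwise the request's origin
    must match an entry in the allowed list exactly."""
    if "*" in allowed_origins:
        return "*"
    if request_origin in allowed_origins:
        return request_origin
    return None  # no header returned -> the browser blocks the response

allowed = ["https://www.contoso.com"]
print(cors_allow_origin("https://www.contoso.com", allowed))  # https://www.contoso.com
print(cors_allow_origin("https://evil.example", allowed))     # None
```

Note that the block happens in the browser: the server still processes the request, but without the response header, the browser refuses to hand the result to the calling page.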

You will explore the App Configuration feature in more detail in Chapter 8, Implementing Secure Azure Solutions. For now, you can move on to the topic of logging.


Logging

There are several types of logging available within App Service. Some are Windows-specific, while others are available for both Windows and Linux:

Windows Only

  • Detailed error logging: When an HTTP error with a status code of 400 or greater occurs, App Service can store the .htm error pages, which would otherwise be sent to the client browser, within the App Service file system.
  • Failed request tracing: Detailed tracing information on failed requests (including a trace of the IIS components used to process the request) is stored within the App Service file system.
  • Web server logging: Raw HTTP request data is stored in the W3C extended log file format within the App Service file system or Azure Storage blobs.

Windows and Linux

  • Application logging: Log messages generated by either the web framework being used or your application code directly (you’ll see this shortly) are stored within either the App Service file system (the only option available for Linux apps) or Azure Storage blobs (only available for Windows apps).
  • Deployment logging: Upon publishing content to an app, deployment logging occurs automatically with no configurable settings, which helps you determine why a deployment failed. These logs are stored within the App Service file system.

For logs stored within the App Service file system, you can access them via their direct URLs. For Windows apps, the URL for the diagnostic dump is https://<app-service>.scm.azurewebsites.net/api/dump. For Linux/container apps, the URL is https://<app-service>.scm.azurewebsites.net/api/logs/docker/zip. Within the portal, you can use Advanced Tools to access further information and the links just mentioned.
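Since these diagnostic URLs follow a predictable pattern, a small helper can build them from the app name. This is purely a convenience sketch (the helper is hypothetical, not part of any SDK):

```python
def kudu_log_url(app_name, is_linux):
    """Build the Kudu (SCM) diagnostic log download URL for an
    App Service app, based on its OS/container type."""
    base = f"https://{app_name}.scm.azurewebsites.net/api"
    return f"{base}/logs/docker/zip" if is_linux else f"{base}/dump"

print(kudu_log_url("myapp", is_linux=False))  # https://myapp.scm.azurewebsites.net/api/dump
print(kudu_log_url("myapp", is_linux=True))   # https://myapp.scm.azurewebsites.net/api/logs/docker/zip
```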

Exercise 5: Implementing and Observing Application Logging

In the 02-appsettings-logging app you just deployed, some code already existed that creates log entries. In this exercise, you will see this in action in App Service:

  1. From the 02-appsettings-logging folder, open the Pages\Index.cshtml.cs file:
Figure 2.18: The Index.cshtml.cs file within VS Code

Within that file, you’ll see the following basic code that simply writes an information log message:

_logger.LogInformation("Hello, Packt! I'm logging for the AZ-204!");
  2. Run the app again locally with the following command, then browse to the app, and you’ll see the log entry in the terminal window:
    dotnet run
Figure 2.19: Terminal output showing information logging from the web app

Now that we’ve confirmed this works locally, we’ll head to the Azure portal because it’s the easiest way to show options that are different between Linux and Windows apps.

  3. Within the Azure portal, open App Service and click on the App Service logs blade.
  4. Turn Application logging on by setting the toggle to File System and clicking Save.

    To illustrate the differences between Linux and Windows apps, this is what you’d see if you went to the same location from a Windows app:

Figure 2.20: App Service logging options for a Windows App Service

  5. Still within the Azure portal, open the Log stream blade. Then, in another browser tab, navigate to the URL of the App Service. You should see the new application log showing something similar to the following:
    2023-06-29T19:44:01.262767018Z info: _02_appsettings_logging.Pages.IndexModel[0]
    2023-06-29T19:44:01.262834918Z       Hello, Packt! I'm logging for the AZ-204!

Now that we have a good understanding of some key concepts of App Service and have run through some detailed topics and enabled logging, we’ll look at a topic that was very briefly touched on in Chapter 1, Azure and Cloud Fundamentals: scaling.

Scaling App Service Apps

In Chapter 1, Azure and Cloud Fundamentals, you saw that the cloud offers elasticity so that it can scale and use as much capacity as you need, when you need it. The chapter specifically touched on scaling up (that is, vertical scaling) and scaling out (that is, horizontal scaling). In Azure, scaling is managed using a set of rules based on metrics such as CPU usage, memory demand, and queue length. This is managed in the portal, as shown in the following exercise.
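Conceptually, each autoscale rule pairs a metric with a threshold and adjusts the instance count within configured limits. The following sketch (in Python, using the 20%/15% CPU thresholds you’ll configure in the next exercise; the single-metric simplification is mine, not how the real autoscale engine is implemented) captures the core decision:

```python
def evaluate_autoscale(cpu_percent, current_count, *,
                       scale_out_above=20, scale_in_at_or_below=15,
                       min_count=1, max_count=2):
    """Return the new instance count for one evaluation cycle:
    scale out above the upper threshold, scale in at or below the
    lower one, and always stay within the configured instance limits."""
    if cpu_percent > scale_out_above:
        return min(current_count + 1, max_count)
    if cpu_percent <= scale_in_at_or_below:
        return max(current_count - 1, min_count)
    return current_count  # between the thresholds: no change

print(evaluate_autoscale(35, 1))  # 2 -> scale out
print(evaluate_autoscale(10, 2))  # 1 -> scale in
print(evaluate_autoscale(18, 1))  # 1 -> no change
```

Notice the gap between the two thresholds: as discussed later in this section, that margin is what keeps autoscale from oscillating between states.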

Exercise 6: Configuring Autoscale in Azure App Service

In this exercise, you will explore autoscale settings for Azure App Service or an App Service plan and learn how to adjust resource allocation dynamically based on usage metrics. Let’s jump into the portal once more and take a closer look:

  1. From within the Azure portal, open either your App Service or the App Service plan and open the Scale up blade.

    If you’re in App Service, notice that it has (App Service plan) appended to the blade label to point out that it’s the App Service plan controlling resources, as discussed earlier in this chapter. Don’t change anything here; just notice that these options increase the total resources available. They don’t increase instances. A restart of the app would be required to scale vertically.

  2. Open the Scale out blade and notice that this is currently set to a manually set instance count. While this can be useful, what you want to investigate here is autoscale, so select the Rules Based option, followed by Manage rules based scaling:
Figure 2.21: Options to configure rule-based scaling

Azure portal UI changes

The Azure portal interface is updated all the time, so you may see some slight differences from what you see in screenshots throughout this book. At the time of writing, the options just mentioned were new, so they may have changed by the time you read this.

  3. Select the Custom autoscale and Scale based on a metric option.
  4. Set Instance limits to a minimum of 1 and a maximum of 2, which keeps costs low while still allowing autoscale to be demonstrated. You’re welcome to change the values, but be aware of the cost. Also, set Default to 1.

    The Default value will be used to determine the instance count should there be any problems with autoscale reading the resource metrics.

  5. Click on the Add a rule link. Here, you can define the metric rules that control when the instance count should be increased or decreased, which is extremely valuable when the workload may vary unpredictably.
  6. Check out the options available but leave the settings as default for now. The graph on this screen helps identify when the rule would have been met based on the options you select. For example, if you change your metric threshold to be greater than 20% for CPU percentage, the graph will show that this rule would have been matched three times over the last 10 minutes (when the lines rise above the dashed line):
Figure 2.22: Custom metric condition visual

  7. Set the threshold to 20 and click on Add.
  8. With this rule added, it’s usually advisable to add a rule to scale back down. So, click on Add a rule again and repeat this process, but this time, use a Less than or equal to operator, change the threshold to 15%, and select Decrease count by for the Operation setting. You should now have a scale-out rule increasing the instance count and a scale-in rule decreasing the instance count.
  9. Scroll to the bottom of the page and click on Add a scale condition. Notice that this time, you can set up date and time periods for the rule to apply, either scaling to a specific count during that period or based on a metric, as you did previously. The first condition you configured acts as a default, only executing if none of the other conditions are matched.
  10. Feel free to add and customize conditions and rules until you’re comfortable. Click either Save or Discard in the top-left corner of the screen. You won’t cause autoscale to trigger in this example.

You can view any autoscale actions through the Run history tab or via the App Service Activity Log.

The following are a few quick points on scaling out when using this outside of self-learning:

  • Consider the default instance count that will be used when metrics are not available for any reason.
  • Make sure the maximum and minimum instance values are different and have a margin between them to ensure autoscaling can happen when you need it.
  • Don’t forget to set scale-in rules as well as scale-out. Most of the time, you won’t want to scale out without being able to scale back in.
  • Before scaling in, autoscale will estimate what the final state would be after it has scaled in. If the thresholds are too close to each other, autoscale may estimate that it would have to scale back out immediately after scaling in, and that would likely get stuck in a loop (this is known as “flapping”), so it will decide not to scale in at all to avoid this. Ensuring there’s a margin between metrics can avoid this behavior.
  • A scale-out rule runs if any of the rules are met, whereas a scale-in rule runs only if all rules are met.
  • When multiple scale-out rules are being evaluated, autoscale will evaluate the new capacity of each rule that gets triggered and choose the scale action that results in the greatest capacity, to ensure service availability. For example, if you have a rule that would cause the instance count to scale to five instances and another that would cause the instance count to scale to three instances, when both rules are evaluated to be true, the result would be scaling to five instances, as the higher instance count would result in the highest availability.
  • When there are no scale-out rules and only scale-in rules (providing all the rules have been triggered), autoscale chooses the scale action resulting in the greatest capacity to ensure service availability.
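The flapping check described above can be made concrete with a little arithmetic: before removing an instance, autoscale projects what the per-instance metric would be afterward, and it skips the scale-in if that projection would immediately trip the scale-out rule. A rough Python sketch, assuming load redistributes evenly across instances (a simplification):

```python
def should_scale_in(avg_metric, count, scale_out_threshold, min_count):
    """Return True only if removing one instance would NOT immediately
    push the projected average metric back over the scale-out threshold."""
    if count <= min_count:
        return False
    # Same total load spread over one fewer instance
    projected = avg_metric * count / (count - 1)
    return projected <= scale_out_threshold

# Thresholds too close together: scaling in from 2 instances at 14% CPU
# projects 28% per instance, re-triggering a 20% scale-out rule.
print(should_scale_in(14, count=2, scale_out_threshold=20, min_count=1))  # False
# With a wider margin, the scale-in can proceed safely.
print(should_scale_in(9, count=2, scale_out_threshold=20, min_count=1))   # True
```

This is why the earlier advice about leaving a margin between the scale-out and scale-in thresholds matters: without it, the projection almost always fails and the scale-in never happens.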

One important point to remember is that, since scaling rules are created on the App Service plan rather than App Service (because the App Service plan is responsible for the resources), if the App Service plan increases the instances, all of your App Services in that plan will run across that many instances, not just the App Service that’s getting all the traffic. App Service uses a load balancer to load balance traffic across all instances for all of your App Services on the App Service plan.

So far, any impactful changes we’ve pushed to App Service would cause the service to restart, which would lead to downtime. This is not desirable in most production environments. App Service has a powerful feature called deployment slots to allow you to test changes before they hit production, control how much traffic gets routed to each deployment slot, promote those changes to production with no downtime, and roll back changes that were promoted to production if needed. Let’s wrap up this chapter by learning about deployment slots.

Leveraging Deployment Slots

The first thing to know about deployment slots is that they are live apps with hostnames, content, and configuration settings. In a common modern development workflow, you’d deploy code through whatever means to a non-production deployment slot (often called staging, although this could be any name and there could be multiple slots between that and production) to test and validate. From there, you may start increasing the percentage of traffic that gets routed to the staging slot when people access the production URL, or you may just swap the slots. Whatever was in production then goes to staging and whatever was in staging goes to production, with no downtime.

Because it is just a swap, if something unexpected does happen as a result, you can swap the slots back, and everything will return to before the swap occurred. Several actions take place during a swap, including the routing rules changing once all the slots have warmed up. There’s a documentation link in the Further Reading section of this chapter should you wish to explore this further. Essentially, there’s a load balancer involved that routes traffic to one slot or the other; when you swap the slots, the load balancer will route production traffic to the previously non-production app, and vice versa.

You read about application configuration settings earlier in this chapter, but we didn’t address what slotSetting meant. With each deployment slot being its own app, they can have their own application configuration as well. If a setting isn’t configured as a deployment slot setting, that setting will follow the app when it gets swapped. If the setting is configured as a deployment slot setting, the setting will always be applied to whichever app is in that specific slot. This is helpful when there are environment-specific settings. For instance, perhaps you have some connection strings that are only for production, and you want whichever app is in the production deployment slot to always use that connection string, regardless of swapping that might occur.
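These swap semantics can be modelled in a few lines: slot (“sticky”) settings stay with the slot, while everything else travels with the app. The following sketch is purely illustrative (the function and parameter names are hypothetical, not an Azure API):

```python
def swap_settings(production, staging, sticky_keys):
    """Swap two slots' app settings: sticky (slot) settings stay in
    their slot, all other settings follow the app to its new slot."""
    def merge(incoming_app, staying_slot):
        result = {k: v for k, v in incoming_app.items() if k not in sticky_keys}
        result.update({k: v for k, v in staying_slot.items() if k in sticky_keys})
        return result
    return merge(staging, production), merge(production, staging)

prod = {"ORIGINAL_SLOT": "Production", "CURRENT_SLOT": "Production"}
stag = {"ORIGINAL_SLOT": "Staging", "CURRENT_SLOT": "Staging"}

new_prod, new_stag = swap_settings(prod, stag, sticky_keys={"CURRENT_SLOT"})
print(new_prod)  # {'ORIGINAL_SLOT': 'Staging', 'CURRENT_SLOT': 'Production'}
print(new_stag)  # {'ORIGINAL_SLOT': 'Production', 'CURRENT_SLOT': 'Staging'}
```

This mirrors what you will observe in Exercise 7: after the swap, ORIGINAL_SLOT follows the app, while CURRENT_SLOT (the slot setting) stays with the slot.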

Different App Service plan tiers have a different number of deployment slots available, so that could be a consideration when deciding on which tier to select or scale to. As with some other settings we’ve discussed, Windows apps have an additional setting that’s not available with Linux/container apps: auto-swap.

Under the Configuration blade of a Windows App Service, on the General settings tab, you’ll see the option to enable auto-swap when code is pushed to that slot. For example, if you enable this setting (again, only available on Windows App Services) on the staging slot, then each time you deploy code to that slot, once everything is ready, App Service will automatically swap that slot with the slot you specify in the settings. Don’t be disheartened if you want something like this but are using Linux/container apps; there are plenty of ways to achieve a similar experience programmatically, using CI/CD pipelines, for example.

Exercise 7: Mastering Deployment Slots

In this exercise, you will look at the advanced features of Azure App Service by creating and managing deployment slots:

  1. From the Azure portal, open the Configuration blade within your App Service and notice CURRENT_SLOT has been configured to be a slot setting.

    This means that regardless of any deployment slot swapping that might occur, this setting will not follow the apps. Whatever app is in the production slot will get this production value. ORIGINAL_SLOT, however, isn’t a slot setting, so it will follow the app through a swap. You’ll see this momentarily.

  2. Go to the Deployment slots blade and click Add Slot. Enter staging for the name of the deployment slot and choose to clone the settings from the default/production slot (indicated by having just the App Service name), which will copy all of the application settings to the staging slot.

    You could have also used the following CLI command:

    az webapp deployment slot create -g "<resource group>" -n "<app-service>" -s "staging" --configuration-source "<app-service>"

    Alternatively, you could have used the following PowerShell command:

    New-AzWebAppSlot -ResourceGroupName "<resource group>" -Name "<app-service>" -Slot "staging"
  3. Select the staging deployment slot and, from within the Configuration blade, change the value of both CURRENT_SLOT and ORIGINAL_SLOT to Staging rather than Production. Save and continue.

    Conceptually, there are some different configurations between the staging and production slots, which you could have also replicated with different code.

  4. From within VS Code, on the assumption that the out folder still remains from the previous exercise when you deployed the code to App Service, open the command palette either by going to View and then Command Palette or using the relevant shortcuts. In Windows, this is Ctrl + Shift + P by default.
  5. Start typing and then select Azure App Service: Deploy to Slot….
  6. When prompted, select your App Service resource, the new staging slot, browse to and select the out folder previously created, and confirm the deployment when the pop-up window appears requesting confirmation.
  7. Once the deployment completes, browse to the production slot URL for App Service (that is, https://<app-service>.azurewebsites.net) and confirm that the production text is there. Now, do the same with the staging URL (that is, https://<app-service>-staging.azurewebsites.net) and confirm that the staging text is there. Once confirmed, navigate back to the main/production URL so that you’re ready for the next step.

    This shows how you could test changes in the staging slot/app before pushing it to production via the staging URL. The documentation also explains how you can use a query string in a link to App Service, which users could use to opt into the staging/beta/preview app experience. Check out the Further Reading section of this chapter for the relevant link.

  8. From the main App Service (not the staging app) within the Azure portal, open the Deployment slots blade and notice that you can change the percentage of traffic that flows to each slot.

    This allows you to control the exposure of the staging slot before making the switch. Rather than using that right now, just click on Swap. Note that you can preview the changes that will be made, which will be the text changing for the ORIGINAL_SLOT application setting. Confirm this by clicking on Swap.

  9. Go back to the tab/window with the production site showing and periodically refresh the page. There should be no downtime. At some point, the Original slot text will change from Production to Staging, showing that the app that was originally in the staging slot was swapped with production and your changes are now live in the production app:
Figure 2.23: Text showing that the previous staging app is now the production app

  10. When you’re done with this exercise, feel free to clean up your resources by deleting the resource group containing all the resources created in this chapter.

If you wanted to, you could revert the changes by swapping the slots again.

One final point to note is that although the default behavior is for all the slots of App Service to share the same App Service plan, you can assign a different App Service plan to each slot individually.

With that final point, you have come to the end of our exploration of App Service. Many of the concepts covered here will help with the topics discussed throughout this book, as later chapters will dive deeper into them or reference them. If you understand the concepts discussed in this chapter, you’ll already be ahead of the majority of people who pass the exam.


Summary

This chapter introduced Azure App Service by looking at some fundamentals, such as App Service plans, as well as some basics of App Service web apps. You then delved into authentication and authorization, stepped through the authentication flow, and saw a summary of some networking features. Once the app was up and running, you looked in some detail into configuration options and how application settings can be used by the application. You learned about the different types of built-in logging available with App Service and went through an exercise to enable the application code to log messages that App Service could process. Then, you learned how to automatically scale your App Service based on conditions and rules to make use of the elasticity that the cloud offers. Finally, you looked at how to make use of deployment slots to avoid downtime during deployments, control how changes are rolled out, and roll back changes if required.

The topics and exercises covered in this chapter should help you understand the concepts that will be discussed later in this book. If you understand the fundamental concepts, you will be much better prepared for the exam, which may contain some abstract questions that require this kind of understanding, rather than just example questions.

In the next chapter, you will look at containerized solutions. You will start with an introduction to containers and Docker, before exploring the container-related services that you need to be aware of for this exam.

Further Reading

To learn more about the topics that were covered in this chapter, take a look at the following resources:

Exam Readiness Drill – Chapter Review Questions

Apart from a solid understanding of key concepts, being able to think quickly under time pressure is a skill that will help you ace your certification exam. That is why working on these skills early on in your learning journey is key.

Chapter review questions are designed to improve your test-taking skills progressively with each chapter you learn and review your understanding of key concepts in the chapter at the same time. You’ll find these at the end of each chapter.

How to Access these Resources

To learn how to access these resources, head over to the chapter titled Chapter 14, Accessing the Online Practice Resources.

To open the Chapter Review Questions for this chapter, perform the following steps:

  1. Click the link –

    Alternatively, you can scan the following QR code (Figure 2.24):

Figure 2.24 – QR code that opens Chapter Review Questions for logged-in users

  2. Once you log in, you’ll see a page similar to the one shown in Figure 2.25:
Figure 2.25 – Chapter Review Questions for Chapter 2

  3. Once you’re ready, start the practice drills that follow, re-attempting the quiz multiple times.

Exam Readiness Drill

For the first three attempts, don’t worry about the time limit.


The first time, aim for at least 40%. Look at the answers you got wrong and read the relevant sections in the chapter again to fix your learning gaps.


The second time, aim for at least 60%. Look at the answers you got wrong and read the relevant sections in the chapter again to fix any remaining learning gaps.


The third time, aim for at least 75%. Once you score 75% or more, you start working on your timing.


You may take more than three attempts to reach 75%. That’s okay. Just review the relevant sections in the chapter till you get there.

Working On Timing

Target: Your aim is to keep the score the same while trying to answer these questions as quickly as possible. Here’s an example of what your next attempts should look like:



Attempt      Time Taken
Attempt 5    21 mins 30 seconds
Attempt 6    18 mins 34 seconds
Attempt 7    14 mins 44 seconds

Table 2.1 – Sample timing practice drills on the online platform


The time limits shown in the above table are just examples. Set your own time limits with each attempt based on the time limit of the quiz on the website.

With each new attempt, your score should stay above 75% while your time taken to complete should decrease. Repeat as many attempts as you want until you feel confident dealing with the time pressure.

Left arrow icon Right arrow icon
Download code icon Download Code

Key benefits

  • Written by Microsoft technical trainers, to help you explore exam topics in a structured way
  • Understand the "why", and not just "how" behind design and solution decisions
  • Learn Azure development principles with online exam preparation materials
  • Purchase of this book unlocks access to web-based exam prep resources including mock exams, flashcards, exam tips, and the eBook pdf


Get ready to delve into Microsoft Azure and build efficient cloud-based solutions with this updated second edition. Authored by seasoned Microsoft trainers, Paul Ivey and Alex Ivanov, this book offers a structured approach to mastering the AZ-204 exam topics while focusing on the intricacies of Azure development. You’ll familiarize yourself with cloud fundamentals, understanding the core concepts of Azure and various cloud models. Next, you’ll gain insights into Azure App Service web apps, containers and container-related services in Azure, Azure Functions, and solutions using Cosmos DB and Azure Blob Storage. Later, you'll learn how to secure your cloud solutions effectively as well as how to implement message- and event-based solutions and caching. You’ll also explore how to monitor and troubleshoot your solutions effectively. To build on your skills, you’ll get hands-on with monitoring, troubleshooting, and optimizing Azure applications, ensuring peak performance and reliability. Moving ahead, you’ll be able to connect seamlessly to third-party services, harnessing the power of API management, event-based solutions, and message-based solutions. By the end of this MS Azure book, you'll not only be well-prepared to pass the AZ-204 exam but also be equipped with practical skills to excel in Azure development projects.

What you will learn

Identify cloud models and services in Azure Develop secure Azure web apps and host containerized solutions in Azure Implement serverless solutions with Azure Functions Utilize Cosmos DB for scalable data storage Optimize Azure Blob storage for efficiency Securely store secrets and configuration settings centrally Ensure web application security with Microsoft Entra ID authentication Monitor and troubleshoot Azure solutions

Product Details

Country selected

Publication date : May 16, 2024
Length 428 pages
Edition : 2nd Edition
Language : English
ISBN-13 : 9781835085295
Languages :


Table of Contents

Preface
Chapter 1: Azure and Cloud Fundamentals
Chapter 2: Implementing Azure App Service Web Apps
Chapter 3: Implementing Containerized Solutions
Chapter 4: Implementing Azure Functions
Chapter 5: Developing Solutions That Use Cosmos DB Storage
Chapter 6: Developing Solutions That Use Azure Blob Storage
Chapter 7: Implementing User Authentication and Authorization
Chapter 8: Implementing Secure Azure Solutions
Chapter 9: Integrating Caching and Content Delivery within Solutions
Chapter 10: Monitoring and Troubleshooting Solutions by Using Application Insights
Chapter 11: Implementing API Management
Chapter 12: Developing Event-Based Solutions
Chapter 13: Developing Message-Based Solutions
Chapter 14: Accessing the Online Practice Resources
Other Books You May Enjoy



How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or bundle (print + eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the print book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using credit card, debit card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem using or installing Adobe Reader, contact Adobe directly.
  • To view the errata for the book, visit the page for the title you own.
  • To view your account details or to download a new copy of the book, log in to your account.
  • If a problem is not resolved, contact us directly.
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. This may well change with future trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-and-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print editions
  • They save resources and space
What is an eBook? Chevron down icon Chevron up icon

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.