In this chapter, we will cover the following topics:
Setting up solutions and projects to work with Cloud Services
Debugging a Cloud Service locally with either Emulator or Emulator Express
Publishing a Cloud Service with options from Visual Studio
Debugging a Cloud Service remotely with Visual Studio
Configuring the service model for a Cloud Service
Providing a custom domain name for a Cloud Service
Implementing HTTPS in a web role
Using local storage in an instance
Hosting multiple web sites in a web role
Using startup tasks in a Cloud Service role
Handling changes to the configuration and topology of a Cloud Service
Managing upgrades and changes to a Cloud Service
Configuring diagnostics in Cloud Services
Cloud computing services are typically classified as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS). In the IaaS model, the core service provided is a Virtual Machine (VM) with a guest OS. The customer is responsible for everything about the guest OS, including hardening it and adding any required software. In the PaaS model, the core service provided is a VM with a hardened guest OS and an application-hosting environment. The customer is responsible only for the service injected into this environment. In the SaaS model, a service is exposed over the Internet, and the customer merely has to access it.
Microsoft Azure Cloud Services is the paradigm of the Platform-as-a-Service (PaaS) model of cloud computing. A Cloud Service can be developed and deployed to any of the Microsoft Azure datacenters (regions) located across the world. A service hosted in Microsoft Azure can leverage the high scalability and reduced administration benefits of the PaaS model.
In later chapters, we will see how Azure also offers an IaaS alternative, Microsoft Azure Virtual Machines, which gives customers the ability to deploy customized solutions in fully customized environments. Either way, the benefits of using PaaS over IaaS are strongly evident. With PaaS, we can reduce the governance of the whole system, focusing only on technology and processes instead of managing the IT infrastructure, as we did earlier. PaaS also encourages the use of best practices throughout the development process, pushing us toward the right decisions in terms of design patterns and architectural choices.
From an IT architect's perspective, using PaaS is similar to trusting a black-box model. We know that the input is our code, which might be written with some environmental constraints or specific features, and the output is the running application on top of instances, virtual machines, or, generically, something that is managed, in the case of Azure, by Microsoft.
A Cloud Service provides the management and security boundaries for a set of roles. It is a management boundary because a Cloud Service is deployed, started, stopped, and deleted as a unit. A Cloud Service represents a security boundary because roles can expose input endpoints to the public internet, and they can also expose internal endpoints that are visible only to other roles in the service. We will see how roles work in the Configuring the service model for a Cloud Service recipe.
Roles are the scalability unit for a Cloud Service, as they provide vertical scaling by increasing the instance size and horizontal scaling by increasing the number of instances. Each role is deployed as one or more instances. The number of deployed instances for a role scales independently of other roles, as we will see in the Handling changes to the configuration and topology of a Cloud Service recipe. For example, one role could have two instances deployed, while another could have 200 instances. Furthermore, the compute capacity (or size) of each deployed instance is specified at the role level so that all instances of a role are the same size, though instances of different roles might have different sizes.
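As a concrete sketch of a service model, the roles, their sizes, and their endpoints are declared in the service definition file (ServiceDefinition.csdef). The role names, sizes, and endpoint names below are illustrative, not prescriptive:

```xml
<ServiceDefinition name="WAHelloWorld"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebHelloWorld" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="HttpIn" endpointName="HttpIn" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <!-- Input endpoints are exposed to the public Internet through the load balancer -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
  <WorkerRole name="WorkerHelloWorld" vmsize="ExtraSmall">
    <Endpoints>
      <!-- Internal endpoints are visible only to other roles in the same service -->
      <InternalEndpoint name="Internal" protocol="tcp" />
    </Endpoints>
  </WorkerRole>
</ServiceDefinition>
```

Note that the size is fixed per role (the vmsize attribute), while the number of instances lives in the separate configuration file, which is why instance counts can change without redeploying the package.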
The application functionality of a role is deployed to individual instances that provide the compute capability for the Cloud Service. Each instance is hosted on its own VM. An instance is stateless: any changes made to it after deployment will not survive an instance failure and will be lost. Note that the word role is used frequently where the word instance should be used.
A central driver of interest in cloud computing has been the realization that horizontal scalability, by adding commodity servers, is significantly more cost effective than vertical scalability achieved through increasing the power of a single server. Just like other cloud platforms, the Microsoft Azure platform emphasizes horizontal scalability rather than vertical scalability. The ability to increase and decrease the number of deployed instances to match the workload is described as elasticity.
Microsoft Azure supports two types of roles: web roles and worker roles. The web and worker roles are central to the PaaS model of Microsoft Azure.
A web role hosts websites using the complete Internet Information Services (IIS). It can host multiple websites with a single endpoint, using host headers to distinguish them, as we will see in the Hosting multiple websites in a web role recipe. However, this deployment strategy is gradually falling into disuse due to an emerging, powerful PaaS service called Microsoft Azure Web Sites. An instance of a web role runs two processes: the one executing the role code that interacts with the Azure fabric and the one running IIS.
A worker role hosts a long-running service and essentially replicates the functionality of a Windows service. In fact, the only real difference between a worker role and a web role is that a web role hosts IIS, while the worker role does not. Furthermore, a worker role can also be used to host web servers other than IIS; indeed, worker roles are suggested by Microsoft when someone needs to deploy software that is not designed to run on the default Microsoft web stack. For example, if we want to run a Java application, the worker role should start a JEE application server process (GlassFish or JBoss, for instance) and load into it the files the application needs to run. This deployment model is often assisted by components, called accelerators, that encapsulate the logic to install, run, and deploy the third-party stack (such as the Java one) in a box, in a stateless fashion.
Visual Studio is a central theme in this chapter as well as for the whole book. In the Setting up solutions and projects to work with Cloud Services recipe, we will see the basics of the Visual Studio integration, while in the Debugging a Cloud Service locally with either Emulator or Emulator Express and Debugging a Cloud Service remotely with Visual Studio recipes, we will see something more advanced.
Microsoft Azure Cloud Service is the paradigm for PaaS. As such, Microsoft Azure provides a high-level, application-hosting environment, which is modeled on services, roles, and instances. The specification of the roles in the service is referred to as the service model.
As we mentioned earlier, Microsoft Azure supports two types of roles: web roles and worker roles. A web role is essentially a worker role plus IIS, because both comply with the same running model: there is an entry point that interacts with the Azure environment and runs our custom code, regardless of whether it is a web app or a batch process.
Microsoft Azure provides us with a comprehensive set of command-line tools to package our solutions for deployment to the cloud, as they should be wrapped into a single deployment unit called a package. A package is like an encrypted ZIP file. It contains every project component of the Cloud Service, so it is the self-contained artifact that goes onto the Azure platform to instruct it about setting up machines and code. This package, along with its configuration, represents the whole Cloud Service that Azure Fabric (the big brain of the Azure datacenter) can deploy wherever it wants. In terms of black-box reasoning, the input of a Cloud Service is a package plus its configuration, which we can produce using the command-line tools or just Visual Studio, starting from our classic .NET binaries.
The corresponding Visual Studio artifact for the package mentioned earlier is a project template called Microsoft Azure Cloud Service. It is a .ccproj project that wraps the .NET projects in order to produce the resulting compressed unit of deployment. Later in the chapter, we call this project the wrapper to underline its purpose in the deployment process.
In fact, Visual Studio lets us automatically create the correct environment to deploy web and worker roles, without facing the complexity of the command-line tools, once we install the Microsoft Azure Tools for Visual Studio, which, at the time of writing this book, are available in version 2.3.
In this recipe, we'll learn how to choose the service model for a Cloud Service and create a project environment for it in Visual Studio.
For this recipe, we use Visual Studio 2013 with Microsoft Azure Tools installed. The goal of the recipe is to produce a sample package for the Cloud Service, from new or existing projects.
A Cloud Service can be composed of new projects, existing projects, or both. We will split our recipe into two branches to support the two scenarios:
1. Open Visual Studio, and in the menu bar, navigate to File | New | Project.
2. From the installed templates, choose either the Visual Basic, Visual C#, or Visual F# node, and then choose the Cloud node. You will be prompted later about the actual programming languages to use in your services.
3. Select Microsoft Azure Cloud Service.
4. In the Name text box, enter the name (WAHelloWorld) of your wrapper project. Remember that this name only represents the service container, not a real web project itself.
5. By choosing this project template, VS opens the New Microsoft Azure Cloud Service window.
6. If you want to create just the wrapper for your existing web/worker project, press OK and go ahead to Branch 2; you will add your binaries to the wrapper project later. If you want to create the wrapper and your web/worker projects from scratch, read the Branch 1 section.

Branch 1: creating the projects from scratch

7. Use the arrow keys to add/remove the projects that will represent your web/worker roles on the right-hand side. You can choose between the following two projects:
The ASP.NET web role
The worker role
8. On the right-hand side pane, there will be the WebRole1 and WorkerRole1 entries. Rename the two projects WebHelloWorld and WorkerHelloWorld, respectively, and go ahead.
9. Customize the project templates you added, using the windows that Visual Studio displays next.

Branch 2: using existing projects

10. In the Solution Explorer, right-click on the solution and choose Add | Existing Project.
11. In the browsing window, locate your project and select the Visual Studio project file. Now, your existing project is linked to the new solution.
12. In the Solution Explorer, right-click on the Roles folder in the wrapper project and navigate to Add | Web Role Project in solution or Worker Role Project in solution.
13. In the Associate with Role Project window, choose the project(s) to add and confirm. Now, the project should appear under the Roles folder of the wrapper.

Tip
To be a compatible web role project, a project should be a valid web application, regardless of the engine it runs on. On the other hand, every class library could potentially be a worker role project. However, to let Visual Studio add your existing class library, you must edit the project file (.csproj in C#) as outlined at the end of the recipe.

14. Right-click on the wrapper project and choose Package.
15. When prompted with the Package Microsoft Azure Application window, leave the default settings and click on Package.
16. After a few seconds of background work, VS opens a local filesystem folder with the output of the packaging process.
From steps 1 to 5, we created the Microsoft Azure Cloud Service project, the project wrapper that represents the unit of deployment for Azure. As mentioned earlier, make sure you have installed the SDK and the latest VS tools required to complete the recipe first.
In step 6, we chose between starting from scratch and using existing projects.
In steps 7 and 8, we added two projects, one web role and one worker role, from the choices of the wizard. We could also add another project template such as the following:
The WCF service web role
The cache worker role
The worker role with Service Bus Queue
In step 9, for each .NET project that we added in step 7, Visual Studio launches the project template wizard. In the case of WebHelloWorld, VS opens the New ASP.NET Project window, which lets us choose the ASP.NET engine to reference in the project. In the case of WorkerHelloWorld, VS simply adds a class library to the solution.
From steps 10 to 13, we showed how to import an existing project into the solution and then into the wrapper project. Finally, in step 15, we built the package using the default settings. Here, we could also choose which service configuration to use to create the configuration file and which build configuration to use to build the assemblies.
In the note of step 13, we mentioned that every class library could be used in a wrapper project. In fact, the Azure runtime only needs a class library, and it searches the Dynamic Link Library (DLL) for a class that inherits the RoleEntryPoint abstract class, as follows:
public class WorkerRole : RoleEntryPoint { }
This requirement is enforced only at runtime; at compile time, Visual Studio does not stop us if we omit the RoleEntryPoint implementation.
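For reference, a fuller worker role entry point typically overrides OnStart and Run as well; the following sketch mirrors the template code that Visual Studio generates (the Microsoft.WindowsAzure.ServiceRuntime assembly from the Azure SDK is assumed to be referenced):

```csharp
using System.Diagnostics;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Raise the default outbound connection limit before the role starts working
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }

    public override void Run()
    {
        // Run should never return: if it does, the Azure fabric recycles the instance
        while (true)
        {
            Thread.Sleep(10000);
            Trace.TraceInformation("Working", "Information");
        }
    }
}
```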
However, users who try to add their class library to the wrapper might see a grayed-out option, because Visual Studio cannot distinguish a valid worker role from a generic class library.
So, we need to perform a tweak on the project file:
1. Locate the project file (*.csproj, for example) of the class library to add.
2. Open it with Notepad.
3. Before the first </PropertyGroup> closing tag, add the following line:
<RoleType>Worker</RoleType>
4. Save the file, and in Visual Studio, reload the project.
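After the tweak, the relevant fragment of the class library's project file might look like the following (the assembly name is illustrative, and the other properties of a real project file are omitted):

```xml
<PropertyGroup>
  <OutputType>Library</OutputType>
  <AssemblyName>WorkerHelloWorld</AssemblyName>
  <!-- This line marks the class library as a worker role for Visual Studio -->
  <RoleType>Worker</RoleType>
</PropertyGroup>
```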
Have a look at these MSDN links to get additional information:
Information on the latest Azure SDKs with release notes at http://msdn.microsoft.com/en-us/library/dn627519.aspx
How to upgrade an existing project with the latest version of Azure Tools at http://msdn.microsoft.com/en-us/library/jj131257.aspx
Cloud Services are like a black-box runtime about which we don't (and shouldn't) need to know much. However, we should know how they behave differently from (or similarly to) a common IIS installation on a common Windows Server virtual machine.
Tip
The very first versions of Cloud Services had strict restrictions on the interactions with the OS that applications were permitted. Applications had to run in a mode called Microsoft Azure Partial Trust, which isolated them from the underlying operating system to prevent defects and bugs in the code from causing failures in the entire OS. Today, there is no longer an option to isolate an application, as a result of developers' continuous requests to have control of the VMs under their services.
Despite the capability to access the underlying operating system, it is strongly recommended that you use only the application resources and not rely on false friends such as the filesystem, folder structures, and so on. In fact, due to the nature of the PaaS model, the configuration of the operating system, as well as the operating system itself, could change without any control by the user, for maintenance or deprecation purposes.
Finally, some features are available through the Service Runtime API, which works only when the code is running on Azure or on the local emulator shipped with the Microsoft Azure SDK, which lets developers test their code locally.
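As a small illustration of the Service Runtime API, the RoleEnvironment class can tell our code where it is running; this is a sketch, assuming the Microsoft.WindowsAzure.ServiceRuntime assembly is referenced:

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public static class EnvironmentInfo
{
    public static string Describe()
    {
        // IsAvailable is false when the code runs outside both Azure and the emulator
        if (!RoleEnvironment.IsAvailable)
            return "Not running under the Azure runtime";

        // IsEmulated distinguishes the local emulator from the real platform
        return RoleEnvironment.IsEmulated
            ? "Running in the local emulator"
            : "Running on Azure";
    }
}
```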
The Emulator creates a logical representation of the Azure topology on the machine that runs it. If our service were composed of two web roles and one worker role, for example, the Emulator would start three separate processes in which the application code behaves exactly as it does on the real platform (with some soft exceptions, which are not discussed in this book). As it needs to open network ports and change some OS settings, the Emulator must be started as an administrator. Hence, to start it by simply debugging in Visual Studio, VS itself should be started as an admin. Emulator Express, on the other hand, runs in user mode, which does not require elevation but has some limitations in terms of emulation coverage.
In this recipe, we will explore the debugging options of a Cloud Service using the (full) Emulator and Emulator Express, working on the previously created Cloud Service, WAHelloWorld.
In order to explain the differences between the two emulators, we divided the recipe into three parts. Start an elevated Visual Studio window (as an administrator), and proceed as follows:
1. In the Solution Explorer, right-click on the Azure wrapper project (Cloud Service) and choose Properties (or just select the project and press Alt + Enter).
2. In the page that just opened, select the Web tab.
3. In the Local Development Server options group, choose Use IIS Express.
4. In the Emulator options group, choose Use Full Emulator.
5. Locate the Roles folder under the WAHelloWorld wrapper project.
6. Right-click on the WorkerHelloWorld project and go to the properties page.
7. In the Configuration tab, locate the Instance count box and set the value to 2.
8. Go to the WorkerHelloWorld project, and open the WorkerRole.cs file.
9. Locate the while (true) block and put a breakpoint on the following line of code:
Trace.TraceInformation("Working", "Information");
10. Right-click on the Azure wrapper project (WAHelloWorld), and click on Set as StartUp Project.
11. Press F5, or select Debug | Start Debugging from the main menu.
12. Wait a few seconds while the Starting the Microsoft Azure Debugging Environment window is shown.
13. After about 10 seconds, VS breaks on the tracing line twice, once per running worker.
14. During debugging, locate the Microsoft Azure Emulator icon in the system tray.
15. Right-click on it and choose Show Compute Emulator UI.
16. In the Microsoft Azure Compute Emulator (full) window, expand all the items in the tree and click on one of the two green balls.
17. Read the log of the instance, which includes the text sent by our code in the tracing line used earlier.
18. Shut down the deployment by stopping debugging, closing the browser, or clicking on the stop icon in the Emulator UI on the current deployment.
In part one, from steps 1 to 4, we chose which Emulator to use in debugging. In the Cloud Service properties, we also find these tabs:
Application: This shows the current version of the SDK used by the project
Build Events: This lets you customize the pre/post build actions
Development: This lets you choose some options related to debugging and the emulator
Web: This lets you choose which Emulator (Express or Full) to use and which IIS server to run
In part two, from steps 5 to 7, we customized the number of simultaneously running instances of the WorkerHelloWorld code. On the configuration page, there is a lot of information and many points to customize, which we will see later in the chapter.
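Behind the scenes, the Instance count box edits the service configuration file (the .cscfg file); a fragment equivalent to our setup might look like this (the web role count is illustrative):

```xml
<ServiceConfiguration serviceName="WAHelloWorld"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebHelloWorld">
    <Instances count="1" />
  </Role>
  <Role name="WorkerHelloWorld">
    <!-- Two instances, so the emulator starts two worker runners -->
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```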
In step 9, we set a breakpoint on a line of code previously created by the VS wizard process, as shown in the previous recipe. In step 11, VS prepared the project package and started the Emulator.
Tip
If you use Remote Desktop Services (RDS) in conjunction with Visual Studio to develop your Microsoft Azure projects, note that only one instance of the Full Emulator can run on a machine at a time. Instead, multiple instances of Emulator Express can run on a machine at the same time, letting you and your team debug remotely on the same machine.
In step 12, the Azure Emulator picked up the configuration from the wrapper project and set up virtual instances, deploying our custom code to them. The Emulator started a runner for each instance declared: one for the WebHelloWorld site and two for the WorkerHelloWorld worker.
In step 13, the code stopped twice, as there were two instances deployed to the Emulator. If we deployed 10 instances, the debugger would stop 10 times.
Tip
If you want to perform advanced debugging by temporarily freezing concurrent worker threads, you can use the Threads window. You can access this window by navigating to Debug | Windows | Threads. Locate the thread ID; then, right-click on it and choose Freeze.
In part 3, in steps 14, 15, and 16, we opened the Compute Emulator UI. In the system tray icon context menu, we also noticed the capability to open the Storage Emulator UI and to shut down both the emulators (compute/storage). This feature will be very useful in the upcoming chapters when we talk about storage.
In the final steps, we used the Emulator UI to monitor the parallel workers. In the main window, we can see the deployment slot with the roles, each one with its own instances. If there are different behaviors between instances, we can monitor the output in the terminal-like window, which collects all the tracing output of the worker threads (in both web and worker roles).
As the emulator not only emulates the compute engine (web and worker roles) but also the storage engines (which we will see in the upcoming chapters), it is in fact composed of the following:
Compute Emulator
Storage Emulator
These two are always referred to as just one, the Azure Emulator. It is also possible to prevent the Storage Emulator from being started during debugging by performing the following steps:
1. Open the Properties page of the Azure wrapper project.
2. In the Development tab, set the Start Microsoft Azure storage emulator option to False.
Tip
When debugging your code in a (full) Emulator environment with multiple instances running at the same time, be aware that your code will run in the same AppDomain. This is the main difference from the real environment (where the code runs on different machines), and it could cause unexpected runtime behaviors. If you use one or more static classes with static fields, their memory will unavoidably be shared between the virtual instances. To actually test this sort of scenario, you should write custom code that differentiates the virtual instances, relying on something related to the instance name (for example, through the Service Runtime API or using the thread references).
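As a sketch of the workaround suggested in the tip, per-instance state can be keyed on the current instance ID, so that even when static memory is shared between virtual instances, each instance reads and writes its own slot (RoleEnvironment from the Azure SDK is assumed to be available):

```csharp
using System.Collections.Concurrent;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class PerInstanceState
{
    // One dictionary entry per virtual instance, instead of a plain static field
    private static readonly ConcurrentDictionary<string, int> Counters =
        new ConcurrentDictionary<string, int>();

    public static int Increment()
    {
        // The instance ID is unique per (virtual) instance, even in the emulator
        string id = RoleEnvironment.CurrentRoleInstance.Id;
        return Counters.AddOrUpdate(id, 1, (key, value) => value + 1);
    }
}
```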
Have a look at the following MSDN links to get additional information:
More information about running the Emulator at http://msdn.microsoft.com/en-us/library/azure/hh403990.aspx
Differences between the "real" Azure and the emulated environment at http://msdn.microsoft.com/en-us/library/azure/gg432960.aspx
Microsoft Azure has always shipped with an SDK for developers and a complementary toolset for Visual Studio integration. In earlier versions of the official SDK, many features were only available through the online portal at https://manage.windowsazure.com. At the time of writing this book, Microsoft released a major SDK update that lets developers quickly manage almost every aspect of the cloud infrastructure directly from Visual Studio. Hence, it is now simpler than in the past to get on Azure and run a Cloud Service.
When we publish a Cloud Service to Azure, we should know some foundational concepts in advance, in order to understand the ecosystem better. First, to deploy a Cloud Service, a deployment slot should be created. This can be done through the portal or from Visual Studio, and it consists of creating a DNS name in the [myCS].cloudapp.net form. Under the hood, this DNS name is linked to a load balancer that redirects Internet traffic to each instance of our service, chosen with a round-robin algorithm. This means that regardless of the topology we deploy in this Cloud Service, every web endpoint that we publish stands behind this layer of balancing, which provides the system with transparent scaling capabilities.
During the deployment, we also decide the options that the Cloud Service should be deployed with:
Production/staging slot
Remote desktop support
Diagnostics collection
Remote debugging
IntelliTrace support
Incremental/simultaneous update
After we perform the deployment, our service, comprising all the roles of the package, as defined in the Setting up solutions and projects to work with Cloud Services recipe, is running in the cloud and is accessible through the DNS name Azure provided us.
For this recipe, we can create a sample solution on the fly by navigating to File | New Project | Cloud wizard, or we can use the WAHelloWorld project created earlier.
We are going to see how to deploy a Cloud Service package to the cloud. We can do this by performing the following steps:
1. Right-click on the Azure wrapper project (WAHelloWorld) and select Publish.
2. In the Publish Microsoft Azure Application window, click on Sign In to authenticate.
3. In the browser window, enter the credentials of the Microsoft account (formerly Live ID) associated with the Azure account and continue.
4. In the Choose your subscription dropdown, choose the subscription that will host your service.
5. If your subscription is empty or does not contain a Cloud Service, you will be prompted with the Create Microsoft Azure Services window. Enter a name in the Name box, and choose a location for your service.
6. In the Cloud Service dropdown, choose the Cloud Service to which you want your service to be deployed.
7. In the Environment dropdown, choose Production.
8. In the Service configuration dropdown, choose Cloud.
9. In the Build configuration dropdown, choose the correct build configuration, according to the ones available in your service (in the WAHelloWorld example, choose Release).
10. Check the Enable Remote Desktop for all roles checkbox.
11. In the Remote Desktop Configuration window, specify a username, a password, and an account expiration date. Then, click on OK.
12. Enter WAHelloWorld Version 1 in the Deployment label box.
13. If your subscription is empty or does not contain a storage account, you will be prompted with the Create Microsoft Azure Services window.
14. Enter a name in the Name box, and choose a location for your storage account.
15. In the Storage account dropdown, choose the storage account to which Visual Studio will upload the package file, letting Azure deploy it from the Blob storage.
16. Uncheck every checkbox except the Deployment update one; then, click on Settings.
17. In the Deployment settings window, select Incremental update.
18. Check the If deployment can't be updated, do a full deployment checkbox and confirm.
19. On the Summary page of the wizard, check all the information, and then save your publish profile file.
20. Click on Publish.
21. After a few minutes (or several, depending on the size of your deployment), check the Microsoft Azure Activity Log tab in VS for completion of the deployment process.
22. At completion, if your service has a role with a valid HTTP endpoint, navigate to it at http://[nameOfCloudService].cloudapp.net.
Up to step 3, we linked VS with our Microsoft account (formerly, Live ID). As a Microsoft account could be the administrator or co-administrator of multiple Azure subscriptions at the same time, the dropdown mentioned in step 4 could contain a pretty long list of entries after authentication.
In step 5, we create a Cloud Service on the Azure platform; this means that we create the deployment slots (one for production, one for staging) to run our service. Every time we create a service on the Azure platform, we must deal with the localization of the service, that is, the choice of a datacenter. There are currently 13 datacenters, plus one in China:
Europe North: Ireland
Europe West: Netherlands
US Central: Iowa
US East: Virginia
US East 2: Virginia
US West: California
US North Central: Illinois
US South Central: Texas
Asia Pacific East: Hong Kong
Asia Pacific Southeast: Singapore
Japan East: Saitama Prefecture
Japan West: Osaka Prefecture
Brazil South: Sao Paulo State
In steps 7 and 8, we chose which slot to use for deployment and which configuration to publish for it. For each new Cloud Service project, VS creates two service configuration files: a local one and a cloud one.
In step 9, we chose the build configuration. This step is related to the actual build settings of our solution. In a new solution, there are just two configurations (Debug/Release), but the list could be longer in real-life projects, according to their complexity.
In steps 10 and 11, we configured the remote desktop. Though this is not recommended for production, it could let us connect to each instance to troubleshoot issues and explore the OS configuration of an Azure Virtual Machine image. Due to the security implications of this process, a self-signed certificate is generated automatically to establish the TLS connection. It is also possible to provide our own valid certificate by clicking on More Options in the Remote Desktop Configuration window.
Tip
We learned that in Azure, we have many instances behind a single load balancer (therefore, a single IP). To RDP into the instances, we must proceed one by one, with different ip:port combinations.
Steps 12 and 13 were straightforward. They indicated the name of the deployment for further administrative tasks.
In steps 14 and 15, we chose a storage account. As we never talked about this feature earlier in the book, just keep in mind that a Cloud Service could be connected to a storage account to provide it with logging and diagnostics data.
From steps 16 to 18, we set up the update options. As Azure could be running a previous version of our service, we had to decide how to manage the update. By unchecking the Deployment update checkbox, we would completely bypass the update process, telling Azure to delete the old deployment because a new one was about to arrive. Otherwise, we can specify how to update the service: gracefully (incremental update) or abruptly (simultaneous update). In the incremental update, Azure updates the service instance by instance, theoretically without service interruption. In the simultaneous update, Azure updates every instance at the same time, causing an interruption of the service. In some cases, a deployment cannot be updated; for these cases, we checked the option to perform a delete-and-replace operation instead.
In step 19, we saved the publish settings to use them later, thus avoiding repeating all the steps, by directly clicking on Publish from the summary pane. In step 21, we recognized the VS-integrated progress window, refreshed periodically from an Azure service. We can also use it to stop/cancel pending operations, as it reflects the portal operations.
If we checked the Enable Remote Desktop for all roles checkbox during the publishing process, it is possible to connect to each instance directly from Visual Studio using the following steps:
1. Locate the Microsoft Azure node in the Server Explorer.
2. Locate the Cloud Services subnode and the Cloud Service you want to connect to (in the example, WAHelloWorld).
3. Expand the node, and select the production or staging deployment.
4. Expand the node and the role of interest; it will show a list of the deployed instances.
5. Right-click on the selected instance (from instance 0 to instance "N"), and choose Connect using Remote Desktop….
6. Confirm the settings in the Remote Desktop Connection window and enter the VM. If prompted for credentials (in most cases, only the password), enter the ones you provided in the publishing process.
You are now connected to a real running instance of Azure.
We cannot actually rely on the information we obtain by browsing the VM, as it is subject to frequent change outside the user's control.
Have a look at the following MSDN links to get additional information:
Complete recap of how to publish from Visual Studio at http://msdn.microsoft.com/en-us/library/ee460772.aspx
How to constantly maintain the same IP of the CS endpoint at http://msdn.microsoft.com/en-us/library/jj614593.aspx
From the beginning of the Azure era, developers have been asking for the capability to live-debug solutions in the cloud. As it is not a simple feature, Microsoft delivered it only in 2013. Now, however, we do have the capability to remotely debug our Cloud Services from Visual Studio, enhancing the testing experience and extending it to the live application.
To get ready, we need a ready-to-publish application with at least one valid role. Follow steps 1 to 9 of the previous recipe and then proceed with the instructions in this recipe.
We will configure a Cloud Service to be attached with a remote debugger; then, we will proceed with the debug session, using the following steps:
When we follow steps 1 to 9 of the previous recipe, we will see the Publish Microsoft Azure Application window.
In Common Settings, choose Debug as the build configuration.
Select the Advanced Settings tab.
Check the Enable Remote Debugger option for all roles.
Complete the publish process and wait for it to finish.
Locate a part of your code that is now running in Azure and set a breakpoint.
Locate the Microsoft Azure node in the Server Explorer and find the Cloud Service you want to debug.
Expand the node and select the deployment slot first; then, right-click on the instance to connect to (that is, Instance 0 under Production).
Click on Attach Debugger.
Perform the appropriate actions on the running code (firing events, for example) to cover the code where the breakpoint is set.
From steps 1 to 4, we republished the Cloud Service by enabling the capability to remote debug the running code.
From steps 5 to 10, we executed an attach to process-like operation, connecting to a remote Azure instance instead of a local process.
Tip
In step 7, we can also decide to debug the entire service instead of a single instance. This capability should be understood as an attach-to-each-instance process, with advantages (that is, the first instance that meets the conditions will break into the debugger) and disadvantages (that is, if every instance is frozen by the debugger, there will potentially be no thread free to serve legitimate requests).
As remote debugging is not considered a best practice (there is also the Emulator to test our code), there are some constraints you should know about and deal with:
Depending on your Internet connection's quality, debugging will be slower or faster due to the amount of data exchanged between the VMs and the host.
Remember to use the debugging windows (Immediate, Watch, and Locals) sparingly in order to prevent VS from freezing (due to the network transfers).
Attaching the debugger to a single instance is preferable. Attaching it to the whole service (despite there being a limitation of 25 instances per role if remote debugging is enabled) is considered much slower.
To enable remote debugging, the remote VM uses some ports (30400-30424 and 31400-31424 for the time being), so avoid using them in the application as this will result in an error.
Until the Remote Debug feature became available, the only supported method of debugging remotely was to collect IntelliTrace logs from the live instances and download them later into Visual Studio for analysis. This method is quite complex, and it is not covered in this book.
Have a look at the following MSDN links to get additional information:
How to debug Virtual Machines from VS at http://msdn.microsoft.com/en-us/library/ff683670.aspx
How to collect the IntelliTrace data of the remote Cloud Services at http://msdn.microsoft.com/en-us/library/ff683671.aspx
The service model for a Cloud Service in Microsoft Azure is specified in two XML files: the service definition file, ServiceDefinition.csdef, and the service configuration file, ServiceConfiguration.cscfg. These files are part of the Microsoft Azure project.
The service definition file specifies the roles used in the Cloud Service, up to 25 in a single definition. For each role, the service definition file specifies the following:
The instance size
The available endpoints
The public key certificates
The pluggable modules used in the role
The startup tasks
The local resources
The runtime execution context
The multisite support
The file contents of the role
The following code snippet is an example of the skeleton of the service definition document:
<ServiceDefinition name="<service-name>"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
    upgradeDomainCount="<number-of-upgrade-domains>"
    schemaVersion="<version>">
  <LoadBalancerProbes>
  </LoadBalancerProbes>
  <WebRole …>
  </WebRole>
  <WorkerRole …>
  </WorkerRole>
  <NetworkTrafficRules>
  </NetworkTrafficRules>
</ServiceDefinition>
We can mix up to 25 roles from both the web role and worker role types. In the past, there was also a third kind of supported role, the VM Role, which is now deprecated.
All instances of a role have the same size, chosen from standard sizes (A0-A4), memory-intensive sizes (A5-A7), and compute-intensive sizes (A8-A9). Each role may specify a number of input endpoints, internal endpoints, and instance-input endpoints. Input endpoints are accessible over the Internet and are load balanced, using a round-robin algorithm, across all instances of the role:
<InputEndpoint name="PublicWWW" protocol="http" port="80" />
Internal endpoints are accessible only by instances of any role in the Cloud Service. They are not load balanced:
<InternalEndpoint name="InternalService" protocol="tcp" />
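An internal endpoint such as the one above can be discovered at runtime through the ServiceRuntime API. The following C# fragment is a minimal sketch, assuming the endpoint name InternalService from the snippet above and a project reference to the Microsoft.WindowsAzure.ServiceRuntime assembly; the class name is illustrative:

```csharp
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class EndpointDiscovery
{
    public static IPEndPoint GetInternalServiceEndpoint()
    {
        // When no fixed port is declared in the service definition,
        // Azure assigns one; we read it back at runtime.
        RoleInstanceEndpoint endpoint =
            RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalService"];
        return endpoint.IPEndpoint;
    }
}
```

Code like this only works inside a running role instance (or the Emulator), where RoleEnvironment is available.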
Instance-input endpoints define a mapping between a public port and a single instance under the load balancer. An instance-input endpoint is linked to a specific role instance, using a port-forwarding technique on the load balancer, for which we must open a range of public ports through the AllocatePublicPortFrom section:
<InstanceInputEndpoint name="InstanceLevelService" protocol="tcp" localPort="10100">
  <AllocatePublicPortFrom>
    <FixedPortRange max="10105" min="10101" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
An X.509 public key certificate can be uploaded to a Cloud Service either directly on the Microsoft Azure Portal or using the Microsoft Azure Service Management REST API. The service definition file specifies which public key certificates, if any, are to be deployed with the role as well as the certificate store they are put in. A public key certificate can be used to configure an HTTPS endpoint but can also be accessed from code:
<Certificate name="CertificateForSSL" storeLocation="LocalMachine" storeName="My" />
Pluggable modules instruct Azure on how to set up the role. Microsoft Azure tooling for Visual Studio can automatically add/remove modules in order to enable/disable services as follows:
Diagnostics to inject Microsoft Azure Diagnostics
Remote access to inject remote desktop capability
Remote forwarder to inject the forwarding capability used to support remote desktop
Caching to inject the In-Role caching capability
Tip
Though In-Role caching is not covered in this book, there is a chapter about In-Memory Caching, using the Microsoft Azure Managed Cache service.
The following configuration XML code enables the additional modules:
<Imports>
  <Import moduleName="Diagnostics" />
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
  <Import moduleName="Caching" />
</Imports>
Startup tasks are scripts or executables that run each time an instance starts, and they modify the runtime environment of the instance, up to and including the installation of the required software:
<Startup>
  <Task commandLine="run.cmd" taskType="foreground" executionContext="elevated">
    <Environment>
      <Variable name="A" value="B" />
    </Environment>
  </Task>
</Startup>
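A startup task script matching the definition above could look like the following sketch. The file name comes from the commandLine attribute; its content here is purely an assumption for illustration:

```batch
REM run.cmd — hypothetical startup task sketch.
REM Variables declared in the <Environment> section (A=B above) are
REM available to the task process as ordinary environment variables.
echo Startup task running, A=%A% >> "%TEMP%\startup.log"
REM Install software or tweak the environment here.
REM A zero exit code signals success to the Azure runtime.
exit /b 0
```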
The local resources section specifies how to reserve an isolated storage in the instance, for temporary data, accessible through an API instead of direct access to the filesystem:
<LocalResources>
  <LocalStorage name="DiagnosticStore" sizeInMB="20000" cleanOnRoleRecycle="false" />
  <LocalStorage name="TempStorage" sizeInMB="10000" />
</LocalResources>
The runtime execution context specifies whether the role runs with limited privileges (the default) or with elevated privileges that provide full administrative capabilities. Note that in a web role running full IIS, the runtime execution context applies only to the web role and does not affect IIS, which runs in a separate process with restricted privileges:
<Runtime executionContext="elevated" />
In a web role that is running full IIS, the Sites element in the service definition file contains the IIS configuration for the role. It specifies the endpoint bindings, virtual applications, virtual directories, and host headers for the various websites hosted by the web role. The Hosting multiple websites in a web role recipe contains more information about this configuration. Refer to the following code:
<Sites>
  <Site name="Web">
    <Bindings>
      <Binding name="Endpoint1" endpointName="Endpoint1" />
    </Bindings>
  </Site>
</Sites>
The contents section specifies whether static contents are copied from an application folder to a destination folder on the Azure virtual machine, relative to the %ROLEROOT%\Approot folder:
<Contents>
  <Content destination="MyFolder">
    <SourceDirectory path="FolderA"/>
  </Content>
</Contents>
The service definition file is uploaded to Microsoft Azure as part of the Microsoft Azure package.
The service configuration file specifies the number of instances of each role. It also specifies the values of any custom configuration settings as well as those for any pluggable modules imported in the service definition file.
Applications developed using the .NET Framework typically store application configuration settings in an app.config or web.config file. However, in Cloud Services, we can mix several applications (roles), so a uniform and central point of configuration is needed. Runtime code can still use these files; however, changes to these files require the redeployment of the entire service package. Microsoft Azure allows custom configuration settings to be specified in the service configuration file, where they can be modified without redeploying the application. Any configuration setting that could be changed while the Cloud Service is running should be stored in the service configuration file. These custom configuration settings must be declared in the service definition file:
<ConfigurationSettings> <Setting name="MySetting" /> </ConfigurationSettings>
The Microsoft Azure SDK provides a RoleEnvironment.GetConfigurationSetting() method that can be used to access the values of custom configuration settings. There is also CloudConfigurationManager.GetSetting() in the Microsoft.WindowsAzure.Configuration assembly, which checks the service configuration first and, if no Azure environment is found, falls back to the local configuration file.
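The two approaches can be sketched as follows in C#. This is a minimal illustration, assuming a setting named MySetting has been declared as shown above; the class and method names are ours:

```csharp
using Microsoft.WindowsAzure;               // CloudConfigurationManager
using Microsoft.WindowsAzure.ServiceRuntime; // RoleEnvironment

public static class SettingsReader
{
    public static string ReadMySetting()
    {
        // Preferred: checks the service configuration first, then falls
        // back to app.config/web.config outside an Azure environment.
        string value = CloudConfigurationManager.GetSetting("MySetting");

        // Alternative: only valid inside a running role instance.
        if (value == null && RoleEnvironment.IsAvailable)
        {
            value = RoleEnvironment.GetConfigurationSetting("MySetting");
        }
        return value;
    }
}
```

Using CloudConfigurationManager keeps the same code path working both locally and in the cloud.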
The service configuration file is uploaded separately from the Microsoft Azure package and can be modified independently of it. Changes to the service configuration file can be implemented either directly on the Microsoft Azure Portal or by upgrading the Cloud Service. The service configuration can also be upgraded using the Microsoft Azure Service Management REST API.
The customization of the service configuration file is limited almost entirely to the role instance count, the actual values of the settings, and the certificate thumbprints:
<Role name="WorkerHelloWorld">
  <Instances count="2" />
  <ConfigurationSettings>
    <Setting name="MySetting" value="Value" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="CertificateForSSL"
                 thumbprint="D3E008E45ADCC328CE6BE2AB9AACE2D13F294838"
                 thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>
The handling of service upgrades is described in the Managing upgrades and changes to a Cloud Service and Handling changes to the configuration and topology of a Cloud Service recipes.
In this recipe, we'll learn how to configure the service model for a sample application.
To use this recipe, we need to have created a Microsoft Azure Cloud Service and deployed an application to it, as described in the Publishing a Cloud Service with options from Visual Studio recipe.
We are going to see how to implement a real service definition file, based on the following scenario, taken from the WAHelloWorld sample. Suppose we have a Cloud Service with two roles (a web role and a worker one). The web role has a medium instance size; it uses the Diagnostics module, has a local storage of 10 GB, has two public endpoints (one at port 80 and another at port 8080), and has a setting value. The worker role is small, and it has an internal endpoint to let the various instances communicate with each other.
For the web role, we proceed as follows:
Open the ServiceDefinition.csdef file in Visual Studio.
Inside the <ServiceDefinition> root element, create a <WebRole> item:
<WebRole name="WebHelloWorld" vmsize="Medium">
</WebRole>
Inside the WebRole tag just created, add an <Endpoints> tag with two InputEndpoint tags, one for each public endpoint:
<Endpoints>
  <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  <InputEndpoint name="Endpoint2" protocol="http" port="8080" />
</Endpoints>
Inside the WebRole tag, create a Sites element with the correct binding to the web application in the solution:
<Sites>
  <Site name="Web">
    <Bindings>
      <Binding name="Endpoint1" endpointName="Endpoint1" />
      <Binding name="Endpoint2" endpointName="Endpoint2" />
    </Bindings>
  </Site>
</Sites>
Inside the WebRole tag, declare the usage of the Diagnostics module:
<Imports>
  <Import moduleName="Diagnostics" />
</Imports>
Inside the WebRole tag, declare a local storage element of 10 GB:
<LocalResources>
  <LocalStorage name="MyStorage" cleanOnRoleRecycle="true" sizeInMB="10240" />
</LocalResources>
Finally, declare the ConfigurationSettings section and a setting inside the WebRole tag:
<ConfigurationSettings>
  <Setting name="MySetting" />
</ConfigurationSettings>
Create a WorkerRole section like the following:
<WorkerRole name="WorkerHelloWorld" vmsize="Small">
...
Declare an InternalEndpoint inside a new Endpoints section:
<Endpoints>
  <InternalEndpoint name="Internal" protocol="tcp" />
</Endpoints>
In the corresponding ServiceConfiguration.cscfg file, configure the instance count as follows:
<Role name="WebHelloWorld">
  <Instances count="1" />
</Role>
<Role name="WorkerHelloWorld">
  <Instances count="2" />
</Role>
Provide a value for the MySetting configuration setting:
<ConfigurationSettings>
  <Setting name="MySetting" value="Test"/>
</ConfigurationSettings>
Save the file, and check the Visual Studio Error List window to solve any errors.
In step 2, we added an XML tag to declare a WebRole. The name of the WebRole must match a valid web application project inside the solution that contains the cloud project itself. In the WebRole tag, we also specify the instance size, choosing among the ones in the following table (more sizes are actually available):
Size | CPU | Memory
---|---|---
ExtraSmall | Shared | 768 MB
Small | 1 | 1.75 GB
Medium | 2 | 3.5 GB
Large | 4 | 7 GB
ExtraLarge | 8 | 14 GB
A5 | 2 | 14 GB
A6 | 4 | 28 GB
A7 | 8 | 56 GB
In step 3, we declared two HTTP-based endpoints on ports 80 and 8080, respectively. Think of this configuration as a firewall/forwarding configuration on the load balancer. Declaring an endpoint does not mean that a real service exists under the hood to reply to requests made to it (except for the default one on port 80).
In step 4, we bound the WebHelloWorld web application to both the endpoints declared earlier. It is also possible to specify additional configurations regarding virtual directories and virtual applications.
In step 5, we simply told Azure to inject the Diagnostics module into the VM that runs our service. As said earlier, other modules can be injected here.
In step 6, we told Azure to allocate 10 GB of space in a folder located somewhere on the virtual machine. As this folder will be accessed through an API, it does not matter where it is located. What we do have to know is the meaning of the cleanOnRoleRecycle attribute. If it is true, we accept that the isolated storage won't be retained across role recycles; if it is false, we ask Azure to preserve the data (if possible) instead.
In step 7, we declared the presence of a setting value but not the setting value itself, which is shown instead in the service configuration in step 11.
In step 8, we repeated the process for the worker role; but as it does not run IIS, we did not declare any sites. Instead, in line with the initial goal, we declared an internal endpoint. In fact, in step 9, we stated that the VM will have an open TCP port. It will be our code's responsibility to actually bind a service to this port.
Tip
In the InternalEndpoint tag, we can specify a fixed port number. In the example given earlier, there is no port, so Azure can decide which port to allocate. We can use the ServiceRuntime API, as with local storage, to find out this information at runtime.
Finally, we populate the service configuration with the actual values for the parameters specified in the service definition. One of these is the instance count (for both the roles) and the configuration setting value for the web role.
Is there more to the service definition document? Yes; for example, the capability to influence the update process of our services/roles/instances. Let's introduce the concepts of fault domain and update domain. Microsoft Azure assures that if two or more instances are deployed, it will place them on isolated hardware to reduce, as much as it can, the possibility of downtime due to a failure. This concept is known as a Fault Domain, as Azure creates instances in separate areas to increase availability. An Update Domain, on the other hand, determines how Azure manages the update flow across our instances, taking them offline one by one or group by group to reduce, again, the possibility of downtime. Think of Upgrade Domains as groups of instances; the default number of groups is 5. This means that if five or fewer instances are deployed, they will be updated one by one. If there are more than five instances, the default behavior creates five groups and updates the instances of each group together.
Tip
It is not always necessary to update instances one by one, and it is often not feasible to update the system in parts. Despite causing a downtime, systems often need to bring new databases and new logic online that modify actual data. Bringing new instances online one by one could lead to different versions of data/code coexisting in the running system at the same time. In such cases, a simultaneous upgrade, along with the related downtime, should be taken into consideration. During development, it is advisable to keep a single instance deployed, to save time during upgrades; during testing, however, it is recommended that you scale out and verify that the application behaves correctly.
We can suggest a different value to Azure for the Upgrade Domain count, up to 20. The higher the value, the lower the impact of an upgrade on the entire infrastructure:
<ServiceDefinition name="WAHelloWorld" upgradeDomainCount="20"…
Tip
Given the nature of PaaS, consider letting Azure decide about the Upgrade Domains, in case a breaking change happens to the platform in the future. Designing a workflow around an Azure constraint is not recommended. Instead, design your update process to be resilient without telling Azure anything.
Finally, a change in the topology (upgrades and configuration) walks the upgrade domains, meaning that the instances learn about the change one by one, only when it is their respective turn to change. This is the default behavior.
There is more information on topology changes in the Handling changes to the configuration and topology of a Cloud Service recipe.
The complete reference to the Service Definition schema at http://msdn.microsoft.com/en-us/library/ee758711.aspx
A Cloud Service can expose an input endpoint to the Internet. This endpoint has a load balanced Virtual IP (VIP) address, which, for the time being, could change at each deployment.
Each VIP has an associated domain of the form [servicednsprefix].cloudapp.net. The servicednsprefix name is specified when the Cloud Service is created and cannot be changed afterward. A Cloud Service can be reached over the Internet at servicednsprefix.cloudapp.net. All Cloud Services exist under the cloudapp.net domain.
The DNS system supports a CNAME record that maps one domain to another. This allows, for example, www.servicename.com to be mapped to servicednsprefix.cloudapp.net. The DNS system also supports an A record that maps a domain to a fixed IP address. Unfortunately, the use of an A record is not recommended with a Cloud Service, because the IP address can change if the Cloud Service is deleted and redeployed.
It is not possible to acquire a public key certificate for the cloudapp.net domain, as Microsoft controls it. Consequently, a CNAME is needed to map a custom domain to a cloudapp.net domain when HTTPS is used. We will see how to do this in the Implementing HTTPS in a web role recipe. In this recipe, we'll learn how to use a CNAME to map a custom domain to a Cloud Service domain.
To use this recipe, we need to control a custom domain (for example, customdomain.com) and must have created a Cloud Service (for example, theservice.cloudapp.net).
We are going to see how to use CNAME to map a custom domain to a Cloud Service using the following steps:
Go to the dashboard of your DNS provider.
On the CNAME management page of your DNS provider, add a new CNAME that maps www.customdomain.com to theservice.cloudapp.net.
If your DNS provider supports the domain-forwarding feature, forward customdomain.com to www.customdomain.com.
Wait for some time while the DNS change propagates around the world.
In step 1, we navigated to the dashboard of the DNS service. Each DNS service operates on its own, so the interface might vary a lot from one vendor to another. In step 2, we told the DNS to forward requests for www.customdomain.com to theservice.cloudapp.net. In turn, when the Azure DNS is queried by a remote client for the theservice.cloudapp.net record, it returns the actual IP address of the load balancer of the Cloud Service.
Finally, in step 3, we configured (if possible) forwarding so that the customdomain.com root/naked name is automatically forwarded to www.customdomain.com.
A DNS change could take from a few minutes to a few days to propagate, depending on the service provider. Expect the average wait until the DNS comes online to be a few hours.
Sometimes (read: almost always), it is necessary to test the development environment while calling the production URL in the browser. This is possible by locally mapping (on the development workstation) the domain name to the loopback IP address.
The equivalent of a CNAME mapping in the development environment is a hosts file entry that maps servicename.com to 127.0.0.1. The hosts file is located in the %SystemRoot%\system32\drivers\etc folder. For example, adding the following entry to the hosts file maps servicename.com to 127.0.0.1:
127.0.0.1 servicename.com
Note that we need to remember to remove this entry from the hosts file on the development machine after the application is deployed to the Microsoft Azure datacenter. Otherwise, we will not be able to access the real servicename.com domain from the development machine.
We can also map the remote Azure service with this technique as follows:
Discover the current IP address of the service by pinging it:
ping theservice.cloudapp.net
Add a line to the hosts file that maps the IP just discovered to the domain name.
A Microsoft Azure web role can be configured to expose an HTTPS endpoint for a website. This requires an X.509 public key certificate to be uploaded as a service certificate to the Cloud Service and the web role to be configured to use it.
The following steps are used to implement HTTPS for a web role:
Acquire a public key certificate for the custom domain of the web role
Upload the certificate to the Cloud Service
Add the certificate to the web role configuration
Configure the website endpoint to use the certificate
The use of HTTPS requires the website to be configured to use a public key certificate. It is not possible to acquire a public key certificate for the cloudapp.net domain, as Microsoft owns it. Consequently, a custom domain must be used when exposing an HTTPS endpoint. The Providing a custom domain name for a Cloud Service recipe shows how to map a custom domain to the cloudapp.net domain. For production use, a Certification Authority (CA) should issue the certificate to ensure that its root certificate is widely available. For test purposes, a self-signed certificate is sufficient.
The certificate must be uploaded to the Cloud Service using either the Microsoft Azure Portal or the Microsoft Azure Service Management REST API. Note that this upload is to the Certificates section for the Cloud Service and not to the Management Certificates section for the Microsoft Azure subscription. As a service certificate must contain both public and private keys, it is uploaded as a password-protected PFX file.
The configuration for the certificate is split between the service definition file, ServiceDefinition.csdef, and the service configuration file, ServiceConfiguration.cscfg. The logical definition and deployment location of the certificate are specified in the service definition file. The thumbprint of the actual certificate is specified in the service configuration file so that the certificate can be renewed or replaced without redeploying the Cloud Service. In both files, for each web role, there is a hierarchy comprising a Certificates child of the WebRole element, which, in turn, includes a set of one or more Certificate elements, each referring to a specific certificate.
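The split between the two files can be sketched as follows. The role, endpoint, and certificate names are illustrative, and the thumbprint is a placeholder; the element names follow the service model schema:

```xml
<!-- ServiceDefinition.csdef: logical definition and store location -->
<WebRole name="WebHelloWorld" vmsize="Small">
  <Endpoints>
    <InputEndpoint name="HttpsIn" protocol="https" port="443"
                   certificate="CertificateForSSL" />
  </Endpoints>
  <Certificates>
    <Certificate name="CertificateForSSL"
                 storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WebRole>

<!-- ServiceConfiguration.cscfg: the actual thumbprint,
     replaceable without redeploying the package -->
<Role name="WebHelloWorld">
  <Certificates>
    <Certificate name="CertificateForSSL"
                 thumbprint="D3E008E45ADCC328CE6BE2AB9AACE2D13F294838"
                 thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>
```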
In this recipe, we'll learn how to implement HTTPS in a web role.
We are going to see how to implement an HTTPS endpoint in a web role, only on port 443, using a test (self-signed) certificate.
The first stage is creating a test certificate and uploading it to the Cloud Service using the following steps:
Use the Server Certificates section of IIS 8 to create a self-signed certificate, and give it a friendly name of www.myservice.com.
Open the Microsoft Management Console by typing mmc in the Run window of the Start menu, and use the Certificates snap-in, specifying the local machine level.
In the Personal/Certificates branch, right-click on the certificate with the friendly name of www.myservice.com and navigate to All Tasks | Export to open the Certificate Export Wizard.
Complete the wizard by choosing to export the private key (and otherwise accepting the default values) and providing a password and a location for the PFX file.
On the Microsoft Azure Portal, select the Certificates section for the Cloud Service and click on Add certificate.
Upload the public key certificate by providing the location for the PFX file and its password.
The next stage is configuring a Cloud Service to use the certificate. We can do this by performing the following steps:
Use Visual Studio to create an empty cloud project.
Add a web role to the project (accept the default name of WebRole1).
Right-click on the WebRole1 item under the Roles folder of the cloud project; then go to the Properties page and click on the Certificates tab.
Click on Add Certificate, provide a name, and select a Store Location and a Store Name.
Click on the … icon to browse the store and look for the www.myservice.com certificate. Then confirm it.
Go to the Endpoints tab of the Properties page you are on.
Modify Endpoint1 (the default one) to listen on port 443.
.Choose https as the protocol.
Specify the certificate declared at step 4 in the SSL Certificate Name.
Build the application and deploy it into the Cloud Service.
Use a browser to access the web role using HTTPS.
Choose to ignore the certificate error caused by our use of a test certificate, and view the certificate.
From steps 1 to 6, we created and uploaded our test certificate. We need to export the certificate as a password-protected PFX file so that it contains both the public and private keys for the certificate.
In steps 7 and 8, we created a cloud project with a web role.
From steps 9 to 11, we specified the linkage between web role bindings and the certificate. In step 10, we specified the certificate store on each instance into which the Azure fabric deploys the certificate.
In step 13, we modified the default endpoint to listen as an HTTPS endpoint, using the certificate, on port 443. In step 15, we assigned the certificate to the endpoint.
In step 16, we built the application and deployed it into the Cloud Service. We verified that we could use HTTPS in steps 17 and 18. We are using a test certificate for which there is no root certificate in the browser. This consequently causes the browser to issue a warning. For demonstration purposes, we ignored the error and looked at the certificate properties to confirm that it was the test certificate.
We can use IIS to generate a Certificate Signing Request (CSR), which we can send to a CA. We do this by opening the Server Certificates section of IIS and clicking on Create Certificate Request. When generating the request, we specify the fully qualified domain name for the custom domain, for example, www.ourcustomdomain.com, in the Common Name field. After the CA issues the certificate, we click on Complete Certificate Request in the Server Certificates section of IIS to import the certificate into the personal certificate store of the local machine level.
From there, we can upload and deploy the CA-issued certificate by starting at step 2 of the recipe.
We can invoke the makecert command from the Visual Studio command prompt, as follows, to create a test certificate and install it in the personal branch of the local machine level of the certificate store:
C:\Users\Administrator>makecert -r -pe -sky exchange -a sha1 -len 2048 -sr localmachine -ss my -n "CN=www.ourservice.com"
The minimum key length required by Azure is 2048 bits, and this test certificate has a subject name of www.ourservice.com.
The Microsoft Azure Fabric Controller deploys an instance of a Microsoft Azure role onto a virtual machine (VM) as three Virtual Hard Disks (VHDs). The guest OS image is deployed to the D drive, and the role image is deployed to the E or F drive, while the C drive contains the service configuration and the local storage available to the instance. Only code running with elevated privileges can write anywhere other than the local storage.
Tip
As Azure could change the way it manages the underlying operating system of Cloud Services, the information provided about filesystem topology could change suddenly with no obligation from Microsoft to explain it.
Each instance has read-write access to a reserved space on the C drive. The amount of space available depends on the instance size and ranges from 20 GB for an Extra Small instance to 2,040 GB for an Extra Large instance. This storage space is reserved by being specified in the service definition file, ServiceDefinition.csdef, for the service. Note that RoleEnvironment.GetLocalResource() should be invoked to retrieve the actual path to the local storage.
The LocalStorage element for a role in the service definition file requires a name (name) and, optionally, the size in megabytes to be reserved (sizeInMB). It also accepts an indication of whether the local storage should be preserved when the role is recycled (cleanOnRoleRecycle). This indication is only advisory, as the local storage is not copied if an instance is moved to a new VM.
Multiple local storage resources can be specified for a role as long as the total space allocated is less than the maximum amount available.
Tip
In fact, the allocated size is just an upper bound: an exception is thrown only when a write operation actually exceeds the allowed maximum.
This allows different storage resources to be reserved for different purposes. Storage resources are identified by name.
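For example, the reservations described here might appear in ServiceDefinition.csdef as follows (the names and sizes are illustrative, except for WorkerStorage, which is used later in this recipe):

```xml
<WorkerRole name="WorkerRole1">
  <LocalResources>
    <!-- Two independent storage resources, identified by name -->
    <LocalStorage name="WorkerStorage" sizeInMB="10" cleanOnRoleRecycle="false" />
    <LocalStorage name="ScratchSpace" sizeInMB="100" cleanOnRoleRecycle="true" />
  </LocalResources>
</WorkerRole>
```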
The RoleEnvironment.GetLocalResource() method can be invoked to retrieve the root path for a local resource with a specific name. The role instance can invoke arbitrary file and directory-management methods under this path.
In this recipe, we'll learn how to configure and use local storage in an instance.
We are going to access the local storage on an instance and create a file on it. We will write to the file and then read the contents from the file. We will do this using the following steps:
Use Visual Studio to create an empty cloud project.
Add a worker role to the project (accept the default name of WorkerRole1).
Right-click on the WorkerRole1 item under the Roles folder of the cloud project. Then, go to the Properties page and click on the Local Storage tab.
Click on Add Local Storage, set Name to WorkerStorage and the size to 10 MB, and leave the Clean on role recycle box unchecked.
Add a new class named LocalStorageExample to the project.
Add the following using statements to the top of the class file:
using Microsoft.WindowsAzure.ServiceRuntime;
using System.IO;
Add the following private members to the class:
static String storageName = "WorkerStorage";
String fileName;
LocalResource localResource = RoleEnvironment.GetLocalResource(storageName);
Add the following constructor to the class:
public LocalStorageExample(String fileName)
{
    this.fileName = fileName;
}
Add the following method, which writes to the local storage, to the class:
public void WriteToLocalStorage()
{
    String path = Path.Combine(localResource.RootPath, fileName);
    FileStream writeFileStream = File.Create(path);
    using (StreamWriter streamWriter = new StreamWriter(writeFileStream))
    {
        streamWriter.Write("think but this and all is mended");
    }
}
Add the following method, which reads the file, to the class:
public void ReadFromLocalStorage()
{
    String fileContent = string.Empty;
    String path = Path.Combine(localResource.RootPath, fileName);
    FileStream readFileStream = File.Open(path, FileMode.Open);
    using (StreamReader streamReader = new StreamReader(readFileStream))
    {
        fileContent = streamReader.ReadToEnd();
    }
}
Add the following method, using the methods added earlier, to the class:
public static void UseLocalStorageExample()
{
    String fileName = "WorkerRoleStorage.txt";
    LocalStorageExample example = new LocalStorageExample(fileName);
    example.WriteToLocalStorage();
    example.ReadFromLocalStorage();
}
Add the following code at the start of the Run() method in WorkerRole.cs:
LocalStorageExample.UseLocalStorageExample();
In steps 1 and 2, we created a cloud project with a worker role.
In step 3, we used the GUI to locate the local storage section of the project properties. In step 4, we added the definition of the local storage to the service definition file for the Cloud Service. We provided a name by which it can be referenced and a size. We also specified that the content of local storage should be preserved through an instance recycle.
In steps 5 and 6, we set up the LocalStorageExample class. In step 7, we added some private members to store the filename and the local storage resource. We initialized the filename in the constructor that we added in step 8.
In step 9, we added a method that created a file and added some text to it. In step 10, we opened the file and read the text.
In step 11, we added a method that invoked the other methods in the class. In step 12, we invoked this method.
Have a look at the following MSDN link to get additional information:
Additional details about local storage behavior and process model at http://msdn.microsoft.com/en-us/library/azure/ee758708.aspx
Microsoft released Microsoft Azure as a production service in February 2010. A common complaint was that it was too expensive to develop small websites because a web role could support only a single website. The cause of this limitation was that a web role hosted a website using a hosted web core rather than the full IIS.
With the Microsoft Azure SDK v1.3 release, Microsoft Azure added support for full IIS for web roles. This means that a single web role can host multiple websites. However, all of these websites share the same virtual IP address, and a CNAME record must be used to map the domain name of the website to the servicename.cloudapp.net URL for the web role. Each website is then distinguished inside IIS by its distinct host header.
The Providing a custom domain name for a Cloud Service recipe shows how to use a CNAME record to map a custom domain to a Cloud Service domain. Note that full IIS is also available on worker roles.
Tip
The approach described in this recipe, while still valid, is probably not the best you can do with Azure. If you need to host multiple websites in a single role (and in a single unit of scale), Microsoft Azure Websites could be a better solution. We will talk about Websites, the newest and probably the most advanced PaaS offering on the market, in a dedicated chapter.
The Sites element in the ServiceDefinition.csdef service definition file is used to configure multiple websites. This element contains one child Site element for each website hosted by the web role. Each Site element has two attributes: name, which distinguishes the configuration, and physicalDirectory, which specifies the physical directory for the website. Note that multiple websites can reference the same physical directory. Each Site element has a Bindings child element that contains a set of Binding child elements, each of which identifies an endpoint used by the website and the host header used to distinguish the website. Each endpoint must correspond to an input endpoint specified in the Endpoints declaration for the web role. It is possible to define virtual applications and virtual directories for a website using the VirtualApplication and VirtualDirectory elements, respectively. This configuration is a subset of the standard IIS configuration.
The following example shows a fragment of a service definition file for a web role that hosts two websites:
<WebRole name="MultipleWebsites">
  <Sites>
    <Site name="WebsiteOne" physicalDirectory="..\Web">
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.websiteone.com" />
      </Bindings>
    </Site>
    <Site name="WebsiteTwo" physicalDirectory="..\Web">
      <VirtualApplication name="Payment" physicalDirectory="..\..\Payment">
        <VirtualDirectory name="Scripts" physicalDirectory="..\Web\Scripts" />
      </VirtualApplication>
      <Bindings>
        <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.websitetwo.com" />
        <Binding name="HttpsIn" endpointName="HttpsIn" hostHeader="www.websitetwo.com" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
    <InputEndpoint name="HttpsIn" protocol="https" port="443" />
  </Endpoints>
  <ConfigurationSettings />
</WebRole>
This configuration specifies that the web role hosts two websites: www.websiteone.com and www.websitetwo.com. They share the same physical directory, but www.websitetwo.com also uses a virtual application with its own virtual directory. Both websites are accessible using HTTP, but www.websitetwo.com also exposes an HTTPS endpoint.
In this recipe, we'll learn how to host multiple websites in a single Microsoft Azure web role.
We are going to see how to implement the two websites in a Cloud Service. We do this as follows:
Use Visual Studio to create an empty cloud project.
Add a web role to the project (accept the default name of WebRole1).
The changes from steps 3 to 8 affect the ServiceDefinition.csdef service definition file:
Set the name attribute of the Site element to WebsiteOne.
Add a physicalDirectory attribute, with the "..\..\..\WebRole1" value, to the Site element.
Add a hostHeader attribute, with the www.websiteone.com value, to the Binding element for the Site element.
Copy the entire Site element and paste it under itself.
Change the name attribute of the new Site element to WebsiteTwo.
Change the hostHeader attribute of the new Site element to www.websitetwo.com.
Add the following entries to the hosts file present in the %SystemRoot%\system32\drivers\etc folder:
127.0.0.1 www.websiteone.com
127.0.0.1 www.websitetwo.com
Build and run the Cloud Service.
Change the URL in the browser to www.websiteone.com, and refresh the browser.
Change the URL in the browser to www.websitetwo.com, and refresh the browser.
On completing the steps, the WebRole element in the ServiceDefinition.csdef file should be as follows:
<WebRole name="WebRole1">
  <Sites>
    <Site name="WebsiteOne" physicalDirectory="..\..\..\WebRole1">
      <Bindings>
        <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="www.websiteone.com" />
      </Bindings>
    </Site>
    <Site name="WebsiteTwo" physicalDirectory="..\..\..\WebRole1">
      <Bindings>
        <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="www.websitetwo.com" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  </Endpoints>
  <Imports>
    <Import moduleName="Diagnostics" />
  </Imports>
</WebRole>
In steps 1 and 2, we created a Cloud project with a web role.
In steps 3 and 4, we configured the Site element for the first website. In step 3, we provided a distinct name for the element, and in step 4, we specified the physical directory for the website.
In step 5, we configured the Binding element for the Site element by specifying the host header we use to distinguish the website.
In step 6, we created the Site element for the second website. In steps 7 and 8, we completed the configuration of the second website by providing a name for its configuration and specifying the host header we use to distinguish the website. Note that in this example, we used the same physical directory for both websites.
In step 9, we modified the hosts file so that we can use the configured host headers as URLs.
We built and ran the Cloud Service in step 10. We will encounter an error in the browser, as there is no default website at 127.0.0.1:81 (or whichever port the Microsoft Azure Compute Emulator has assigned to the Cloud Service). In steps 11 and 12, we confirmed this by replacing 127.0.0.1 in the browser URL with the URLs we configured as host headers for the two websites.
When we deploy this Cloud Service, we must use CNAME records to map the two domains to the ourservice.cloudapp.net URL of our Cloud Service. Just as we could not access the Cloud Service locally at 127.0.0.1, we cannot access it at ourservice.cloudapp.net; requests must arrive with one of the configured host headers. We will see how to use CNAME to do this mapping in the Providing a custom domain name for a Cloud Service recipe.
Have a look at the following MSDN links and blog posts to get additional information:
Further information about hosting multiple websites in a web role at http://msdn.microsoft.com/en-us/library/azure/ee758708.aspx
Blog post about custom build and CSPACK usage for advanced scenarios at http://michaelcollier.wordpress.com/2013/01/14/multiple-sites-in-a-web-role/
Microsoft Azure provides a locked-down environment for websites hosted in IIS (web roles) and application services (worker roles). While this hardening significantly eases administration, it also limits the ability to perform certain tasks such as installing software or writing to the registry. Another problem is that any changes to an instance are lost whenever the instance is reimaged or moved to a different server.
The service definition provides the solution to this problem by allowing the creation of startup tasks, which are script files or executable programs that are invoked each time an instance is started. Startup tasks allow a temporary escape from the restrictions of the locked-down web role and worker role while retaining the benefits of these roles.
A startup task must be robust against errors because a failure could cause the instance to recycle. In particular, the effect of a startup task must be idempotent. As a startup task is invoked each time an instance starts, it must not fail when performed repeatedly. For example, when a startup task is used to install software, any subsequent attempt to reinstall the software must be handled gracefully.
Startup tasks are specified with the Startup element in the ServiceDefinition.csdef service definition file. This is a child element of the WebRole or WorkerRole element. The child elements in the Startup element comprise a sequence of one or more individual Task elements, each specifying a single startup task. The following example shows the definition of a single startup task and includes all the attributes of a Task element:
<Startup>
  <Task commandLine="Startup.cmd"
        executionContext="elevated"
        taskType="simple" />
</Startup>
The commandLine attribute specifies a script or executable and its location relative to the %RoleRoot%\AppRoot\bin folder for the role. The executionContext attribute takes one of two values: limited, to indicate that the startup task runs with the same privileges as the role, or elevated, to indicate that the startup task runs with full administrator privileges. It is the capability provided by elevated startup tasks that gives them their power. There are three types of startup tasks, which are as follows:
Simple: This indicates that the system cannot invoke additional startup tasks until this one completes.
Background: This initiates the startup task in the background. This is useful in the case of a long-running task, the delay in which could cause the instance to appear unresponsive.
Foreground: This resembles a background startup task, except that the instance cannot be recycled until the startup task completes. This can cause problems if something goes wrong with the startup task.
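As a sketch, a role could combine these task types; here, a hypothetical long-running installer (the script names are illustrative, not from this recipe) runs in the background so that it does not delay instance startup:

```xml
<Startup>
  <!-- Quick, blocking configuration step -->
  <Task commandLine="ConfigureIis.cmd" executionContext="elevated" taskType="simple" />
  <!-- Hypothetical long-running installer that must not delay startup -->
  <Task commandLine="InstallAgent.cmd" executionContext="elevated" taskType="background" />
</Startup>
```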
Windows PowerShell 2 is installed on Microsoft Azure roles that run guest OS 2.x or higher. This provides a powerful scripting language that is ideal for scripting startup tasks.
Tip
The guest OS is the name given to the versioned set of Windows Server images on which Azure instances run. At the time of writing, there are four guest OS families:
Guest OS family 4, which is based on Windows Server 2012 R2 and supports the .NET Framework 4.0, 4.5, and 4.5.1.
Guest OS family 3, which is based on Windows Server 2012 and supports the .NET Framework 4.0 and 4.5.
Guest OS family 2, which is based on Windows Server 2008 R2 SP1 and supports the .NET Framework 3.5 and 4.0.
Guest OS family 1 (retired in 2014), which is based on Windows Server 2008 SP2 and supports the .NET Framework 3.5 and 4.0. It does not support Version 4.5 or later.
A PowerShell script named StartupTask.ps1 is invoked from the startup task command file as follows:
C:\Users\Administrator>PowerShell -ExecutionPolicy Unrestricted .\StartupTask.ps1
The ExecutionPolicy parameter specifies that StartupTask.ps1 can be invoked even though it is unsigned.
In startup tasks, we can use AppCmd to manage IIS. We can also use the WebPICmdLine command-line tool, WebPICmdLine.exe, to access the functionality of the Microsoft Web Platform Installer. This allows us to install Microsoft Web Platform components, including, for example, PHP.
We are going to use a startup task that uses AppCmd to modify the default idle timeout for IIS application pools. We will do this using the following steps:
Use Visual Studio to create an empty cloud project.
Add a web role to the project (accept the default name of WebRole1).
Add a text file named StartupTask.cmd to the root directory of the web role project.
Set its Copy To Output Directory property to Copy always.
Insert the following text in the ASCII-encoded file:
%SystemRoot%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.idleTimeout:0.01:00:00
exit /b 0
Add the following, as a child of the WebRole element, to ServiceDefinition.csdef:
<Startup>
  <Task commandLine="StartupTask.cmd"
        executionContext="elevated"
        taskType="simple" />
</Startup>
Build and deploy the application into the Cloud Service.
Open IIS Manager, select Application Pools, right-click on any application pool, and select Advanced Settings…. Verify that the Idle Timeout (minutes) setting is 60 minutes for the application pool.
In steps 1 and 2, we created a cloud project with a web role. In steps 3 and 4, we added the command file for the startup task to the project and ensured that the build copied the file to the appropriate location in the Microsoft Azure package. In step 5, we added a command to the file that set the idle timeout to 1 hour for IIS application pools. The exit command ended the batch file with a return code of 0.
In step 6, we added the startup task to the service definition file. We set the execution context of the startup task to elevated so that it had the privilege required to modify IIS settings.
In step 7, we built and deployed the application into a Cloud Service. We verified that the startup task worked in step 8.
Note that the web or worker role itself can run with elevated privileges. In a web role, full IIS runs in its own process that continues to have limited privileges; only the role-entry code (in WebRole.cs) runs with elevated privileges. This privilege elevation is achieved by adding the following as a child element of the WebRole or WorkerRole element in the ServiceDefinition.csdef service definition file:
<Runtime executionContext="elevated"/>
The default value for executionContext is limited.
Having done this, we can set the application pool idle timeout in code by invoking the following from the OnStart() method for the web role:
private void SetIdleTimeout(TimeSpan timeout)
{
    using (ServerManager serverManager = new ServerManager())
    {
        serverManager.ApplicationPoolDefaults.ProcessModel.IdleTimeout = timeout;
        serverManager.CommitChanges();
    }
}
The ServerManager class is in the Microsoft.Web.Administration namespace, which is contained in the following assembly: %SystemRoot%\System32\inetsrv\Microsoft.Web.Administration.dll.
When developing startup tasks, it can be useful to log the output of commands to a known location for further analysis. When using the development environment, another trick is to set the startup task script to the following code:
start /wait cmd
This produces a command window in which we can invoke the desired startup command and see any errors or log them with the DOS redirect (>>). The /wait switch blocks the caller until the command window is closed.
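A debugging variant of a startup script, combining redirected logging with the interactive window, might look like the following sketch (the log path and the appcmd command are just assumptions for illustration; remove the interactive line before publishing):

```
REM Hypothetical debugging version of StartupTask.cmd
%SystemRoot%\system32\inetsrv\appcmd list apppools >> "%TEMP%\StartupTask.log" 2>&1
REM Opens an interactive prompt; for local debugging only
start /wait cmd
exit /b 0
```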
Have a look at the following MSDN link to get additional information:
Best practices for startup tasks (http://msdn.microsoft.com/en-us/library/jj129545.aspx)
A Microsoft Azure Cloud Service has to detect and respond to changes to its service configuration. Two types of changes are exposed to the service: changes to the ConfigurationSettings element of the ServiceConfiguration.cscfg service configuration file and changes to the service topology. The latter refers to changes in the number of instances of the various roles that comprise the service.
The RoleEnvironment class exposes six events to which a role can register a callback method to be notified about these changes:
Changing
Changed
SimultaneousChanging
SimultaneousChanged
Stopping
StatusCheck
The Changing event is raised before the change is applied to the role. For configuration setting changes, the RoleEnvironmentChangingEventArgs parameter to the callback method identifies the existing value of any configuration setting being changed. For a service topology change, the argument specifies the names of any roles whose instance count is changing. The RoleEnvironmentChangingEventArgs parameter has a Cancel property that can be set to true to recycle an instance in response to specific configuration setting or topology changes.
The Changed event is raised after the change is applied to the role. As with the previous event, for configuration setting changes, the RoleEnvironmentChangedEventArgs parameter to the callback method identifies the new value of any changed configuration setting. For a service topology change, the argument specifies the names of any roles whose instance count has changed. Note that the Changed event is not raised on any instance recycled in the Changing event.
The SimultaneousChanging and SimultaneousChanged events behave exactly like the normal Changing and Changed events, but they are raised only during a simultaneous update.
Tip
These events fire only if we set the topologyChangeDiscovery attribute to Blast in the service definition file, for example, <ServiceDefinition name="WAHelloWorld" topologyChangeDiscovery="Blast">, as mentioned in the Configuring the service model for a Cloud Service recipe. These events cannot be canceled, and the role will not restart when they are received. This prevents all the roles from recycling at the same time.
We will talk about this kind of update in the Publishing a Cloud Service with options from Visual Studio recipe.
The Stopping event is raised on an instance being stopped. The OnStop() method is also invoked. Either of them can be used to implement an orderly shutdown of the instance. However, this must be completed within 5 minutes. In a web role, the Application_End() method is invoked before the Stopping event is raised and the OnStop() method is invoked. It can also be used for shutdown code.
Tip
Microsoft Azure takes the instance out of the load-balancer rotation and then fires the Stopping event. This ensures that no shutdown code executes while legitimate requests are still arriving from the Internet.
The StatusCheck event is raised every 15 seconds. The RoleInstanceStatusCheckEventArgs parameter to the callback method for this event specifies the status of the instance as either Ready or Busy. The callback method can respond to the StatusCheck event by invoking the SetBusy() method on the parameter to indicate that the instance should be taken out of the load-balancer rotation temporarily. This is useful if the instance is so busy that it is unable to process additional inbound requests.
In this recipe, we'll learn how to manage service configuration and topology changes to a Cloud Service.
We are going to configure callback methods for four of the six RoleEnvironment events. We will do this by performing the following steps:
Use Visual Studio to create an empty cloud project.
Add a worker role to the project (accept the default name of WorkerRole1).
Add the following to the ConfigurationSettings element of ServiceDefinition.csdef:
<Setting name="EnvironmentChangeString"/>
<Setting name="SettingRequiringRecycle"/>
Add the following to the ConfigurationSettings element of ServiceConfiguration.cscfg:
<Setting name="EnvironmentChangeString" value="OriginalValue"/>
<Setting name="SettingRequiringRecycle" value="OriginalValue"/>
Add a new class named EnvironmentChangeExample to the project.
Add the following using statements to the top of the class file:
using Microsoft.WindowsAzure.ServiceRuntime;
using System.Collections.ObjectModel;
using System.Diagnostics;
Add the following callback method to the class:
private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    Boolean recycle = false;
    foreach (RoleEnvironmentChange change in e.Changes)
    {
        RoleEnvironmentTopologyChange topologyChange = change as RoleEnvironmentTopologyChange;
        if (topologyChange != null)
        {
            String roleName = topologyChange.RoleName;
            ReadOnlyCollection<RoleInstance> oldInstances = RoleEnvironment.Roles[roleName].Instances;
        }
        RoleEnvironmentConfigurationSettingChange settingChange = change as RoleEnvironmentConfigurationSettingChange;
        if (settingChange != null)
        {
            String settingName = settingChange.ConfigurationSettingName;
            String oldValue = RoleEnvironment.GetConfigurationSettingValue(settingName);
            recycle |= settingName == "SettingRequiringRecycle";
        }
    }
    // Recycle the instance when e.Cancel = true
    e.Cancel = recycle;
}
Add the following callback method to the class:
private static void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
    foreach (RoleEnvironmentChange change in e.Changes)
    {
        RoleEnvironmentTopologyChange topologyChange = change as RoleEnvironmentTopologyChange;
        if (topologyChange != null)
        {
            String roleName = topologyChange.RoleName;
            ReadOnlyCollection<RoleInstance> newInstances = RoleEnvironment.Roles[roleName].Instances;
        }
        RoleEnvironmentConfigurationSettingChange settingChange = change as RoleEnvironmentConfigurationSettingChange;
        if (settingChange != null)
        {
            String settingName = settingChange.ConfigurationSettingName;
            String newValue = RoleEnvironment.GetConfigurationSettingValue(settingName);
        }
    }
}
Add the following callback method to the class:
private static void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
    RoleInstanceStatus status = e.Status;
    // Uncomment the next line to take the instance out of the
    // load-balancer rotation.
    //e.SetBusy();
}
Add the following callback method to the class:
private static void RoleEnvironmentStopping(object sender, RoleEnvironmentStoppingEventArgs e)
{
    Trace.TraceInformation("In RoleEnvironmentStopping");
}
Add the following method, associating the callback methods with the RoleEnvironment events, to the class:
public static void UseEnvironmentChangeExample()
{
    RoleEnvironment.Changing += RoleEnvironmentChanging;
    RoleEnvironment.Changed += RoleEnvironmentChanged;
    RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;
    RoleEnvironment.Stopping += RoleEnvironmentStopping;
}
If the application is deployed to the local Compute Emulator, the ServiceConfiguration.cscfg file can be modified. It can then be applied to the running service using the following command in the Microsoft Azure SDK command prompt:
csrun /update:{DEPLOYMENT_ID};ServiceConfiguration.cscfg
If the application is deployed to the cloud, the service configuration can be modified directly on the Microsoft Azure Portal.
In steps 1 and 2, we created a cloud project with a worker role. In steps 3 and 4, we added two configuration settings to the service definition file and provided initial values for them in the service configuration file.
In steps 5 and 6, we created a class to house our callback methods.
In step 7, we added a callback method for the RoleEnvironment.Changing event. This method iterates over the list of changes, looking for any topology or configuration settings changes. In the latter case, we specifically look for changes to the SettingRequiringRecycle setting, and on detecting one, we initiate a recycle of the instance.
In step 8, we added a callback method for the RoleEnvironment.Changed event. We iterate over the list of changes and look at any topology changes and configuration settings changes.
Tip
In both the previous steps, we retrieve oldValue and newValue, respectively, without using them. They show how a setting's value can be obtained before and after a change is applied, should an application need it. However, these events are primarily intended to notify the role that particular settings have changed, regardless of their actual values before or after the change.
In step 9, we added a callback method for the RoleEnvironment.StatusCheck event. We look at the current status of the instance and leave the SetBusy() call commented out; uncommenting it would take the instance out of the load-balancer rotation.
In step 10, we added a callback method for the RoleEnvironment.Stopping event. In this callback, we used Trace.TraceInformation() to log the invocation of the method.
In step 11, we added a method that associated the callback methods with the appropriate event.
In step 12, we saw how to modify the service configuration in the development environment. We must replace {DEPLOYMENT_ID} with the deployment ID of the current deployment. The deployment ID in the Compute Emulator is a number that is incremented with each deployment. It is displayed on the Compute Emulator UI. In step 13, we saw how to modify the service configuration in a cloud deployment.
The RoleEntryPoint class also exposes the following virtual methods that allow various changes to be handled:
RoleEntryPoint.OnStart()
RoleEntryPoint.OnStop()
RoleEntryPoint.Run()
These virtual methods are invoked when an instance is started, stopped, or reaches the Ready state. An instance of a worker role is recycled whenever the Run() method exits.
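A minimal worker-role skeleton showing where these methods fit is sketched below (the Run() loop body is illustrative, not taken from this recipe):

```csharp
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Invoked when the instance is started; return true to continue startup.
        return base.OnStart();
    }

    public override void Run()
    {
        // Invoked when the instance reaches the Ready state.
        // If this method ever returns, the instance is recycled.
        while (true)
        {
            Thread.Sleep(10000);
        }
    }

    public override void OnStop()
    {
        // Invoked when the instance is being stopped;
        // shutdown work here must complete within 5 minutes.
        base.OnStop();
    }
}
```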
The csrun command in the Microsoft Azure SDK can be used to test configuration changes in the development fabric. The service configuration file can be modified, and csrun can be invoked to apply the change. Note that it is not possible to test topology changes that reduce the number of instances. However, when the Cloud Service is started without debugging, it is possible to increase the number of instances by modifying the service configuration file and using csrun.
As both RoleEnvironmentChanging and RoleEnvironmentChanged use the RoleEnvironment APIs to check collections, we can also simplify the code in steps 7 and 8 with LINQ-based implementations, as follows:
private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    var oldInstances = e.Changes
        .OfType<RoleEnvironmentTopologyChange>()
        .SelectMany(p => RoleEnvironment.Roles[p.RoleName].Instances);
    var oldValues = e.Changes
        .OfType<RoleEnvironmentConfigurationSettingChange>()
        .ToDictionary(
            p => p.ConfigurationSettingName,
            p => RoleEnvironment.GetConfigurationSettingValue(p.ConfigurationSettingName));
    e.Cancel = oldValues.Any(p => p.Key == "SettingRequiringRecycle");
}
In the code mentioned earlier, we gather the changing instances and the old settings' key-value pairs. In the last line, we cancel the change (recycling the instance) if the SettingRequiringRecycle setting is among the changed settings.
Step 8 can be modified in the same way, but by querying the RoleEnvironment APIs for the new instances and settings' values instead of the old ones.
Have a look at the following MSDN blog post to get additional information:
Architecture of the Microsoft Azure role model at http://blogs.msdn.com/b/kwill/archive/2011/05/05/windows-azure-role-architecture.aspx
Microsoft Azure instances and the guest OS they reside in have to be upgraded occasionally. The Cloud Service might need a new software deployment or a configuration change. The guest OS might need a patch or an upgrade to a new version. To ensure that a Cloud Service can remain online 24/7 (an SLA of 99.95%), Microsoft Azure provides an upgrade capability that allows upgrades to be performed without stopping the Cloud Service completely as long as each role in the service has two or more instances.
Microsoft Azure supports two types of upgrade: in-place upgrade and Virtual IP (VIP) swap. An in-place upgrade applies changes to the configuration and code of the existing virtual machines (VMs) that host instances of the Cloud Service. A VIP swap modifies the load-balancer configuration so that the VIP address of the production deployment is pointed at the instances that are currently in the staging slot, and the VIP address of the staging deployment is pointed at the instances currently in the production slot.
There are two types of in-place upgrades: configuration change and deployment upgrade. A configuration change can be applied on the Microsoft Azure Portal by editing the existing configuration directly on the portal. A configuration change or a deployment upgrade can be performed on the Microsoft Azure Portal by uploading a replacement service configuration file, ServiceConfiguration.cscfg, or by directly modifying the settings in the Configure tab of the Cloud Service web page. They can also be performed by invoking the appropriate operations in the Microsoft Azure Service Management REST API. By repeating the Publishing a Cloud Service with options from Visual Studio recipe, a deployment upgrade can be initiated directly from Visual Studio. Note that it is possible to do an in-place upgrade of an individual role in an application package.
A configuration change supports only modifications to the service configuration file, which includes changing the guest OS; changing the value of configuration settings such as connection strings, and changing the actual X.509 certificates used by the Cloud Service. Note that a configuration change cannot be used to change the names of configuration settings as they are specified in the service definition file.
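To make the scope of a configuration change concrete, the following is a sketch of a ServiceConfiguration.cscfg file; the service name, role name, and setting names here are hypothetical. A configuration change can alter the guest OS (the osFamily and osVersion attributes), the instance count, setting values, and certificate thumbprints, but not the set of setting names themselves:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical names; osFamily and osVersion select the guest OS -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="4" osVersion="*">
  <Role name="Worker">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- The values (not the names) of settings can be changed -->
      <Setting name="StorageConnectionString" value="DefaultEndpointsProtocol=https;..." />
    </ConfigurationSettings>
    <Certificates>
      <!-- Changing the thumbprint swaps the certificate used by the service -->
      <Certificate name="SslCert" thumbprint="0000000000000000000000000000000000000000"
          thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>
```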
A deployment upgrade supports changes to the application package as well as all the changes allowed in a configuration change. Additionally, a deployment upgrade supports some modifications to the ServiceDefinition.csdef service definition file. These modifications include changing the following:
The role type
The local resource definitions
The available configuration settings
The certificates defined for the Cloud Service
A Cloud Service has an associated set of upgrade domains that control the phasing of upgrades during an in-place upgrade. The instances of a role are distributed evenly among upgrade domains. During an in-place upgrade, all the instances in a single upgrade domain are stopped, reconfigured, and then restarted. This process continues with one upgrade domain at a time until all the upgrade domains have been upgraded. This phasing ensures that the Cloud Service remains available during an in-place upgrade, albeit with roles being served by fewer instances than usual. By default, there are five upgrade domains for a Cloud Service, although this number can be increased or decreased in the service definition file.
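The following sketch shows where the items listed above live in ServiceDefinition.csdef; all names are hypothetical. The upgradeDomainCount attribute is the one that overrides the default of five upgrade domains:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical names; upgradeDomainCount overrides the default of five -->
<ServiceDefinition name="MyCloudService" upgradeDomainCount="10"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="Worker" vmsize="Small">
    <LocalResources>
      <LocalStorage name="ScratchSpace" sizeInMB="128" cleanOnRoleRecycle="true" />
    </LocalResources>
    <ConfigurationSettings>
      <!-- A deployment upgrade can add or remove available settings -->
      <Setting name="StorageConnectionString" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
  </WorkerRole>
</ServiceDefinition>
```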
The only distinction between the production and staging slots of a Cloud Service is that the load balancer forwards any network traffic that arrives at the service VIP address to the production slot and any network traffic that arrives at the staging VIP address to the staging slot. In a VIP swap, the production and staging slots to which the load balancer forwards network traffic are swapped. This has no effect on the actual VMs running the service; it is entirely a matter of where inbound network traffic is forwarded to. A VIP swap affects the entire service simultaneously and does not use upgrade domains. Nevertheless, as a Cloud Service is a distributed system, there might be a small overlap during a VIP swap, where inbound traffic is forwarded to some instances that run the old version of the service and some instances that run the new version. The only way to guarantee that old and new versions are never simultaneously in production is to stop the Cloud Service while performing the upgrade.
Note that Microsoft occasionally has to upgrade the root OS of a server that hosts an instance. This type of upgrade is always automatic, and Microsoft provides no ability for it to be performed manually.
In this recipe, we'll learn how to upgrade a deployment to a Cloud Service.
We need to deploy an application to the production and staging slots of a Cloud Service. We could use, for example, the Cloud Service we created in the Using startup tasks in a Cloud Service role recipe.
We are going to use the Microsoft Azure Portal to perform an in-place upgrade, a VIP swap, and a manual guest OS upgrade. In this stage, we will perform an in-place upgrade of the production deployment using the following steps:
On the Microsoft Azure Portal, go to the Dashboard tab of the Cloud Service in the Cloud Services section and then choose the Production slot.
Click on Update and provide a deployment label, a package location, and a configuration location.
Choose the roles to update (this is optional) or click on All to update the entire service.
If applicable, check the Allow the update if role sizes change or if the number of roles change and/or the Update the deployment even if one or more roles contain a single instance boxes, and then confirm.
Repeat steps 1 to 4 in the staging slot.
Click on the Swap button to perform the VIP swap.
In the Configure tab of either the production or staging slot of the Cloud Service, locate the Operating system section.
Select the desired OS Family and OS Version, and then click on Save.
We can perform in-place upgrades of the production and staging slots independently of each other. In step 1, we indicated that we wanted to perform an in-place upgrade of the production slot. In step 2, we specified the details of the upgrade, such as the label and a location for the upgraded application package and service configuration file.
In step 3, we chose which roles have to be upgraded, as it is common to have just a few parts of the entire solution modified by an upgrade. In step 4, we told Azure to continue the upgrade even under those circumstances. Those checkboxes are intended as a double check to avoid unwanted downtime.
We can perform a VIP swap only if there is a Cloud Service deployed to the staging slot, and we ensured this in step 5. We initiated the VIP swap in step 6.
We can perform guest OS upgrades of the production and staging slots independently of each other. In step 7, we located the desired slot to upgrade the guest OS in. We initiated the guest OS upgrade in step 8.
In Chapter 7, Managing Azure Resources with the Azure Management Libraries, we will see how to use the Microsoft Azure Management libraries to manage deployments, including performing upgrades.
Have a look at the following MSDN links to get additional information:
Advanced scenarios about updating an Azure Cloud Service at http://msdn.microsoft.com/en-us/library/azure/hh472157.aspx
Swapping deployments using Management REST APIs at http://msdn.microsoft.com/en-us/library/azure/ee460814.aspx
An Azure Cloud Service might comprise multiple instances of multiple roles. These instances all run in a remote Azure data center, typically 24/7. The ability to monitor these instances nonintrusively is essential both in detecting failure and in capacity planning.
Diagnostic data can be used to identify problems with a Cloud Service. The ability to view the data from several sources and across different instances eases the task of identifying a problem. Azure Diagnostics is configured at the role level, but the diagnostics configuration is applied at the instance level. For each instance, a configuration file is stored in an XML blob in a container named wad-control-container, located in the storage service account configured for Azure Diagnostics.
Tip
From both security and performance perspectives, a best practice is to host application data and diagnostic data in separate storage service accounts; there is, in fact, no need for them to be located in the same storage service account.
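As a sketch, separating the two accounts is just a matter of using different connection strings in the service configuration; the ApplicationStorage setting name and account names are hypothetical, while the Diagnostics plugin setting name is the one the tooling generates:

```xml
<ConfigurationSettings>
  <!-- Application data goes to one account... -->
  <Setting name="ApplicationStorage"
      value="DefaultEndpointsProtocol=https;AccountName=appdata;AccountKey=..." />
  <!-- ...while Azure Diagnostics writes to another -->
  <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
      value="DefaultEndpointsProtocol=https;AccountName=diagdata;AccountKey=..." />
</ConfigurationSettings>
```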
Azure Diagnostics supports the following diagnostic data:
Application logs: This captures information written to a trace listener
Event logs: This captures the events from any configured Windows Event Log
Performance counters: This captures the data of any configured performance counters
Infrastructure logs: This captures diagnostic data produced by the Diagnostics process itself
Azure Diagnostics also supports file-based data sources. It copies new files of a specified directory to blobs in a specified container in the Azure Blob Service. The data captured by IIS Logs, IIS Failed Request Logs, and Crash Dumps is self-evident. With the custom directories data source, Azure Diagnostics supports the association of any directory on the instance. This allows for the coherent integration of third-party logs.
The Diagnostics Agent service is included as Active by default for each new Visual Studio Azure Cloud Service project.
Tip
The Diagnostics Agent collects and transfers a user-defined set of logs. The process adds little overhead to normal operations, but the more data that is collected, the greater the impact on the running instances.
The agent is then started automatically when a role instance starts, provided the Diagnostics module has been imported into the role. This requires the placement of a file named diagnostics.wadcfg in a specific location in the role package. When an instance is started for the first time, the Diagnostics Agent reads the file and uses it to initialize the diagnostic configuration for the instance in wad-control-container. Initial configuration typically occurs in Visual Studio at design time, while subsequent changes can be made from Visual Studio, through the Service Management API, or manually.
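A minimal diagnostics.wadcfg might look like the following sketch; the quotas, transfer periods, and performance counter are illustrative values, not requirements:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative values; PT1M means a one-minute transfer period -->
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M" overallQuotaInMB="4096">
  <Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M"
      scheduledTransferLogLevelFilter="Verbose" />
  <WindowsEventLog bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M"
      scheduledTransferLogLevelFilter="Verbose">
    <DataSource name="Application!*" />
    <DataSource name="System!*" />
  </WindowsEventLog>
  <PerformanceCounters bufferQuotaInMB="1" scheduledTransferPeriod="PT1M">
    <PerformanceCounterConfiguration
        counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT30S" />
  </PerformanceCounters>
</DiagnosticMonitorConfiguration>
```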
Tip
In the past, Diagnostics initialization was often performed in user code. This is no longer recommended, due to the high volume of hardcoded directives it requires. If needed, the class responsible for this is the DiagnosticMonitorConfiguration class.
Azure Diagnostics supports the use of Trace to log messages. Methods of the System.Diagnostics.Trace class can be used to write error, warning, and informational messages. (The Compute Emulator in the development environment adds an additional trace listener so that trace messages can be displayed in the Compute Emulator UI.)
Azure Diagnostics captures diagnostic information for an instance, keeps it in a local buffer, and periodically persists this data to the Azure Storage service. The Azure Diagnostics tables can be queried just like any other table in the Table service. The Diagnostics Agent persists the data mentioned earlier according to the following table mapping:
Application logs: WADLogsTable
Event logs: WADWindowsEventLogsTable
Performance counters: WADPerformanceCountersTable
Infrastructure logs: WADDiagnosticInfrastructureLogsTable
As the only index on a table is on PartitionKey and RowKey, it is important that PartitionKey, rather than Timestamp or EventTickCount, be used for time-dependent queries.
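For example, a time-range filter on PartitionKey can be built by converting a UTC time to the partition key format the Diagnostics Agent uses (a 0 prepended to the tick count with the seconds discarded, as described later in this recipe). The following helper is a hypothetical sketch, not part of the SDK:

```csharp
// Hypothetical helper: convert a UTC time to the WAD PartitionKey format.
static string ToWadPartitionKey(DateTime utc)
{
    // Discard the seconds so that the tick count ends in (at least) eight zeros
    DateTime minute = new DateTime(
        utc.Year, utc.Month, utc.Day, utc.Hour, utc.Minute, 0, DateTimeKind.Utc);
    // Prepend "0" to the tick count
    return "0" + minute.Ticks;
}

// Usage sketch with the storage client library:
// string from = ToWadPartitionKey(DateTime.UtcNow.AddMinutes(-10));
// string filter = TableQuery.GenerateFilterCondition(
//     "PartitionKey", QueryComparisons.GreaterThanOrEqual, from);
```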
In this recipe, we will see how to configure and use the Diagnostics features in the role environment, collecting every available log data source as well as tracing information.
This recipe assumes that we have an empty Cloud Service and an empty Storage account. To create the first one, go to the Azure Portal and follow the wizards without deploying anything in it. To create the second one, follow the instructions of the Managing the Azure Storage Service recipe in Chapter 3, Getting Storage with Blobs in Azure.
We are going to create a simple worker role-triggering diagnostics collection using the following steps:
In Visual Studio, create a new Azure Cloud Service with a worker role named Worker.
Right-click on the Worker item in the Roles folder of the created project and select Properties.
In the Configuration tab, perform the following actions:
Verify that the Enable Diagnostics checkbox is checked
Select Custom plan
In the Specify the storage account credentials for the Diagnostics results field, enter the connection string of the Diagnostics storage account (by clicking on the … (more) button, a wizard can help build this string)
To customize the data collected by the Diagnostic service, click on the Edit button of the Custom plan option selected earlier.
In the Diagnostics configuration window, select the logging mix, for example:
Application logs: Verbose level, 1-minute transfer, 1024 MB buffer size
Event logs: Application + System + Security with verbose level, 1-minute transfer, 1024 MB buffer size
Performance counters: 1-minute transfer, 1 MB buffer size, and "% processor time" metric
Infrastructure logs: Verbose level, 1-minute transfer, no buffer size
In the WorkerRole.cs file, in the Run() method, write this code:
while (true)
{
    Thread.Sleep(1000);
    DateTime now = DateTime.Now;
    Trace.TraceInformation("Information: " + now);
    Trace.TraceError("Error: " + now);
    Trace.TraceWarning("Warning: " + now);
}
Right-click on the Cloud Service project and Publish it using the following steps:
Select the proper subscription.
In Common Settings, select the previously created empty Cloud Service.
In Advanced Settings, select the previously created Storage Account.
Confirm, click on Publish, and wait for a few minutes.
In the Server Explorer window of Visual Studio, expand the Azure node and locate the proper storage account of the storage subnode.
Expand the tables node, and for each table found, right-click on it and select View Table.
From steps 1 to 3, we prepared the wrapper project to hold the worker role and configured it. By enabling the Diagnostics feature, an Import directive was placed into the service definition file. By selecting Custom plan, we told Azure to collect a user-defined set of data into the storage account we finally specified.
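The resulting directive in ServiceDefinition.csdef looks like the following sketch (role name and size as used in this recipe):

```xml
<WorkerRole name="Worker" vmsize="Small">
  <Imports>
    <!-- Added when the Enable Diagnostics checkbox is checked -->
    <Import moduleName="Diagnostics" />
  </Imports>
</WorkerRole>
```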
In steps 4 and 5, we customized the collected data, telling the platform what to log and when to transfer to storage.
Tip
Azure instances are stateless, meaning that an instance can be taken down and replaced by a new one seamlessly. Storing logs on the VM therefore leads to some design issues. What happens if the instance is recycled? How do you read service-wide logs centrally? This is why a transfer phase is involved in log capture.
After saving in step 6, we added some tracing code in step 7 and published the Cloud Service as shown in step 8.
Tip
A more sophisticated way to trace messages is to use trace sources and trace switches to control the capture of messages. Typically, this control can be configured through the app.config file for an application.
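As a sketch, such control might look like the following app.config fragment; the switch name and level are hypothetical, while the listener type is the one the Azure SDK provides:

```xml
<configuration>
  <system.diagnostics>
    <switches>
      <!-- 0 = Off, 1 = Error, 2 = Warning, 3 = Info, 4 = Verbose -->
      <add name="WorkerTraceSwitch" value="2" />
    </switches>
    <trace>
      <listeners>
        <!-- Routes Trace output to the Diagnostics Agent -->
        <add name="AzureDiagnostics"
            type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>
```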
In steps 9 and 10, we used the built-in features of the Azure SDK integration for Visual Studio to browse through the storage account elected for the Diagnostics collection.
While each table contains some properties specific to the data being logged, all of them contain the following properties:
EventTickCount
DeploymentId
Role
RoleInstance
The EventTickCount property is an Int64 that represents the time at which the event was generated, to an accuracy of 100 nanoseconds. The DeploymentId property identifies the specific deployment, while the Role and RoleInstance properties specify the role instance that generated the event.
The WADPerformanceCountersTable table, for example, contains the following additional properties:
CounterName
CounterValue
Tip
When browsing through the collected data, note that the tables are partitioned by minute. Specifically, when a record is inserted in a table, PartitionKey is set to the tick count of the current UTC time with the seconds discarded, and the entire value is prepended by a 0. Discarding the seconds has the effect of setting the last eight characters of PartitionKey to 0. The RowKey property combines the deployment ID, the role name, and the instance ID, along with a key to ensure uniqueness. Timestamp represents the time the event was inserted in the table.
Once the deployment has been made, the Diagnostics configuration can be edited easily from Visual Studio as follows:
In the Cloud Services node under the Azure main node in the Server Explorer window, select the previously created Cloud Service.
Expand the node and select the desired role in the desired slot (staging or production).
By selecting the Update Diagnostics Settings option, we can change the Diagnostics configuration at runtime.
As mentioned earlier, we can transfer entire directories into the selected storage account to, for instance, integrate a third-party tool that logs directly to the filesystem. To do this, we can open the diagnostics.wadcfg file and add this code in the <Directories> tag:
<DataSources>
  <DirectoryConfiguration container="wad-mylog" directoryQuotaInMB="128">
    <Absolute expandEnvironment="true" path="%SystemRoot%\myTool\logs" />
  </DirectoryConfiguration>
</DataSources>
Azure has an integrated alerting system to notify users of particular events. Although it is not limited to Cloud Services, the following are the steps to enable it for the previously created one:
In the Azure Portal, go to the Management Services section and click on the Alert tab
Add a new rule, specifying the following:
The name of the rule
Service type: Cloud Service
Service name: the one previously created
Cloud Service deployment: Production
Cloud Service role: Worker
In the second step, choose CPU Percentage as the metric to monitor
Set the greater than condition to 70% with the remaining default values
This alert will notify the user who created it and, optionally, the service administrator and co-administrators.
Have a look at the following MSDN links to get additional information:
Best practice about Diagnostics and Cloud Service development at http://msdn.microsoft.com/en-us/library/azure/hh771389.aspx
More about Diagnostics collection, blobs, and Q&A at http://msdn.microsoft.com/library/azure/dn186185.aspx