Windows Azure Service Bus: Key Features

by Riccardo Becker | December 2012 | Enterprise Articles

Windows Azure Service Bus offers features that are not offered by any other cloud platform on the market. One important feature is the Service Bus itself, which enables you to connect your on-premise services to Windows Azure services and beyond. The Access Control Service enables you to easily authenticate users without having to write complex authentication code yourself. By using Windows Identity Foundation (WIF) and supported identity providers such as Live ID, Yahoo, and Facebook, it is easy to use these identity providers as the main authentication mechanism in your own services.

In this article by Riccardo Becker, author of Windows Azure programming patterns for Start-ups, we provide a systematic guide on how to integrate with Facebook. AppFabric also contains AppFabric Applications, an easy way to develop and deploy composite applications. Another interesting feature AppFabric offers is caching.

Service Bus

The Windows Azure Service Bus provides a hosted, secure, and widely available infrastructure for widespread communication, large-scale event distribution, naming, and service publishing. Service Bus provides connectivity options for Windows Communication Foundation (WCF) and other service endpoints, including REST endpoints, that would otherwise be difficult or impossible to reach. Endpoints can be located behind Network Address Translation (NAT) boundaries, or bound to frequently changing, dynamically assigned IP addresses, or both.

Getting started

To get started with the features of Service Bus, you need to make sure you have the Windows Azure SDK installed.

Queues

Service Bus queues (different from Windows Azure storage queues) offer a FIFO message delivery capability. This can be a solution for applications that expect messages in a certain order. Just like ordinary Windows Azure queues, Service Bus queues enable the decoupling of your application components, so the application can keep functioning even if some parts of it are offline. Some differences between the two types of queues are, for example, that Service Bus queues can hold larger messages and can be used in conjunction with the Access Control Service.

Working with queues

To create a queue, go to the Windows Azure portal and select the Service Bus, Access Control & Caching tab. Next, select Service Bus, select your namespace, and click on New Queue. If you did not set up a namespace earlier, you need to create one before you can create a queue. The following screen will appear:

There are some properties that can be configured during the setup process of a queue. Obviously, the name uniquely identifies the queue in the namespace. Default Message Time To Live specifies the default TTL applied to messages; it can also be set in code and is a TimeSpan value.

Duplicate Detection History Time Window specifies how long the (unique) message IDs of received messages are retained to check for duplicate messages. This property is ignored if the Requires Duplicate Detection option is not set.

Keep in mind that a long detection history means message IDs are persisted for that entire period. If you process many messages, the queue size will grow, and so will your bill.

When a message expires or when the queue size limit is reached, messages are deadlettered. This means they end up in a different queue named $DeadLetterQueue. Imagine a scenario where heavy traffic on your queue results in messages in the dead letter queue; your application should be robust and process these messages as well.

The Lock Duration property defines the duration of the lock when the PeekLock() method is called. PeekLock() hides a specific message from other consumers/processors until the lock duration expires. Typically, this value needs to be long enough for the message to be processed and deleted.

A sample scenario

Remember the differences between the two queue types that Windows Azure offers: Service Bus queues are able to guarantee first-in, first-out ordering and to support transactions. In our scenario, a user posts a geotopic on the canvas containing text and also uploads a video by using the parallel upload functionality. The WCF service CreateGeotopic() then posts a message in the queue to create the geotopic, and when the file finishes uploading, another message is sent to the queue. These two messages should be part of a single transaction. Geotopia.Processor processes the first message, but only if the media file has finished uploading. In this example, you can see how a transaction is handled and how a message can be abandoned and made available on the queue again. If the geotopic is validated as a whole (the file is uploaded properly), the worker role reroutes the message to a designated audit trail queue, to keep track of actions made by the system, and also sends it to a topic (see the next section) dedicated to messages that need to be pushed to mobile devices. The messages in this topic are again processed by a worker role. The reason for choosing a separate worker role is that it creates a loosely-coupled solution that can be scaled in a fine-grained way, by only scaling the back-end worker role.

See the following diagram for an overview of this scenario:

In the previous section, we already created a queue named geotopiaqueue. In order to work with queues, you need a service identity (in this case, we use a service identity with symmetric issuer and key credentials) for the service namespace.

Preparing the project

In order to make use of the Service Bus capabilities, you need to add a reference to Microsoft.ServiceBus.dll, located in <drive>:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\ref. Next, add the following using statements to your file:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

Your project is now ready to make use of Service Bus queues.

In the configuration settings of the web role project hosting the WCF services, add a new configuration setting named ServiceBusQueue with the following value:

"Endpoint=sb://<servicenamespace>.servicebus.windows. net/;SharedSecretIssuer=<issuerName>;SharedSecretValue=<yoursecret>"

The properties of the queue that you configured in the Windows Azure portal can also be set programmatically, as the following sketch shows.
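
A minimal sketch, assuming the connection string from the ServiceBusQueue setting shown earlier, that creates the queue with the portal properties discussed previously might look like this:

//build a namespace manager from the same connection string
NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

//describe the queue; these properties mirror the portal settings
QueueDescription description = new QueueDescription("geotopiaqueue")
{
    DefaultMessageTimeToLive = TimeSpan.FromDays(1),                //default TTL for messages
    RequiresDuplicateDetection = true,                              //retain message IDs...
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10), //...for 10 minutes
    LockDuration = TimeSpan.FromSeconds(60)                         //PeekLock() lock duration
};

if (!namespaceManager.QueueExists("geotopiaqueue"))
{
    namespaceManager.CreateQueue(description);
}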

Sending messages

Messages that are sent to a Service Bus queue are instances of BrokeredMessage. This class contains standard properties such as TimeToLive and MessageId. An important property is Properties, which is of type IDictionary<string, object>, where you can add additional data. The body of the message can be set in the constructor of BrokeredMessage, where the parameter must be of a type decorated with the [Serializable] attribute.

The following code snippet shows how to send a BrokeredMessage:

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageSender sender = factory.CreateMessageSender("geotopiaqueue");
sender.Send(new BrokeredMessage(
    new Geotopic
    {
        id = id,
        subject = subject,
        text = text,
        PostToFacebook = PostToFacebook,
        accessToken = accessToken,
        MediaFile = MediaFile //Uri of uploaded media file
    }));

As the scenario depicts a situation where two messages are expected to be sent in a certain order and to be treated as a single transaction, we need to add some more logic to the code snippet.
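
A minimal sketch of that extra logic, assuming both messages go to the same queue (a Service Bus transaction spans a single entity) and using hypothetical message variables, could wrap the two Send() calls in a TransactionScope:

using System.Transactions; //reference the System.Transactions assembly

//send both messages as one atomic unit; if Complete() is not called,
//neither message becomes visible on the queue
using (TransactionScope scope = new TransactionScope())
{
    sender.Send(geotopicMessage);   //the geotopic itself
    sender.Send(mediaReadyMessage); //signals that the upload finished
    scope.Complete();
}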

Right before this message is sent, the media file is uploaded by using the BlobUtil class. Consider sending the media file together with the BrokeredMessage if it is small enough. The upload might be a long-running operation, depending on the size of the file. The asynchronous upload process returns a Uri, which is passed to the BrokeredMessage.

The situation is:

  • A multimedia file is uploaded from the client to Windows Azure Blob storage using a parallel upload (or passed on in the message). A parallel upload breaks the media file into several chunks and uploads them separately by using multithreading (see the sketch after this list).
  • A message is sent to geotopiaqueue, and Geotopia.Processor processes the messages in the queues in a single transaction.
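
The following is a minimal sketch of such a parallel upload, assuming the Microsoft.WindowsAzure.StorageClient library and a caller-supplied container reference; the actual BlobUtil class in the Geotopia solution may differ:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.StorageClient;

public static void ParallelUpload(CloudBlobContainer container, string blobName, byte[] file)
{
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
    const int chunkSize = 256 * 1024; //256 KB per block
    int blockCount = (file.Length + chunkSize - 1) / chunkSize;

    //pre-compute the Base64 block IDs so PutBlockList can commit them in order
    var blockIds = new List<string>();
    for (int i = 0; i < blockCount; i++)
        blockIds.Add(Convert.ToBase64String(Encoding.UTF8.GetBytes(i.ToString("d6"))));

    //upload the blocks in parallel; each PutBlock call is independent
    Parallel.For(0, blockCount, i =>
    {
        int offset = i * chunkSize;
        int length = Math.Min(chunkSize, file.Length - offset);
        using (var stream = new MemoryStream(file, offset, length))
        {
            blob.PutBlock(blockIds[i], stream, null);
        }
    });

    //commit the block list; only now does the blob become visible
    blob.PutBlockList(blockIds);
}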

Receiving messages

On the other side of the Service Bus queue resides our worker role, Geotopia.Processor, which performs the following tasks:

  • It grabs the messages from the queue
  • It sends the messages straight to a table in Windows Azure storage, for auditing purposes
  • It creates a geotopic that can be subscribed to

The following code snippet shows how to perform these three tasks:

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
MessageReceiver receiver = factory.CreateMessageReceiver("geotopiaqueue");
BrokeredMessage receivedMessage = receiver.Receive();

try
{
    ProcessMessage(receivedMessage);
    receivedMessage.Complete();
}
catch (Exception e)
{
    receivedMessage.Abandon();
}

Cross-domain communication

We created a new web role in our Geotopia solution, hosting the WCF services we want to expose. As the client is a Silverlight one (and runs in the browser), we face cross-domain communication. To protect against security vulnerabilities and to prevent cross-site requests from a Silverlight client to services without the user noticing, Silverlight by default allows only site-of-origin communication. Cross-site request forgery is an exploit that can occur when cross-domain communication is allowed; for example, a Silverlight application sending commands to some service running somewhere on the Internet.

As we want the Geotopia Silverlight client to access the WCF service running in another domain, we need to explicitly allow cross-domain operations. This can be achieved by adding a file named clientaccesspolicy.xml at the root of the domain where the WCF service is hosted and allowing this cross-domain access. Another option is to add a crossdomain.xml file at the root where the service is hosted.
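
A minimal clientaccesspolicy.xml that allows any domain to call the service (suitable for development; restrict the domain list in production) looks like this:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- allow SOAP calls from any origin -->
      <allow-from http-request-headers="SOAPAction">
        <domain uri="*"/>
      </allow-from>
      <!-- expose the whole site, including sub-paths -->
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>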

Please go to http://msdn.microsoft.com/en-us/library/cc197955(v=vs.95).aspx to find more details on the cross-domain communication issues.

Comparison

The following table shows the similarities and differences between Windows Azure and Service Bus queues:

Criteria | Windows Azure queue | Service Bus queue
--- | --- | ---
Ordering guarantee | No; best-effort first-in, first-out | First-in, first-out
Delivery guarantee | At least once | At most once; use the PeekLock() method to ensure that no messages are missed. PeekLock() together with the Complete() method enables a two-stage receive operation.
Transaction support | No | Yes, by using TransactionScope
Receive mode | Peek & Lease | Peek & Lock; Receive & Delete
Lease/lock duration | Between 30 seconds and 7 days | Between 60 seconds and 5 minutes
Lease/lock granularity | Message level | Queue level
Batched receive | Yes, by using GetMessages(count) | Yes, by using the prefetch property or the use of transactions
Scheduled delivery | Yes | Yes
Automatic dead lettering | No | Yes
In-place update | Yes | No
Duplicate detection | No | Yes
WCF integration | No | Yes, through WCF bindings
WF integration | Not standard; needs a customized activity | Yes, out-of-the-box activities
Message size | Maximum 64 KB | Maximum 256 KB
Maximum queue size | 100 TB (the limit of a storage account) | 1, 2, 3, 4, or 5 GB; configurable
Message TTL | Maximum 7 days | Unlimited
Number of queues | Unlimited | 10,000 per service namespace
Management protocol | REST over HTTP(S) | REST over HTTP(S)
Runtime protocol | REST over HTTP(S) | REST over HTTP(S)
Queue naming rules | Maximum of 63 characters | Maximum of 260 characters
Queue length function | Yes, value is approximate | Yes, exact value
Throughput | Maximum of 2,000 messages/second | Maximum of 2,000 messages/second
Authentication | Symmetric key | ACS claims
Role-based access control | No | Yes, through ACS roles
Identity provider federation | No | Yes
Costs | $0.01 per 10,000 transactions | $0.01 per 10,000 transactions
Billable operations | Every call that touches "storage" | Only Send and Receive operations
Storage costs | $0.14 per GB per month | None
ACS transaction costs | None, since ACS is not supported | $1.99 per 100,000 token requests

Background information

There are some additional characteristics of Service Bus queues that need your attention:

  • In order to guarantee the FIFO mechanism, you need to use messaging sessions (see the sketch after this list).
  • Using Receive & Delete on Service Bus queues reduces transaction costs, since the receive and the delete count as a single operation.
  • The maximum size of a Base64-encoded message on a Windows Azure queue is 48 KB; with standard encoding, it is 64 KB.
  • Sending messages to a Service Bus queue that has reached its size limit will throw an exception that needs to be caught.
  • When the throughput limit is reached, the Windows Azure queue service returns an HTTP 503 error response. Implement retry logic to handle this.
  • Throttled (and thus rejected) requests are not billable.
  • ACS transactions are based on instances of the MessagingFactory class. The received token expires after 20 minutes, meaning that you only need three tokens per hour of execution.
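
The first two bullets can be sketched in code as follows, reusing the factory and sender from the earlier snippets; the session ID is a hypothetical grouping key, and the queue must have been created with RequiresSession set to true for the session part to work:

//Receive & Delete mode: the message is removed as soon as it is read,
//so receiving counts as a single operation
MessageReceiver cheapReceiver = factory.CreateMessageReceiver(
    "geotopiaqueue", ReceiveMode.ReceiveAndDelete);

//FIFO via messaging sessions: stamp related messages with the same SessionId...
BrokeredMessage sessionMessage = new BrokeredMessage();
sessionMessage.SessionId = "user-42"; //hypothetical grouping key
sender.Send(sessionMessage);

//...and accept that session on the receiving side to get
//in-order delivery within the session
QueueClient queueClient = factory.CreateQueueClient("geotopiaqueue");
MessageSession session = queueClient.AcceptMessageSession("user-42");
BrokeredMessage first = session.Receive();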

Topics and subscriptions

Topics and subscriptions can be useful in a scenario where, instead of a single consumer (as in the case of queues), multiple consumers are part of the pattern. Imagine our scenario, where users want to subscribe to topics posted by friends. In such a scenario, a subscription is created on a topic and the worker role processes it; for example, mobile clients can be push-notified by the worker role.

Sending messages to a topic works in a similar way to sending messages to a Service Bus queue.

Preparing the project

In the Windows Azure portal, go to the Service Bus, Access Control & Caching tab. Select Topics and create a new topic, as shown in the following screenshot:

Next, click on OK and a new topic is created for you. The next thing you need to do is to create a subscription on this topic. To do this, select New Subscription and create a new subscription, as shown in the following screenshot:
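
Topics and subscriptions can also be created programmatically; a short sketch using the NamespaceManager, with the names used in this article, might look like this:

NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

//create the topic if it does not exist yet
if (!namespaceManager.TopicExists("geotopiatopic"))
{
    namespaceManager.CreateTopic("geotopiatopic");
}

//create a subscription on that topic
if (!namespaceManager.SubscriptionExists("geotopiatopic", "SubscriptionOnMe"))
{
    namespaceManager.CreateSubscription("geotopiatopic", "SubscriptionOnMe");
}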

Using filters

Topics and subscriptions form, by default, a publish/subscribe mechanism where messages are made available to registered subscriptions. To actively influence a subscription (and subscribe only to those messages that are of interest to you), you can create subscription filters. A SqlFilter can be passed as a parameter to the CreateSubscription method of the NamespaceManager class. SqlFilter operates on the properties of the message, so we need to extend the message with such a property.

In our scenario, we are only interested in messages that are concerning a certain subject. The way to achieve this is shown in the following code snippet:

BrokeredMessage message = new BrokeredMessage(new Geotopic
{
    id = id,
    subject = subject,
    text = text,
    PostToFacebook = PostToFacebook,
    accessToken = accessToken,
    mediaFile = fileContent
});

//used for topics & subscriptions
message.Properties["subject"] = subject;

The preceding piece of code extends BrokeredMessage with a subject property that can be used in a SqlFilter. A filter can only be applied in code on the subscription, not in the Windows Azure portal. This is fine, because in Geotopia, users must be able to subscribe to interesting topics; for every topic that does not exist yet, a new subscription is made and processed by the worker role, the processor. The worker role contains the following code snippet in one of its threads:

Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", "<yournamespace>", string.Empty);
string name = "owner";
string key = "<yourkey>";

//get some credentials
TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(name, key);

//create a namespace client
NamespaceManager namespaceClient = new NamespaceManager(
    ServiceBusEnvironment.CreateServiceUri("sb", "geotopiaservicebus", string.Empty),
    tokenProvider);
MessagingFactory factory = MessagingFactory.Create(uri, tokenProvider);

//create the filtered subscription before sending,
//so that matching messages are not missed
SubscriptionDescription subDesc = namespaceClient.CreateSubscription(
    "geotopiatopic", "SubscriptionOnMe",
    new SqlFilter("subject='interestingsubject'"));

BrokeredMessage message = new BrokeredMessage();
message.Properties["subject"] = "interestingsubject";
MessageSender sender = factory.CreateMessageSender("geotopiatopic");
sender.Send(message); //message is sent to the topic

MessageReceiver receiver = factory.CreateMessageReceiver(
    "geotopiatopic/subscriptions/SubscriptionOnMe");

//the processing loop
while (true)
{
    //it now only gets messages containing the property 'subject'
    //with the value 'interestingsubject'
    BrokeredMessage receivedMessage = receiver.Receive();
    try
    {
        ProcessMessage(receivedMessage);
        receivedMessage.Complete();
    }
    catch (Exception e)
    {
        receivedMessage.Abandon();
    }
}

Windows Azure Caching

Windows Azure offers caching capabilities out of the box. Caching is fast because it is built as an in-memory technology, distributed across different servers.

Windows Azure Caching offers two types of cache:

  • Caching deployed on a role
  • Shared caching

When you decide to host caching on your Windows Azure roles, you need to pick from two deployment alternatives. The first is dedicated caching, where a worker role is fully dedicated to run as a caching store and its memory is used for caching. The second option is to create a co-located topology, meaning that a certain percentage of the available memory in your roles is assigned and reserved for in-memory caching purposes. Keep in mind that the second option is the more cost-effective one, as you don't have a role running just for its memory.

Shared caching is a central caching repository, managed by the platform, which is accessible to your hosted services. You need to register the shared caching mechanism in the Service Bus, Access Control & Caching section of the portal, configuring a namespace and the size of the cache (remember, there is money involved). This caching facility is shared and runs inside a multitenant environment.


Caching capabilities

Both shared and role-based caching offer a rich feature set, described feature by feature in the following overview:

ASP.NET 4.0 caching providers

When you build ASP.NET 4.0 applications and deploy them on Windows Azure, the platform will install caching providers for them. This enables your ASP.NET 4.0 applications to use caching easily.

Programming model

You can use the Microsoft.ApplicationServer.Caching namespace to perform CRUD operations on your cache. The application using the cache is responsible for populating and reloading the cache, as the programming model is based on the cache-aside pattern. This means that initially the cache is empty and will be populated during the lifetime of the application. The application checks whether the desired data is present. If not, the application reads it from (for example) a database and inserts it into the cache.

The caching mechanism deployed on one of your roles, whether dedicated or not, lives up to the high availability of Windows Azure. It saves copies of your items in cache, in case a role instance goes down.

Configuration model

Configuration of caching (server side) is not relevant in the case of shared caching, as this is the standard, out-of-the-box functionality that can only vary in size, namespace, and location.

It is possible to create named caches. Every single cache has its own configuration settings, so you can really fine-tune your caching requirements. All settings are stored in the service definition and service configuration files. As the settings of named caches are stored in JSON format, they are difficult to read.

If one of your roles wants to access Windows Azure Cache, it needs some configuration as well. A DataCacheFactory object is used to return the DataCache objects that represent the named caches. Client cache settings are stored in the designated app.config or web.config files.

A configuration sample is shown later on in this section, together with some code snippets.

Security model

The two types of caching (shared and role-based) have two different ways of handling security.

Role-based caching is secured by its endpoints, and only those which are allowed to use these endpoints are permitted to touch the cache. Shared caching is secured by the use of an authentication token.

Concurrency model

As multiple clients can access and modify cache items simultaneously, there are concurrency issues to take care of; both optimistic and pessimistic concurrency models are available.

In the optimistic concurrency model, updating any objects in the cache does not result in locks. Updating an item in the cache will only take place if Azure detects that the updated version is the same as the one that currently resides in the cache.

When you decide to use the pessimistic concurrency model, items are locked explicitly by the cache client. When an item is locked, other lock requests are rejected by the platform. Locks need to be released by the client or after some configurable time-out, in order to prevent eternal locking.
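
A short sketch of the pessimistic model, using the locking methods from the Microsoft.ApplicationServer.Caching namespace (the key name is a placeholder):

DataCache cache = new DataCacheFactory().GetDefaultCache();
DataCacheLockHandle lockHandle;

//lock the item for up to 30 seconds; other GetAndLock calls
//on the same key are rejected while the lock is held
object item = cache.GetAndLock("geotopic-42", TimeSpan.FromSeconds(30), out lockHandle);

//...modify the item...

//update the item and release the lock in one call
cache.PutAndUnlock("geotopic-42", item, lockHandle);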

Regions and tagging

Cached items can be grouped together in a so-called region. Together with additional tagging of cached items, it is possible to search for tagged items within a certain region. Creating a region results in adding cache items to be stored on a single server (analogous to partitioning). If additional backup copies are enabled, the region with all its items is also saved on a different server, to maintain availability.
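
A sketch of regions and tagging, where the region, key, and tag names are placeholders:

DataCache cache = new DataCacheFactory().GetDefaultCache();

//a region keeps its items together on a single cache server
cache.CreateRegion("geotopics");

//add an item with tags into the region
DataCacheTag[] tags = { new DataCacheTag("subject"), new DataCacheTag("travel") };
cache.Add("geotopic-42", new Geotopic(), tags, "geotopics");

//find all items in the region carrying any of the tags
foreach (KeyValuePair<string, object> hit in cache.GetObjectsByAnyTag(tags, "geotopics"))
{
    Console.WriteLine(hit.Key);
}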

Notifications

It is possible to have your application notified by Windows Azure when cache operations occur. Cache notifications exist for both operations on regions and items. A notification is sent when CreateRegion, ClearRegion, or RemoveRegion is executed. The operations AddItem, ReplaceItem, and RemoveItem on cached items also cause notifications to be sent.

Notifications can be scoped on the cache, region, and item level. This means you can configure them to narrow the scope of notifications and only receive those that are relevant to your applications.

Notifications are polled by your application at a configurable interval.
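
A sketch of registering an item-level callback; note that notifications must be enabled on the (named) cache of a role-based deployment for callbacks to fire, and the key name is a placeholder:

DataCache cache = new DataCacheFactory().GetCache("RecentGeotopics");

//get called back whenever this specific item is added or replaced
DataCacheNotificationDescriptor descriptor = cache.AddItemLevelCallback(
    "geotopic-42",
    DataCacheOperations.AddItem | DataCacheOperations.ReplaceItem,
    (cacheName, regionName, key, version, cacheOperation, nd) =>
    {
        Console.WriteLine("{0} changed by {1}", key, cacheOperation);
    });

//later, stop listening
cache.RemoveCallback(descriptor);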

Availability

To keep up the high availability you are used to on Windows Azure, configure your caching role(s) to maintain backup copies. This means that the platform replicates copies of your cache within your deployment across different fault domains.

Local caching

To minimize the number of roundtrips between cache clients and the Windows Azure cache, enable local caching. Local caching means that every cache client maintains an in-memory reference to the item itself. Requesting that same item again returns the object from the local cache instead of the role-based cache. Make sure you choose the right lifetime for your objects; otherwise, you might work with outdated cached items.
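
On the client, local caching is enabled through the DataCacheFactoryConfiguration class; a sketch with assumed limits:

//keep up to 1,000 objects locally for 5 minutes each
DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();
configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
    1000,
    TimeSpan.FromMinutes(5),
    DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

DataCacheFactory factory = new DataCacheFactory(configuration);
DataCache cache = factory.GetDefaultCache();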

Expiration and Eviction

Cache items can be removed explicitly or implicitly by expiration or eviction.

The process of expiration means that the caching facility removes items from the cache automatically. Items will be removed after their time-out value expires, but keep in mind that locked items will not be removed even if they pass their expiration date. Upon calling the Unlock method, it is possible to extend the expiration date of the cached item.

To ensure that there is sufficient memory available for caching purposes, the least recently used (LRU) eviction is supported. The process of eviction means that memory will be cleared and cached items will be evicted when certain memory thresholds are exceeded.

By default, Shared Cache items expire after 48 hours. This behavior can be overridden by the overloads of the Add and Put methods.

Setting it up

To enable role-based caching, you need to configure it in Visual Studio. Open the Caching tab of the properties of your web or worker role (you decide which role is the caching one). Fill out the settings, as shown in the following screenshot:

The configuration settings in this example cause the following to happen:

  • Role-based caching is enabled.
  • The specific role will be a dedicated role just for caching.
  • Besides the default cache, there are two additional named caches for different purposes. The first is a highly available cache for recently added geotopics, with a sliding window; this means that every time an item is accessed, its expiration time is reset to the configured 10 minutes. For our geotopics, this is a good approach, since access to recently posted geotopics is heavy at first but slows down as time passes (so they will eventually be removed from the cache). The second named cache is specifically for profile pictures, with a long time-to-live, as these pictures do not change often.

Caching examples

In this section, several code snippets explain the use of Windows Azure caching and clarify different features. Ensure that you get the right assemblies for Windows Azure Caching by running the following command in the Package Manager Console: Install-Package Microsoft.WindowsAzure.Caching. Running this command updates the designated config file for your project. Replace the [cache cluster role name] tag in the configuration file with the name of the role that hosts the cache.

Adding items to the cache

The following code snippet demonstrates how to access a named cache and how to add and retrieve items from it (you will see the use of tags and the sliding window):

DataCacheFactory cacheFactory = new DataCacheFactory();

//get a reference to this named cache
DataCache geotopicsCache = cacheFactory.GetCache("RecentGeotopics");

//clear the whole cache
geotopicsCache.Clear();

DataCacheTag[] tags = new DataCacheTag[]
{
    new DataCacheTag("subject"),
    new DataCacheTag("test")
};

//add a short time-to-live item, overriding the default 10 minutes
DataCacheItemVersion version = geotopicsCache.Add(
    geotopicID, new Geotopic(), TimeSpan.FromMinutes(1), tags);

//add a default item (default TTL of 10 minutes)
geotopicsCache.Add("defaultTTL", new Geotopic());

//let time pass for some minutes
DataCacheItem item = geotopicsCache.GetCacheItem(geotopicID); // returns null!
DataCacheItem defaultItem = geotopicsCache.GetCacheItem("defaultTTL"); //sliding window shows up

//versioning, optimistic locking; will fail if versions are not equal!
geotopicsCache.Put("defaultTTL", new Geotopic(), defaultItem.Version);

Session state and output caching

Two interesting areas in which Windows Azure caching can be applied are caching the session state of ASP.NET applications and the caching of HTTP responses, for example, complete pages.

In order to use Windows Azure caching (that is, the role-based version), to maintain the session state, you need to add the following code snippet to the web.config file for your web application:

<sessionState mode="Custom" customProvider="AppFabricCacheSessionStor eProvider"> <providers> <add name="AppFabricCacheSessionStoreProvider" type="Microsoft.Web.DistributedCache. DistributedCacheSessionStateStoreProvider, Microsoft.Web. DistributedCache" cacheName="default" useBlobMode="true" dataCacheClientName="default" /> </providers> </sessionState>

The preceding XML snippet causes your web application to use the default cache that you configured on one of your roles.

To enable output caching, add the following section to your web.config file:

<caching>
  <outputCache defaultProvider="DistributedCache">
    <providers>
      <add name="DistributedCache"
           type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache"
           cacheName="default"
           dataCacheClientName="default" />
    </providers>
  </outputCache>
</caching>

This enables output caching for your web application, using the default cache. Specify a cache name if you have set up a specific cache for output caching purposes. The pages to be cached determine how long they remain in the cache and which versions of the page are cached, depending on the parameter combinations:

<%@ OutputCache Duration="60" VaryByParam="*" %>

Windows Azure Connect

Windows Azure Connect is a mechanism you can use to set up IPsec connections between machines in your own domain (on-premise) and web or worker roles running on Windows Azure. If these connections are set up, you can address your role instances as if they were in your own network/domain. This feature enables you to accomplish the following tasks:

  • Managing and administering web and worker roles with existing management tools
  • Building a distributed application where Windows Azure roles work seamlessly together with your on-premise resources, such as printers, databases, legacy systems, or other critical resources that play a vital role within your distributed application
  • Domain authentication, name resolution, or other domain-wide actions

Consider the following scenario. The Geotopia worker role is built and configured to access a Microsoft SQL Server database on-premise. This is the only part of the whole solution outside the cloud. The reason for this requirement is that the local SQL Server database is used for data warehouse purposes, with the goal of keeping this useful, analytical data inside our own environment.

Setting it up

To set up and configure Windows Azure Connect, go to the Windows Azure portal and select Virtual network in the left corner of the screen. You can find the Windows Azure Connect overview under the Connect tab, as shown in the following screenshot:

To activate Windows Azure Connect on a machine or on a virtual machine, you need to select Install Local Endpoint from the menu. You will now get a secure link with an activation token. Copy this link and run it in a browser. Running the executable file that is presented will install the local endpoint. As it contains an activation token, you cannot save the file and run it later. Windows Azure Connect will start automatically but will not operate yet, since you need to configure it first.

Enabling a web role with Connect

To connect a web role with a resource within your domain or computer, you need to configure the web role as well.

  1. In the Connect menu, select the Get Activation Token option. You should see a screen similar to the one shown in the following screenshot:

  2. Next, copy the activation token to your clipboard.

  3. Open your web role project properties and select the Virtual Network tab. Select Activate Windows Azure Connect and paste your activation code, as shown in the following screenshot:

After saving these changes, you will notice that the service definition and configuration files have changed, and settings with respect to Connect have been added. Publish the cloud project(s), which will then appear under the Connect node in the Windows Azure portal. The name shown there is the machine name with the typical "RD" prefix.

Managing Connect

Our next task is to configure the network connectivity policy. To enable connectivity between the local machine and the web and worker roles, we need to create a new endpoint group.

As we only need the Processor worker role to connect to the local SQL Server instance, we just add that particular role and skip the web role in the Azure roles or endpoint groups section. In the Connect from section, you can add the machines you installed as local endpoints. Click on Create; the group is created, and you will see the computer grouped together with the configured role. Right-click on the Windows Azure Connect icon in the icon tray and select Refresh Policy. The Windows Azure role is automatically updated with the new policy, and this process recurs every 5 minutes.

If you redeploy your role, Windows Azure Connect will see this change and enable connectivity as soon as the role is available again. Scaling up your roles is also handled by this feature and will ensure that the new role instances are part of the group as well. Obviously, scaling down will also result in the removal of those instances from the group.

Testing connectivity

After following the preceding steps, you should have IP-level connectivity between your configured role instances and your local machine(s). Communication now runs over IPv6, provided by Connect and secured with IPsec, which enables communication through firewalls and NATs.

By default, Windows Azure roles do not allow incoming ping requests. To allow your role to accept these requests, you can add a .cmd file to your role project. Perform the following steps to accomplish this goal:

  1. Add a file with the extension .cmd to your Visual Studio project.
  2. Add the following two lines to the file and save it:

    netsh advfirewall firewall add rule name="Allow ping" dir=in action=allow enable=yes protocol=icmpv6:128,any
    exit /b 0

  3. Add a section to the service definition file that will create a start-up task:

    <Startup>
      <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
    </Startup>

  4. Verify that the Copy Always setting is selected for the .cmd file to ensure that the file is always copied and that the Publish process copies the file to the root folder for the deployment.
  5. Deploy the solution to Windows Azure.

After deployment, the start-up process of the role is executed, and pinging the role is allowed. In addition, logging in remotely on the worker role instance and pinging the local machine from there is possible.

Other Connect capabilities

The common scenarios are the ones described earlier: building hybrid solutions where cloud and on-premise resources are brought together. Besides this, it is also possible to join roles to an on-premise AD domain. For administrative purposes, you could use Connect to enable remote event viewing or other remoting scenarios.

Another neat application of Windows Azure Connect is to connect your devices in a group, enabling your laptop, home server, and desktop at work to be connected, to share data, and to use your home printer or other peripherals.

Access Control Service

The Access Control Service (ACS) feature of AppFabric offers an easy way to authenticate and authorize users who want to make use of your services. It isolates authentication and authorization logic from your core code and relieves you from the burden of maintaining your own identity store. ACS simplifies the process of integrating known identity providers with your solution. This section shows how to set up ACS and how to use Facebook as your main identity provider.

Getting started

In order to make use of ACS, you need to do some setting up in the Windows Azure portal.

To prepare your account to make use of ACS, follow the ensuing steps to set it up:

  1. Browse to your Windows Azure portal environment and select the Service Bus, Access Control & Caching tab from the ribbon.
  2. Select Access Control and click on New to create a new Access Control namespace.
  3. Fill out the details and find a unique namespace identifier, as shown in the following screenshot:

  4. Select Create Namespace, and after some time, your namespace is provisioned.
  5. Now that your namespace is created, you need to manage it. Select the newly created namespace and select Access Control Service.

  6. There are some important parts on this screen to enable ACS and make use of it. Now that your ACS is enabled, it is time to register your application on Facebook.

As we want Geotopia to be a Facebook-enabled application that uses Facebook as its identity provider, we need to register Geotopia on the www.Facebook.com/developers website. Registering your application there results in an AppID and an AppSecret.

The registration screen looks similar to the one shown in the following screenshot:

Now that we have our ACS set up and have created a Facebook application, it is time to bring these two together and enable our Geotopia application to allow users to log in with their Facebook credentials.

Adding an identity provider

First, we need to add an identity provider by using the screen shown in the following screenshot. Click on Identity Providers and select Add.

Select the Facebook application option and click on Next. This will take you to a screen where you need the information you got by registering your application on the www.Facebook.com/developers site.

Adding a relying party

Since our Geotopia application relies on claims, it is called a relying party. A relying party is an application that is claims-aware. Adding a relying party is done by configuration in the Windows Azure portal and is shown in the following screenshot:

Leave the Mode option as is; setting the Realm and Return URL correctly is very important for incorporating federated authentication by using ACS. Configuring the relying party here means that our Geotopia application trusts this ACS service namespace.

The Realm property tells us for which URI the tokens issued by ACS are valid. In our case, since we are still building the application, we refer to localhost to be able to test locally. Setting Realm to http://localhost:7777 implies that the claims ACS issues are valid only for that URI. The Return URL defines where the tokens for the relying party are returned.

Optionally, you can also define an Error URL. ACS redirects users to this specific URL in case an error occurs.

Enter your specific details in the screen, as shown in the previous screenshot.

Application integration

The next thing we need to do is to retrieve the WS-Federation metadata location. We need this property later on, when we enable our application with ACS and integrate it with Facebook. You can find this value in the Application integration tab.

In the Rules tab, it is possible to transform the claims we receive from Facebook to our own format.

Integrating with Facebook

All the prerequisites are now in place: we added a Facebook application and got an AppID and AppSecret; we configured ACS, created a namespace, and added a relying party with Facebook as the identity provider. The next step is to change our Geotopia application so that it makes use of Facebook's login mechanism and receives the claims in our application.

Make sure you have the Windows Identity Foundation runtime installed, together with the Windows Identity Foundation SDK. We need these for the next steps.

To incorporate Facebook integration in our Geotopia application, perform the following steps:

  1. Add an STS Reference to your web role web application by right-clicking on the project.
  2. The Federation Utility wizard starts automatically; fill out the appropriate values specific to your application, as shown in the following screenshot:

  3. The application configuration location is automatically filled and points to your web.config file. For Application URI, you need to enter the details as configured in the Relying Party tab of the Windows Azure portal, as shown in the Adding a relying party section in this article. Click on Next, and you will be notified about the fact that your application is not hosted on a secure HTTPS connection, but for now, we can ignore this message.
  4. The wizard takes you to the next screen, where you need to select your Security Token Service. Since we already have one (ACS will provide us with tokens), enter the value from the Application Integration tab, as shown in the following screenshot:

  5. Click on Next and confirm the screens about certificate chain validation, security token encryption, and offered claims with the default values. Finally, click on Finish on the Summary screen. You can schedule a task here that refreshes the metadata document every day.
  6. The FederationMetadata.xml file is added to your solution in a separate solution folder.

The magic happens when you run your cloud solution. The Silverlight application is launched and hosted on http://localhost:7777, as configured in the project properties. Make sure this address is identical to the one configured as the Realm of the relying party. Running your application takes you to the Facebook login page, and after logging in, the application asks for permission to access information about you and your friends list.

Allow this, and you will be taken back to the return URL configured in the relying party configuration.

Using FederatedAuthentication

There is also a way to programmatically influence the way the callback mechanism is executed. You can do this by using the static class named FederatedAuthentication from the Microsoft.IdentityModel.Web namespace in your global.asax file.

The following code snippet demonstrates how to implement this:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    FederatedAuthentication.ServiceConfigurationCreated += OnServiceConfigurationCreated;
}

private void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs e)
{
    FederatedAuthentication.WSFederationAuthenticationModule.SignedIn +=
        new EventHandler(WSFederationAuthenticationModule_SignedIn);
}

private void WSFederationAuthenticationModule_SignedIn(object sender, EventArgs e)
{
    HttpContext.Current.Response.Redirect("/Facebook/Index");
}

The SignedIn event is fired when you log in through the Facebook login mechanism. The Redirect method calls the Index action in the MVC3 FacebookController class. As we defined the email, user_about_me, read_friendlists, and publish_stream application permissions in the configuration of the identity provider, it is possible to get information about the logged-in user, display it in our Geotopia web application, and also post a message on the user's wall. In our scenario, we create a separate Facebook view that displays information about the user and their friends (we want them to be invited to Geotopia as well!), offers the ability to post geotopics in the Geotopia application, and also posts them on the wall. The next section demonstrates how this can be achieved.
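
Posting on the user's wall itself is a plain HTTP POST to the Graph API; the following is a minimal sketch, where the message text is an example and accessToken is the token retrieved as shown in the next section:

using System.Collections.Specialized;
using System.Net;

//post a message to the wall of the logged-in user via the Graph API;
//requires the publish_stream permission mentioned above
using (WebClient client = new WebClient())
{
    NameValueCollection postData = new NameValueCollection();
    postData["access_token"] = accessToken;
    postData["message"] = "I just posted a geotopic on Geotopia!";

    //the response contains the ID of the new wall post as JSON
    byte[] response = client.UploadValues("https://graph.facebook.com/me/feed", postData);
}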

Displaying information about me

After we are redirected to the Facebook/Index URL in the WSFederationAuthenticationModule_SignedIn event handler, we call the Graph API from Facebook. For more information on the Graph API, please refer to https://developers.Facebook.com/docs/reference/api/.

To use the Graph API from Facebook, you need the access token. This can be retrieved as shown in the following code snippet:

var claimsPrincipal = Thread.CurrentPrincipal as IClaimsPrincipal;
var claimsIdentity = (IClaimsIdentity)claimsPrincipal.Identity;
var accessToken = (from claim in claimsIdentity.Claims
                   where claim.ClaimType == "http://www.Facebook.com/claims/AccessToken"
                   select (string)claim.Value).FirstOrDefault();

//Now that we have the access token, we can make web calls to get
//information about "me".
private Hashtable GraphAPI(Uri uri, string accessToken)
{
    UriBuilder builder = new UriBuilder(uri);
    if (!string.IsNullOrEmpty(builder.Query))
    {
        builder.Query += "&";
    }
    builder.Query += "access_token=" + accessToken;

    JavaScriptSerializer jsSerializer = new JavaScriptSerializer();
    using (WebClient client = new WebClient())
    {
        string data = client.DownloadString(builder.ToString());
        return (jsSerializer.Deserialize(data, typeof(Hashtable)) as Hashtable);
    }
}

The Index method of the Facebook controller returns the Facebook view and passes on the information about "me" to be displayed:

return View(GraphAPI(new Uri("https://graph.Facebook.com/me"), accessToken));


Traffic Manager

The Windows Azure Traffic Manager (WATM) enables you to configure and control how user traffic is distributed to your hosted services. You can use the Traffic Manager to create an application that serves users all around the world, while still upholding performance and availability and remaining robust and resilient. Based on a policy that you configure, the WATM routes traffic to the correct hosted service. Under the hood, DNS is used to route traffic to the correct service; the WATM is not an additional entity that sits in the middle of all that user traffic. The WATM is enabled and configured from the Windows Azure portal. The following diagram displays the process of routing traffic:

The detailed flow is as follows:

  • The user browses to the appropriate domain name (www.geotopia.com). Obviously, this domain name needs to be reserved at some domain name registrar.
  • Our DNS record for the Geotopia domain refers to the Traffic Manager domain configured in the Windows Azure portal.
  • Based on the applied policy (load balancing method and monitoring status), the Traffic Manager returns the IP address of the chosen hosted service to the user.
  • The user calls the hosted service directly, using its IP address. The domain and IP address are cached on the client, so the user keeps interacting with the same hosted service until the local DNS cache expires.
  • When the DNS cache expires, the whole process starts again and may result in another IP address!

Setting it up

Setting up the WATM is done by configuration in the Windows Azure portal.

  1. Browse to your portal and click on the Virtual Network tab on the ribbon, which will open the following screen:

  2. From the preceding screen, you can click on Create to set up a new policy for WATM, as follows:

    This screen allows you to set up a new policy for the Traffic Manager. In our scenario, we are interested in a failover load balancing method. This enables Geotopia to become a robust and resilient solution with high availability. When either one of our hosted services, in the North Europe region or the West US region, is unavailable for any reason, the WATM ensures that traffic is rerouted to the other hosted service, keeping the application available even in case of a major event. The rerouting is based on the next highest service in the list, in case a service fails. In our case, we have the solution running in two different datacenters, and if either one of them goes down, the next one services the user request. Copy the rest of the settings as shown in the previous screenshot. Click on Create to actually create and set up the policy.

    After creating the policy, you can review the policy in the Traffic Manager | Policies tab.

Round robin

Besides a failover balancing method, there is also the round robin way of load balancing. Choosing this method means that all traffic is equally distributed over the hosted services that are included in the policy.

What happens when a round robin policy is set up?

  1. A user accesses the domain, www.geotopia.com. The configured Traffic Manager actually receives the incoming request.
  2. In the round robin policy, a list of hosted services is created. The Traffic Manager keeps track of the service that received the last request.
  3. The Traffic Manager sends the next hosted service in line back to the client.
  4. The next request follows this sequence again.

Performance

The performance load balancing policy routes traffic to the closest hosted service; the Traffic Manager knows the origin of the request. To determine the closest hosted service, a network performance table containing round-trip times is created and maintained. This table is updated at fixed intervals.

The following events take place when a performance policy is created:

  1. Traffic Manager determines the round-trip times between different locations in the world and the Windows Azure datacenters. All this happens under the hood and cannot be influenced. These round-trip times are kept in a network performance table.
  2. A user accesses the domain, and therefore, Traffic Manager receives the request.
  3. Traffic Manager determines, by querying the network performance table, the best performing hosted service to handle this specific request. The best performing hosted service is the one with the lowest round-trip time, which is not necessarily the closest one.
  4. Traffic Manager returns the DNS name of the hosted service with the best round-trip time.
  5. The client calls the hosted service that is chosen by the Traffic Manager.

Keep in mind that the time-to-live on a client machine determines how long the DNS entries are cached. As long as the cache is not expired, requests will be sent to the same hosted service (since the IP address of the hosted service is resolved from the local DNS cache).

Failover

The following events take place when a failover policy is created:

  1. A failover policy routes traffic to the next cloud service in line: it iterates, from the top down, a table containing all the cloud services that are part of the policy, and continues until it finds a service that is not offline. The Traffic Manager receives a request from a user who browses to the Geotopia portal.
  2. The Traffic Manager iterates the ordered list of the cloud services that are part of the policy and determines the first one in the list that is online.
  3. The DNS entry of the first online cloud service is returned to the user.
  4. The client calls the IP address of the first-in-line, online cloud service.

Testing the policies

In order to test different policies, you need to take a few steps. In case of the failover scenario, perform the following steps:

  1. Bring up all your hosted services. In our scenario, we have two hosted services: one running in North Europe and the other running in West US. This is the ideal situation, where everything runs normally.
  2. Open a command prompt and use the nslookup command to verify the primary hosted service in use:

  3. Now, bring down the primary hosted service. You can do this by stopping the primary hosted service in the Windows Azure portal.
  4. Now use the nslookup command again, and the result should point to the next hosted service in line, in this scenario, to the one deployed in the West US region. This is displayed in the following screenshot:

  5. To test the WATM policy based on round robin, follow the exact steps described earlier, but without bringing down one of the services.
  6. When the TTL expires, the nslookup command will return a different hosted service than before:

Testing your Traffic Manager based on performance is a bit tougher to accomplish. You need to set up different clients all around the world (running in Azure, of course!) to simulate diverse user traffic, all calling the hosted service through www.geotopia.com. There are third-party tools available that can support you with this.

Failover scenario

Bringing the WATM together with SQL Azure Data Sync offers a great combination for failover. In the following diagram, you can see these two brought together:

The complete set of Geotopia services contained in a hosted service is deployed in different regions with at least two instances of every role to uphold the basic SLA that the platform offers. Every hosted service has its own connection string and connects to its own, co-located SQL Azure database to minimize latency and reduce bandwidth costs.

There are an equal number of SQL Azure databases, all deployed in the same datacenters as their accompanying hosted service. The SQL Azure databases are kept in sync by using SQL Azure Data Sync.

It is possible to define three different policies to support all scenarios:

  • Failover: To make sure that Geotopia is never unreachable in case of a major event. Traffic is rerouted to the next hosted service in line to handle user requests, until the hosted service is back up again. Data Sync will keep the databases synchronized.
  • Round robin: To distribute traffic equally around hosted services.

  • Performance: To get the most out of the application by having the Traffic Manager select the best performing hosted service based on network performance.

Summary

In this article, we saw that AppFabric offers some very interesting features. We also saw how to set up Service Bus queuing and how to send and receive messages to and from it. In addition, topics and subscriptions were explained, together with some code snippets.

We learned how we can add caching capabilities to our application quickly and how to fine-tune this. We demonstrated the configuration of Windows Azure caching and saw how to programmatically use caching features.

The next subject covered was the Windows Azure Connect feature. This is an interesting method to build hybrid cloud solutions that mix web and worker roles together with local, on-premise servers, virtual machines, or anything else that has an IP address.

Finally, we went through the Windows Azure Traffic Manager and saw how some interesting scenarios can be created to offer the best for our clients and uphold the high performance and failover standards we have for our applications.

About the Author :


Riccardo Becker

Riccardo Becker works full-time as a Principal IT Architect for Logica, in the Netherlands. He holds several certifications, and his background in computing goes way back to 1998, when he started working with good old Visual Basic 5.0 (or was it 6.0?). Ever since, he has fulfilled several roles, such as Developer, Lead Developer, Architect, Project Leader, and Practice Manager; recently, he decided to accept the role of Principal IT Architect, in which he focuses on innovation, cutting-edge technology, and specifically on Windows Azure and cloud computing in general.

In 2007, he joined the Microsoft LEAP program, where he got a peek at the move Microsoft was about to make on their road to the cloud. Pat Helland gave him that insight, and since the first release of Windows Azure on PDC 2008, he started to focus on it, keeping track of the progress and the maturity of the platform. In the past few years, he has also done a lot of work on incubation with his employer, raising awareness on cloud computing in general and Windows Azure.
