Enhancements to ASP.NET

Put Visual Studio and .NET together and the results are empowering. With over 40 recipes in this Cookbook you can learn to integrate them both to achieve unparalleled results in applications that are in tune with modern technologies.


Understanding major performance boosters in ASP.NET web applications

Performance is one of the primary goals for any system. For a server, throughput over time is what actually measures the performance of the hardware and software in the system. It is important to increase performance while decreasing the amount of hardware needed to achieve that throughput; there must be a balance between the two.

Performance is one of the key elements of web development. During the development of ASP.NET 4.5, performance was one of the key concerns for Microsoft. They made a few major changes to the ASP.NET stack to make it perform better.

Performance spans everything from CPU utilization all the way back to the actual code you write. Each CPU cycle consumed in producing your response affects performance. Consuming a large number of CPU cycles forces you to add more and more CPUs to avoid site unavailability. As we move more and more towards the cloud, performance is directly related to cost: CPU cycles cost money. Hence, to run a cost-effective system on the cloud, unnecessary CPU cycles should always be avoided.

.NET 4.5 addresses the problem at its core by supporting background GC. Background GC for the server introduces support for concurrent collection without blocking threads; hence, site performance is no longer compromised by garbage collection. Multicore JIT additionally improves the start-up time of pages without any extra work on your part.

Some of the improvements in the technology are tangible to developers as well as end users, while others are not. They can be categorized as follows:

  • CPU and JIT improvements
  • ASP.NET feature improvements

The first category is generally intangible while the second is tangible. The CPU and JIT improvements, as we have already discussed, relate to server performance. JIT compilation is not tangible to developers, meaning it works automatically on the system without any code change, while the second category relates directly to code. We will focus mainly on the tangible improvements in this recipe.

Getting ready

To get started, let us start Visual Studio 2012 and create an ASP.NET project. If you are opening the project for the first time, you can choose the ASP.NET Web Forms Application project template. Visual Studio 2012 comes with a template that virtually creates the layout of a blank site. Just create the project and run it, and you will be presented with a blank site with all the default behaviors you need, without writing a single line of code.

Now, if you look into Solution Explorer, the project is separated into folders, each with its own purpose. For instance, the Scripts folder includes all the JavaScript associated with the site. You can also see the Themes folder in Content, which includes the CSS files. Generally, production-level sites have large numbers of JavaScript and CSS files, sometimes very big ones, which are downloaded to the client browser when the site is initially loaded. We specify each file's path using a script or link tag.

If you are familiar with web clients, you will know that the browser requests these files in separate requests after receiving the HTTP response for the page. As browsers limit the number of parallel downloads per host, the download of each file adds to the total response time. There is also a pause around each download, which we call network latency. So, if you look at the entire page response of a website, you will see that a large amount of response time is actually consumed by the download of these external files rather than by the actual website response.
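A back-of-the-envelope model makes this cost concrete. The numbers below are purely illustrative assumptions, not measurements:

```csharp
using System;

class LoadTimeSketch
{
    static void Main()
    {
        // All figures are assumed for illustration only.
        double latencyMs = 50;     // round-trip latency per request
        double pageMs = 200;       // server time for the page itself
        int externalFiles = 12;    // number of script/CSS requests
        double perFileMs = 30;     // transfer time per small file

        // One request per file, fetched sequentially (worst case):
        double unbundled = pageMs + externalFiles * (latencyMs + perFileMs);

        // All files combined into one script and one stylesheet bundle:
        double bundled = pageMs + 2 * (latencyMs + perFileMs);

        Console.WriteLine($"Unbundled: {unbundled} ms, bundled: {bundled} ms");
    }
}
```

Even with some parallelism, every extra request still pays latency, which is why reducing the request count matters most on high-latency links.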

Let us create a page on the website, add a few JavaScript files, and observe the response time using Fiddler:

The preceding screenshot shows how the browser requests the resources. Notice that the first request is the actual request made by the client; the remainder of the response time is consumed by the requests triggered by the server's response. The server's response references a number of CSS and JavaScript files in the header, which are then fetched from the same web server one by one. If the JavaScript is heavy, loading these individual files to the client takes a long time, which delays the response time of the web page. It is the same with images. Even though external files are downloaded in separate streams, big images hurt the performance of the web page as well:

Here you can see that the source of the page contains calls to a number of files, each corresponding to a request. When the HTML is processed on the browser, it invokes each of these file requests one by one and replaces the document with the content of those files. As already noted, requesting more and more resources reduces the performance of a page; this large number of requests makes the website very slow. The screenshot depicts the actual number of requests, the bytes sent and received, and the timing in seconds. In some big applications, page performance is degraded far more than this.

To address this problem we take the following two approaches:

  • Minimizing the size of JavaScript and CSS by removing whitespace, newlines, tab spaces, and so on, or omitting unnecessary content
  • Bundling all the files into one file of the same MIME type to reduce the requests made by the browser

ASP.NET addresses both of these problems with a new feature that can minify the content of the JavaScript and CSS files as well as bundle all the JavaScript or CSS files together, producing a single file request from the site.

To use this feature you need to first install the package. Open Visual Studio 2012, select View | Other Windows | Package Manager Console as shown in the following screenshot.

Package Manager Console will open the PowerShell window for package management inside Visual Studio:

Once the package manager is loaded, type the following command.

Install-Package Microsoft.Web.Optimization

This will load the optimizations inside Visual Studio.

Alternatively, rather than opening Package Manager Console, you can open the NuGet package manager by right-clicking on the References folder of the project and selecting Add Library Package Reference. This produces a dialog box from which you can select and install the appropriate package.

In this recipe, we are going to cover how to take the benefits of bundling and minification of website contents in .NET 4.5.

How to do it...

  1. In order to move ahead with the recipe, we will use a blank web solution instead of the template web solution that I have created just now. To do this, start Visual Studio and select the ASP.NET Empty Web Application template.
  2. The project will be created without any pages but with a web.config file (a web.config file is similar to app.config, but works in web environments).
  3. Add a new page to the project and name it as home.aspx. Leave it as it is, and go ahead by adding a folder to the solution and naming it as Resources.
  4. Inside Resources, create two folders, one for JavaScript named js and another for stylesheets named css.
  5. Create a few JavaScript files inside the js folder and a few CSS files inside the css folder. Once you finish, the folder structure will look like the following:

    Now let us add the files on the home page. Just drag-and-drop the js files one by one into the head section of the page. The page IDE will produce the appropriate tags for scripts and CSS automatically.
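    The IDE emits standard tags for each dropped file; the file names below are placeholders for whatever you created:

```html
<script src="Resources/js/Script1.js" type="text/javascript"></script>
<script src="Resources/js/Script2.js" type="text/javascript"></script>
<link href="Resources/css/Style1.css" rel="stylesheet" type="text/css" />
```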

  6. Now run the project. You will see that the CSS and the JavaScript are appropriately loaded. To check, try using Fiddler.
  7. When you select the source of a page, you will see links that point to the JavaScript, CSS, or other resource files. These files are directly linked from the source, and hence if we navigate to them, the raw content of the JavaScript is shown.
  8. Open Fiddler and refresh the page, keeping it open in Internet Explorer in debug mode. You will see that the browser makes four requests. Three of them are for the external files and one is for the actual HTML file. Fiddler shows the timeframe of each request: the first request is for the home.aspx file, while the others are automatically invoked by the browser to get the js and css files. Also take note of the total size of the whole combined request for the page.
  9. Let's close the browser and remove the references to the js and css files from the head tag, where you dragged them earlier, and add the following code to reference the folders rather than the individual files:

    <script src="Resources/js"></script>
    <link rel="stylesheet" href="Resources/css" />

  10. Open the Global.asax file (add if not already added) and write the following line in Application_Start:

    void Application_Start(object sender, EventArgs e)
    {
        // Adds the default behavior
        BundleTable.Bundles.EnableDefaultBundles();
    }

    Once the line has been added, you can now run the project and see the output.

  11. If you now check the result in Fiddler, you will see that all the files inside the js folder are clubbed into a single file, and the whole file gets downloaded in a single request. If you have a large number of files, bundling shows a considerable performance gain for the web page.
  12. Bundling is not the only performance gain we have achieved using Optimization. Press F5 to run the application and look at the actual js and css files that are downloaded. You will see that the bundle has already been minified by discarding comments, blank spaces, new lines, and so on. Hence, the physical size of the bundles has also been reduced.
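    To see what minification strips, compare a readable function with a hand-minified equivalent. This illustrates the idea; it is not the optimizer's literal output:

```javascript
// Original, readable form: comments, whitespace, and long names.
function addNumbers(first, second) {
    // returns the sum of the two arguments
    return first + second;
}

// A minified equivalent: same behavior, far fewer bytes.
function addNumbersMin(n,t){return n+t}
```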
  13. You can also add your custom BundleTable entries. Generally, we add them inside the Application_Start section of the Global.asax file, like so:

    Bundle mybundle = new Bundle("~/mycustombundle", typeof(JsMinify));
    mybundle.AddFile("~/Resources/Main.js");
    mybundle.AddFile("~/Resources/Sub1.js");
    mybundle.AddDirectory("/Resources/Files", "*.js", false);
    BundleTable.Bundles.Add(mybundle);

    The preceding code creates a new bundle for the application that can be referenced later on. We can use AddFile to add individual files to the bundle, or AddDirectory to include a whole directory filtered by a search pattern. The last argument of AddDirectory specifies whether subdirectories should be searched as well. JsMinify is the default rule processor for JavaScript files; similarly, a class called CssMinify acts as the default rule for CSS minification.

  14. You can reference your custom bundle directly inside your page using the following directive:

    <script src="mycustombundle" type="text/javascript"></script>

    You will notice that the directive appropriately points to the custom bundle that has been created.
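Stylesheets can be bundled the same way; the bundle name and paths here are illustrative:

```csharp
// Hypothetical CSS bundle using the default CSS minification rule.
Bundle cssBundle = new Bundle("~/mycustomcss", typeof(CssMinify));
cssBundle.AddDirectory("~/Resources/css", "*.css", false);
BundleTable.Bundles.Add(cssBundle);
```

The bundle would then be referenced with a link tag such as `<link rel="stylesheet" href="mycustomcss" />`.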

How it works...

Bundling and minification work through the introduction of the System.Web.Optimization namespace. BundleTable is a new class inside this namespace that keeps track of all the bundles created in the solution. It maintains all the Bundle objects, that is, the lists of JavaScript or CSS files, in a key-value pair collection. Once a request for a bundle is made, HttpRuntime dynamically combines the files and/or directories associated with the bundle into a single file response.

Let us consider some other types that help in the transformation:

  • BundleResponse: This class represents the response after the resources are bundled and minified. BundleResponse keeps track of the actual response of the combined file.
  • IBundleTransform: This type specifies the contract for transformation. Its main purpose is to provide the transformation for a particular resource. JsMinify and CssMinify are the default implementations of IBundleTransform.
  • Bundle: This class represents a resource bundle with a list of files or directories.

The IBundleTransform type specifies the rule for producing the BundleResponse class. To implement custom rules for a bundle, we need to implement this interface.

public class MyCustomTransform : IBundleTransform
{
    public void Process(BundleResponse bundleresponse)
    {
        // write logic to bundle and minify...
    }
}

Here, the BundleResponse object carries the actual response to which we need to write the minified output.
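As a sketch, a custom transform could collapse runs of whitespace. This assumes BundleResponse exposes its combined text through a writable Content property, as in the preview Optimization package:

```csharp
using System.Text.RegularExpressions;
using Microsoft.Web.Optimization;

// Illustrative only: a naive transform, not safe for all JavaScript
// (it would also collapse whitespace inside string literals).
public class WhitespaceCollapseTransform : IBundleTransform
{
    public void Process(BundleResponse bundleresponse)
    {
        bundleresponse.Content =
            Regex.Replace(bundleresponse.Content, @"\s+", " ");
    }
}
```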

Basically, the application uses the default BundleHandler class to initiate the transform. BundleHandler is an IHttpHandler that uses ProcessRequest to get the response for the request from the browser. The process is summarized as follows:

  • HttpRuntime calls the default BundleHandler.ProcessRequest method to handle the bundling and minification request initiated by the browser.
  • ProcessRequest gets the appropriate bundle from the BundleTable class and calls Bundle.ProcessRequest.
  • The Bundle.ProcessRequest method first retrieves the bundle's Url and invokes Bundle.GetBundleResponse.
  • GetBundleResponse first performs a cache lookup. If there is no cache available, it calls GenerateBundleResponse.
  • The GenerateBundleResponse method creates an instance of BundleResponse, sets the files to be processed in correct order, and finally invokes IBundleTransform.Process.
  • The response is then written to BundleResponse from this method and the output is thrown back to the client.

The preceding flow diagram summarizes how the transformation is handled by ASP.NET. The final call to IBundleTransform returns the response back to the browser.
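The cache lookup step can be sketched as follows. This is pseudo-source written against the flow above, not the framework's actual implementation; the cache key and member names such as GenerateBundleResponse are assumptions following the text:

```csharp
// Sketch of the cache-then-generate step inside Bundle.
public BundleResponse GetBundleResponse(HttpContext context)
{
    // Cache key choice is an assumption for this sketch.
    var cached = context.Cache[this.Path] as BundleResponse;
    if (cached != null)
        return cached;   // cache hit: skip regeneration

    // Cache miss: combine, transform, and cache the result.
    BundleResponse response = GenerateBundleResponse(context);
    context.Cache.Insert(this.Path, response);
    return response;
}
```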

There's more...

Now let's talk about some other options, or possibly some pieces of general information that are relevant to this task.

How to configure the compilation of pages in ASP.NET websites

Compilation also plays a vital role in the performance of websites. As we have already mentioned, background GC is available to servers with the .NET 4.5 release, which means that when the GC starts collecting unreferenced objects, the executing threads on the server are not suspended; collection proceeds in the background. The support for multicore JIT also improves start-up performance by compiling not-yet-JITed code on multiple cores.

By default, .NET 4.5 supports multicore JIT. If you want to disable this option, you can use the following code.

<system.web>
  <compilation profileGuidedOptimizations="None" />
</system.web>

This configuration will disable the support of spreading the JIT into multiple cores.

The server enables a Prefetcher technology, similar to the one Windows uses, to reduce the disk-read cost of paging during application startup. The Prefetcher is enabled by default; you can disable it using the following code:

<system.web>
  <compilation enablePrefetchOptimization="false" />
</system.web>

This setting disables the Prefetcher technology on the ASP.NET site. You can also configure your server to tune how much memory the GC consumes:

<runtime>
  <performanceScenario value="HighDensityWebHosting" />
</runtime>

The preceding configuration marks the website as a high-density website, which reduces the amount of memory consumed per session.

What is unobtrusive validation

Validation plays a vital role in any application that accepts user input. We generally use ASP.NET validators to specify validation for a particular control, and this validation forms the basis of any input. People use the validator controls available in ASP.NET (RequiredFieldValidator, RangeValidator, and so on) to validate controls when a page is submitted, when a control loses focus, or on whatever event the validator is associated with. Validators are among the most popular server-side controls; they handle client-side validation by producing an inline JavaScript block inside the actual page for each validator. Let us take an instance:

<asp:TextBox ID="Username" runat="server"></asp:TextBox>
<asp:RequiredFieldValidator ErrorMessage="Username is required!"
    ControlToValidate="Username" runat="server">
</asp:RequiredFieldValidator>
<asp:RegularExpressionValidator ErrorMessage="Username can only contain letters!"
    ControlToValidate="Username" ValidationExpression="^[A-Za-z]+$"
    runat="server">
</asp:RegularExpressionValidator>

The validator handles both client-side and server-side validation. When the preceding lines are rendered in the browser, they produce a mess of inline JavaScript.

.NET 4.5 uses unobtrusive validation instead. That means the inline JavaScript is replaced by data attributes in the HTML.

The result is plain HTML, which performs better than the inline JavaScript and is also very understandable, neat, and clean.
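For the validators shown earlier, the unobtrusive output looks roughly like the following. The attribute names are illustrative of the data-* pattern and may differ between framework versions:

```html
<input name="Username" type="text" id="Username" />
<span id="rfvUsername" style="display:none;"
      data-val="true"
      data-val-controltovalidate="Username"
      data-val-evaluationfunction="RequiredFieldValidatorEvaluateIsValid">
  Username is required!
</span>
```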

You can also turn off this default behavior for the whole application just by adding the following line in Application_Start of the Global.asax file:

void Application_Start(object sender, EventArgs e)
{
    // Disable UnobtrusiveValidation application wide
    ValidationSettings.UnobtrusiveValidationMode =
        UnobtrusiveValidationMode.None;
}

The preceding code will disable the feature for the application.

Applying appSettings configuration key values

Microsoft has implemented the ASP.NET web application engine in such a way that most of its configuration can be overridden by developers while building applications. A special configuration file named Machine.config provides the default configuration of every config section for all applications, while web.config is specific to an application hosted on IIS. IIS reads through the configuration of each directory and applies it to the pages inside it.

As configuring a web application is such a basic need, there is always a demand for templates covering specific sets of configuration without rewriting whole sections inside web.config, and for developer-specific tweaks that can be made without changing too much of the config. ASP.NET 4.5 introduces magic strings that can be used as configuration key values in the appSettings element to give special meaning to the configuration. For instance, if you want the built-in JavaScript encoder to encode the & character, you might use the following:

<appSettings>
  <add key="aspnet:JavaScriptDoNotEncodeAmpersand" value="false" />
</appSettings>

This will ensure that the & character is encoded as "\u0026", which is its JavaScript-escaped form. When the value is true, the default JavaScript string encoder leaves & unencoded.
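You can observe the effect through the framework's JavaScript string encoder; whether & is escaped depends on this setting:

```csharp
using System;
using System.Web;

class EncoderDemo
{
    static void Main()
    {
        // With ampersand encoding enabled (value="false" above),
        // '&' is expected to appear in the output as \u0026.
        string encoded = HttpUtility.JavaScriptStringEncode("Tom & Jerry");
        Console.WriteLine(encoded);
    }
}
```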

On the other hand, if you need to control whether ScriptResource.axd may serve arbitrary static files other than JavaScript, you can use another magic appSettings key to handle this:

<appSettings>
  <add key="aspnet:ScriptResourceAllowNonJsFiles" value="false" />
</appSettings>

This configuration ensures that ScriptResource.axd will not serve any file other than one with the .js extension, even if the page markup requests one.

Similar to this, you can also enable UnobtrusiveValidationMode on the website using a separate magic string in appSettings:

<appSettings>
  <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" />
</appSettings>

This configuration makes the application render HTML5 data attributes for validators.

There are a bunch of these appSettings key magic strings that you can use in your configuration to give special meaning to the web application. Refer to http://bit.ly/ASPNETMagicStrings for more information.

DLL intern in ASP.NET servers

Just as string reuse can be achieved using string intern tables, ASP.NET allows DLL interning, which reduces the need to load multiple copies of a DLL into memory from different physical locations. The interning functionality introduced with ASP.NET reduces the RAM requirement and load time: even though the same DLL resides in multiple physical locations, it is loaded only once into memory, and that memory is shared by multiple processes. ASP.NET maintains symbolic links placed in the bin folder that map to a shared copy of the assembly. Sharing assemblies using symbolic links requires a new tool named aspnet_intern.exe that lets you create and manage the store of interned assemblies.

To make sure that the assemblies are interned, we need to run the following code on the source directory:

aspnet_intern -mode exec -sourcedir "Temporary ASP.NET Files" -interndir "c:\assemblies"

This command scans the Temporary ASP.NET Files folder, moves the assemblies shared across applications into the c:\assemblies intern directory, and replaces the originals with symbolic links. Thus, once a DLL is loaded into memory, it is shared by other requests.

See also

How to work with statically-typed model binding in ASP.NET applications

Binding is a concept that attaches a source to a target such that when something is modified on the source, it is automatically reflected on the target. The concept of binding is not new in the .NET framework; it has been there from the beginning. With server-side controls, when we set DataSource, we don't expect the control to automatically produce the HTML output to be rendered; we expect to call the control's DataBind method. Something magical then happens in the background that generates the actual HTML from DataSource and produces the output. DataSource expects a collection of items, where each item produces a single entry in the control. For instance, if we pass a collection as the data source of a grid, the data bind will enumerate the collection and each entry in the collection will create a row of the DataGrid. To evaluate each property of an individual element, we use DataBinder.Eval, which uses reflection to evaluate the contextual property against the actual data.

Now, as we all know, DataBinder works on a string equivalent of the actual property name, so you cannot catch an error before you actually run the page. In the case of model binding, the binding knows the actual type of the bound object. Model binding has the information about the actual object type for which the collection is made, and can give you options such as IntelliSense and other advanced Visual Studio features when working with the item.

Getting ready

DataSource is a property of a databound element that takes a collection and provides a mechanism to repeat its output, replacing the contextual element of the collection with generated HTML. Each control generates HTML during the render phase of the ASP.NET page life cycle and returns the output to the client. The ASP.NET controls are built so elegantly that you can easily hook into their properties while the actual HTML is being created, and get the contextual controls that make up the HTML along with the contextual data element. For a template control such as Repeater, each ItemTemplate or AlternateItemTemplate property exposes a data item in its callback when it is actually rendered; this is the contextual object of DataSource on the nth iteration. DataBinder.Eval is a special API that evaluates a property on any object using reflection. It is entirely a runtime evaluation and hence cannot surface any mistakes in the designer at compile time. The contextual object also carries no type-related information inside the control.

With ASP.NET 4.5, the databound controls expose the contextual object as a generic type so that it is always strongly typed. Each control exposes the ItemType property, which can also be used inside the HTML designer to specify the type of the contextual element. The type is picked up automatically by the Visual Studio IDE, which produces proper IntelliSense and provides compile-time error checking against the type defined on the control.

In this recipe we are going to see step by step how to create a control that is bound to a model and define the HTML using its inherent Item object.

How to do it...

  1. Open Visual Studio and start an ASP.NET Web Application project.
  2. Create a class called Customer to actually implement the model. For simplicity, we are just using a class as our model:

    public class Customer
    {
        public string CustomerId { get; set; }
        public string Name { get; set; }
    }

    The Customer class has two properties: one holds the identifier of the customer and the other the name of the customer.

  3. Now let us add an ASPX file and add a Repeater control. The Repeater control has a property called ItemType that takes the fully qualified name of the model class. Here we pass the Customer class.
  4. Once ItemType is set for the Repeater control, you can directly use the contextual object inside ItemTemplate just by referring to it through the Item keyword:

    <asp:Repeater runat="server" ID="rptCustomers"
        ItemType="SampleBinding.Customer">
      <ItemTemplate>
        <span><%#: Item.Name %></span>
      </ItemTemplate>
    </asp:Repeater>

    Here, in this Repeater control, we have directly accessed the Name property of the Customer class. So, if we assign a list of Customer objects to its data source, it will bind the contextual objects appropriately. The ItemType property is available on all databound controls.

  5. The databound controls in ASP.NET 4.5 also support CRUD operations. Controls such as GridView, FormView, and DetailsView expose the SelectMethod, InsertMethod, UpdateMethod, and DeleteMethod properties. These let you specify the methods that in turn execute the corresponding DML statements.
  6. Add a new page called Details.aspx and configure it as follows:

    <asp:DetailsView ID="dvDepartments" runat="server"
        ItemType="ModelBindingSample.ModelDepartment"
        SelectMethod="dvDepartments_GetItem"
        InsertMethod="dvDepartments_InsertItem"
        UpdateMethod="dvDepartments_UpdateItem"
        DeleteMethod="dvDepartments_DeleteItem"
        AutoGenerateInsertButton="true">
    </asp:DetailsView>

    In the preceding code, you can see that I have specified all the DML methods. The code-behind must contain these methods, and you need to wire up the proper method for each operation.
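    The code-behind then supplies the four methods. The following is a minimal sketch with an in-memory list standing in for a real data store; the class shape, property names, and method signatures are assumptions for illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

public partial class Details : System.Web.UI.Page
{
    // In-memory stand-in for a repository or ORM context.
    private static readonly List<ModelDepartment> store =
        new List<ModelDepartment>();

    public ModelDepartment dvDepartments_GetItem(int? departmentId)
    {
        return store.FirstOrDefault(d => d.DepartmentId == departmentId);
    }

    public void dvDepartments_InsertItem(ModelDepartment item)
    {
        store.Add(item);
    }

    public void dvDepartments_UpdateItem(ModelDepartment item)
    {
        // Find and replace the matching record.
        int index = store.FindIndex(d => d.DepartmentId == item.DepartmentId);
        if (index >= 0) store[index] = item;
    }

    public void dvDepartments_DeleteItem(ModelDepartment item)
    {
        store.RemoveAll(d => d.DepartmentId == item.DepartmentId);
    }
}
```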

How it works...

Every collection control loops through its DataSource and renders the output. The bindable controls accept a collection so that they can repeat their template, running the same code over and over with the contextual object for each index. The contextual element is present during the HTML-rendering phase. ASP.NET 4.5 adds the ability to define the type of an individual item of the collection, such that the template enforces this conversion and the strongly typed contextual item is made available to the template.

In other words, what we used to do with Eval can now be done easily using the Item contextual object, which is of the same type as defined in the ItemType property. The designer enumerates the properties into an IntelliSense menu, just like a C# code window, making code easier to write.

Each databound control in ASP.NET 4.5 allows CRUD operations. For every CRUD operation, there is a specific method that can be configured on the control to handle it. You should remember that after each of these operations, the control calls DataBind again so that the data gets refreshed.

There's more...

Model binding is not the only important thing here. Let us discuss some of the other important concepts that fit this category.

ModelBinding with filter operations

ModelBinding in ASP.NET 4.5 has been enhanced to support most of the operations that we regularly need in our ASP.NET pages. Among the interesting features is support for filters when selecting data for a control. Let us use DetailsView to introduce this feature:

<asp:DropDownList ID="ddlDepartmentNames" runat="server"
    ItemType="ModelBindingSample.ModelDepartment" AutoPostBack="true"
    DataValueField="DepartmentId" DataTextField="DepartmentName"
    SelectMethod="GetDepartments">
</asp:DropDownList>
<asp:DetailsView ID="dvDepartments" runat="server"
    ItemType="ModelBindingSample.ModelCustomer"
    SelectMethod="dvDepartments_GetItems"
    InsertMethod="dvDepartments_InsertItem"
    UpdateMethod="dvDepartments_UpdateItem"
    DeleteMethod="dvDepartments_DeleteItem"
    AutoGenerateInsertButton="true">
</asp:DetailsView>

Here you can see the DropDownList control calls GetDepartments to generate the list of available departments. The DetailsView control, on the other hand, uses the ModelCustomer class to generate the customer list. SelectMethod binds the control to its data. Now, to filter the results of SelectMethod, we use the following code:

public IQueryable<ModelCustomer> dvDepartments_GetItems(
    [Control("ddlDepartmentNames")] string deptid)
{
    // get customers for a specific id
}

This method is automatically called when the drop-down list changes its value. The selected value of the DropDownList control is automatically passed into the method, and the result is bound to the DetailsView automatically. Remember that the parameter of dvDepartments_GetItems is always passed as a nullable value; so, if the department ID were declared as an integer, it should be declared as int? rather than int. The attribute on the argument names the control that supplies the value for the query. You need to return an IEnumerable (IQueryable in our case) of the items to be bound to the control.

You can also specify a filter using the query string, as the following code shows:

public IQueryable<Customer> GetCustomers([QueryString] string departmentid)
{
    return null;
}

This code takes departmentid from the query string and uses it to load the databound control, instead of taking the value from a control on the page.

See also

Refer to the following links.

Introduction to HTML5 and CSS3 in ASP.NET applications

The Web is the medium that runs over the Internet; it is a service that already has us in its grasp. Literally, if you think of the Web, the first things that come to mind are HTML, CSS, and JavaScript. Browsers are the user agents used to communicate with the Web. The Web has been around for a long time and is used to serve information about businesses, communities, social networks, and virtually everything you can think of. For a long period of time, users primarily visited websites to see text-based content with minimal UI, content that could easily be consumed by search engines. In those websites, all the browser does is send a request for a page, and the server serves the client the appropriate page, which is then rendered in the browser. But with the introduction of modern HTML, websites have gradually adopted interactivity in the form of CSS, AJAX, and iframes, or even sandboxed applications using Silverlight, Flash, and so on.

Silverlight and Adobe AIR (Flash) are typically used when the requirement is high interactivity and a rich client. Such applications look like desktop applications and interact with the user as richly as they can. But the problems with sandboxed applications are that they are slow to load and require every browser to install the appropriate plugin before it can actually run the application. They are heavyweight and are not rendered by the browser engine itself.

Even though they are popular these days, most development still employs the traditional approach of HTML and CSS. Most businesses cannot afford the long loading waits, and as we move towards a wider range of devices, many of them do not support these plugins at all. Long-term user requirements made it important to take traditional HTML and CSS further, extending them in such a way that these ongoing requirements could be solved with traditional code. The popularity of the ASP.NET technology also points to the popularity of HTML: even though we deal with server-side controls (in the case of ASP.NET applications), internally everything renders HTML to the browser.

HTML5, first drafted in June 2004 and taken up by the W3C, is due to be standardized in 2014, making most of the things that previously needed desktop or sandboxed plugins achievable using plain HTML, CSS, and JavaScript. Long-standing requirements such as offline web applications, data storage, hardware access, and even graphics and multimedia are now easily possible with HTML5. So, essentially, what we used to rely on (the sandboxed browser plugins) is now going to be standardized. In this recipe, we are going to cover some of the interesting HTML5 features that deserve special attention.

Getting ready

HTML5 does not need the installation of any special SDK. Most current browsers already support HTML5, and all new browsers are going to support most of these features. The official HTML5 logo is shown in the following figure.

HTML5 has introduced a lot of new advanced features, yet it also tries to simplify things that we don't really need to know but often have to remember in order to write code. For instance, the DocType declaration of an HTML5 document has been simplified to the following single line:

<!DOCTYPE html>

So, for an HTML5 document, the document type that specifies the page is simply HTML. Similar to DocType, the character set for the page is also defined in very simple terms.

<meta charset="utf-8" />

The character set can be of any type; here we specified the document to be UTF-8. According to the specification, you no longer need the http-equiv or content attributes to define the charset of an HTML5 document. Let us now jot down some of the interesting HTML5 items that we are going to take on in this recipe: semantic tags, better markup, descriptive link relations, microdata elements, new form and field types, CSS enhancements, and JavaScript enhancements.

Not all browsers presently support every feature defined in HTML5. The Modernizr project maintains a list of cross-browser polyfills that can fill the gaps. You can read more about it at the following link: https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills

How to do it...

  1. The HTML5 syntax has been enriched with a number of important tags, including header, nav, aside, figure, and footer, which help define the semantic meaning of the document:

    <body>
      <header>
        <hgroup>
          <h1>Page title</h1>
          <h2>Page subtitle</h2>
        </hgroup>
      </header>
      <nav>
        <ul>
          <!-- Specify navigation -->
        </ul>
      </nav>
      <section>
        <article>
          <header>
            <h1>Title</h1>
          </header>
          <section>
            Content for the section
          </section>
        </article>
        <article>
          <aside>
            Related links
          </aside>
          <figure>
            <img src="logo.jpg"/>
            <figcaption>Special HTML5 Logo</figcaption>
          </figure>
        </article>
      </section>
      <footer>
        Copyright © <time datetime="2010-11-08">2010</time>.
      </footer>
    </body>

    By reading the document, one can clearly identify its semantic meaning. The header tag specifies the header information of the page, and the nav tag defines the navigation panel. The section tag contains the articles, and alongside them there are related links in an aside. The img tag is wrapped in a figure tag, and finally the footer information is defined under the footer tag. A diagrammatic representation of the layout is shown as follows.

    The vocabulary of the page, which was previously expressed using div elements and CSS classes, is now carried by the HTML itself, and the whole document conveys its meaning to the reader.

  2. HTML5 not only improves the semantic meaning of the document, it also adds new markup. For instance, take a look at the following code:

    <input list="options" type="text"/>
    <datalist id="options">
      <option value="Abhishek"/>
      <option value="Abhijit"/>
      <option value="Abhik"/>
    </datalist>

    The datalist element specifies the autocomplete list for a control; it automatically pops up a menu while we type inside a textbox. The input tag refers to the datalist using its list attribute. Now, if you start typing in the textbox, a list of matching items appears automatically.

    <details>
      <summary>HTML 5</summary>
      This is a sliding panel that comes when the HTML5 header is clicked
    </details>

    The preceding markup specifies a sliding panel container. We used to specify these using JavaScript, but now HTML5 comes with controls that handle these panels automatically.

  3. HTML5 comes with a progress bar. It supports the progress and meter tags that define the progress bar inside an HTML document:

    <meter min="0" max="100" low="40" high="90" optimum="100" value="91">A+</meter>
    <progress value="75" max="100">3/4 complete</progress>

    The progress shows 75 percent filled in and the meter shows a value of 91.

  4. HTML5 added a whole set of new attributes to specify ARIA attributes and microdata for a block. For instance, consider the following code:

    <div itemscope itemtype="http://example.org/band">
      <ul id="tv" role="tree" tabindex="0" aria-labelledby="node1">
        <li role="treeitem" tabindex="-1" aria-expanded="true">Inside Node1</li>
      </ul>
    </div>

    Here, itemscope defines the microdata and the ul defines a tree with ARIA attributes. This data helps analyzers, automated tools, and search engines understand the document.

  5. There are new form input types that have been introduced with HTML5:

    <input type="email" value="some@email.com" />
    <input type="date" min="2010-08-14" max="2011-08-14" value="2010-08-14"/>
    <input type="range" min="0" max="50" value="10" />
    <input type="search" results="10" placeholder="Search..." />
    <input type="tel" placeholder="(555) 555-5555" pattern="^\(?\d{3}\)?[-\s]\d{3}[-\s]\d{4}.*?$" />
    <input type="color" placeholder="e.g. #bbbbbb" />
    <input type="number" step="1" min="-5" max="10" value="0" />

    These input types give a special meaning to the form.

    The preceding figure shows how the new controls are laid out when placed inside an HTML document. The controls are email, date, range, search, tel, color, and number respectively.
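    The pattern attribute on the tel input above is an ordinary regular expression, so its behavior can be checked in plain JavaScript. The following sketch (the helper name and sample numbers are invented for illustration) applies the same pattern:

```javascript
// The same pattern used by the tel input above.
var telPattern = /^\(?\d{3}\)?[-\s]\d{3}[-\s]\d{4}.*?$/;

function isValidTel(value) {
  return telPattern.test(value);
}

console.log(isValidTel('(555) 555-5555')); // true
console.log(isValidTel('not a number'));   // false
```

Browsers apply the same test automatically when the form is submitted, flagging the field as invalid when the value does not match.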

  6. HTML5 supports vector drawing over the document. We can use a canvas to draw 2D as well as 3D graphics over the HTML document:

    <!-- The canvas element the script looks up (dimensions are illustrative) -->
    <canvas id="canvas" width="600" height="250"></canvas>
    <script>
      var canvasContext = document.getElementById("canvas").getContext("2d");
      canvasContext.fillRect(250, 25, 150, 100);
      canvasContext.beginPath();
      canvasContext.arc(450, 110, 100, Math.PI * 1/2, Math.PI * 3/2);
      canvasContext.lineWidth = 15;
      canvasContext.lineCap = 'round';
      canvasContext.strokeStyle = 'rgba(255, 127, 0, 0.5)';
      canvasContext.stroke();
    </script>

    Consider the following diagram.

    The preceding code creates an arc on the canvas and a rectangle filled with the color black as shown in the diagram. The canvas gives us the options to draw any shape within it using simple JavaScript.
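    The angles passed to arc() above are in radians, measured from the positive x axis. As a small sketch (the helper name is ours, invented for illustration), the start point of the arc drawn above can be computed with basic trigonometry:

```javascript
// Point on a circle of radius r centred at (cx, cy) at angle theta,
// using the same parameterisation as the canvas arc() call.
function pointOnArc(cx, cy, r, theta) {
  return {
    x: cx + r * Math.cos(theta),
    y: cy + r * Math.sin(theta)
  };
}

// The start angle of the arc above is Math.PI * 1/2: directly below
// the centre (450, 110), since canvas y coordinates grow downwards.
var start = pointOnArc(450, 110, 100, Math.PI / 2);
console.log(Math.round(start.x), Math.round(start.y)); // 450 210
```

The same arithmetic is useful whenever you need to place labels or hit-test shapes drawn on a canvas.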

  7. As the world is moving towards multimedia, HTML5 introduces audio and video tags that allow us to run audio and video inside the browser. We do not need any third-party library or plugin to run audio or video inside a browser:

    <audio id="audio" src="sound.mp3" controls></audio>
    <video id="video" src="movie.webm" autoplay controls></video>

    The audio tag plays the audio and the video tag plays the video inside the browser. When controls is specified, the browser provides built-in playback controls to the user.

  8. With CSS3 on the way, CSS has been greatly improved to enhance HTML document styles. For instance, constructs such as .row:nth-child(even) give the programmer control over a particular set of items in the document, allowing a more granular, programmatic approach through CSS alone.
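The :nth-child(even) selector is a special case of the general :nth-child(an+b) formula. As a sketch of the same selection logic in plain JavaScript (the function is invented for illustration, not part of any API):

```javascript
// 1-based sibling indices matched by :nth-child(an+b) among n siblings;
// a=2, b=0 corresponds to :nth-child(even), a=2, b=1 to :nth-child(odd).
function nthChildIndices(a, b, n) {
  var out = [];
  for (var i = 1; i <= n; i++) {
    // i matches when i = a*k + b for some integer k >= 0.
    if (a === 0 ? i === b : ((i - b) % a === 0 && (i - b) / a >= 0)) {
      out.push(i);
    }
  }
  return out;
}

console.log(nthChildIndices(2, 0, 6)); // even -> [ 2, 4, 6 ]
console.log(nthChildIndices(2, 1, 6)); // odd  -> [ 1, 3, 5 ]
```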

How it works...

HTML5 is the standardization of web environments under the W3C. The HTML5 specification is still in the draft stage (a 900-page specification is available at http://www.w3.org/html/wg/drafts/html/master/), but most modern browsers have already started supporting the features it describes. Standardization is due in 2014, by which time all browsers are expected to support HTML5 constructs.

Moreover, with the evolution of smart devices, mobile browsers are also gaining support for HTML5 syntax. Most smart devices, whether Android, iPhone, or Windows Phone, now ship HTML5-capable browsers, so the HTML that runs on big screens can still show its content on those small browsers.

HTML5 improves the richness of the web applications and hence most people have already started shifting their websites to the future of the Web.

There's more...

HTML5 has introduced a lot of new enhancements, which cannot all be covered in a single recipe. Let us look into a few more HTML5 enhancements that are important to know.

How to work with web workers in HTML5

Web workers are one of the most awaited features of the entire HTML5 specification. The computing environment is turning towards multicore machines; today, virtually every computer has at least two cores installed. Browsers are quite capable of producing multiple threads that run in parallel on different cores. But programmatically, JavaScript has not given us the flexibility to run parallel tasks on different cores until now.

Previously, developers used setTimeout, setInterval, or XMLHttpRequest to create non-blocking calls, but these do not provide true concurrency. If you put a long loop inside setTimeout, it will still hang the UI: these APIs work asynchronously on time slices of the UI thread, but they do not actually spawn a new thread to run the code.

As the world moves towards rich, client-side user interfaces, we increasingly write code that performs its computation on the client itself. Along those lines, it is important that browsers be able to use multiple cores while executing JavaScript.

Web workers are a JavaScript facility that lets you run a script in a separate thread altogether, potentially on another core, and communicate with the UI thread using messages in much the same way as we do in other languages.

Let's look into the code to see how it works.

var worker = new Worker('task.js');
worker.onmessage = function(event) {
  alert(event.data);
};
worker.postMessage('data');

Here, Worker is a type that loads task.js and runs its code in a new thread; the worker starts as soon as it is constructed. We send data to it by calling postMessage on the worker. We have also attached a callback to the onmessage event, so that when the JavaScript inside task.js calls postMessage, this callback receives the message. Inside task.js we wrote:

self.onmessage = function(event) {
  // Do some CPU intensive work.
  self.postMessage("recv'd: " + event.data);
};

Here, after some CPU-intensive work, we use self.postMessage to send back the data we received from the UI thread, and the onmessage event handler on the worker object is executed with the received data.
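The "CPU intensive work" in the comment above can be any long-running pure function. As a hedged sketch (the function and the prime-counting task are invented for illustration), this is the kind of computation worth moving into a worker, and the computation itself can be tested outside the worker:

```javascript
// A CPU-intensive function a worker might run: count the primes
// below n by trial division.
function countPrimesBelow(n) {
  var count = 0;
  for (var i = 2; i < n; i++) {
    var isPrime = true;
    for (var d = 2; d * d <= i; d++) {
      if (i % d === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) {
      count++;
    }
  }
  return count;
}

// Inside task.js, the worker would simply wrap it:
// self.onmessage = function(event) {
//   self.postMessage(countPrimesBelow(event.data));
// };

console.log(countPrimesBelow(100)); // 25
```

Run inside a worker, a large n no longer freezes the page, because the loop executes off the UI thread.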

Working with Socket using HTML5

HTML5 supports full-duplex, bidirectional sockets that run over the Web. Browsers are capable of making socket requests directly. The important thing to note about sockets is that they send only the data, without the overhead of HTTP headers and the other HTTP machinery that accompanies ordinary requests, so the bandwidth used by sockets is dramatically reduced. To use sockets from the browser, a new protocol has been specified by the W3C as part of the HTML5 effort: WebSocket, which supports two-way communication between the client and the server over a single TCP channel.

To build the socket server, we are going to use node.js on the server side. Install node.js using the installer available at http://nodejs.org/dist/v0.6.6/node-v0.6.6.msi. Once you have installed node.js, create a server implementation of the socket.

var io = require('socket.io');

// Creates an HTTP server listening on port 8124
var socket = io.listen(8124);

// Bind the connection event; this is called during connection
socket.sockets.on('connection', function(socket) {
  // This will be fired when data is received from the client
  socket.on('message', function(msg) {
    console.log('Received from client ', msg);
  });

  // Emit a message to the client
  socket.emit('greet', {hello: 'world'});

  // This will fire when the client has disconnected
  socket.on('disconnect', function() {
    console.log('Server has disconnected');
  });
});

In the preceding code, the server implementation is made. The require('socket.io') snippet loads the socket.io module, which provides all the node.js APIs needed for the socket implementation. The connection event fires on the server whenever a client connects; we listen on port 8124. socket.emit sends a message from the server: here, on the greet event, we pass the client a JSON object with a property hello. Finally, the disconnect event is called when the client disconnects the socket.

Now, for the client implementation, we need to create an HTML file.

<html>
  <head>
    <title>WebSocket Client Demo</title>
    <script src="http://localhost:8124/socket.io/socket.io.js"></script>
    <script>
      // Create a socket and connect to the server
      var socket = io.connect('http://localhost:8124/');
      socket.on("connect", function() {
        alert("Client has connected to the server");
      });
      socket.on('greet', function (data) {
        alert(data.hello);
      });
    </script>
  </head>
</html>

Here we connect to the server on port 8124. The connect event is invoked first, and we show an alert when the client connects. We also handle the greet event to receive data passed from the server to the client. If we run both the server and the client, we will see two alerts: one when the connection is made and another for the greeting. The greet message carries a JSON object whose hello property is world.

The URL for the socket from the browser looks like so.

[scheme] '://' [host] '/' [namespace] '/' [protocol version] '/' [transport id] '/' [session id] '/' ( '?' [query] )

Here, we see the following components:

  • Scheme: This can bear values such as http or https (for web sockets, the browser changes it to ws:// after the connection is established; it is an upgrade request)
  • host: This is the host name of the socket server
  • namespace: This is the Socket.IO namespace, the default being socket.io
  • protocol version: The protocol version the client supports; the default is 1
  • transport id: This is for the different supported transports which includes WebSockets, xhr-polling, and so on
  • session id: This is the web socket session's unique session ID for the client
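As an illustration of the scheme above, a concrete URL can be split into these parts with a little JavaScript. The helper and the sample URL below are invented for this sketch, not part of any API:

```javascript
// Split a Socket.IO-style URL into the named parts described above.
function parseSocketUrl(url) {
  var m = url.match(
    /^(https?):\/\/([^\/]+)\/([^\/]+)\/(\d+)\/([^\/]+)\/([^\/?]+)(?:\?(.*))?$/);
  if (!m) {
    return null;
  }
  return {
    scheme: m[1],
    host: m[2],
    namespace: m[3],
    protocolVersion: m[4],
    transportId: m[5],
    sessionId: m[6],
    query: m[7]
  };
}

var parts = parseSocketUrl(
  'http://localhost:8124/socket.io/1/websocket/abc123?t=1');
console.log(parts.namespace, parts.transportId); // socket.io websocket
```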

Getting GeoLocation from the browser using HTML5

As we become more and more inclined towards devices, browsers are trying their best to implement features that suit those needs. HTML5 introduces GeoLocation APIs in the browser that enable you to get the user's location directly using JavaScript.

Even though the API is quite primitive, browsers are capable of detecting the actual location of the user using Wi-Fi, satellite, or other external sources when available. As a programmer, you just call the location API, and everything else is handled automatically by the browser.

As geolocation bears sensitive information, it is important to ask the user for permission. Let's look at the following code.

if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function(position) {
    var latLng = "{" + position.coords.latitude + ", " +
                 position.coords.longitude + "} with accuracy: " +
                 position.coords.accuracy;
    alert(latLng);
  }, errorHandler);
}

Here in the code we first detect whether the geolocation API is available to the current browser. If it is available, we can use getCurrentPosition to get the location of the current device and the accuracy of the position as well.

We can also use navigator.geolocation.watchPosition to continue watching the device location at an interval when the device is moving from one place to another.
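When watching positions, you often want the distance covered between two successive readings. A minimal haversine sketch (the function name is ours; the coordinates are illustrative) computes the great-circle distance between two latitude/longitude pairs:

```javascript
// Great-circle distance in kilometres between two lat/lng points
// (haversine formula), e.g. between successive watchPosition readings.
function distanceKm(lat1, lng1, lat2, lng2) {
  var toRad = function(deg) { return deg * Math.PI / 180; };
  var R = 6371; // mean Earth radius in km
  var dLat = toRad(lat2 - lat1);
  var dLng = toRad(lng2 - lng1);
  var a = Math.pow(Math.sin(dLat / 2), 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.pow(Math.sin(dLng / 2), 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// One degree of latitude is roughly 111 km.
console.log(distanceKm(0, 0, 1, 0).toFixed(1)); // 111.2
```

Inside a watchPosition callback, you would keep the previous position.coords and feed both pairs to such a function to track movement.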


This article provided a detailed overview of the enhancements to ASP.NET with some real-world examples.
