Best Practices for Modern Web Applications

by Daniel Li | April 2014 | Open Source

In this article by Daniel Li, author of Mastering Grunt, we will cover today's best practices for frontend development: load time reduction, search engine optimization, form validation, and responsive design.

The importance of search engine optimization

Every day, web crawlers scrape the Internet for new and updated content to feed their associated search engines. Most people find web pages by entering a query into a search engine and selecting one of the first few results. Search engine optimization (SEO) is a set of practices used to maintain and improve a site's search result rankings over time.

Item 1 – using keywords effectively

In order to provide information to web crawlers, websites supply keywords in their HTML meta tags and content. An effective approach to choosing keywords is to:

  • Come up with a set of keywords that are pertinent to your topic

  • Research common search keywords related to your website

  • Take an intersection of these two sets of keywords and preemptively use them across the website

Once this final set of keywords is determined, it is important to spread them across your website's content whenever possible. For instance, a ski resort in California should ensure that their website includes terms such as California, skiing, snowboarding, and rentals. These are all terms that individuals would look up via a search engine when they are interested in a weekend at a ski resort. Contrary to popular belief, the keywords meta tag does not create any value for site owners as many search engines consider it a deprecated index for search relevance. The reasoning behind this goes back many years to when many websites would clutter their keywords meta tag with irrelevant filler words to bait users into visiting their sites. Today, many of the top search engines have decided that content is a much more powerful indicator for search relevance and have concentrated on this instead.

However, other meta tags, such as description, are still being used for displaying website content on search rankings. These should be brief but powerful passages to pull in users from the search page to your website.
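For instance, a description meta tag for the ski resort mentioned above might look like the following (the wording is purely illustrative):

<meta name="description" content="Family-friendly California ski resort offering skiing, snowboarding, lessons, and equipment rentals for weekend getaways.">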

Item 2 – header tags are powerful

Header tags (also known as h-tags) are often used by web crawlers to determine the main topic of a given web page or section. It is often recommended to use only one set of h1 tags to identify the primary purpose of the web page, and any number of the other header tags (h2, h3, and so on) to identify section headings.

Item 3 – make sure to have alternative attributes for images

Despite recent advances in image recognition technology, web crawlers do not have the resources to parse every image on the Internet for content. As a result, it is advisable to provide an alt attribute for search engines to read while they scrape your web page. For instance, suppose you were the webmaster of the Seattle Water Sanitation Plant and wished to upload a process flow chart image to your website.

Since web crawlers make use of the alt attribute while sifting through images, you would ideally embed the image using the following code:

<img src="flow_chart.png" alt="Seattle Water Sanitation Process Flow Chart" />

This will leave the content in the form of a keyword or phrase that can help contribute to the relevancy of your web page on search results.

Item 4 – enforcing clean URLs

While creating web pages, you'll often find the need to identify them with a URL ID. The simplest approach is often to use a number or symbol that maps to your data for easy retrieval. The problem is that a number or symbol alone does nothing to identify the content for web crawlers or your end users.

The solution to this is to use clean URLs. By adding a topic name or phrase to the URL, you give web crawlers more keywords to index on. Additionally, end users who receive the link can better judge the content, since the URL tells them the topic of the page. A simple way to integrate clean URLs while retaining the number or symbol identifier is to append a readable slug, which describes the topic, after the identifier at the end of the URL. Then, apply a regular expression to parse out the identifier for your own use; for instance, take a look at the following sample URL:

http://www.example.com/post/24/golden-dragon-review

The number 24, when parsed out, helps your server easily identify the blog post in question. The slug, golden-dragon-review, communicates the topic at hand to both web crawlers and users.

While creating the slug, the best practice is often to remove all non-alphanumeric characters and replace all spaces with dashes. Contractions such as can't, don't, or won't can be replaced by cant, dont, or wont because search engines can easily infer their intended meaning. It is also important to realize that spaces should not be replaced by underscores, as underscores are not interpreted as word separators by web crawlers.
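A minimal sketch of both steps in JavaScript follows; the helper names are illustrative rather than taken from the book:

// Build a slug: strip non-alphanumeric characters, collapse spaces into dashes.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '')  // "Golden Dragon: A Review!" -> "golden dragon a review"
    .trim()
    .replace(/\s+/g, '-');         // -> "golden-dragon-a-review"
}

// Parse the numeric identifier back out of a clean URL such as
// http://www.example.com/post/24/golden-dragon-review
function parsePostId(url) {
  var match = url.match(/\/post\/(\d+)\//);
  return match ? parseInt(match[1], 10) : null;
}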

Item 5 – backlink whenever safe and possible

Search rankings are influenced by your website's clout on sites that search engines deem trustworthy. For instance, due to the restricted access to .edu or .gov domains, websites on these domains are deemed trustworthy and given a higher level of authority when it comes to search rankings. This means that any website linked to from a trustworthy site is, in turn, valued more highly.

Thus, it is worth seeking backlinks on relevant websites whose users would actively be interested in your content. Backlinking irrelevantly often carries consequences, as the practice can be caught automatically by web crawlers that compare the keywords of your page with those of the host page.

Item 6 – handling HTTP status codes properly

HTTP status codes help the client and server communicate the status of page requests in a clean and consistent manner. The following table reviews the most important status codes and their effect on SEO:

Status code | Alias | Effect on SEO
200 | Success | The page loads and its content contributes to SEO
301 | Permanent redirect | The page redirects and the content at the destination contributes to SEO
302 | Temporary redirect | The page redirects but the content at the destination does not contribute to SEO
404 | Client error (not found) | The error page loads but its content does not contribute to SEO
500 | Server error | The page does not load and there is no content to contribute to SEO

In an ideal world, all pages would return the 200 status code. Unfortunately, URLs get misspelled, servers throw exceptions, and old pages get moved, which leads to the need for other status codes. Thus, it is important that each situation be handled to maximize communication to both web crawlers and users and minimize damage to one's search ranking.

When a URL gets misspelled, it is important to provide a 301 redirect to a close match or another popular web page. This can be accomplished by using a clean URL and parsing out an identifier, regardless of the slug that follows it. This way, there exists content that contributes directly to the search ranking instead of just leaving a 404 page.
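A minimal sketch of this idea, assuming an Express-based Node.js server and a hypothetical findPostById lookup (neither is prescribed by the article):

// Redirect any /post/:id/* URL whose slug is misspelled or missing
// to the canonical clean URL with a 301 (permanent) redirect.
var express = require('express');
var app = express();

app.get('/post/:id/:slug?', function (req, res) {
  var post = findPostById(req.params.id); // hypothetical data lookup
  if (!post) {
    return res.status(404).send('Not found. You may be looking for one of our recent posts.');
  }
  if (req.params.slug !== post.slug) {
    // Wrong or missing slug: point crawlers and users at the canonical URL.
    return res.redirect(301, '/post/' + post.id + '/' + post.slug);
  }
  res.render('post', { post: post });
});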

Server errors should be handled as soon as possible. When a page does not load, it harms the experience for both users and web crawlers, and over an extended period of time, it can cause that page's ranking to decay.

Lastly, 404 pages should be developed with your users in mind. When you choose not to redirect them to the most relevant link, it is important to either offer suggested web pages or a search box to keep them engaged with your content.

The connect-rest-test Grunt plugin can be a healthy addition to any software project to test the status codes and responses from a RESTful API. You can find it at https://www.npmjs.org/package/connect-rest-test.

Alternatively, while testing pages outside of your RESTful API, you may be interested in considering grunt-http-verify to ensure that status codes are returned properly. You can find it at https://www.npmjs.org/package/grunt-http-verify.

Item 7 – making use of your robots.txt and site map files

Often, there exist directories in a website that are available to the public but should not be indexed by a search engine. The robots.txt file, when placed in your website's root, defines exclusion rules for web crawling, asking a specified set of search engines (those that honor the file) to stay out of certain directories.

For instance, the following example disallows all search engines that choose to parse your robots.txt file from visiting the music directory on a website:

User-agent: *
Disallow: /music/

While building navigation with dynamic content such as JavaScript libraries or Adobe Flash widgets, it's important to understand that web crawlers have limited ability to scrape these. Site maps help to define the relational mapping between web pages when crawlers cannot infer it themselves. In other words, where the robots.txt file defines a set of search engine exclusion rules, the sitemap.xml file, also located in the website's root, defines a set of inclusion rules. The following XML snippet is a brief example of a site map and its attributes:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/</loc>
    <lastmod>2014-11-24</lastmod>
    <changefreq>always</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://example.com/post/24/golden-dragon-review</loc>
    <lastmod>2014-07-13</lastmod>
    <changefreq>never</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>

The attributes mentioned in the preceding code are explained in the following table:

Attribute | Meaning
loc | The URL of the page to be crawled
lastmod | The date on which the page was last modified
changefreq | How frequently the page is expected to change and, therefore, how often the crawler should revisit it
priority | The page's priority relative to the other pages on the site

Using Grunt to reinforce SEO practices

With the rising popularity of client-side web applications, SEO requirements often go unmet because page content does not exist until JavaScript runs. Certain Grunt plugins provide a workaround for this: they load each web page, wait for the dynamic content to render, and take an HTML snapshot. These snapshots are then served to web crawlers for indexing, while the user-facing dynamic web application is excluded from scraping entirely.

Several Grunt plugins are available that accomplish this need.

Form validation in the modern web world

The first task a user often has to complete is signing up in order to access the full content of a website. With the growing number of web applications in the world today, this process should be simple and frictionless, guiding users through it. Dedicated client-side form validation libraries make this far easier to accomplish than before.

Item 8 – using client-side validation over error pages

In the past, when a user submitted invalid form data, it would be validated on the server side and the user would be redirected to an error page. Nowadays, client-side libraries allow web developers to embed dynamic validation in the page itself, showing inline errors preemptively to notify users when data is missing or invalid. This helps to ensure that data is validated before it is sent to the server, which removes friction from the sign-up process once the user submits their information. One such example is shown in the following screenshot:

The use of client-side form validation libraries easily improves one's user experience.
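As a minimal sketch of the idea, using plain HTML5 constraint validation rather than any particular library (the form id and field names are illustrative), inline errors can be shown before the form ever reaches the server:

<form id="signup" novalidate>
  <input type="email" name="email" required placeholder="you@example.com">
  <span class="error" data-for="email"></span>
  <button type="submit">Sign up</button>
</form>
<script>
document.getElementById('signup').addEventListener('submit', function (e) {
  var email = this.elements.email;
  var error = document.querySelector('.error[data-for="email"]');
  if (!email.checkValidity()) {
    e.preventDefault(); // keep the invalid data from reaching the server
    error.textContent = 'Please enter a valid e-mail address.';
  } else {
    error.textContent = '';
  }
});
</script>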

Item 9 – differentiating required and optional information

Occasionally, a form will contain inputs that collect information that is not strictly required. Examples include personal details such as phone numbers or postal codes, or extraneous user information such as biographies. Although this information is optional, asking for it can still add unnecessary friction to your user's experience. Thus, it is worthwhile to evaluate whether this information even adds value to your product.

While listing required fields, it is important to label them with an identifier such as a red asterisk (*). A client-side library can then be used to validate that the field is filled out and is valid.

Item 10 – avoiding confusing fields

Certain fields are more complex than others when it comes down to validity. Thus, it is important to consider the steps that a user must take while filling out a field.

For instance, consider a field that requires a user's Twitter ID. The possible questions include:

  • What does an ID look like?

  • What constitutes a valid ID?

  • What steps does a user need to take to find and copy their ID in?

  • Does it need to be prefixed with an @ character, as it is on Twitter?

An easier approach would be to request the user's Twitter URL instead, as shown in the following screenshot. This way, all they need to do is visit their Twitter profile, copy the URL, and paste it into the field. By adding an example URL to the input as a placeholder, you can help the user understand what is required.

Using placeholders in input boxes helps users identify the format in which their data should be entered.
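A minimal version of such an input (the field name and placeholder value are illustrative):

<input type="url" name="twitter_url" placeholder="https://twitter.com/yourhandle" required>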

Lastly, as a developer, it is easy to fall into the trap of defining a class and populating a form with fields that represent the members of that class. It is important, however, to sit back, review the fields, and figure out which ones need clarification for the users who will fill out your form. Add explanations to your inputs as necessary, placing them either to the side or in a tooltip.

Item 11 – using confirmation fields for pertinent data

While filling out forms, users often mistype their information. Certain data, such as passwords, cell phone numbers, or e-mail addresses, may be required for the sign-up or sign-in process and must be correct. The best practice for validating this data is often to ask the user to type it in twice, in two adjacent fields, to ensure that it was not mistyped.

It is worth considering whether or not a field requires an adjacent confirmation partner. If the data can be changed upon login, the user can always ensure that it is accurate in the future. E-mail addresses and cell phone numbers often only require confirmation fields when they are strictly mandatory for sign-up, such as when used for e-mail or SMS validation. This is, however, all up to the web developer's discretion.
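A minimal sketch of such a confirmation check in plain JavaScript (the form id and field names are illustrative):

<script>
// Compare the two adjacent e-mail fields before the form is submitted.
document.getElementById('signup').addEventListener('submit', function (e) {
  var email = this.elements.email.value;
  var confirmation = this.elements.email_confirm.value;
  if (email !== confirmation) {
    e.preventDefault();
    alert('The e-mail addresses you entered do not match.');
  }
});
</script>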

Item 12 – using custom inputs for complex data types

When accepting strings or numbers in inputs, it is fairly simple to validate and accept the data as is for use. For other data types, such as dates, URLs, or hexadecimal color codes, it is usually worthwhile to incorporate custom inputs via JavaScript libraries or HTML5 features to force a specific format.

For instance, you might specify that a date input must be in the MM/DD/YY format. However, a user may accidentally disregard the format requirement and type the date in as DD/MM/YY instead. If the date was March 4, 2014, both 03/04/14 and 04/03/14 are plausible dates, so the mistake would slip past server-side validation.

With a custom calendar input, the user simply selects the date on a widget, which removes the risk of misinterpreted information. A great example of such a plugin is bootstrap-datepicker, as shown in the following screenshot:

It can be found at https://github.com/eternicode/bootstrap-datepicker.
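A minimal initialization sketch, assuming jQuery and bootstrap-datepicker are already loaded on the page (the selector and format are illustrative):

<script>
// Attach the calendar widget to the input and constrain its output format.
$('#checkin-date').datepicker({
  format: 'mm/dd/yyyy',
  autoclose: true
});
</script>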

Item 13 – preventing automated submissions with CAPTCHAs

When running a website that provides a service, bots may be written by third parties to take advantage of it. These bots often automate the registration of accounts and website processes in order to phish user details, scrape website content, or take part in online polls.

Thankfully, this practice can be halted with the use of CAPTCHAs. Although imperfect, CAPTCHAs act as a significant enough barrier to entry for most bot scripters. By incorporating a modern CAPTCHA library such as reCAPTCHA, as shown in the following screenshot, web developers are able to prevent the vast majority of bots from abusing their services:

More information can be found on reCAPTCHA's official website at recaptcha.net.
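As a rough sketch, one common way to embed the reCAPTCHA widget is to load the API script and add a placeholder element carrying your site key (the key below is a placeholder, and the exact markup depends on the reCAPTCHA version you use):

<script src="https://www.google.com/recaptcha/api.js"></script>
<form action="/register" method="post">
  <!-- The widget renders inside this element; data-sitekey identifies your site. -->
  <div class="g-recaptcha" data-sitekey="YOUR_SITE_KEY"></div>
  <button type="submit">Register</button>
</form>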

Item 14 – reinforcing data integrity with server-side validation

While accepting information through the client, it is important to ensure that the data is also validated on the server side. A client-side form validation library may fail to enforce validity because it did not load or exited prematurely. Additionally, attackers may work around client-side enforcement altogether by sending data as a direct HTTP request.

Server-side validation should have at least the same standards as the client-side one. Additionally, server-side validation should enforce type safety, ensuring that all number fields are indeed numbers, string fields are strings, and so forth. If possible, it is important to ensure that postal codes, phone numbers, and other pertinent data are real on the server side by conducting a registry lookup.
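A minimal server-side sketch of these checks, assuming an Express-based server (the route, field names, and rules are illustrative):

var express = require('express');
var bodyParser = require('body-parser');
var app = express();
app.use(bodyParser.urlencoded({ extended: false }));

app.post('/register', function (req, res) {
  var errors = [];
  // Enforce at least the same rules the client-side library applies, plus type safety.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(req.body.email || '')) {
    errors.push('A valid e-mail address is required.');
  }
  if (isNaN(parseInt(req.body.age, 10))) {
    errors.push('Age must be a number.');
  }
  if (errors.length) {
    return res.status(400).json({ errors: errors });
  }
  res.status(200).json({ ok: true });
});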

Using Grunt to automate form testing

The grunt-contrib-mocha plugin can easily be used to test both client- and server-side validation by filling out forms and issuing direct HTTP requests. Along with a standard assertion library, this set of tools can test the following conditions (a minimal test sketch follows the list):

  • Whether the client-side form validation library loads and operates correctly

  • Whether the required fields react when unfilled

  • Whether the fields with a specific format react when invalid

  • Whether the server-side validation returns the same result as the client-side configuration would
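A minimal Mocha sketch of the server-side half of such a test, assuming the supertest and assert modules and a local server on port 3000 (none of these are prescribed by the article):

var assert = require('assert');
var request = require('supertest');

describe('POST /register', function () {
  it('rejects a missing e-mail address the same way the client does', function (done) {
    request('http://localhost:3000')
      .post('/register')
      .type('form')
      .send({ age: '29' }) // e-mail deliberately omitted
      .expect(400)
      .end(function (err, res) {
        assert.ifError(err);
        assert(res.body.errors.length > 0);
        done();
      });
  });
});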

Designing interfaces for the mobile generation

As more people adopt mobile technology in the form of phones and tablets, website designers have begun to embrace new mediums, changing their layouts and content accordingly. Presenting a redirect to the download link of a native mobile application is no longer the best practice, as users shift towards preferring the mobile web. Thus, it is ideal to follow the best practices of responsive web design throughout one's website. To accomplish this, many choose to use frameworks such as Twitter Bootstrap or ZURB Foundation. This section will discuss these frameworks in more detail.

Item 15 – designing preemptively with mobile in mind

Unlike standard web design, responsive layouts must be implemented with multiple devices in mind. Thus, one cannot simply convert an existing web design to a responsive one without making substantial changes.

Alternatively, a web designer may opt to display completely different designs across platforms using CSS3 media queries and breakpoints, as shown in the following code:

@media (max-width: 767px) {
  .visible-mobile {
    /* Styles for smartphones and smaller tablets */
  }
}
@media (min-width: 768px) {
  .visible-desktop {
    /* Styles for larger tablets and desktops */
  }
}

In this case, the .visible-mobile and .visible-desktop classes help show the content based on the device that renders it. The following table shows other breakpoints of interest and their associated screen sizes:

Breakpoint | Screen sizes
< 320 px | Older, low-resolution phones
< 480 px | Smaller smartphones
< 768 px | Small tablets and larger smartphones
> 768 px | Larger tablets and desktops
> 1024 px | Wide desktops

It is important to note that mobile devices tend to load and render web content at a much slower pace than desktop computers. Additionally, larger mobile devices such as tablets often have more computing power than smartphones and are therefore capable of loading more content when required.

Responsive design, although conceptually simple, can be tedious to implement manually. External frameworks such as Twitter Bootstrap or ZURB Foundation have provided a multitude of CSS classes and JavaScript libraries for responsive design. Bootstrap 3 and Foundation both adopt a 12-column grid system, as shown in the following screenshot, which allows web designers to adjust content containers based on the device used to render it:

The 12-column grid system, found in both Foundation and Bootstrap, helps designers allocate space in a systematic fashion on desktop and mobile devices alike.

These frameworks can be found at http://getbootstrap.com and http://foundation.zurb.com, respectively.
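As a brief illustration of the grid, here is a two-column layout using Bootstrap 3 class names (Foundation offers equivalent classes); the markup is illustrative only:

<div class="row">
  <!-- Stacks into full-width blocks on phones, sits side by side on desktops. -->
  <div class="col-xs-12 col-md-8">Main content</div>
  <div class="col-xs-12 col-md-4">Sidebar</div>
</div>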

This information, along with the use of responsive design, should demonstrate the importance of delivering separate content and designs to different platforms. For instance, when deciding what to place on a smartphone-rendered design, it is important to convey the same message that would be presented on tablet or desktop devices, but with fewer images and less text. This way, users enjoy a clean but effective experience comparable to the one they would receive on a larger device.

Item 16 – lazy load content using JavaScript

When loading content for users, it is common practice to load everything the user will ever need to see on a particular web page up front. It is possible instead to lazy load content, fetching images or files only when the user reaches a certain area of the web page.

An example of this is a clothing website. On a mobile device, a web designer may decide to load all of the articles of clothing at once. Alternatively, they may choose to load the items on the fly as the user scrolls down on either a phone or tablet. This way, the initial page load is drastically reduced. It can be argued that lazy loading actually increases the overall page load time if used excessively, as it increases the number of HTTP requests required. However, this method is so ubiquitous across mainstream apps that users have come to expect the behavior, so it can be argued to be an effective best practice.
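A minimal sketch of scroll-based lazy loading in plain JavaScript (the data-src convention is illustrative, not from the article):

<script>
// Images start with a data-src attribute instead of src; we copy it over
// only when the image scrolls near the viewport.
function loadVisibleImages() {
  var images = document.querySelectorAll('img[data-src]');
  for (var i = 0; i < images.length; i++) {
    var rect = images[i].getBoundingClientRect();
    if (rect.top < window.innerHeight + 200) { // 200px look-ahead
      images[i].src = images[i].getAttribute('data-src');
      images[i].removeAttribute('data-src');
    }
  }
}
window.addEventListener('scroll', loadVisibleImages);
window.addEventListener('load', loadVisibleImages);
</script>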

Item 17 – defer parsing of JavaScript

When a JavaScript file is loaded by a web browser, it blocks other resources from loading until it has been downloaded and parsed. Thus, JavaScript files are often considered the critical path for loading pages. There are simple ways to combat this, such as placing script tags at the bottom of your HTML files. Unfortunately, these methods do not work well on mobile devices that are not yet optimized for JavaScript processing.

Thus, a solution that many web developers have adopted is deferring the processing of JavaScript. To accomplish this, you can inject the script tag into the DOM programmatically once the page has finished loading, so that its parsing no longer blocks the initial render. Many libraries, such as deferred, Q, and bluebird, help in working with deferred JavaScript; you can find out more about each of them on the npm registry.
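A minimal sketch of deferring script parsing by injecting the tag after the page has loaded (the file name is illustrative):

<script>
// Inject the heavy script only after the window load event, so its
// download and parsing no longer block the initial render.
window.addEventListener('load', function () {
  var script = document.createElement('script');
  script.src = 'js/app.bundle.js';
  document.body.appendChild(script);
});
</script>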

Using Grunt to reduce page load time

As with the other concepts in this article, concatenating CSS and JavaScript files using Grunt helps to reduce HTTP requests, increasing page load speed. Furthermore, Grunt can minify the contents of these concatenated files, reducing the amount of data transferred across the network. Lastly, grunt-rev allows you to safely append a revision hash to your files, invalidating the copies cached by your users' web browsers whenever a file changes. This way, you can safely allow clients to cache CSS, JavaScript, and image files, reducing page load time.
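A minimal Gruntfile sketch of the concatenation and minification steps, assuming the grunt-contrib-concat and grunt-contrib-uglify plugins are installed (file paths are illustrative):

module.exports = function (grunt) {
  grunt.initConfig({
    // Combine all JavaScript source files into a single file to cut down
    // on the number of HTTP requests.
    concat: {
      dist: {
        src: ['src/js/**/*.js'],
        dest: 'dist/js/app.js'
      }
    },
    // Minify the concatenated file to reduce its size on the network.
    uglify: {
      dist: {
        src: 'dist/js/app.js',
        dest: 'dist/js/app.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('build', ['concat', 'uglify']);
};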

Summary

Over the years, the modern Web and its best practices have evolved as users have come to expect a better experience. As a result, tools such as Grunt have become robust and widely available, allowing developers around the world to build their web applications in an automated fashion. Grunt and its various plugins provide immense functionality in the realms of search engine optimization, page load time, and user experience design, which makes a strong argument for adopting its tool chain. This article should help you understand the role Grunt can play in web development, yielding great gains over time.

About the Author


Daniel Li

Daniel Li is currently an independent consultant for small- and medium-sized businesses and resides in Waterloo, Ontario. Having gained experience at over a dozen institutions since 2009, he leverages his knowledge of Grunt.js and modern web development in writing this book. He has won over $20,000 in coding competitions since 2009, most recently the Kik Cup Hackathon in Fall 2013. His open source contributions over the last three years helped him earn a place as a finalist on Canada's Top 20 Under 20 2013 list. He occasionally answers questions on the collaborative question and answer website stackoverflow.com as a top 4 percent user. He has also authored Instant Brainshark, published by Packt Publishing.
