
Selecting by attributes (Should know)

Packt
16 Sep 2013
9 min read
Getting ready

These selectors are easily recognizable because they are wrapped in square brackets (for example, [selector]). This type of selector is always used coupled with others, like those seen so far, although the companion selector can be implicit, as we'll see in a few moments. In my experience, you'll often use them with the Element selector, but this can vary based on your needs. How many selectors of this type are there, and what are they? Glad you asked! Here is an overview:

Contains: [attribute*="value"] (for example, input[name*="cod"]). Selects the elements that have the specified value as a substring of the given attribute's value.

Contains Prefix: [attribute|="value"] (for example, a[class|="audero-"]). Selects nodes whose given attribute has a value equal to the specified value, or equal to it followed by a hyphen.

Contains Word: [attribute~="value"] (for example, span[data-level~="hard"]). Selects elements whose given attribute has a value equal to the specified value, or containing it as a space-delimited word.

Ends With: [attribute$="value"] (for example, div[class$="wrapper"]). Selects nodes whose given attribute's value ends with the specified value.

Equals: [attribute="value"] (for example, p[draggable="true"]). Selects elements whose given attribute has a value exactly equal to the specified value. This selector performs an exact match.

Not Equal: [attribute!="value"] (for example, a[target!="_blank"]). Selects elements that don't have the specified attribute, or that have it with a value not equal to the specified value.

Starts With: [attribute^="value"] (for example, img[alt^="photo"]). Selects nodes whose given attribute's value starts with the specified value.

Has: [attribute] (for example, input[placeholder]). Selects elements that have the specified attribute, regardless of its value.

As you've seen in the several examples above, we've used all of these selectors together with other ones. Recalling what I said a few moments ago, sometimes the companion selector is implicit. In fact, take the following example:

$('[placeholder]')

What's the "hidden" selector? If you guessed All, you can pat yourself on the back. You're really smart! In fact, it's equivalent to writing:

$('*[placeholder]')

How to do it...

There are quite a lot of attribute selectors; therefore, we won't build an example for each of them. Instead, I'm going to show you two demos. The first will teach you the use of the Attribute Contains Word selector to print on the console the number of collected elements. The second will explain the use of the Attribute Has selector to print the value of the placeholder attribute of the retrieved nodes. Let's write some code!

To build the first example, follow these steps:

1. Create a copy of the template.html file and rename it as contain-word-selector.html.
2. Inside the <body> tag, add the following HTML markup:

   <h1>Rank</h1>
   <table>
     <thead>
       <th>Name</th>
       <th>Surname</th>
       <th>Points</th>
     </thead>
     <tbody>
       <tr>
         <td class="name">Aurelio</td>
         <td>De Rosa</td>
         <td class="highlight green">100</td>
       </tr>
       <tr>
         <td class="name">Nikhil</td>
         <td>Chinnari</td>
         <td class="highlight">200</td>
       </tr>
       <tr>
         <td class="name">Esha</td>
         <td>Thakker</td>
         <td class="red highlight void">50</td>
       </tr>
     </tbody>
   </table>

3. Edit the <head> section, adding the following lines just after the <title>:

   <style>
     .highlight { background-color: #FF0A27; }
   </style>

4. Edit the <head> section of the page, adding this code:

   <script>
     $(document).ready(function() {
       var $elements = $('table td[class~="highlight"]');
       console.log($elements.length);
     });
   </script>

5. Save the file and open it with your favorite browser.

To create the second example, perform the following steps instead:

1. Create a copy of the template.html file and rename it as has-selector.html.

2. Inside the <body> tag, add the following HTML markup:

   <form name="registration-form" id="registration-form" action="registration.php" method="post">
     <input type="text" name="name" placeholder="Name" />
     <input type="text" name="surname" placeholder="Surname" />
     <input type="email" name="email" placeholder="Email" />
     <input type="tel" name="phone-number" placeholder="Phone number" />
     <input type="submit" value="Register" />
     <input type="reset" value="Reset" />
   </form>

3. Edit the <head> section of the page, adding this code:

   <script>
     $(document).ready(function() {
       var $elements = $('input[placeholder]');
       for(var i = 0; i < $elements.length; i++) {
         console.log($elements[i].placeholder);
       }
     });
   </script>

4. Save the file and open it with your favorite browser.

How it works...

In the first example, we created a table with four rows (one for the header and three for the data) and three columns. We assigned some classes to several cells, and in particular we used the class highlight. Then, we defined this class so that an element having it assigned will have a red background color. In the next step, we created our usual script (hey, this is still an article on jQuery, isn't it?) where we selected all of the <td> elements having the class highlight assigned that are descendants (in this case we could use the Child selector as well) of a <table>. Once done, we simply printed the number of collected elements. The console will confirm, as you can see for yourself by loading the page, that the matched elements are three. Well done!

In the second example, we created a little registration form. It won't really work, since the backend is totally missing, but it's good enough for our discussion. As you can see, our form takes advantage of some of the new features of HTML5, like the new <input> types email and tel and the placeholder attribute. In our usual handler, we're picking up all of the <input> elements in the page having the placeholder attribute specified and assigning them to a variable called $elements. We're prepending a dollar sign to the variable name to highlight that it stores a jQuery object. With the next block of code, we iterate over the object to access the elements by their index position. Then we log the placeholder's value on the console, accessing it using the dot operator. As you can see, we accessed the property directly, without using a method. This works because the collection's elements are plain DOM elements, not jQuery objects.
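As an aside, the same loop can also be written with jQuery's .each() method, whose callback receives the zero-based index and the plain DOM element. A minimal equivalent sketch:

   $(document).ready(function() {
     // .each() iterates over the jQuery collection; "element" is the
     // plain DOM element, so the placeholder property can be read
     // directly, exactly as in the for loop above.
     $('input[placeholder]').each(function(index, element) {
       console.log(element.placeholder);
     });
   });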
If you replicated the demo correctly, you should see the placeholder values printed in your console.

In this example, we selected all of the page's <input> elements, not just those inside the form, because we didn't restrict the selection. A better approach would be to restrict the selection using the form's id, which is very fast, as we've already discussed. Thus, our selection turns into:

$('#registration-form input[placeholder]')

We can obtain an even better selection using jQuery's find() method, which retrieves the descendants that match the given selector:

$('#registration-form').find('input[placeholder]')

There's more...

You can also use more than one attribute selector at once.

Multiple attribute selector

In case you need to select nodes that match two or more criteria, you can use the Multiple Attribute selector. You can chain as many selectors as you like; it isn't limited to two. Let's say you want to select all of the <input> elements of type email that have the placeholder attribute specified; you would need to write:

$('input[type="email"][placeholder]')

Not Equal selector

This selector isn't part of the CSS specification, so it can't take advantage of the native querySelectorAll() method. The official documentation has a good hint to avoid the problem and achieve better performance: For better performance in modern browsers, use $("your-pure-css-selector").not('[name="value"]') instead.

Using filter() and attr()

jQuery really has a lot of methods to help you in your work, and thanks to this you can achieve the same task in a multitude of ways. While the attribute selectors are important, it's worth seeing how you could achieve the same result as before using filter() and attr(). filter() is a function that accepts only one argument, which can be of different types, but you'll usually see code that passes a selector or a function. Its aim is to reduce a collection, iterating over it and keeping only the elements that match the given parameter. The attr() method, instead, accepts up to two arguments, and the first is usually an attribute name. We'll use it simply as a getter to retrieve the value of the elements' placeholder. To achieve our goal, replace the selection instruction with these lines:

var $elements = $('#registration-form input').filter(function() {
  return ($(this).attr("placeholder") !== undefined)
});

The main difference here is the anonymous function we passed to the filter() method. Inside the function, this refers to the current DOM element being processed, so to be able to use jQuery's methods we need to wrap the element in a jQuery object. Some of you may wonder why we haven't used the plain DOM element, accessing the placeholder attribute directly. The reason is that the result wouldn't be the one expected. In fact, by doing so, you'd get an empty string as a value even if the placeholder attribute wasn't set for that element, making the strict test against undefined useless.

Summary

Thus, in this article we have learned how to select elements using attributes.

Creating different font files and using web fonts

Packt
16 Sep 2013
12 min read
Creating different font files

In this recipe, we will learn how to create or obtain fonts, and how to generate the different formats needed by different browsers (Embedded OpenType, OpenType, TrueType, Web Open Font Format, and SVG fonts).

Getting ready

To get the original file of the font created during this recipe, in addition to the generated font formats and the full source code of the FontCreation project, please refer to the receipe2 project folder.

How to do it...

The following steps are performed for creating different font files:

Firstly, we will get an original TTF font file. There are two different ways to get fonts:

The first method is by downloading one from specialized websites. Both free and commercial solutions can be found with a wide variety of beautiful fonts. The following are a few sites for downloading free fonts: Google Fonts, Font Squirrel, Dafont, ffonts, Jokal, fontzone, STIX, Fontex, and so on. Here are a few sites for downloading commercial fonts: Typekit, Font Deck, Font Spring, and so on. We will consider the example of Fontex, as shown in the following screenshot. There is a variety of free fonts. You can visit the website at http://www.fontex.org/.

The second method is by creating your own font and then generating a TTF file. There are a lot of font generators on the Web. We can find online generators, or follow the professionals by scanning handwritten typography and importing it into Adobe Illustrator to convert it into vector-based letters or symbols. For newbies, I recommend trying Fontstruct (http://fontstruct.com). It is a WYSIWYG Flash editor that will help you create your first font file, as shown in the following screenshot:

As you can see, we were trying to create the letter S using a grid and some different forms. After completing the font creation, we can preview it and then download the TTF file. The file is in the receipe2 project folder. The following screenshot is an example of a font we created on the run:

Now we have to generate the rest of the file formats in order to ensure maximum compatibility with common browsers. We highly recommend the use of the Font Squirrel webfont generator (http://www.fontsquirrel.com/tools/webfont-generator). This online tool helps to create fonts for @font-face by generating different font formats. All we need to do is to upload the original file (optionally adding the font variants bold, italic, or bold-italic), select the output formats, add some optimizations, and finally download the package.
It is shown in the following screenshot:

The following code explains how to use this font:

<!DOCTYPE html>
<html>
<head>
<title>My first @font-face demo</title>
<style type="text/css">
@font-face {
  font-family: 'font_testregular';
  src: url('font_test-webfont.eot');
  src: url('font_test-webfont.eot?#iefix') format('embedded-opentype'),
       url('font_test-webfont.woff') format('woff'),
       url('font_test-webfont.ttf') format('truetype'),
       url('font_test-webfont.svg#font_testregular') format('svg');
  font-weight: normal;
  font-style: normal;
}

Normal font usage:

h1, p {
  font-family: 'font_testregular', Helvetica, Arial, sans-serif;
}
h1 { font-size: 45px; }
p:first-letter {
  font-size: 100px;
  text-decoration: wave;
}
p {
  font-size: 18px;
  line-height: 27px;
}
</style>

Font usage in canvas:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script language="javascript" type="text/javascript">
var x = 30, y = 60;
function generate(){
  var canvas = $('canvas')[0],
      ctx = canvas.getContext('2d');
  var t = 'font_testregular';
  var c = 'red';
  var v = ' sample text via canvas';
  ctx.font = '52px "'+t+'"';
  ctx.fillStyle = c;
  ctx.fillText(v, x, y);
}
</script>
</head>
<body onload="generate();">
<h1>Header sample</h1>
<p>Sample text with lettrine effect</p>
<canvas height="800px" width="500px">
Your browser does not support the CANVAS element.
Try the latest Firefox, Google Chrome, Safari or Opera.
</canvas>
</body>
</html>

How it works...

This recipe takes us through getting an original TTF file:

Font download: When downloading a font (either free or commercial), we have to pay close attention to the terms of use. Sometimes, you are not allowed to use these fonts on the Web and you are only allowed to use them locally.

Font creation: During this process, we have to pay attention to some directives. We have to create glyphs for all the needed alphabets (uppercase and lowercase), numbers, and symbols to avoid font incompatibility. We have to take care of the spacing between glyphs and, eventually, variations and ligatures. A special creation process is reserved for right-to-left written languages.

Font format generation: Font Squirrel is a very good online tool for generating the most common formats to handle cross-browser compatibility. It is recommended that we optimize the font ourselves via the expert mode. We have the possibility of fixing some issues during font creation, such as missing glyphs, x-height matching, and glyph spacing.

Font usage: We will go through the following font usage:

Normal font usage: We used the same method as already adopted via font-family; web-safe fonts are also applied:

h1, p {
  font-family: 'font_testregular', Helvetica, Arial, sans-serif;
}

Font usage in canvas: The canvas is an HTML5 tag that dynamically renders bitmap images via scripts, creating 2D shapes. In order to generate this image based on fonts, we will create the canvas tag first. An alternative text will be displayed if canvas is not supported by the browser.

<canvas height="800px" width="500px">
Your browser does not support the CANVAS element.
Try the latest Firefox, Google Chrome, Safari or Opera.
</canvas>

We will now use the jQuery library in order to generate the canvas output. An onload function will be initiated to create the content of this tag:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>

In the following function, we create a variable ctx, which is a 2D rendering context obtained via canvas.getContext('2d').
We also define the font family using t as a variable, the font size, the text to display using v as a variable, and the color using c as a variable. These properties will be used as follows:

<script language="javascript" type="text/javascript">
var x = 30, y = 60;
function generate(){
  var canvas = $('canvas')[0],
      ctx = canvas.getContext('2d');
  var t = 'font_testregular';
  var c = 'red';
  var v = ' sample text via canvas';

This is for the font size and family. Here the font size is 52px and the font family is font_testregular:

  ctx.font = '52px "'+t+'"';

This is for the color, set by fillStyle:

  ctx.fillStyle = c;

Here we establish both the text to display and the axis coordinates, where x is the horizontal position and y is the vertical one:

  ctx.fillText(v, x, y);

Using web fonts

In this recipe, you will learn how to use fonts hosted on distant servers, for reasons such as support services and special loading scripts. A lot of solutions are widely available on the Web, such as Typekit, Google Fonts, Ascender, Fonts.com Web Fonts, and Fontdeck. In this task, we will be using Google Fonts and its special open source JavaScript library, the WebFont Loader.

Getting ready

Please refer to the WebFonts project to get the full source code.

How to do it...

We will go through four steps:

1. Let us configure the link tag:

<link rel="stylesheet" id="linker" type="text/css" href="http://fonts.googleapis.com/css?family=Mr+De+Haviland">

2. Then we will set up the WebFont Loader:

<script type="text/javascript">
WebFontConfig = {
  google: { families: [ 'Tangerine' ] }
};
(function() {
  var wf = document.createElement('script');
  wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
    '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
  wf.type = 'text/javascript';
  wf.async = 'true';
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(wf, s);
})();
</script>
<style type="text/css">
.wf-loading p#firstp {font-family: serif}
.wf-inactive p#firstp {font-family: serif}
.wf-active p#firstp {font-family: 'Tangerine', serif}

3. Next we will write the import command:

@import url(http://fonts.googleapis.com/css?family=Bigelow+Rules);

4. Then we will cover font usage:

h1 {
  font-size: 45px;
  font-family: "Bigelow Rules";
}
p {
  font-family: "Mr De Haviland";
  font-size: 40px;
  text-align: justify;
  color: blue;
  padding: 0 5px;
}
</style>
</head>
<body>
<div id="container">
<h1>This H1 tag's font was used via @import command </h1>
<p>This font was imported via a Stylesheet link</p>
<p id="firstp">This font was created via WebFont loader and managed by wf, a script generated from webfonts.js.<br />
Loading time will be managed by the CSS properties: <i>.wf-loading, .wf-inactive and .wf-active</i></p>
</div>
</body>
</html>

How it works...

In this recipe, and for educational purposes, we used the following ways to embed the font in the source code: the link tag, the WebFont Loader, and the import command.

The link tag: A simple link tag to a style sheet is used, referring to the address already created:

<link rel="stylesheet" type="text/css" href="http://fonts.googleapis.com/css?family=Mr+De+Haviland">

The WebFont Loader: It is a JavaScript library developed by Google and Typekit. It grants advanced control options over the font loading process and exceptions, and it lets you use multiple web font providers. In the following script, we can identify the font we used, Tangerine, and the link to the predefined address of the Google APIs with the word google:

WebFontConfig = {
  google: { families: [ 'Tangerine' ] }
};

We now create wf, which is a script element loaded asynchronously.
This instance is issued from the Ajax Google API:

var wf = document.createElement('script');
wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
  '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
wf.type = 'text/javascript';
wf.async = 'true';
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(wf, s);
})();

We can have control over fonts during and after loading by using specific class names. In this particular case, only the p tag with the ID firstp will be processed during and after font loading.

During loading, we use the class .wf-loading. We can use a safe font (for example, serif) rather than the browser's default font until loading is complete, as follows:

.wf-loading p#firstp {
  font-family: serif;
}

After loading is complete, we will usually use the font that we were importing earlier. We can also add a safe font for older browsers:

.wf-active p#firstp {
  font-family: 'Tangerine', serif;
}

Loading failure: In case we fail to load the font, we can specify a safe font to avoid falling back to the browser's default font:

.wf-inactive p#firstp {
  font-family: serif;
}

The import command: It is the easiest way to link to the fonts:

@import url(http://fonts.googleapis.com/css?family=Bigelow+Rules);

Font usage: We will use the fonts as we already did, via the font-family property:

h1 { font-family: "Bigelow Rules"; }
p { font-family: "Mr De Haviland"; }

There's more...

The WebFont Loader has the ability to embed fonts from multiple web font providers. It has some predefined providers in the script, such as Google, Typekit, Ascender, Fonts.com Web Fonts, and Fontdeck. For example, the following is the specific source code for Typekit and Ascender:

WebFontConfig = { typekit: { id: 'TypekitId' } };
WebFontConfig = {
  ascender: {
    key: 'AscenderKey',
    families: ['AscenderSans:bold,bolditalic,italic,regular']
  }
};

For font providers that are not listed above, a custom module can handle the loading of the specific style sheet:

WebFontConfig = {
  custom: {
    families: ['OneFont', 'AnotherFont'],
    urls: ['http://myotherwebfontprovider.com/stylesheet1.css',
           'http://yetanotherwebfontprovider.com/stylesheet2.css']
  }
};

For more details and options of the WebFont Loader script, you can visit the following link: https://developers.google.com/fonts/docs/webfont_loader

To download this API, you may access the following URL: https://github.com/typekit/webfontloader

How to generate the link to the font?

The URL used to import the font in every method (the link tag, the WebFont Loader, and the import command) is composed of the Google Fonts API base URL (http://fonts.googleapis.com/css) and the family parameter, including one or more font names, ?family=Tangerine. Multiple fonts are separated with a pipe character (|), as follows:

?family=Tangerine|Inconsolata|Droid+Sans

Optionally, we can add subsets or also specify a style for each font:

Cantarell:italic|Droid+Serif:bold&subset=latin

Browser-dependent output

The Google Fonts API serves a generated style sheet specific to the client, via the browser's request. The response is relative to the browser. For example, the output for Firefox will be:

@font-face {
  font-family: 'Inconsolata';
  src: local('Inconsolata'),
       url('http://themes.googleusercontent.com/fonts/font?kit=J_eeEGgHN8Gk3Eud0dz8jw') format('truetype');
}

This method lowers the loading time, because the generated style sheet is relative to the client's browser. No multiformat font files are needed, because the Google API generates them automatically.
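As a quick recap of this grammar, the following single link tag, a minimal sketch reusing the family names already seen in this recipe, would fetch two families in one request (assuming the requested style and subset are actually available for those families):

<link rel="stylesheet" type="text/css"
      href="http://fonts.googleapis.com/css?family=Tangerine|Droid+Serif:bold&subset=latin">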
Summary

In this way, we have learned how to create different font formats, such as Embedded OpenType, OpenType, TrueType, Web Open Font Format, and SVG fonts, and how to use web fonts from services such as Typekit, Google Fonts, Ascender, Fonts.com Web Fonts, and Fontdeck.

Using different jQuery event listeners for responsive interaction

Packt
16 Sep 2013
9 min read
Getting started

First we want to create JavaScript that transforms a select form element into a button widget that changes the value of the form element when a button is pressed. So the first part of that task is to build a form with a select element.

How to do it

This part is simple: start by creating a new web page. Inside it, create a form with a select element. Give the select element some options. Wrap the form in a div element with the class select. See the following example. I have added a title just for placement.

<h2>Super awesome form element</h2>
<div class="select">
  <form>
    <select>
      <option value="1">1</option>
      <option value="Bar">Bar</option>
      <option value="3">3</option>
    </select>
  </form>
</div>

Next, create a new CSS file called desktop.css and add a link to it in your header. After that, add a media query to the link for screen media and min-device-width:321px. The media query causes the browser to load the new CSS file only on devices with a screen larger than 320 pixels. Copy and paste the link to the CSS, but change the media query to screen and min-width:321px. This will help you test and demonstrate the mobile version of the widget on your desktop.

<link rel="stylesheet" media="screen and (min-device-width:321px)" href="desktop.css" />
<link rel="stylesheet" media="screen and (min-width:321px)" href="desktop.css" />

Next, create a script tag with a link to a new JavaScript file called uiFunctions.js, and then, of course, create the new JavaScript file. Also, create another script element with a link to a recent jQuery library.

<script src="http://code.jquery.com/jquery-1.8.2.min.js"></script>
<script src="uiFunctions.js"></script>

Now open the new JavaScript file uiFunctions.js in your editor and add instructions to do something on document load:

$(document).ready(function(){
  //Do something
});

The first thing your JavaScript should do when it loads is determine what kind of device it is on: a mobile device or a desktop. There are a few logical tests you can utilize to determine whether the device is mobile. You can test navigator.userAgent (specifically, you can use the .test() method, which in essence tests a string to see whether an expression of characters is in it), check the window width, and check whether the screen width is smaller than 600 pixels. For this article, let's use all three of them. Ultimately, you might just want to test navigator.userAgent. Write this inside the $(document).ready() function:

if( /Android|webOS|iPhone|iPad|iPod|BlackBerry/i.test(navigator.userAgent)
    || $(window).width()<600
    || window.screen.width<600) {
  //Do something for mobile
} else {
  //Do something for the desktop
}

Inside, you will have to create a listener for the desktop device interaction event and the mouse click event, but that comes later. First, let's write the JavaScript to create the UI widget for the select element. Create a function that iterates over each select option and appends a button with the same text as the option to the select div element. This belongs inside the $(document).ready() function, but outside and before the if condition. The order of these is important.

$('select option').each(function(){
  $('div.select').append('<button>'+$(this).html()+'</button>');
});

Now, if you load the page on your desktop computer, you will see that it generates new buttons below the select element, one for each select option. You can click on them, but nothing happens.
What we want them to do is change the value of the select form element. To do so, we need to add an event listener to the buttons inside the else condition. For the desktop version, you need to add a .click() event listener with a function. Inside the function, create two new variables, element and itemClicked. Make element equal the string button, and itemClicked the jQuery object event target, or $(event.target). The next line is tricky; we're going to use the .addClass() method to add a selected class to the element variable's :nth-child(n). Also, the n of :nth-child(n) should be a call to a function named eventAction(), to which we will add the integer 2. We will create the function next.

$('button').click(function(){
  var element = 'button';
  var itemClicked = $(event.target);
  $(element+':nth-child(' + (eventAction(itemClicked,element) + 2) + ')').addClass('selected');
});

Next, outside the $(document).ready() function, create the eventAction() function. It will receive the variables itemClicked and element. The reason we make this a separate function is that it performs the same work for both the desktop click event and the mobile tap or long-tap events.

function eventAction(itemClicked,element){
  //Do something!
};

Inside the eventAction() function, create a new variable called choiceAction. Make choiceAction equal to the index of itemClicked within the element collection, or just take a look at the following code:

var choiceAction = $(element).index(itemClicked);

Next, use the .removeClass() method to remove the selected class from the element object:

$(element).removeClass('selected');

There are only two more steps to complete the function. First, add the selected attribute to the select field option using the .eq() method and the choiceAction variable. Finally, remember that when the function was called in the click event, it was expecting something to replace the n in :nth-child(n); so end the function by returning the value of the choiceAction variable.

$('select option').eq(choiceAction).attr('selected','selected');
return choiceAction;

That takes care of everything but the mobile event listeners. The button style will be added at the end of the article. See how it looks in the following screenshot:

This will be simple. First, using jQuery's $.getScript() method, add a line to retrieve the jQuery Mobile library in the first if condition, where we tested navigator.userAgent and the screen sizes to see whether the page was loaded into the viewport of a mobile device. The jQuery Mobile library will transform the HTML into a mobile, native-looking app.

$.getScript("http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.js");

The next step is to copy the desktop's click event listener, paste it below the $.getScript line, and change some values. Replace the .click() listener with a jQuery Mobile event listener, .tap() or .taphold(), change the value of the element variable to the string .ui-btn, and append the daisy-chained .parent().prev() methods to the itemClicked variable value, $(event.target). Replace the line that calls the eventAction() function in the :nth-child(n) selector with a simpler call to eventAction(), with the variables itemClicked and element.

$('button').click(function(){
  var element = '.ui-btn';
  var itemClicked = $(event.target).parent().prev();
  eventAction(itemClicked,element);
});

When you click on the buttons to update the select form element on the mobile device, you will need to instruct jQuery Mobile to refresh its select menu.
jQuery Mobile has a method to refresh its select element:

$('select').selectmenu("refresh",true);

That is all you need for the JavaScript file. Now open the HTML file and add a few things to the header. First, add a style tag to hide the select form element and .ui-select with the CSS display:none;. Next, add links to the jQuery Mobile stylesheet and to desktop.css, with media queries for screen and max-width:600px; or max-device-width:320px;.

<style>
  select, .ui-select { display:none; }
</style>
<link rel="stylesheet" media="screen and (max-width:600px)" href="http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.css">
<link rel="stylesheet" media="screen and (min-width:600px)" href="desktop.css" />

When launched on a mobile device, the widget will look like this:

Then, open the desktop.css file and create some style for the widget buttons. For the button element, add an inline display, padding, margins, a border radius, a background gradient, a box shadow, a font color, a text shadow, and a cursor style.

button {
  display:inline;
  padding:8px 15px;
  margin:2px;
  border-top:1px solid #666;
  border-left:1px solid #666;
  border-bottom:1px solid #333;
  border-right:1px solid #333;
  border-radius:5px;
  background: #7db9e8; /* Old browsers */
  background:-moz-linear-gradient(top, #7db9e8 0%, #207cca 49%, #2989d8 50%, #1e5799 100%); /* FF3.6+ */
  background:-webkit-gradient(linear, left top, left bottom, color-stop(0%,#7db9e8), color-stop(49%,#207cca), color-stop(50%,#2989d8), color-stop(100%,#1e5799)); /* Chrome,Safari4+ */
  background:-webkit-linear-gradient(top, #7db9e8 0%, #207cca 49%, #2989d8 50%, #1e5799 100%); /* Chrome10+,Safari5.1+ */
  background:-o-linear-gradient(top, #7db9e8 0%, #207cca 49%, #2989d8 50%, #1e5799 100%); /* Opera 11.10+ */
  background:-ms-linear-gradient(top, #7db9e8 0%, #207cca 49%, #2989d8 50%, #1e5799 100%); /* IE10+ */
  background:linear-gradient(to bottom, #7db9e8 0%, #207cca 49%, #2989d8 50%, #1e5799 100%); /* W3C */
  filter:progid:DXImageTransform.Microsoft.gradient( startColorstr='#7db9e8', endColorstr='#1e5799', GradientType=0 ); /* IE6-9 */
  color:white;
  text-shadow: -1px -1px 1px #333;
  box-shadow: 1px 1px 4px 2px #999;
  cursor:pointer;
}

Finally, add CSS for the .selected class that is applied by the JavaScript. This CSS will change the button to look as if it has been pressed in.

.selected {
  border-top:1px solid #333;
  border-left:1px solid #333;
  border-bottom:1px solid #666;
  border-right:1px solid #666;
  color:#ffff00;
  box-shadow:inset 2px 2px 2px 2px #333;
  background: #1e5799; /* Old browsers */
  background:-moz-linear-gradient(top, #1e5799 0%, #2989d8 50%, #207cca 51%, #7db9e8 100%); /* FF3.6+ */
  background:-webkit-gradient(linear, left top, left bottom, color-stop(0%,#1e5799), color-stop(50%,#2989d8), color-stop(51%,#207cca), color-stop(100%,#7db9e8)); /* Chrome,Safari4+ */
  background:-webkit-linear-gradient(top, #1e5799 0%, #2989d8 50%, #207cca 51%, #7db9e8 100%); /* Chrome10+,Safari5.1+ */
  background:-o-linear-gradient(top, #1e5799 0%, #2989d8 50%, #207cca 51%, #7db9e8 100%); /* Opera 11.10+ */
  background:-ms-linear-gradient(top, #1e5799 0%, #2989d8 50%, #207cca 51%, #7db9e8 100%); /* IE10+ */
  background:linear-gradient(to bottom, #1e5799 0%, #2989d8 50%, #207cca 51%, #7db9e8 100%); /* W3C */
  filter:progid:DXImageTransform.Microsoft.gradient( startColorstr='#1e5799', endColorstr='#7db9e8', GradientType=0 ); /* IE6-9 */
}
The JavaScript tests the user agent and screen size to see if it is a mobile device and responsively delivers a different event listener for the different device types. In addition to that, the look of the widget will be different for different devices. Summary In this article we learned how to create an interactive widget that uses unobtrusive JavaScript, which uses different event listeners for desktop versus mobile devices. This article also helped you build your own web app that can transition between the desktop and mobile versions without needing you to rewrite your entire JavaScript code. Resources for Article : Further resources on this subject: Video conversion into the required HTML5 Video playback [Article] LESS CSS Preprocessor [Article] HTML5 Presentations - creating our initial presentation [Article]

Linux Shell Scripting – various recipes to help you

Packt
16 Sep 2013
16 min read
The shell scripting language is packed with all the essential problem-solving components for Unix/Linux systems. Text processing is one of the key areas where shell scripting is used, and there are beautiful utilities such as sed, awk, grep, and cut, which can be combined to solve text-processing problems. Various utilities help to process a file at the level of characters, lines, words, columns, rows, and so on, allowing us to manipulate a text file in many ways. Regular expressions are the core of pattern-matching techniques, and most of the text-processing utilities come with support for them. By using suitable regular expression strings, we can produce the desired output, such as filtering, stripping, replacing, and searching.

Using regular expressions

Regular expressions are the heart of text-processing techniques based on pattern matching. For fluency in writing text-processing tools, one must have a basic understanding of regular expressions. Using wildcard techniques, the scope of matching text with patterns is very limited. Regular expressions are a form of tiny, highly specialized programming language used to match text. A typical regular expression for matching an e-mail address might look like [a-z0-9_]+@[a-z0-9]+\.[a-z]+. If this looks weird, don't worry; it is really simple once you understand the concepts through this recipe.

How to do it...

Regular expressions are composed of text fragments and symbols that have special meanings. Using these, we can construct any suitable regular expression string to match any text according to the context. As regex is a generic language for matching text, we are not introducing any tools in this recipe. Let's see a few examples of text matching:

To match all words in a given text, we can write the regex as follows:

( ?[a-zA-Z]+ ?)

? is the notation for zero or one occurrence of the previous expression, which in this case is the space character. The [a-zA-Z]+ notation represents one or more alphabet characters (a-z and A-Z).

To match an IP address, we can write the regex as follows:

[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}

Or:

[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}

We know that an IP address is in the form of four integers (each from 0 to 255), separated by dots (for example, 192.168.0.2). [0-9] or [[:digit:]] represents a match for digits from 0 to 9. {1,3} matches one to three digits, and \. matches the dot character (.). This regex will match an IP address in the text being processed. However, it doesn't check for the validity of the address. For example, an IP address of the form 123.300.1.1 will be matched by the regex despite being invalid. This is because when parsing text streams, usually the aim is only to detect IPs.

How it works...

Let's first go through the basic components of regular expressions (regex):

^ : Specifies the start of the line marker. Example: ^tux matches a line that starts with tux.
$ : Specifies the end of the line marker. Example: tux$ matches a line that ends with tux.
. : Matches any one character. Example: Hack. matches Hack1 and Hacki, but not Hack12 or Hackil; only one additional character matches.
[] : Matches any one of the characters enclosed in [chars]. Example: coo[kl] matches cook or cool.
[^] : Matches any one character except those enclosed in [^chars]. Example: 9[^01] matches 92 and 93, but not 91 or 90.
[-] : Matches any character within the range specified in []. Example: [1-5] matches any digit from 1 to 5.
? : The preceding item must match one or zero times. Example: colou?r matches color or colour, but not colouur.
+ : The preceding item must match one or more times. Example: Rollno-9+ matches Rollno-99 and Rollno-9, but not Rollno-.
* : The preceding item must match zero or more times. Example: co*l matches cl, col, and coool.
() : Treats the enclosed terms as one entity. Example: ma(tri)?x matches max or matrix.
{n} : The preceding item must match exactly n times. Example: [0-9]{3} matches any three-digit number; [0-9]{3} can be expanded as [0-9][0-9][0-9].
{n,} : Specifies the minimum number of times the preceding item should match. Example: [0-9]{2,} matches any number that is two digits or longer.
{n, m} : Specifies the minimum and maximum number of times the preceding item should match. Example: [0-9]{2,5} matches any number that has two to five digits.
| : Specifies alternation; one of the items on either side of | should match. Example: Oct (1st | 2nd) matches Oct 1st or Oct 2nd.
\ : The escape character for escaping any of the special characters mentioned previously. Example: a\.b matches a.b, but not ajb; the \ removes the special meaning of the dot.

For more details on the regular expression components available, you can refer to the following URL: http://www.linuxforu.com/2011/04/sed-explained-part-1/

There's more...

Let's see how the special meanings of certain characters are specified in regular expressions.

Treatment of special characters

Regular expressions use some characters, such as $, ^, ., *, +, {, and }, as special characters. But what if we want to use these characters as normal text characters? Let's see an example of a regex, a.txt. This will match the character a, followed by any character (due to the '.' character), which is then followed by the string txt. However, we want '.' to match a literal '.' instead of any character. In order to achieve this, we precede the character with a backslash \ (doing this is called escaping the character). This indicates that the regex wants to match the literal character rather than its special meaning. Hence, the final regex becomes a\.txt.

Visualizing regular expressions

Regular expressions can be tough to understand at times, but for people who are good at understanding things with diagrams, there are utilities available to help visualize regex. Here is one such tool that you can use by browsing to http://www.regexper.com; it basically lets you enter a regular expression and creates a nice graph to help understand it. Here is a screenshot showing the regular expression we saw in the previous section:

Searching and mining a text inside a file with grep

Searching inside a file is an important use case in text processing. We may need to search through thousands of lines in a file to find some required data, using certain specifications. This recipe will help you learn how to locate data items matching a given specification from a pool of data.

How to do it...

The grep command is the magic Unix utility for searching in text. It accepts regular expressions and can produce output in various formats. Additionally, it has numerous interesting options; as a first taste, the sketch below applies the IP-address regex from the previous section, and a walkthrough of the options then follows.
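A minimal sketch (the two sample lines are made up for illustration; the -E flag and the regex itself come straight from this recipe):

$ printf "host1 192.168.0.2 up\nhost2 123.300.1.1 up\n" | grep -E -o "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"
192.168.0.2
123.300.1.1

Note that the second, invalid address is matched too: as discussed above, the regex detects IP-like strings but does not validate the 0-255 range.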
Let's see how to use them:

To search for lines of text that contain the given pattern:

$ grep pattern filename
this is the line containing pattern

Or:

$ grep "pattern" filename
this is the line containing pattern

We can also read from stdin as follows:

$ echo -e "this is a word\nnext line" | grep word
this is a word

Perform a search in multiple files by using a single grep invocation, as follows:

$ grep "match_text" file1 file2 file3 ...

We can highlight the word in the line by using the --color option as follows:

$ grep word filename --color=auto
this is the line containing word

Usually, the grep command only interprets some of the special characters in match_text. To use the full set of regular expressions as input arguments, the -E option should be added, which means an extended regular expression. Or, we can use the extended-regular-expression-enabled grep command, egrep. For example:

$ grep -E "[a-z]+" filename

Or:

$ egrep "[a-z]+" filename

In order to output only the matching portion of a text in a file, use the -o option as follows:

$ echo this is a line. | egrep -o "[a-z]+\."
line.

In order to print all of the lines, except the line containing match_pattern, use:

$ grep -v match_pattern file

The -v option added to grep inverts the match results.

Count the number of lines in which a matching string or regex match appears in a file or text, as follows:

$ grep -c "text" filename
10

It should be noted that -c counts only the number of matching lines, not the number of times a match is made. For example:

$ echo -e "1 2 3 4\nhello\n5 6" | egrep -c "[0-9]"
2

Even though there are six matching items, it prints 2, since there are only two matching lines. Multiple matches in a single line are counted only once. To count the number of matching items in a file, use the following trick:

$ echo -e "1 2 3 4\nhello\n5 6" | egrep -o "[0-9]" | wc -l
6

Print the line number of the matched string as follows:

$ cat sample1.txt
gnu is not unix
linux is fun
bash is art
$ cat sample2.txt
planetlinux
$ grep linux -n sample1.txt
2:linux is fun

or

$ cat sample1.txt | grep linux -n

If multiple files are used, it will also print the filename with the result as follows:

$ grep linux -n sample1.txt sample2.txt
sample1.txt:2:linux is fun
sample2.txt:1:planetlinux

Print the character or byte offset at which a pattern matches, as follows:

$ echo gnu is not unix | grep -b -o "not"
7:not

The character offset for a string in a line is a counter from 0, starting with the first character. In the preceding example, not is at the seventh offset position (that is, not starts from the seventh character in the line gnu is not unix). The -b option is always used with -o.

To search over multiple files and list which files contain the pattern, we use the following:

$ grep -l linux sample1.txt sample2.txt
sample1.txt
sample2.txt

The inverse of the -l argument is -L. The -L argument returns a list of non-matching files.

There's more...

We have seen the basic usages of the grep command, but that's not it; the grep command comes with even more features. Let's go through those.

Recursively search many files

To recursively search for a text over many directories of descendants, use the following command:

$ grep "text" . -R -n

In this command, "." specifies the current directory. The options -R and -r mean the same thing when used with grep. For example:

$ cd src_dir
$ grep "test_function()" . -R -n
./miscutils/test.c:16:test_function();

test_function() exists in line number 16 of miscutils/test.c.
This is one of the most frequently used commands by developers. It is used to find files in the source code where a certain text exists.

Ignoring case of pattern

The -i argument helps match patterns without considering uppercase or lowercase. For example:

$ echo hello world | grep -i "HELLO"
hello

grep by matching multiple patterns

Usually, we specify a single pattern for matching. However, we can use the -e argument to specify multiple patterns for matching, as follows:

$ grep -e "pattern1" -e "pattern2"

This will print the lines that contain either of the patterns and output one line for each match. For example:

$ echo this is a line of text | grep -e "this" -e "line" -o
this
line

There is also another way to specify multiple patterns. We can use a pattern file for reading patterns. Write patterns to match line by line, and execute grep with the -f argument as follows:

$ grep -f pattern_file source_filename

For example:

$ cat pat_file
hello
cool
$ echo hello this is cool | grep -f pat_file
hello this is cool

Including and excluding files in a grep search

grep can include or exclude files in which to search. We can specify include or exclude files by using wildcard patterns. To search only for .c and .cpp files recursively in a directory, excluding all other file types, use the following command:

$ grep "main()" . -r --include *.{c,cpp}

Note that some{string1,string2,string3} expands as somestring1 somestring2 somestring3. Exclude all README files in the search, as follows:

$ grep "main()" . -r --exclude "README"

To exclude directories, use the --exclude-dir option. To read a list of files to exclude from a file, use --exclude-from FILE.

Using grep with xargs with the zero-byte suffix

The xargs command is often used to provide a list of file names as command-line arguments to another command. When filenames are used as command-line arguments, it is recommended to use a zero-byte terminator for the filenames instead of a space terminator. Some filenames can contain a space character, which would be misinterpreted as a terminator, breaking a single filename into two (for example, New file.txt can be interpreted as two filenames, New and file.txt). This problem can be avoided by using a zero-byte suffix. We use xargs to accept stdin text from commands such as grep and find. Such commands can output text to stdout with a zero-byte suffix. In order to specify that the input terminator for filenames is the zero byte (\0), we should use -0 with xargs. Create some test files as follows:

$ echo "test" > file1
$ echo "cool" > file2
$ echo "test" > file3

In the following command sequence, grep outputs filenames with a zero-byte terminator (\0) because of the -Z option with grep. xargs -0 reads the input and separates filenames with the zero-byte terminator:

$ grep "test" file* -lZ | xargs -0 rm

Usually, -Z is used along with -l.

Silent output for grep

Sometimes, instead of actually looking at the matched strings, we are only interested in whether there was a match or not. For this, we can use the quiet option (-q), where the grep command does not write any output to the standard output. Instead, it runs the command and returns an exit status based on success or failure. We know that a command returns 0 on success, and non-zero on failure. Let's go through a script that makes use of grep in quiet mode to test whether a match text appears in a file or not.
#!/bin/bash
#Filename: silent_grep.sh
#Desc: Testing whether a file contains a text or not

if [ $# -ne 2 ]; then
  echo "Usage: $0 match_text filename"
  exit 1
fi

match_text=$1
filename=$2
grep -q "$match_text" $filename

if [ $? -eq 0 ]; then
  echo "The text exists in the file"
else
  echo "Text does not exist in the file"
fi

The silent_grep.sh script can be run as follows, by providing a match word (Student) and a file name (student_data.txt) as the command arguments:

$ ./silent_grep.sh Student student_data.txt
The text exists in the file

Printing lines before and after text matches

Context-based printing is one of the nice features of grep. When a matching line for a given match text is found, grep usually prints only the matching lines. But we may need "n" lines after the matching line, or "n" lines before it, or both. This can be performed by using context-line control in grep. Let's see how to do it.

In order to print three lines after a match, use the -A option:

$ seq 10 | grep 5 -A 3
5
6
7
8

In order to print three lines before the match, use the -B option:

$ seq 10 | grep 5 -B 3
2
3
4
5

To print three lines both after and before the match, use the -C option as follows:

$ seq 10 | grep 5 -C 3
2
3
4
5
6
7
8

If there are multiple matches, each section is delimited by a line "--":

$ echo -e "a\nb\nc\na\nb\nc" | grep a -A 1
a
b
--
a
b

Cutting a file column-wise with cut

We may need to cut text by column rather than by row. Let's assume that we have a text file containing student reports with columns such as Roll, Name, Mark, and Percentage. We need to extract only the names of the students to another file, or any nth column in the file, or extract two or more columns. This recipe will illustrate how to perform this task.

How to do it...

cut is a small utility that often comes to our help for cutting in column fashion. It can also specify the delimiter that separates each column. In cut terminology, each column is known as a field. To extract particular fields or columns, use the following syntax:

cut -f FIELD_LIST filename

FIELD_LIST is a list of columns that are to be displayed. The list consists of column numbers delimited by commas. For example:

$ cut -f 2,3 filename

Here, the second and the third columns are displayed. cut can also read input text from stdin. Tab is the default delimiter for fields or columns. If lines without delimiters are found, they are also printed. To avoid printing lines that do not have delimiter characters, attach the -s option to cut. An example of using the cut command for columns is as follows:

$ cat student_data.txt
No  Name    Mark  Percent
1   Sarath  45    90
2   Alex    49    98
3   Anu     45    90
$ cut -f1 student_data.txt
No
1
2
3

Extract multiple fields as follows:

$ cut -f2,4 student_data.txt
Name    Percent
Sarath  90
Alex    98
Anu     90

To print multiple columns, provide a list of column numbers separated by commas as arguments to -f. We can also complement the extracted fields by using the --complement option. Suppose you have many fields and you want to print all the columns except the third one; use the following command:

$ cut -f3 --complement student_data.txt
No  Name    Percent
1   Sarath  90
2   Alex    98
3   Anu     90

To specify the delimiter character for the fields, use the -d option as follows:

$ cat delimited_data.txt
No;Name;Mark;Percent
1;Sarath;45;90
2;Alex;49;98
3;Anu;45;90
$ cut -f2 -d";" delimited_data.txt
Name
Sarath
Alex
Anu

There's more...

The cut command has more options to specify the character sequences to be displayed as columns.
Let's go through the additional options available with cut.

Specifying the range of characters or bytes as fields

Suppose that we don't rely on delimiters, but we need to extract fields in such a way that we define a range of characters (counting from 0 as the start of the line) as a field. Such extractions are possible with cut. Let's see what notations are possible:

N-   from the Nth byte, character, or field, to the end of the line
N-M  from the Nth to the Mth (included) byte, character, or field
-M   from the first to the Mth (included) byte, character, or field

We use the preceding notations to specify fields as a range of bytes or characters with the following options:

-b  for bytes
-c  for characters
-f  for defining fields

For example:

$ cat range_fields.txt
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxy

You can print the first to fifth characters as follows:

$ cut -c1-5 range_fields.txt
abcde
abcde
abcde
abcde

The first two characters can be printed as follows:

$ cut range_fields.txt -c -2
ab
ab
ab
ab

Replace -c with -b to count in bytes. We can specify the output delimiter while using -c, -f, and -b, as follows:

--output-delimiter "delimiter string"

When multiple fields are extracted with -b or -c, --output-delimiter is a must; otherwise, you cannot distinguish between fields. For example:

$ cut range_fields.txt -c1-3,6-9 --output-delimiter ","
abc,fghi
abc,fghi
abc,fghi
abc,fghi
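Since grep and cut both read stdin, the two recipes combine naturally in a pipeline. A minimal sketch using the student_data.txt file from above (the pipeline itself is an illustration, not part of the original recipe):

$ grep "Sarath" student_data.txt | cut -f2,4    # keep Name and Percent of matching rows
Sarath  90
$ grep -v "^No" student_data.txt | cut -f2      # drop the header, extract the Name column
Sarath
Alex
Anu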

Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box

Packt
13 Sep 2013
7 min read
Sizing the servers

There are a number of tools and guidelines to help you size Citrix VIAB appliances. Essentially, the guides cover the following topics:

CPU
Memory
Disk IO
Storage

In their sizing guides, Citrix classifies users into the following two groups:

Task workers
Knowledge workers

Therefore, the first thing to determine is how many of your proposed VIAB users are task workers, and how many are knowledge workers.

Task workers

Citrix would define task workers as users who run a small set of simple applications that are not very graphical in nature or CPU- or memory-intensive, for example, Microsoft Office and a simple line-of-business application.

Knowledge workers

Citrix would define knowledge workers as users who run multimedia and CPU- and memory-intensive applications. These may include large spreadsheet files, graphics packages, video playback, and so on.

CPU

Citrix offers recommendations based on CPU cores, such as the following:

3 desktops per core for knowledge workers
6 desktops per core for task workers
1 core for the hypervisor

These figures can be increased slightly if the CPUs have hyper-threading. You should also add another 15 percent if delivering personal desktops. The sizing information has been gathered from the Citrix VIAB sizing guide PDF.

Example 1

If you wanted to size a server appliance to support 50 task-based users running pooled desktops, you would require 50 / 6 = 8.3 + 1 (for the hypervisor) = 9.3 cores, rounded up to 10 cores. Therefore, a dual CPU with six cores per CPU would provide 12 CPU cores for this requirement.

Example 2

If you wanted to size a server appliance to support 15 task and 10 knowledge workers, you would require (15 / 6 = 2.5) + (10 / 3 = 3.3) + 1 (for the hypervisor) = 6.8, rounded up to 7 cores. Therefore, a dual CPU with four cores per CPU would provide 8 CPU cores for this requirement.

Memory

The memory required depends on the desktop OS that you are running and also on the amount of optimization that you have done to the image. Citrix recommends the following guidelines:

Task worker on Windows 7: 1.5 GB
Task worker on Windows XP: 0.5 GB
Knowledge worker on Windows 7: 2 GB
Knowledge worker on Windows XP: 1 GB

It is also important to allocate memory for the hypervisor and the VIAB virtual appliance. This can vary depending on the number of users, so we would recommend using the sizing spreadsheet calculator available in the Resources section of the VIAB website. However, as a guide, we would allocate 3 GB of memory (based on 50 users) for the hypervisor and 1 GB for VIAB. The amount of memory required by the hypervisor will grow as the number of users on the server grows. Citrix also recommends adding 10 percent more memory for server operations.

Example 1

If you wanted to size a server appliance to support 50 task-based users with Windows 7, you would require 50 x 1.5 GB + 4 GB (for VIAB and the hypervisor) = 79 GB + 10% = 86.9 GB, rounded up to 87 GB. Therefore, 96 GB of memory would be an ideal configuration for this requirement.

Example 2

If you wanted to size a server appliance to support 15 task and 10 knowledge workers with Windows 7, you would require (15 x 1.5 GB) + (10 x 2 GB) + 4 GB (for VIAB and the hypervisor) = 46.5 GB + 10% = 51.2 GB. Therefore, 64 GB of memory would be an ideal configuration for this requirement.
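To make the CPU and memory arithmetic above repeatable, here is a minimal shell sketch of the two formulas (the script and its variable names are illustrative, not part of the Citrix sizing guide; the constants are the Windows 7 figures quoted above):

#!/bin/bash
# Hypothetical VIAB sizing sketch for Windows 7 pooled desktops.
# Constants from the guidelines above: 6 task / 3 knowledge desktops
# per core, 1 core for the hypervisor, 1.5 GB / 2 GB per desktop,
# 4 GB for VIAB + hypervisor, plus 10% for server operations.
task=15
knowledge=10

cores=$(echo "$task/6 + $knowledge/3 + 1" | bc -l)
memory=$(echo "($task*1.5 + $knowledge*2 + 4) * 1.10" | bc -l)

printf "CPU cores: %.2f (round up)\n" "$cores"
printf "Memory:    %.2f GB (round up)\n" "$memory"

Running it with the values from Example 2 prints 6.83 cores and 51.15 GB, in line with the worked examples.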
Disk IO

As multiple Windows images run on the appliances, disk IO becomes very important and can often become the first bottleneck for VIAB. Citrix calculates IOPS with a 40-60 split between read and write operations during end user desktop access. Citrix doesn't recommend using slow disks for VIAB and publishes figures for 10K and 15K SAS disks and SSDs.

The IOPS delivered by each disk type are as follows:

SSD: 6,000 IOPS (RAID 0)
15K SAS: 175 IOPS (RAID 0), 122.5 IOPS (RAID 1)
10K SAS: 125 IOPS (RAID 0), 87.7 IOPS (RAID 1)

The IOPS required per desktop for task and knowledge workers are as follows:

Task user: 5 IOPS (Windows XP), 10 IOPS (Windows 7)
Knowledge user: 10 IOPS (Windows XP), 20 IOPS (Windows 7)

Some organizations decide to implement RAID 1 or 10 on the appliances to reduce the chance of an appliance failure. This does require many more disks, however, and significantly increases the cost of the solution.

SSD

SSD is becoming an attractive proposition for organizations that want to run a larger number of users on each appliance. SSD is roughly 30 times faster than 15K SAS drives, so it will eliminate desktop IO bottlenecks completely. SSD continues to come down in price, so it can be well worth considering at the start of a VIAB project. SSDs have no moving mechanical components. Compared with electromechanical disks, SSDs are typically less susceptible to physical shock, run more quietly, and have lower access times and latency. However, while the price of SSDs has continued to decline, SSDs are still about 7 to 8 times more expensive per unit of storage than HDDs. A further option to consider would be Fusion-io, which is based on NAND flash memory technology and can deliver an exceptional number of IOPS.

Example 1

If you wanted to size a server appliance to support 50 x task workers, with Windows 7, using 15K SAS drives, you would require 175 / 10 = 17.5 users on each disk; therefore, 50 / 17.5 = 2.9, rounded up to 3 x 15K SAS disks.

Example 2

If you wanted to size a server appliance to support 15 x task workers and 10 x knowledge workers, with Windows 7, you would require the following:

175 / 10 = 17.5 task users on each disk; therefore, 15 / 17.5 = 0.9 x 15K SAS disks
175 / 20 = 8.75 knowledge users on each disk; therefore, 10 / 8.75 = 1.1 x 15K SAS disks

Therefore, 2 x 15K SAS drives would be required.

Storage

Storage capacity is determined by the number of images, the number of desktops, and the types of desktop. It is best practice to store user profile information and data elsewhere. Citrix uses the following formula to determine the storage capacity requirement:

2 x golden image x number of images (assume 20 GB for an image)
70 GB for VDI-in-a-Box
15 percent of the size of the image per desktop (achieved with linked clone technology)

Example 1

If you wanted to size a server appliance to support 50 x task-based users, with two golden Windows 7 images, you would require the following:

Space for the golden images: 2 x 20 GB x 2 = 80 GB
VIAB appliance space: 70 GB
Image space per desktop: 15% x 20 GB x 50 = 150 GB
Extra room for swap and transient activity: 100 GB
Total: 400 GB
Recommended: 500 GB to 1 TB per server

We have already specified 3 x 15K SAS drives for our IO requirements. If those were 300-GB disks, they should provide enough storage.
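Before moving on to the build, the disk and storage formulas can be scripted in the same spirit as the CPU and memory sketch earlier. Again, the function names are our own; the constants are the guideline figures above (15K SAS in RAID 0, Windows 7 desktops, 20 GB golden images):

// 15K SAS (RAID 0): about 175 IOPS per disk; Windows 7 desktops
// need 10 IOPS (task) or 20 IOPS (knowledge)
function viab15KSASDisks(taskUsers, knowledgeUsers) {
  return Math.ceil(taskUsers / (175 / 10) + knowledgeUsers / (175 / 20));
}

// Storage: 2 x 20 GB per golden image, 70 GB for the VIAB appliance,
// 15 percent of a 20 GB image per linked-clone desktop,
// plus 100 GB for swap and transient activity
function viabStorageGB(goldenImages, desktops) {
  return 2 * 20 * goldenImages + 70 + 0.15 * 20 * desktops + 100;
}

viab15KSASDisks(15, 10); // => 2, matching Example 2
viabStorageGB(2, 50);    // => 400, matching the storage example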
This section of the article provides you with a step-by-step guide to help you to build and configure a VIAB solution, starting with the hypervisor install. It then goes on to cover adding an SSL certificate, the benefits of using the GRID IP Address feature, and how you can use the Kiosk mode to deliver a standard desktop to public access areas. It then covers adding a license file and provides details on the useful features contained within Citrix profile management. It then highlights how VIAB can integrate with other Citrix products, such as NetScaler VPX, to enable secure connections across the Internet, and GoToAssist, a support and monitoring package that is very useful if you are supporting a number of VIAB appliances across multiple sites. ShareFile can again be a very useful tool to enable data files to follow the user, whether they are connecting to a local device or a virtual desktop. This can avoid the problems of files being copied across the network, delaying users. We then move on to a discussion on the options available for connecting to VIAB, including existing PCs, thin clients, and other devices, including mobile devices. The chapter finishes with some useful information on support for VIAB, including the support services included with subscription and the knowledge forums.

Installing the hypervisor

All the hypervisors have two elements: the bare metal hypervisor that installs on the server, and its management tools that you would typically install on the IT administrator workstations.

Citrix XenServer: managed with XenCenter
Microsoft Hyper-V: managed with Hyper-V Manager
VMware ESXi: managed with vSphere Client

It is relatively straightforward to install the hypervisor. Make sure you enable linked clones in XenServer, because VIAB's linked clone technology depends on it. Give the hypervisor a static IP address and make a note of the administrator's username and password. You will need to download ISO images for the installation media; if you don't already have them, they can be found on the Internet.


Making specs more concise (Intermediate)

Packt
13 Sep 2013
6 min read
(For more resources related to this topic, see here.)

Making specs more concise (Intermediate)

So far, we've written specifications that work in the spirit of unit testing, but we're not yet taking advantage of any of the important features of RSpec to make writing tests more fluid. The specs illustrated so far closely resemble unit testing patterns and have multiple assertions in each spec.

How to do it...

Refactor our specs in spec/lib/location_spec.rb to make them more concise:

require "spec_helper"

describe Location do
  describe "#initialize" do
    subject { Location.new(:latitude => 38.911268, :longitude => -77.444243) }
    its (:latitude) { should == 38.911268 }
    its (:longitude) { should == -77.444243 }
  end
end

While running the spec, you see a clean output because we've separated multiple assertions into their own specifications:

Location #initialize latitude should == 38.911268 longitude should == -77.444243 Finished in 0.00058 seconds 2 examples, 0 failures

The preceding output requires either the .rspec file to contain the --format doc line, or, when executing rspec on the command line, the --format doc argument must be passed. The default output format will print dots (.) for passing tests, asterisks (*) for pending tests, E for errors, and F for failures.

It is time to add something meatier. As part of our project, we'll want to determine if Location is within a certain mile radius of another point. In spec/lib/location_spec.rb, we'll write some tests, starting with a new block called context. The first spec we want to write is the happy path test. Then, we'll write tests to drive out other states. I am going to re-use our Location instance for multiple examples, so I'll refactor that into another new construct, a let block:

require "spec_helper"

describe Location do
  let(:latitude) { 38.911268 }
  let(:longitude) { -77.444243 }
  let(:air_space) { Location.new(:latitude => latitude, :longitude => longitude) }

  describe "#initialize" do
    subject { air_space }
    its (:latitude) { should == latitude }
    its (:longitude) { should == longitude }
  end
end

Because we've just refactored, we'll execute rspec and see the specs pass. Now, let's spec out a Location#near? method by writing the code we wish we had:

describe "#near?" do
  context "when within the specified radius" do
    subject { air_space.near?(latitude, longitude, 1) }
    it { should be_true }
  end
end

Running rspec now results in failure because there's no Location#near? method defined. The following is the naive implementation that passes the test (in lib/location.rb):

def near?(latitude, longitude, mile_radius)
  true
end

Now, we can drive a failure case, which will force a real implementation, in spec/lib/location_spec.rb within the describe "#near?" block:

context "when outside the specified radius" do
  subject { air_space.near?(latitude * 10, longitude * 10, 1) }
  it { should be_false }
end

Running the specs now results in the expected failure.
The following is a passing implementation of the haversine formula in lib/location.rb that satisfies both cases:

R = 3_959 # Earth's radius in miles, approx

def near?(lat, long, mile_radius)
  to_radians = Proc.new { |d| d * Math::PI / 180 }
  dist_lat = to_radians.call(lat - self.latitude)
  dist_long = to_radians.call(long - self.longitude)
  lat1 = to_radians.call(self.latitude)
  lat2 = to_radians.call(lat)
  a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) +
      Math.sin(dist_long/2) * Math.sin(dist_long/2) *
      Math.cos(lat1) * Math.cos(lat2)
  c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a))
  (R * c) <= mile_radius
end

Refactor both of the previous tests to be more expressive by utilizing predicate matchers:

describe "#near?" do
  context "when within the specified radius" do
    subject { air_space }
    it { should be_near(latitude, longitude, 1) }
  end

  context "when outside the specified radius" do
    subject { air_space }
    it { should_not be_near(latitude * 10, longitude * 10, 1) }
  end
end

Now that we have a passing spec for #near?, we can alleviate a problem with our implementation. The #near? method is too complicated and could be a pain to maintain in the future. Refactor for ease of maintenance while ensuring that the specs still pass:

R = 3_959 # Earth's radius in miles, approx

def near?(lat, long, mile_radius)
  loc = Location.new(:latitude => lat, :longitude => long)
  R * haversine_distance(loc) <= mile_radius
end

private

def to_radians(degrees)
  degrees * Math::PI / 180
end

def haversine_distance(loc)
  dist_lat = to_radians(loc.latitude - self.latitude)
  dist_long = to_radians(loc.longitude - self.longitude)
  lat1 = to_radians(self.latitude)
  lat2 = to_radians(loc.latitude)
  a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) +
      Math.sin(dist_long/2) * Math.sin(dist_long/2) *
      Math.cos(lat1) * Math.cos(lat2)
  2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a))
end

Finally, run rspec again and see that the tests continue to pass. A successful refactor!

How it works...

The subject block takes the return value of the block (a new instance of Location in the previous example) and binds it to a locally scoped variable named subject. Subsequent it and its blocks can refer to that subject variable. Furthermore, the its blocks implicitly operate on the subject variable to produce more concise tests. Here is an example illustrating how subject is used to produce easier-to-read tests:

describe "Example" do
  subject { { :key1 => "value1", :key2 => "value2" } }

  it "should have a size of 2" do
    subject.size.should == 2
  end
end

We can use subject from within the it block, and it will refer to the anonymous hash returned by the subject block. In the preceding test, we could have been more concise with an its block:

its (:size) { should == 2 }

We're not limited to just sending symbols to an its block; we can use strings too:

its ('size') { should == 2 }

When there is an attribute of subject you want to assert but the value cannot easily be turned into a valid Ruby symbol, you'll need to use a string. This string is not evaluated as Ruby code; it's only evaluated against the subject under test as a method of that class. Hashes, in particular, allow you to pass an array containing a key to assert the value stored under that key:

its ([:key1]) { should == "value1" }

There's more...

In the previous code examples, another block known as the context block was presented. The context block is a grouping mechanism for associating tests. For example, you may have a conditional branch in your code that changes the outputs of a method.
Here, you may use two context blocks, one for each outcome. In our example, we're separating the happy path (when a given point is within the specified mile radius) from the alternative (when a given point is outside the specified mile radius). context is a useful construct that allows you to declare let and other blocks within it, and those blocks apply only within the scope of the containing context.

Summary

This article demonstrated idiomatic RSpec code that makes good use of the RSpec Domain Specific Language (DSL).

Resources for Article:

Further resources on this subject:
Quick start - your first Sinatra application [Article]
Behavior-driven Development with Selenium WebDriver [Article]
External Tools and the Puppet Ecosystem [Article]

Introducing the Windows Store

Packt
12 Sep 2013
17 min read
(For more resources related to this topic, see here.)

Developing a Windows Store app is not just about design, coding, and markup. A very essential part of the process that leads to a successful app is done on the Windows Store Dashboard. It is the place where you submit the app, pave its way to the market, and monitor how it is doing there. Also, it is the place where you can get all the information about your existing apps and where you can plan your next app.

The submission process is broken down into eight steps, listed below. If you haven't already opened a Windows Store developer account, now is the time to do so because you will need it to access your Dashboard. Before you sign up, make sure you have a credit card. The Windows Store requires a credit card to open a developer account, even if you have a registration code that entitles you to a free registration.

Once signed in, locate your app listed on the home page under the Apps in progress section and click on Edit. This will direct you to the Release Summary page and the app will be titled AppName: Release 1. The release number will auto-increment each time you submit a new release for the same app. The Release Summary page lists the steps that will get your app ready for Windows Store certification. On this page, you can enter all the info about your Windows Store app and upload its packages for certification. At the moment, you will notice that the two buttons at the bottom of the page, labeled Review release info and Submit app for certification, are disabled and will remain so until all the previous steps have been marked Complete. The submission progress can always be saved to be resumed later, so it is not necessarily a one-time mission. We'll go over these steps one by one:

App name: This is the first step and it includes reserving a unique name for the app.

Selling details: This step includes selecting the following:

The app price tier option sets the price of your app (for example, free or 1.99 USD).
The free trial period option is the number of days the customer can use the app before they start paying to use it. This option is enabled only if the app price tier is not set to Free.
The Market where you would like the app to be listed in the Windows Store. Bear in mind that if your app isn't free, your developer account must have a valid tax profile for each country/region you select.
The release date option specifies the earliest date when the app will be listed in the Windows Store. The default option is to release it as soon as it passes certification.
The App category and subcategory option indicates where your app will be listed in the Store, which in turn lists the apps under Categories.
The Hardware requirements option specifies the minimum requirements for the DirectX feature level and the system RAM.
The Accessibility option is a checkbox that, when checked, indicates that the app has been tested to meet accessibility guidelines.

Services: In this step, you can add services to your app, such as Windows Azure Mobile Services and Live Services. You can also provide products and features that the customer can buy from within the app, called in-app offers.

Age rating and rating certificates: In this step, you can set an age rating for the app from the available Windows Store age ratings. Also, you can upload country/region-specific rating certificates in case your app is a game.

Cryptography: In this step, you specify if your app calls, supports, and contains or uses cryptography or encryption.
The following are some examples of how an app might apply cryptography or encryption:

Use of a digital signature, such as authentication or integrity checking
Encryption of any data or files that your app uses or accesses
Key management, certificate management, or anything that interacts with a public key infrastructure
Using a secure communication channel, such as NTLM, Kerberos, Secure Sockets Layer (SSL), or Transport Layer Security (TLS)
Encrypting passwords or other forms of information security
Copy protection or digital rights management (DRM)
Antivirus protection

Packages: In this step, you can upload your app to the Store by uploading the .appxupload file that was created in Visual Studio during the package-creation process. We will shortly see how to create an app package. The latest upload will show on the Release Summary page in the packages box and should be labeled Validation Complete.

Description: In this step, you add a brief description (mandatory) of what the app does for your customers. The description has a 10,000-character limit and will be displayed in the details page of the app's listing in the Windows Store. Besides the description, this step contains the following features:

App features: This feature is optional. It allows you to list up to 20 of the app's key features.
Screenshots: This feature is mandatory and requires you to provide at least one .png file image; the first can be a graphic that represents your app, but all the other images must be screenshots with a caption taken directly from the app.
Notes: This feature is optional. Enter any other info that you think your customer needs to know, for example, changes in an update.
Recommended hardware: This feature is optional. List the hardware configurations that the app will need to run.
Keywords: This feature is optional. Enter keywords related to the app to help its listing appear in search results.
Copyright and trademark info: This feature is mandatory. Enter the copyright and trademark info that will be displayed to customers in the app's listing page.
Additional license terms: This feature is optional. Enter any changes to the Standard App License Terms that the customers accept when they acquire this app.
Promotional images: This feature is optional. Add images that the editors use to feature apps in the Store.
Website: This feature is optional. Enter the URL of the web page that describes the app, if any.
Support contact info: This feature is mandatory. Enter the support contact e-mail address or URL of the web page where your customers can reach out for help.
Privacy policy: This feature is optional. Enter the URL of the web page that contains the privacy policy.

Notes to testers: This is the last step and it includes adding notes about this specific release for those who will review your app from the Windows Store team. The info will help the testers understand and use this app in order to complete their testing quickly and certify the app for the Windows Store.

Each step will remain disabled until the preceding one is completed, and steps that are in progress are labeled with the approximate time (in minutes) it will take you to finish them. Whenever the work in a single step is done, it will be marked Complete on the summary page, as shown in the following screenshot:

Submitting the app for certification

After all the steps are marked Complete, you can submit the app for certification.
Once you click on Submit for certification, you will receive an e-mail notification that the Windows Store has received your app for certification. The dashboard will submit the app and you will be directed to the Certification status page. There, you can view the progress of the app during the certification process, which includes the following steps:

Pre-processing: This step checks if you have entered all the required details that are needed to publish the app.
Security tests: This step tests your app against viruses and malware.
Technical compliance: This step involves the Windows App Certification Kit checking that the app complies with the technical policies. The same assessment can be run locally using Visual Studio, which we will see shortly, before you upload your package.
Content compliance: This step is done by testers from the Store team, who check whether the contents available in the app comply with the content policies set by Microsoft.
Release: This step involves releasing the app; it shouldn't take much time unless the publish date you've specified in Selling details is in the future, in which case the app will remain in this stage until that date arrives.
Signing and publishing: This is the final step in the certification process. At this stage, the packages you submitted will be signed with a trusted certificate that matches the technical details of your developer account, thus guaranteeing to potential customers and viewers that the app is certified by the Windows Store.

The following screenshot shows the certification process on the Windows Store Dashboard:

No need to wait on that page; you can click on the Go to dashboard button and you will be redirected to the My apps page. In the box containing the app you just submitted, you will notice that the Edit and Delete links are gone, and instead there is only the Status link, which will take you to the Certification status page. Additionally, a Notifications section will appear on this page and will list status notifications about the app you just submitted, for example:

BookTestApp: Release 1 submitted for certification. 6/4/2013

When the certification process is completed, you will be notified via e-mail of the result. Also, a notification will be added to the dashboard main page showing the result of the certification, either failed or succeeded, with a link to the certification report. In case the app fails, the certification report will show you which parts need revisiting. Moreover, there are some resources to help you identify and fix the problems and errors that might arise during the certification process; these resources can be found at the Windows Dev Center page for Windows Store apps at the following location:

http://msdn.microsoft.com/en-us/library/windows/apps/jj657968.aspx

Also, you can always check your dashboard to see the status of your app during certification. After the certification process is completed successfully, the app package will be published to the Store with all the relevant data that will be visible in your app listing page. This page can be accessed by millions of Windows 8 users, who will in turn be able to find, install, and use your app. Once the app has been published to the Store and it's up and running, you can start collecting telemetry data on how it is doing in the Store; these metrics include information on how many times the app has been launched, how long it has been running, and whether it is crashing or encountering a JavaScript exception.
Once you enable telemetry data collection, the Store will retrieve this info for your apps, analyze it, and summarize it in very informative reports on your dashboard.

Now that we have covered almost everything you need to know about the process of submitting your app to the Windows Store, let us see what needs to be done in Visual Studio.

The Store within Visual Studio

The Windows Store can be accessed from within Visual Studio using the Store menu. Not all the things that we did on the dashboard can be done here, but a few very important functionalities, such as app package creation, are provided by this menu. The Store menu can be located under the Project item in the menu bar in Visual Studio 2012 Ultimate, or, if you are using Visual Studio 2012 Express, you can find it directly in the menu bar; it will appear only if you're working on a Windows Store project or solution. We will get to see the commands provided by the Store menu in detail, and the following screenshot shows how the menu will look:

The command options in the Store menu are as follows:

Open Developer Account...: This option will open a web page that directs you to the Windows Dev Center for Windows Store apps, where you can obtain a developer account for the Store.
Reserve App Name...: This option will direct you to your Windows Store Dashboard, and specifically to the Submit an app page, where you can start with the first step, reserving an app name.
Acquire Developer License...: This option will open a dialog window that will prompt you to sign in with your Microsoft Account; after you sign in, it will retrieve your developer license or renew it if you already have one.
Edit App Manifest: This option will open a tab with Manifest Designer, so you can edit the settings in the app's manifest file.
Associate App with the Store...: This option will open a wizard-like window in Visual Studio containing the steps needed to associate an app with the Store. The first step will prompt you to sign in; afterwards, the wizard will retrieve the apps registered with the Microsoft Account you used to sign in. Select an app and the wizard will automatically download the following values to the app's manifest file for the current project on the local computer:

Package's display name
Package's name
Publisher ID
Publisher's display name

Capture Screenshot...: This option will build the current app project and launch it in the simulator instead of the Start screen. Once the simulator opens, you can use the Copy screenshot button on the simulator sidebar. This button will take a screenshot of the running app and save the image as a .png file.
Create App Package...: This option will open a window containing the Create App Packages wizard that we will see shortly.
Upload App Package...: This option will open a browser that directs you to the Release Summary page in the Windows Store Dashboard, if your Store account is all set and your app is registered. Otherwise, it will just take you to the sign-in page. In the Release Summary page, you can select Packages and from there upload your app package.

Creating an App Package

One of the most important utilities in the Store menu is the app package creation, which will build and create a package for the app that we can upload to the Store at a later stage. This package contains all the app-specific and developer-specific details that the Store requires.
Moreover, developers do not have to worry about any of the intricacies of the whole package-creation process, which is abstracted for us and available via a wizard-like window. In the Create App Packages wizard, we can create an app package for the Windows Store directly, or create one to be used for testing or local distribution. This wizard will prompt you to specify metadata for the app package.

The following screenshot shows the first two steps involved in this process:

In the first step, the wizard will ask you if you want to build packages to upload to the Windows Store; choose Yes if you want to build a package for the Store, or No if you want a package for testing and local use. Taking the first scenario into consideration, click on Sign In to proceed and complete the sign-in process using your Microsoft Account. After a successful sign-in, the wizard will prompt you to select the app name (step 2 of the preceding screenshot), either by clicking on one of the apps listed in the wizard or by choosing the Reserve Name link, which will direct you to the Windows Store Dashboard to complete the process and reserve a new app name.

The following screenshot shows step 3 and step 4:

Step 3 contains the Select and Configure Packages section, in which we will select the Output location that points to where the package files will be created. Also, in this section we can enter a version number for this package or choose to make it auto-increment each time we package the app. Additionally, we can select the build configuration we want for the package from the Neutral, ARM, x64, and x86 options; by default, the current active project platform will be selected, and a package will be produced for each configuration type selected. The last option in this section is the Include public symbol files option. Selecting this option will generate the public symbol files (.pdb) and add them to the package, which will later help the Store analyze your app and will be used to map crashes of your app. Finally, click on Create and wait while the packaging is being processed. Once completed, the Package Creation Completed section appears (step 4) and will show Output location as a link that will direct you to the package files. Also, there is a button to directly launch the Windows App Certification Kit. The Windows App Certification Kit will validate the app package against the Store requirements and generate a report of the validation.

The following screenshot shows the window containing the Windows App Certification Kit process:

Alternatively, there is a second scenario for creating an app package, which is aimed more at testing. It is identical to the process we just saw, except that you have to choose No on the first page of the wizard and there is no need to sign in with the Microsoft Account. This option will end the wizard when the package creation has completed and display the link to the output folder, but you will not be able to launch the Windows App Certification Kit. The packages created with this option can only be used on a computer that has a developer license installed. This scenario will be used more often, since the package for the Store should ideally be tested locally first. After creating the app package for testing or local distribution, you can install it on a local machine or device. Let's install the package locally.
Start the Create App Packages wizard; choose No in the first step, complete the wizard, and find the files of the app package just created in the output folder that you specified for the package location. Name this folder PackageName_Test. This folder will contain an .appx file, a security certificate, a Windows PowerShell script, and other files. The Windows PowerShell script generated with the app package will be used to install the package for testing. Navigate to the output folder and install the app package. Locate and select the script file named Add-AppDevPackage, and then right-click and choose Run with PowerShell, as shown in the following screenshot:

Run the script and it will perform the following steps:

It displays information about the Execution Policy Change and prompts you about changing the execution policy. Enter Y to continue.
It checks if you have a developer license; in case there isn't one, it will prompt you to get one.
It checks and verifies whether the app package and the required certificates are present; if any item is missing, you will be notified to install them before the developer package is installed.
It checks for and installs any dependency packages, such as the WinJS library.
It displays the message Success: Your package was successfully installed. Press Enter to continue, and the window will close.

The aforementioned steps are shown in the following screenshot:

Once the script has completed successfully, you can look for your app on the Start screen and start it. Note that for users who are on a network and don't have permission to access the directory where the Add-AppDevPackage PowerShell script file is located, an error message might appear. This issue can be solved by simply copying the contents of the output folder to the local machine before running the script. Also, for any security-related issues, you might want to consult the Windows Dev Center for solutions.

Summary

In this article, we saw the ins and outs of the Windows Store Dashboard and we covered the steps of the app submission process leading to the publishing of the app in the Store. We also learned about the Store menu in Visual Studio and the options it provides to interact with the dashboard. Moreover, we learned how to create app packages and how to deploy the app locally for testing.

Resources for Article:

Further resources on this subject:
WPF 4.5 Application and Windows [Article]
HTML5 Canvas [Article]
Responsive Design with Media Queries [Article]


So, what is Ext JS?

Packt
12 Sep 2013
8 min read
(For more resources related to this topic, see here.)

JavaScript is a classless, prototype-oriented language, but Ext JS follows a class-based approach to make the code extensible and scalable over time. Class names can be grouped into packages with namespaces using the object property dot notation (.). Namespaces allow developers to write structured and maintainable code, use libraries without the risk of overwriting functions, avoid cluttering the global namespace, and provide an ability to encapsulate the code.

The strength of the framework lies in its component design. The bundled, basic default components can be easily extended as per your needs, and the extended components can be re-used. A new component can also be created by combining one or more default components. The framework includes many default components, such as windows, panels, toolbars, drop-down menus, menu bars, dialog boxes, grids, trees, and much more, each with their own configuration properties (configs), component properties, methods, events, and CSS classes. The configs are user-configurable at runtime while instantiating, whereas component properties are references to objects used internally by the class. Component properties belong to the prototype of the class and affect all the instances of the class. The properties of the individual components determine the look and feel. The methods help in achieving a certain action. User interaction triggers the equivalent Ext JS events apart from triggering the DOM events.

A cross-browser web application with a header, a footer, a left column section with links, a content area with a grid/table (with add, edit, and delete actions for each row of the grid), and a form with a few text fields and a submit button can be created with ease using Ext JS's layout mechanism, a few default components, and the CSS theme. For the preceding application, the border layout can be used with the north region for the header, the south region for the footer, the west region for the left column links, and the center region for the content. The content area can have a horizontal layout, with the grid and form panel components with text fields and buttons. Creating the preceding application from scratch without using the framework will take a lot more time than it would take by using it. Moreover, this is just one screen, and as the development progresses with more and more features, incorporating new layouts and creating new components will be a tedious process. All the components, or a group of components with their layout, can be made into a custom component and re-used with different data (that is, the grid data can be modified with new data and re-used on a different page). Developers need not worry about cross-platform compatibility issues, since the framework takes care of this, and they can concentrate on the core logic. The helper functions of the Ext.DomQuery class can be used for querying the DOM. Error handling can be done by using the Ext.Error class, which is a wrapper for the native JavaScript Error object.

A simple web page with a minimal UI can also make use of this framework in many ways. Native JavaScript offers utility classes such as Array, Number, Date, Object, Function, and String, but is limited in what can be done with them across different browsers. Ext JS provides its own version of these classes that works in all the browsers along with offering extra functionality. Any Ext JS component can be added to an existing web page by creating an instance of it.
For example, a tab feature can be added to an existing web page by creating a new Ext JS tab component and adding it to an existing div container, by assigning the div element's id attribute to the renderTo config property of the tab. The backend communication with your server-side code can be done by using the simplified cross-browser Ext.Ajax class methods. Ext JS 4 supports all major web browsers, from Internet Explorer 6 to the latest version of Google Chrome. The recommended browsers for development and debugging are Google Chrome 10+, Apple Safari 5+, and Mozilla Firefox 4+. Both commercial and open source licenses are available for Ext JS.

Installation and environment setup

In five easy steps, you can be ready with Ext JS and start the development.

Step 1 – What do you need?

You need the following components for the installation and environment setup:

Web browser: Any of the leading browsers mentioned in the previous section. For this book, we will use Mozilla Firefox with the Firebug debugger plugin installed.
Web server: To start with, a local web server is not required, but you will need one if you want to communicate with a server to make AJAX calls.
Ext JS 4 SDK: Download the Ext JS bundle from http://www.sencha.com/products/extjs/download/. Click on the Download button on the left side of the page.

Step 2 – Installing the browser and debugger

Any supported browser mentioned in the previous section can be used for the tutorial. For simplicity and debugging options, we will use the latest Firefox and the Firebug debugger plugin. Download the latest Firefox from http://www.mozilla.org/en-US/firefox/fx/#desktop and Firebug from https://getfirebug.com/.

Other browser debugging options are as follows:

Google Chrome: Chrome Developer Tools (Tools | Developer tools)
Safari: Go to Settings | Preferences | Advanced, select Show Develop menu in menu bar; navigate to Develop | Show Web Inspector.
Internet Explorer: Go to Tools | Developer Tools

Step 3 – Installing the web server

Install the web server and unpack Ext JS. The URLs that provide information for installing the Apache web server on various operating systems are provided as follows:

The instructions for installing Apache on Windows can be found at http://httpd.apache.org/docs/current/platform/windows.html
The instructions for installing Apache on Linux can be found at http://httpd.apache.org/docs/current/install.html
Mac OS X comes with a built-in Apache installation, which you can enable by navigating to System Preferences | Sharing, and selecting the Web Sharing checkbox

Install Apache or any other web server in your system. Browse to http://yourwebserver.com or http://localhost, and check that the installation is successful. The http://yourwebserver.com link will show something similar to the following screenshot, which confirms that Apache is installed successfully:

Step 4 – Unpacking Ext JS

In this tutorial, we will use Apache for Windows. Unpack the Ext JS bundle into the web server's root directory (htdocs). Rename the Ext JS folder, which carries long version numbers, to extjs4 for simplicity. The root directory varies depending upon your operating system and web server. The Apache root directory path for various operating systems is as follows:

Windows: C:\Program Files\Apache Software Foundation\Apache2.2\htdocs
Linux: /var/www/
Mac OS X: /Library/WebServer/Documents/

The downloaded Ext JS bundle is packed with examples along with the required sources.
Browse to http://yourwebserver.com/extjs4, and make sure that it loads the Ext JS index page. This page provides access to all the examples to play around with the API. The API Docs link at the bottom-right of the page lists the API information, with a search text field at the top-right side of the page. As we progress through the tutorial, please refer to the API as and when required:

Step 5 – Testing the Ext JS library

A basic Ext JS application page will have a link tag with an Ext JS CSS file (ext-all.css), a script tag for the Ext JS library, and scripts related to your own application. In this example, we don't have any application-specific JavaScript. Create an HTML file named check.html beneath the web server's root folder with the code that follows. Ext.onReady is a method that is executed when all the scripts are fully loaded. Ext.Msg.alert is a message box that shows a message to the user; the first parameter is the title and the second parameter is the message:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Ext JS Starter Setup Test</title>
<link rel="stylesheet" type="text/css" href="../extjs4/resources/css/ext-all.css">
<script type="text/javascript" src="../extjs4/ext-all-dev.js"></script>
<script type="text/javascript">
Ext.onReady(function() {
  Ext.Msg.alert("Ext JS 4 Starter", "Welcome to Ext 4 Starter!");
});
</script>
</head>
<body>
</body>
</html>

The following screenshot shows check.html in action:

And that's it

By now, you should have a working installation of Ext JS, and you should be able to play around and discover more about it.

Summary

Thus, we have discussed how to set up a working environment for Ext JS.

Resources for Article:

Further resources on this subject:
Tips & Tricks for Ext JS 3.x [Article]
Ext JS 4: Working with the Grid Component [Article]
Building a Ext JS Theme into Oracle APEX [Article]


Features of RaphaelJS

Packt
12 Sep 2013
16 min read
(For more resources related to this topic, see here.)

Creating a Raphael element

Creating a Raphael element is very easy. To make it better, there are predefined methods to create basic geometrical shapes.

Basic shapes

There are three basic shapes in RaphaelJS, namely circle, ellipse, and rectangle.

Rectangle

We can create a rectangle using the rect() method. This method takes four required parameters and a fifth optional parameter, border-radius. The border-radius parameter will make the rectangle rounded (rounded corners) by the number of pixels specified. The syntax for this method is:

paper.rect(X, Y, Width, Height, border-radius (optional));

A normal rectangle can be created using the following code snippet:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// creating a rectangle with the rect() method. The four required parameters are X, Y, Width & Height
var rect = paper.rect(35, 25, 170, 100).attr({
  "fill": "#17A9C6", // filling with background color
  "stroke": "#2A6570", // border color of the rectangle
  "stroke-width": 2 // the width of the border
});

The output for the preceding code snippet is shown in the following screenshot:

Plain rectangle

Rounded rectangle

The following code will create a basic rectangle with rounded corners:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// The fifth parameter will make the rectangle rounded by the number of pixels specified
var rect = paper.rect(35, 25, 170, 100, 20).attr({
  "fill": "#17A9C6", // background color of the rectangle
  "stroke": "#2A6570", // border color of the rectangle
  "stroke-width": 2 // width of the border
});
// in the preceding code, 20 (highlighted) is the border-radius of the rectangle

The output for the preceding code snippet is a rectangle with rounded corners, as shown in the following screenshot:

Rectangle with rounded corners

We can create other basic shapes in the same way. Let's create an ellipse with our magic wand.

Ellipse

An ellipse is created using the ellipse() method and it takes four required parameters, namely x, y, horizontal radius, and vertical radius. The horizontal radius will be the width of the ellipse divided by two and the vertical radius will be the height of the ellipse divided by two. The syntax for creating an ellipse is:

paper.ellipse(X, Y, rX, rY); // rX is the horizontal radius & rY is the vertical radius of the ellipse

Let's consider the following example for creating an ellipse:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// The ellipse() method takes four required parameters: X, Y, horizontal radius & vertical radius
var ellipse = paper.ellipse(195, 125, 170, 100).attr({
  "fill": "#17A9C6", // background color of the ellipse
  "stroke": "#2A6570", // ellipse's border color
  "stroke-width": 2 // border width
});

The preceding code will create an ellipse of width 170 x 2 and height 100 x 2. An ellipse created using the ellipse() method is shown in the following screenshot:

An ellipse

Complex shapes

It's pretty easy to create basic shapes, but what about complex shapes such as stars, octagons, or any other shape that isn't a circle, rectangle, or ellipse? It's time for the next step of Raphael wizardry.

Complex shapes are created using the path() method, which has only one parameter called pathString. Though the path string may look like a long genetic sequence with alphanumeric characters, it's actually very simple to read, understand, and draw with.
Before we get into path drawing, it's essential that we know how it's interpreted and the simple logic behind those complex shapes. Imagine that you are drawing on a piece of paper with a pencil. To draw something, you will place the pencil at a point on the paper and begin to draw a line or a curve, and then move the pencil to another point on the paper and start drawing a line or curve again. After several such cycles, you will have a masterpiece; at least, you will call it a masterpiece.

Raphael uses a similar method to draw, and it does so with a path string. A typical path string may look like this: M0,0L26,0L13,18L0,0. Let's zoom into this path string a bit. The first letter says M followed by 0,0. That's right genius, you've guessed it correctly. It says move to 0,0 position; the next letter L is line to 26,0. RaphaelJS will move to 0,0 and from there draw a line to 26,0. This is how the path string is understood by RaphaelJS, and paths are drawn using these simple notations.

Here is a comprehensive list of commands and their respective meanings:

M: move to (x, y)
Z: close path (none)
L: line to (x, y)
H: horizontal line to (x)
V: vertical line to (y)
C: curve to (x1, y1, x2, y2, x, y)
S: smooth curve to (x2, y2, x, y)
Q: quadratic Bézier curve to (x1, y1, x, y)
T: smooth quadratic Bézier curve to (x, y)
A: elliptical arc (rx, ry, x-axis-rotation, large-arc-flag, sweep-flag, x, y)
R: Catmull-Rom curve to (x1, y1 (x y)*)

The uppercase commands are absolute (M20,20); they are calculated from the 0,0 position of the drawing area (paper). The lowercase commands are relative (m20,20); they are calculated from the last point where the pen left off.

There are so many commands, which might feel like too much to take in, but don't worry; there is no need to remember every command and its format. Because we'll be using vector graphics editors to extract paths, it's essential only that you understand the meaning of each and every command, so that when someone asks you "hey genius, what does this mean?", you shouldn't be standing there clueless pretending to have not heard it.

The syntax for the path() method is as follows:

paper.path("pathString");

Let's consider the following example:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 350, 200);
// Creating a shape using the path() method and a path string
var tri = paper.path("M0,0L26,0L13,18L0,0").attr({
  "fill": "#17A9C6", // filling the background color
  "stroke": "#2A6570", // the color of the border
  "stroke-width": 2 // the size of the border
});

All these commands ("M0,0L26,0L13,18L0,0") use uppercase letters. They are therefore absolute values. The output for the previous example is shown in the following screenshot:

A triangle shape drawn using the path string

Extracting and using paths from an editor

Well, a triangle may be an easy shape to put into a path string. How about a complex shape such as a star? It's not that easy to guess and manually find the points. It's also impossible to create a fairly more complex shape like a simple flower or a 2D logo. Here in this section, we'll see a simple but effective method of drawing complex shapes with minimal fuss and sharp accuracy.

Vector graphics editors

The vector graphics editors are meant for creating complex shapes with ease, and they have some powerful tools at their disposal to help us draw. For this example, we'll create a star shape using an open source editor called Inkscape, and then extract those paths and use Raphael to get out the shape!
It is as simple as it sounds, and it can be done in four simple steps.

Step 1 – Creating the shape in the vector editor

Let's create some star shapes in Inkscape using the built-in shapes tool.

Star shapes created using the built-in shapes tool

Step 2 – Saving the shape as SVG

The paths used by SVG and RaphaelJS are similar. The trick is to use the paths generated by the vector graphics editor in RaphaelJS. For this purpose, the shape must be saved as an SVG file.

Saving the shape as an SVG file

Step 3 – Copying the SVG path string

The next step is to copy the path from the SVG and paste it into Raphael's path() method. SVG is a markup language, and therefore it's nested in tags. The SVG path can be found between the <path> and </path> tags. After locating the path tag, look for the d attribute. This will contain a long path sequence. You've now hit the bullseye.

The path string is highlighted

Step 4 – Using the copied path as a Raphael path string

After copying the path string from the SVG, paste it into Raphael's path() method:

var newpath = paper.path("copied path string from SVG").attr({
  "fill": "#5DDEF4",
  "stroke": "#2A6570",
  "stroke-width": 2
});

That's it! We have created a complex shape in RaphaelJS with absolute simplicity. Using this technique, we can only extract the path, not the styles. So the background color, shadow, or any other style in the SVG won't apply. We need to add our own styles to the path objects using the attr() method.

A screenshot depicting the complex shapes created in RaphaelJS using the path string copied from an SVG file is shown here:

Complex shapes created in RaphaelJS using path strings

Creating text

Text can be created using the text() method. Raphael gives us a way to add a battery of styles to the text object, right from changing colors to animating physical properties like position and size. The text() method takes three required parameters, namely x, y, and the text string. The syntax for the text() method is as follows:

paper.text(X, Y, "Raphael JS Text"); // the text method with X,Y coordinates and the text string

Let's consider the following example:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// creating text
var text = paper.text(40, 55, "Raphael Text").attr({
  "fill": "#17A9C6", // font color
  "font-size": 75, // font size in pixels
  // text-anchor indicates the starting position of the text relative to the X, Y position.
  // It can be "start", "middle" or "end"; the default is "middle"
  "text-anchor": "start",
  "font-family": "century gothic" // font family of the text
});

I am pretty sure that the text-anchor property is a bit heavy to munch. Well, there is a saying that a picture is worth a thousand words. The following diagram clearly explains the text-anchor property and its usage.

A brief explanation of the text-anchor property

A screenshot of the text rendered using the text() method is as follows:

Rendering text using the text() method

Manipulating the style of the element

The attr() method not only adds styles to an element, but it also modifies an existing style of an element. The following example explains the attr() method:

rect.attr('fill', '#ddd'); // This will update the background color of the rectangle to gray

Transforming an element

RaphaelJS not only creates elements, but it also allows the manipulating or transforming of any element and its properties dynamically.

Manipulating a shape

By the end of this section, you will know how to transform a shape.
There might be many scenarios wherein you might need to modify a shape dynamically. For example, when the user mouses over a circle, you might want to scale up that circle just to give visual feedback to the user. Shapes can be manipulated in RaphaelJS using the transform() method.

Transformation is done through the transform() method, and it is similar to the path() method where we add the path string to the method. transform() works in the same way, but instead of the path string, it takes a transformation string. There is only a moderate difference between a transformation string and a path string. There are four commands in the transformation string:

T: translate
S: scale
R: rotate in degrees
M: matrix

The fourth command, M, is of little importance here, so let's keep it out of the way to avoid confusion. The transformation string might look similar to a path string. In reality, they are different, not entirely but significantly, sharing little in common. The M in a path string means move to, whereas the same in a transformation string means matrix. The path string is not to be confused with a transformation string.

As with the path string, the uppercase letters are for absolute transformations and the lowercase ones for relative transformations. If the transformation string reads r90T100,0, then the element will rotate 90 degrees and move 100px on the x axis (that is, 100px to the right). If the same reads r90t100,0, then the element will rotate 90 degrees and, since the translation is relative, it will actually move vertically down 100px, as the rotation has tilted its axis.

I am sure the previous point will confuse most, so let me break it up. Imagine a rectangle with a head, and now this head is at the right side of the rectangle. For the time being, let's forget about absolute and relative transformation; our objective is to:

Rotate the rectangle by 90 degrees.
Move the rectangle 100px on the x axis (that is, 100px to the right).

It's critical to understand that the element's original values don't change when we translate it, meaning its x and y values will remain the same, no matter how we rotate or move the element. Now our first requirement is to rotate the rectangle by 90 degrees. The code for that would be rect.transform("r90"), where r stands for rotation. Fantastic; the rectangle is rotated by 90 degrees. Now pay attention to the next important step. We also need the rectangle to move 100px on the x axis, and so we update our previous code to rect.transform("r90t100,0"), where t stands for translation. What happens next is interesting: the translation is done through a lowercase t, which means it's relative. One thing about relative translations is that they take into account any previous transformation applied to the element, whereas absolute translations simply reset any previous transformations before applying their own.

Remember the head of the rectangle on the right side? Well, the rectangle's x axis falls on the right side. So when we say, move 100px on the x axis, it is supposed to move 100px towards its right side, that is, in the direction where its head is pointing. Since we have rotated the rectangle by 90 degrees, its head is no longer on the right side but is facing the bottom. So when we apply the relative translation, the rectangle will still move 100px along its x axis, but the x axis is now pointing down because of the rotation. That's why the rectangle will move 100px down when you expect it to move to the right.
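Here is a minimal sketch of the relative case just described; the rectangle's geometry and colors are our own choices, not from the article:

// a rectangle whose "head" points right along its own x axis
var rect = paper.rect(20, 20, 80, 40).attr({ "fill": "#17A9C6" });
// lowercase t is relative: rotate 90 degrees, then translate 100px along
// the element's own x axis, which the rotation has tilted to point down,
// so the rectangle ends up 100px lower instead of 100px to the right
rect.transform("r90t100,0");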
What happens when we apply absolute translation is entirely different from the previous case. When we again update our code for absolute translation to rect.transform("r90T100,0"), the axis of the rectangle is not taken into consideration. However, the axis of the paper is used, as absolute transformations don't take previous transformations into account; they simply reset them before applying their own. Therefore, the rectangle will move 100px to the right after rotating 90 degrees, as intended.

Absolute transformations will ignore all the previous transformations on that element, but relative transformations won't. Getting a grip on this simple logic will save you a lot of frustration in the future, while developing as well as while debugging.

The following is a screenshot depicting relative translation:

Using relative translation

The following is a screenshot depicting absolute translation:

Using absolute translation

Notice the gap on top of the rotated rectangle; it's moved 100px down on the one with relative translation, and there is no such gap on top of the rectangle with absolute translation.

By default, the transform method will append to any transformation already applied to the element. To reset all transformations, use element.transform(""). Adding an empty string to the transform method will reset all the previous transformations on that element.

It's also important to note that the element's original x,y position will not change when translated. The element will merely assume a temporary position, but its original position will remain unchanged. Therefore, after translation, if we ask for the element's position programmatically, we will get the original x,y, not the translated one, just so we don't jump from our seats and call RaphaelJS dull!

The following is an example of scaling and rotating a triangle:

// creating a triangle using the path string
var tri = paper.path("M0,0L104,0L52,72L0,0").attr({
  "fill": "#17A9C6",
  "stroke": "#2A6570",
  "stroke-width": 2
});

// transforming the triangle
tri.animate({
  "transform": "r90t100,0s1.5"
}, 1000);
// the transformation string should be read as: rotate the element by 90 degrees,
// translate it 100px on the x axis, and scale it up by 1.5 times

The following screenshot depicts the output of the preceding code:

Scaling and rotating a triangle

The triangle is transformed using relative translation (t). Now you know the reason why the triangle has moved down rather than moving to its right.

Animating a shape

What good is a magic wand if it can't animate inanimate objects! RaphaelJS can animate as smooth as butter almost any property, from color and opacity to width, height, and so on, with little fuss.

Animation is done through the animate() method. This method takes two required parameters, namely final values and milliseconds, and two optional parameters, easing and callback. The syntax for the animate() method is as follows:

Element.animate({
  // animation properties in key-value pairs
}, time, easing, callback_function);

Easing is that special effect with which the animation is done; for example, if the easing is bounce, the animation will appear like a bouncing ball. The following are the several easing options available in RaphaelJS:

linear
< or easeIn or ease-in
> or easeOut or ease-out
<> or easeInOut or ease-in-out
backIn or back-in
backOut or back-out
elastic
bounce
Let's consider the example of animating the width and height of a rectangle:

// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);

// a rectangle to animate (created here so that the snippet is self-contained)
var rect = paper.rect(50, 50, 100, 100);

rect.animate({
    "width": 200,  // final width
    "height": 200  // final height
}, 300, "bounce", function(){
    // this optional callback runs when the animation is complete;
    // here it prints 'Animation complete' in the page
    $("#animation_status").html("Animation complete");
});

The following screenshot shows a rectangle before animation: Rectangle before animation

A screenshot demonstrating the use of a callback function when the animation is complete is as follows; the text Animation complete will appear in the browser once the animation finishes: Use of a callback function

The following code animates the background color and opacity of a rectangle:

rect.animate({
    "fill": "#ddd",      // final color
    "fill-opacity": 0.7  // final opacity
}, 300, "easeIn", function(){
    // this optional callback alerts 'done' when the animation is complete
    alert("done");
});

Here the rectangle is animated from blue to gray, and from an opacity of 1 down to 0.7, over a duration of 300 milliseconds. Opacity in RaphaelJS works the same way as in CSS, where 1 is opaque and 0 is transparent.
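Since the callback fires only after the animation finishes, callbacks can also be used to chain animations, one starting when the previous one ends. Here is a small sketch reusing the rect element from above; the values are arbitrary:

// grow the rectangle, then fade it once the growth completes
rect.animate({
    "width": 200,
    "height": 200
}, 300, "bounce", function(){
    // this inner animate() call runs only after the first animation ends
    rect.animate({
        "fill-opacity": 0.3
    }, 500, "easeOut");
});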
IBM Cognos Insight

Packt
12 Sep 2013
9 min read
(For more resources related to this topic, see here.) An example case for IBM Cognos Insight

Consider an example of a situation where an organization from the retail industry heavily depends on spreadsheets as its source of data collection, analysis, and decision making. These spreadsheets contain data that is used to analyze customers' buying patterns across the various products sold by multiple channels, in order to boost sales across the company. The analysis hopes to reveal customers' buying patterns demographically, streamline sales channels, improve supply chain management, give an insight into forecast spending, and redirect budgets to advertising, marketing, and human capital management, as required.

As this analysis involves multiple departments and resources working with spreadsheets, one of the challenges is to have everyone speak in similar terms and numbers. Collaboration across departments is important for a successful analysis. Typically in such situations, multiple spreadsheets are created across resource pools and segregated either by time, product, or region (due to the technical limitations of spreadsheets), and the analysis often requires the consolidation of these spreadsheets before an educated decision can be made. After the number-crunching, a consolidated spreadsheet showing high-level summaries is sent out to executives, while the details remain on other tabs within the same spreadsheet or in altogether separate spreadsheet files. This manual procedure has a high probability of errors.

A similar data analysis process in IBM Cognos Insight results in faster decision making, by keeping both the details and the summaries in a highly compressed Online Analytical Processing (OLAP) in-memory cube. Using the intuitive drag-and-drop functionality or the smart-metadata import wizard, the spreadsheet data now appears instantaneously (due to the in-memory analysis) in graphical and pivot table formats. Similar categorical data values, such as customer, time, product, sales channel, and retail location, are stored as dimension structures. All the numerical values bearing factual data, such as revenue, product cost, and so on, defined as measures, are stored in the OLAP cube along with the dimensions. Two or more of these dimensions and measures together form a cube view that can be sliced and diced and viewed at a summarized or a detailed level. Within each dimension, elements such as customer name, store location, and revenue amount generated are created; these can be used in calculations and trend analysis. The dimensions can be pulled out onto the analysis canvas as explorer points that can be used for data filtering and sorting. Calculations, business rules, and differentiator metrics can be added to the cube view to enhance the analysis.

After enhancements to the IBM Cognos Insight workspace have been saved, these workspaces or files can be e-mailed and distributed as offline analyses. The users also have the option to publish the workspace to the IBM Cognos Business Intelligence web portal, Cognos Connection, or to IBM Cognos Express, both of which are targeted at larger audiences, where this information can be shared with broader workgroups. Security layers can be included to protect sensitive data, if required. The publish-and-distribute option within IBM Cognos Insight is used for advanced analytics features and write-back functionality in larger deployments,
where users can modify plans online or offline and sync up to the enterprise environment on an as-and-when basis. As an example, an analyst can create what-if scenarios to simulate the introduction of a new promotional price for a set of smart phones during high foot-traffic times in order to drive up sales, or simulate an extension of store hours during the summer months to analyze the effect on overall store revenue.

The following diagram shows the step-by-step process of dropping a spreadsheet into IBM Cognos Insight and viewing dashboard- and scorecard-style reports instantaneously, which can then be shared on the IBM Cognos BI web portal or published to an IBM TM1 environment. The preceding screenshot demonstrates the steps from raw data in spreadsheets being imported into IBM Cognos Insight to reveal a dashboard-style report instantaneously. Adding calculations to this workspace creates scorecard-type graphical variances, thus giving an overall picture through rich graphics.

Using analytics successfully

Over the past few years, there have been huge improvements in the technology and processes for gathering data. Using Business Analytics and applications such as IBM Cognos Insight, we can now analyze and accurately measure anything and everything. This leads to the question: are we using analytics successfully? The following high-level recommendations should be used as guidance for organizations that are either attempting a Business Analytics implementation for the first time or are already involved with Business Analytics, in both cases working towards a successful implementation:

The first step is to prioritize the targets that will produce intelligent analytics from the available trustworthy data. Choosing this target wisely and thoughtfully has an impact on the success rate of the implementation. Usually, these are high-value targets that need problem solving and/or quick wins to justify the need for and/or investment in a Business Analytics solution. Avoid areas with a potential for budget cuts and/or corporate cultural and political battles, which are considered to be the major factors leading to an implementation pitfall. Improve your chances by asking the question: where will we achieve maximum business value?

Selecting the appropriate product to deliver the technology is key to success: a product that is suitable for all skill levels and that can be supported by the organization's infrastructure. IBM Cognos Insight is one such product, with a minimal learning curve thanks to its ease of use and vast feature set. The analysis produced using IBM Cognos Insight can then be shared by publishing to an enterprise-level solution such as IBM Cognos BI, IBM Cognos Express, or IBM TM1. The product reduces dependency on IT departments, in terms of both personnel and IT resources, due to its small learning curve, easy setup, and intuitive look and feel. Its sharing and collaboration capabilities eliminate the need for multiple silos of spreadsheets, one of the reasons why organizations want to move towards a more structured and regulated Enterprise Analytics approach.
Lastly, organize a governing body such as an Analytics Competency Center (ACC) or an Analytics Center of Excellence (ACE) whose primary responsibilities are to do the following:

Provide the leadership and build the team
Plan and manage the Business Analytics vision and strategy (the BA roadmap)
Act as a governing body, maintaining standardization at the enterprise level
Develop, test, and deliver Business Analytics solutions
Document all processes and procedures, both functional and technical
Train and support the end users of Business Analytics
Find ways to increase the Return on Investment (ROI)
Integrate Business Analytics with newer technologies such as mobile and cloud computing

The goal of a mature, enterprise-wide analytics solution is that any employee within the organization, be it an analyst, an executive, or a member of the management team, can have their business-related questions answered in real time or near real time. These answers should also make it possible to predict the unknown and to prepare better for unforeseen circumstances. With the success of a Business Analytics solution and a realized ROI, a question that should be asked is: is the solution robust and flexible enough to expand regionally or globally? Can it sustain a merger or acquisition with minimal consolidation effort? If the Business Analytics solution provides confidence in all of the above, the final question should be: can the Business Analytics solution be provided as a service to the organization's suppliers and customers?

In 2012, a global study was conducted jointly by IBM's Institute of Business Value (IBV) and the MIT Sloan Management Review. This study, which included 1700 CEOs globally, reinforced the fact that one of the top objectives within their organizations was sharing and collaboration. IBM Cognos Insight, the desktop analysis application, provides collaborative features that allow users to launch development efforts via IBM's Cognos Business Intelligence, Cognos Express, and Performance Management enterprise platforms.

Let us consider a fictitious company called PointScore. Having completed its marketing, sales, and price strategy analysis, PointScore is now ready to demonstrate its research and analysis efforts to its client. Using IBM Cognos Insight, PointScore has three available options. All of these leverage the Cognos suite of products that its client has been using and is familiar with, and each of them can be used to share the information with a larger audience within the organization.

Though technical, this article is written for a non-technical audience as well. IBM Cognos Insight is a product that has its roots embedded in Business Intelligence, and its foundation is built upon Performance Management solutions. This article provides readers with Business Analytics techniques and discusses the technical aspects of the product, describing its features and benefits. The goal of writing this article was to make you feel confident about the product, and to expand your creativity so that you can build better analyses and workspaces using Cognos Insight. The article focuses on the strengths of the product, which are sharing and collaborating on development efforts in an existing IBM Cognos BI, Cognos Express, or TM1 environment. This sharing is possible because of the tight integration among all products under the IBM Business Analytics umbrella.
Summary

After reading this article, you should be able to tackle Business Analytics implementations. It should also help you leverage the sharing capabilities to reach the end goal of spreading the value of Business Analytics throughout your organization.

Resources for Article: Further resources on this subject: How to Set Up IBM Lotus Domino Server [Article], Tips and Tricks on IBM FileNet P8 Content Manager [Article], Reporting Planning Data in IBM Cognos 8: Publish and BI Integration [Article]
Video conversion into the required HTML5 Video playback

Packt
12 Sep 2013
5 min read
(For more resources related to this topic, see here.) If you have issues with playback support and are assuming that you can play any video in Windows Media Player, be aware that this is not so, as Windows Media Player doesn't support all formats. This article will show you how to fix this and get your files playing.

Transcoding audio files (Must know)

We start this section by getting ready the files we are going to use later on; it is likely you already have some music tracks, but not in the right format. We will fix that in this task by using a shareware program called Switch Audio File Converter, which is available from http://www.nch.com.au/switch for approximately USD 40.

Getting ready

For this task, you need to download a copy of the Switch Sound File Converter application; it is available from http://www.nch.com.au/switch/index.html. Note that a license is required for encoding AMR files or for using MP3 files in certain instances; these licenses can be purchased at the same time as the main license.

How to do it...

The first thing to do is install the software, so let's go ahead and run switchsetup.exe; note that for the purposes of this demo, you should not select any of the additional related programs when requested.
Double-click the application to open it, then click on Add File and browse to, and select, the file you want to convert.
Click on Output Format and change it to .ogg; the program will automatically download the required converter as soon as you click on Convert.
The file is saved by default into the Music folder underneath your profile.

How it works...

Switch Sound File Converter has been designed to make the conversion process as simple as possible; this includes downloading any extra components that are required for encoding or decoding audio files. You can alter the encoding settings, although you should find that for general use this is not necessary.

There's more...

There are lots of converters available that you can try; I picked this one as it is quick and easy to use and doesn't have a large footprint (unlike some others). If you prefer, you can also use online services to accomplish the same task; two examples are Fre:ac (http://www.freac.org) and Online-Convert.com (http://www.online-convert.com). Note, though, that some sites will record details such as your IP address and what it is you are converting, and may store copies for a period of time.

Installing playback support: codecs (Must know)

Now that we have converted our audio files ready for playback, it's time to ensure that we can actually play them back on our PCs as well as in our browsers. Most of the latest browsers will play at least one of the formats we've created in the previous task, but it is likely that you won't be able to play them outside of the browser. Let's take a look at how we can fix this by updating the codecs installed on your PC. For those of you not familiar with codecs, they are designed to help encode assets when the audio file is created and decode them as part of playback. Software and hardware makers decide the makeup of each codec based on which containers and technologies they should support; a number of factors such as file size, quality, and bandwidth all play a part in their decisions. Let's take a look at how we can update our PCs to allow for proper playback of HTML5 audio and video.
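Incidentally, before relying on a desktop codec pack, you can ask the browser itself what it is able to play. The following is a minimal JavaScript sketch using the standard canPlayType() method; the file names are made up for illustration:

// create a detached audio element purely for feature detection
var audio = document.createElement("audio");

if (audio.canPlayType) {
    // canPlayType() returns "probably", "maybe", or "" (empty string)
    console.log("OGG support: " + audio.canPlayType("audio/ogg"));
    console.log("MP3 support: " + audio.canPlayType("audio/mpeg"));

    // pick a source based on the result, e.g. track.ogg or track.mp3
    audio.src = audio.canPlayType("audio/ogg") ? "track.ogg" : "track.mp3";
}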
Getting ready

There are lots of individuals and companies who have produced different codecs, with differing results. We will take a look at one package that seems to work very well on Windows: the K-Lite Codec Pack. You need to download a copy of the pack, which is available from http://fileforum.betanews.com/detail/KLite-Codec-Pack-Basic/1094057842/1; use the blue Download link on the right side of the page. This will download the basic version, which is more than sufficient for our needs at this stage.

How to do it...

Download, and then run, K-Lite_Codec_Pack_860_Basic.exe, and click on Next.
On the Installation Mode screen, select the Simple option.
On the File Associations page, select Windows Media Player.
On the File associations for Windows Media Player screen, click on Select all audio.
On the Thumbnails screen, click on Next.
On the Speaker configuration screen, click on Next, then Install.
The software will confirm when the codecs have been installed.

How it works...

In order to play back HTML5-format audio in Windows Media Player, you need to ensure you have the correct support in place; Windows Media Player doesn't understand the encoding formats used for HTML5 audio by default. We can overcome this by installing additional codecs that tell Windows how to encode or decode a particular file format; K-Lite's package aims to take the pain out of this process.

There's more...

The package we've looked at in this task is only available for Windows; if you are a Mac user, you will need to use an alternative method. There are lots of options available online; one such option is X Lossless Decoder, available from http://www.macupdate.com/app/mac/23430/x-lossless-decoder, which includes support for both .ogg and .mp4 formats.

Summary

We've taken a look at recipes that show you how to transcode media into HTML5-ready formats and how to install playback support. This is only just the start of what you can achieve; there is a whole world out there to explore.

Resources for Article: Further resources on this subject: Basic use of Local Storage [Article], Customize your LinkedIn profile headline [Article], Blocking versus Non blocking scripts [Article]
One-page Application Development

Packt
12 Sep 2013
10 min read
(For more resources related to this topic, see here.)

Model-View-Controller or MVC

Model-View-Controller (MVC) is a heavily used design pattern in programming. A design pattern is essentially a reusable solution that solves common problems in programming; for example, the Namespace and Immediately-Invoked Function Expression patterns are used throughout this article. MVC is another pattern that helps solve the issue of separating the presentation and data layers. It helps us keep our markup and styling out of the JavaScript, keeping our code organized, clean, and manageable: all essential requirements for creating one-page applications. So let's briefly discuss the parts of MVC, starting with models.

Models

A model is a description of an object, containing the attributes and methods that relate to it. Think of what makes up a song, for example: the track's title, artist, album, year, duration, and more. In its essence, a model is a blueprint of your data.

Views

The view is a physical representation of the model. It essentially displays the appropriate attributes of the model to the user, through the markup and styles used on the page. Accordingly, we use templates to populate our views with the data provided.

Controllers

Controllers are the mediators between the model and the view. The controller accepts actions and communicates information between the model and the view as necessary. For example, a user can edit properties on a model; when this is done, the controller tells the view to update according to the user's updated information.

Relationships

The relationships established in an MVC application are critical to sticking with the design pattern. In MVC, theoretically, the model and view never speak with each other. Instead, the controller does all the work: it describes an action, and when that action is called, the model, the view, or both update accordingly. This type of relationship is established in the following diagram:

The diagram shows a traditional MVC structure; note in particular that the communication between the controller and the model is two-way (the controller can send data to and receive data from the model), and the same applies between the controller and the view. The view and the model, however, never communicate, and there's a good reason for that. We want to make sure our logic is contained appropriately: if we wanted to delegate events properly for user actions, that code would go into the view; if we wanted utility methods, such as a getName method that combines a user's first name and last name appropriately, that code would be contained within a user model; and any sort of action that pertains to retrieving and displaying data would be contained in the controller. Theoretically, this pattern helps us keep our code organized, clean, and efficient. In many cases this pattern can be applied directly, especially in many backend languages like Ruby, PHP, and Java. However, when we start applying it strictly to the frontend, we are confronted with many structural challenges. At the same time, we need this structure to create solid one-page applications.
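To make these relationships concrete before we look at the libraries, here is a deliberately tiny sketch of the pattern in plain JavaScript; the names are illustrative only and are not part of our sample application:

// model: the data and its blueprint, nothing else
function SongModel(track) {
    this.track = track;
}

// view: renders whatever model it is handed, never alters it
var songView = {
    render: function(model) {
        Zepto("#song-title").text(model.track);
    }
};

// controller: mediates between the two
var songController = {
    rename: function(model, view, newTitle) {
        model.track = newTitle; // update the model
        view.render(model);     // tell the view to refresh
    }
};

songController.rename(new SongModel("Old Title"), songView, "New Title");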
The following sections will introduce you to the libraries we will use to solve these issues and more.

Introduction to Underscore.js

One of the libraries we will be utilizing in our sample application is Underscore.js. Underscore has become extremely popular in the last couple of years due to the many utility methods it provides developers without extending built-in JavaScript objects, such as String, Array, or Object. While it provides many useful methods, the suite has also been optimized and tested across many of the most popular web browsers, including Internet Explorer. For these reasons, the community has widely adopted this library and continues to support it.

Implementation

Underscore is extremely easy to implement in our applications. In order to get Underscore going, all we need to do is include it on our page like so:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <title></title>
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width">
</head>
<body>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.3/underscore-min.js"></script>
</body>
</html>

Once we include Underscore on our page, we have access to the library at the global scope through the _ object. We can then access any of the utility methods provided by the library with _.methodName. You can review all of the methods provided by Underscore online (http://underscorejs.org/), where all methods are documented and accompanied by samples of their implementation. For now, let's briefly review some of the methods we'll be using in our application.

_.extend

The extend method in Underscore is very similar to the extend method we have been using from Zepto (http://zeptojs.com/#$.extend). If we look at the documentation provided on Underscore's website (http://underscorejs.org/#extend), we can see that it takes multiple objects, with the first parameter being the destination object that gets returned once all objects are combined:

Copy all of the properties in the source objects over to the destination object, and return the destination object. It's in-order, so the last source will override properties of the same name in previous arguments.

As an example, we can take a Song object and create an instance of it while also overriding its default attributes. This can be seen in the following example:

<script>
    function Song() {
        this.track = "Track Title";
        this.duration = 215;
        this.album = "Track Album";
    };

    var Sample = _.extend(new Song(), {
        'track': 'Sample Title',
        'duration': 0,
        'album': 'Sample Album'
    });
</script>

If we log out the Sample object, we'll notice that it has inherited from the Song constructor and overridden the default attributes track, duration, and album. Although we could improve the performance of inheritance using traditional JavaScript, using an extend method helps us focus on delivery. We'll look at how we can utilize this method to create a base architecture within our sample application later on in the article.

_.each

The each method is extremely helpful when we want to iterate over an Array or Object. In fact, this is another method that we can find in Zepto and other popular libraries like jQuery. Although each library's implementation and performance differ a little, we'll be using Underscore's _.each method so that we can stick with our application's architecture without introducing new dependencies. As per Underscore's documentation (http://underscorejs.org/#each), the use of _.each is similar to other implementations:

Iterates over a list of elements, yielding each in turn to an iterator function. The iterator is bound to the context object, if one is passed. Each invocation of iterator is called with three arguments: (element, index, list).
If list is a JavaScript object, the iterator's arguments will be (value, key, list). Delegates to the native forEach function if it exists.

Let's take a look at an example of using _.each with the code we created in the previous section. We'll loop through the instance of Sample and log out the object's properties, including track, duration, and album. Because Underscore's implementation allows us to loop through an Object just as easily as an Array, we can use this method to iterate over our Sample object's properties:

<script>
    function Song() {
        this.track = "Track Title";
        this.duration = 215;
        this.album = "Track Album";
    };

    var Sample = _.extend(new Song(), {
        'track': 'Sample Title',
        'duration': 0,
        'album': 'Sample Album'
    });

    _.each(Sample, function(value, key, list){
        console.log(key + ": " + value);
    });
</script>

The output from our log should look something like this:

track: Sample Title
duration: 0
album: Sample Album

As you can see, it's extremely easy to use Underscore's each method with arrays and objects. In our sample application, we'll use this method to loop through an array of objects to populate our page; but for now, let's review one last important method we'll be using from Underscore's library.

_.template

Underscore makes it extremely easy to integrate templating into our applications. Out of the box, Underscore comes with a simple templating engine that can be customized for our purposes; in fact, it can also precompile your templates for easy debugging. Because Underscore's templating can interpolate variables, we can utilize it to dynamically change the page as we wish. The documentation provided by Underscore (http://underscorejs.org/#template) helps explain the different options we have when using templates:

Compiles JavaScript templates into functions that can be evaluated for rendering. Useful for rendering complicated bits of HTML from JSON data sources. Template functions can both interpolate variables, using <%= … %>, as well as execute arbitrary JavaScript code, with <% … %>. If you wish to interpolate a value, and have it be HTML-escaped, use <%- … %>. When you evaluate a template function, pass in a data object that has properties corresponding to the template's free variables. If you're writing a one-off, you can pass the data object as the second parameter to template in order to render immediately instead of returning a template function.

Templating on the frontend can be difficult to understand at first; after all, we were used to querying a backend, using AJAX, and retrieving markup that would then be rendered on the page. Today, best practices dictate that we use RESTful APIs that send and retrieve data. So, theoretically, you should be working with data that is properly formed and can be interpolated. But where do our templates live, if not on the backend? Easily answered: in our markup:

<script type="tmpl/sample" id="sample-song">
    <section>
        <header>
            <h1><%= track %></h1>
            <strong><%= album %></strong>
        </header>
    </section>
</script>

Because the preceding script element has a type the browser does not recognize, the browser skips over its contents instead of executing them.
And because we can still target this script element using its ID, we can pick up its contents and then interpolate them with data using Underscore's template method:

<script>
    function Song() {
        this.track = "Track Title";
        this.duration = 215;
        this.album = "Track Album";
    };

    var Sample = _.extend(new Song(), {
        'track': 'Sample Title',
        'duration': 0,
        'album': 'Sample Album'
    });

    var template = _.template(Zepto('#sample-song').html(), Sample);
    Zepto(document.body).prepend(template);
</script>

The result of running the page would be the following markup:

<body>
    <section>
        <header>
            <h1>Sample Title</h1>
            <strong>Sample Album</strong>
        </header>
    </section>
    <!-- scripts and template go here -->
</body>

As you can see, the content from within the template is prepended to the body and the data is interpolated, displaying the properties we wish to display; in this case, the title and album name of the song. If this is a bit difficult to understand, don't worry about it too much; I myself had a lot of trouble trying to pick up the concept when the industry started moving to one-page applications that run off raw data (JSON). For now, these are the methods we'll be using consistently within the sample application to be built in this article. You are encouraged to experiment with the Underscore.js library to discover some of the more advanced features that make your life easier, such as _.map, _.reduce, _.indexOf, _.debounce, and _.clone. However, let's move on to Backbone.js and how this library will be used to create our application.
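Before we do, here is a small, purely illustrative taste of two of those extra methods; the data values and the wait time below are made up:

<script>
    // _.map transforms each item in a list and returns a new array
    var durations = _.map([215, 187, 242], function(seconds){
        return Math.round(seconds / 60) + " min";
    });
    console.log(durations); // ["4 min", "3 min", "4 min"]

    // _.debounce returns a version of a function that only fires once
    // it has stopped being called for the given wait time (in ms)
    var onResize = _.debounce(function(){
        console.log("resize settled");
    }, 250);
    Zepto(window).on("resize", onResize);
</script>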
Photo Pad

Packt
11 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Time for action – creating Photo Pad

In the HTML file, we will add a toolbar with buttons for Load, Save, and Effects:

<body>
    <div id="app">
        <header>Photo Pad</header>
        <div id="main">
            <div id="toolbar">
                <div class="dropdown-menu">
                    <button data-action="menu">Load</button>
                    <ul id="load-menu" data-option="file-picker" class="file-picker menu">
                        <li data-value="file-picker">
                            <input type="file" />
                        </li>
                    </ul>
                </div>
                <button data-action="save">Save</button>
                <div class="dropdown-menu">
                    <button data-action="menu">Effects</button>
                    <ul data-option="applyEffect" class="menu">
                        <li data-value="invert">Invert</li>
                    </ul>
                </div>
            </div>
            <canvas width="0" height="0">
                Sorry, your browser doesn't support canvas.
            </canvas>
        </div>
        <footer>Click load to choose a file</footer>
    </div>
</body>

The Load toolbar item has a drop-down menu, but instead of menu items it holds a file input control where the user can select a file to load. The Effects item has a drop-down menu of effects. For now we just have one in there, Invert, but we will add more later. For our CSS we will copy everything we had in canvasPad.css to photoPad.css, so that we get all of the same styling for the toolbar and menus. We will also use the Toolbar object in toolbar.js.

In our JavaScript file we will change the application object's name to PhotoPadApp. We also need a couple of variables in PhotoPadApp. We will set the canvas variable to the <canvas> element, the context variable to the canvas's context, and define a $img variable to hold the image we will be showing. Here we initialize it to a new <img> element using jQuery:

function PhotoPadApp() {
    var version = "5.2",
        canvas = $("#main>canvas")[0],
        context = canvas.getContext("2d"),
        $img = $("<img>");

The first toolbar action we will implement is the Save button, since we already have that code from Canvas Pad. We check the action in toolbarButtonClicked() to see if it's "save", and if so we get the data URL and open it in a new browser window:

function toolbarButtonClicked(action) {
    switch (action) {
        case "save":
            var url = canvas.toDataURL();
            window.open(url, "PhotoPadImage");
            break;
    }
}

What just happened?

We created the scaffolding for the Photo Pad application, with toolbar items for Load, Save, and Effects, and implemented the save function in the same way as we did for Canvas Pad. The next thing we'll implement is the Load drop-down menu, since we need an image to manipulate. When the Load toolbar button is clicked, it will show the drop-down menu with the file input control in it that we defined previously. All of that we get for free, because it's just another drop-down menu in our toolbar. But before we can do that, we need to learn about the HTML5 File API.

The File API

We may not be able to save files directly to the user's filesystem, but we can access files using HTML5's File API. The File API allows you to get information about, and load the contents of, files that the user selects. The user can select files using an input element with a type of file. The process for loading a file works in the following way:

The user selects one or more files using an <input type="file"> element.
We get the list of files from the input element's files property. The list is a FileList object containing File objects. You can enumerate over the file list and access the files just as you would an array.
The File object contains three fields:
name: This is the filename. It doesn't include path information.
size: This is the size of the file in bytes.
type: This is the MIME type, if it can be determined.
Use a FileReader object to read the file's data. The file is loaded asynchronously; after the file has been read, the reader calls its onload event handler.

FileReader has a number of methods for reading files that take a File object and return the file's contents:

readAsArrayBuffer(): This method reads the file contents into an ArrayBuffer object.
readAsBinaryString(): This method reads the file contents into a string as binary data.
readAsText(): This method reads the file contents into a string as text.
readAsDataURL(): This method reads the file contents into a data URL string. You can use this as the URL for loading an image.

Time for action – loading an image file

Let's add some code to the start() method of our application to check whether the File API is available. You can determine if a browser supports the File API by checking that the File and FileReader objects exist:

this.start = function() {
    // code not shown...
    if (window.File && window.FileReader) {
        $("#load-menu input[type=file]").change(function(e) {
            onLoadFile($(this));
        });
    } else {
        loadImage("images/default.jpg");
    }
}

First we check whether the File and FileReader objects are available in the window object. If so, we hook up a change event handler for the file input control to call the onLoadFile() method, passing in the <input> element wrapped in a jQuery object. If the File API is not available, we just load a default image by calling loadImage(), which we will write later.

Let's implement the onLoadFile() event handler method:

function onLoadFile($input) {
    var file = $input[0].files[0];
    if (file.type.match("image.*")) {
        var reader = new FileReader();
        reader.onload = function() {
            loadImage(reader.result);
        };
        reader.readAsDataURL(file);
    } else {
        alert("Not a valid image type: " + file.type);
        setStatus("Error loading image!");
    }
}

Here we get the file that was selected by looking at the file input's files array and taking the first one. Next we check the file's type, which is a MIME type, to make sure it is an image; we use the String object's regular expression match() method to check that the type contains "image" (image MIME types start with image/). If it is an image, we create a new instance of the FileReader object. Then we set its onload event handler to call the loadImage() method, passing in the FileReader object's result field, which contains the file's contents. Lastly, we call the FileReader object's readAsDataURL() method, passing in the File object, to start loading the file asynchronously. If it isn't an image file, we show an alert dialog box with an error message and show an error message in the footer by calling setStatus().

Once the file has been read, the loadImage() method will be called. Here we will use the data URL we got from the FileReader object's result field to draw the image into the canvas:

function loadImage(url) {
    setStatus("Loading image");
    $img.attr("src", url);
    $img[0].onload = function() {
        // Here "this" is the image
        canvas.width = this.width;
        canvas.height = this.height;
        context.drawImage(this, 0, 0);
        setStatus("Choose an effect");
    }
    $img[0].onerror = function() {
        setStatus("Error loading image!");
    }
}

First we set the src attribute of the image element to the data URL we got after the file was loaded. This causes the image element to load the new image. Next we define the onload event handler for the image, so that we are notified when the image has loaded. Note that inside the onload event handler, this points to the <img> element.
First we change the canvas's width and height to match the image's width and height. Then we draw the image on the canvas using the context's drawImage() method, which takes the image to draw and the x and y coordinates of where to draw it; in this case we draw it at the top-left corner of the canvas (0, 0). Lastly, we set an onerror event handler for the image, so that if an error occurs while loading the image, we show an error message in the footer.

What just happened?

We learned how to use the File API to load an image file from the user's filesystem. After the image was loaded, we resized the canvas to the size of the image and drew the image onto the canvas.
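This excerpt stops before the Effects menu is wired up, but since the toolbar already defines an Invert menu item, here is a hedged sketch of what such an effect could look like using the standard canvas pixel APIs (the function name and where it gets called from are assumptions, not the book's actual implementation):

function applyInvertEffect() {
    // read the pixels currently on the canvas
    var imageData = context.getImageData(0, 0, canvas.width, canvas.height),
        pixels = imageData.data;

    // each pixel is four bytes: red, green, blue, alpha
    for (var i = 0; i < pixels.length; i += 4) {
        pixels[i] = 255 - pixels[i];         // invert red
        pixels[i + 1] = 255 - pixels[i + 1]; // invert green
        pixels[i + 2] = 255 - pixels[i + 2]; // invert blue
        // the alpha channel is left untouched
    }

    // write the modified pixels back to the canvas
    context.putImageData(imageData, 0, 0);
}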
Master Virtual Desktop Image Creation

Packt
11 Sep 2013
11 min read
(For more resources related to this topic, see here.) When designing your VMware Horizon View infrastructure, creating a Virtual Desktop master image is second only to infrastructure design in terms of importance. The reason for this is simple: as ubiquitous as Microsoft Windows is, it was never designed to be a hosted Virtual Desktop. The good news is that, with careful planning and a thorough understanding of what your end users need, you can build a Windows desktop that serves all your needs while requiring the bare minimum of infrastructure resources.

A default installation of Windows contains many optional components and configuration settings that are either unsuitable for, or not needed in, a Virtual Desktop environment, and understanding their impact is critical to maintaining Virtual Desktop performance over time and during peak levels of use. Uninstalling unneeded components and disabling services or scheduled tasks that are not required will help reduce the amount of resources the Virtual Desktop requires, and ensure that the View infrastructure can properly support the planned number of desktops even as resources are oversubscribed. Oversubscription is defined as having assigned more resources than are physically available. This is most commonly done with processor resources in Virtual Desktop environments, where a single server processor core may be shared between multiple desktops. As the average desktop does not require 100 percent of its assigned resources at all times, we can share those resources between multiple desktops without affecting performance.

Why is desktop optimization important?

To date, Microsoft has only ever released versions of Windows designed to be installed on physical hardware. This isn't to say that Microsoft is unique in this regard, as neither Linux nor Mac OS X offers an installation routine optimized for a virtualized hardware platform. While nothing stops you from using a default installation of any OS or software package in a virtualized environment, you may find it difficult to maintain consistent levels of performance in Virtual Desktop environments, where many of the resources are shared and, in almost every case, oversubscribed in some manner. In this section, we will examine a sample of the CPU and disk IO resources that can be recovered were you to optimize the Virtual Desktop master image.

Due to the technological diversity that exists from one organization to the next, optimizing your Virtual Desktop master image is not an exact science. The optimization techniques used, and their end results, will likely vary from one organization to the next due to factors unrelated to View or vSphere. The information contained within this article will serve as a foundation for optimizing a Virtual Desktop master image, focusing primarily on the operating system.

Optimization results – desktop IOPS

Desktop optimization benefits one infrastructure component more than any other: storage. Until all-flash storage arrays achieve price parity with the traditional spinning-disk arrays many of us use today, reducing the per-desktop IOPS required will continue to be an important part of any View deployment. On a per-disk basis, a flash drive can accommodate more than 15 times the IOPS of an enterprise SAS or SCSI disk, or 30 times the IOPS of a traditional desktop SATA disk.
Organizations that choose an all-flash array may find that they have more than sufficient IOPS capacity for their Virtual Desktops, even without doing any optimization. The following graph shows the reduction in IOPS that occurred after performing the optimization techniques described later in this article. The optimized desktop generated 15 percent fewer IOPS during the user workload simulation. By itself that may not seem like a significant reduction, but when multiplied by hundreds or thousands of desktops the savings become much more significant.

Optimization results – CPU utilization

View supports a maximum of 16 Virtual Desktops per physical CPU core. There is no guarantee that your View implementation will attain this high consolidation ratio, though, as desktop workloads vary from one type of user to another. The optimization techniques described in this article will help maximize the number of desktops you can run on each server core. The following graph shows the reduction in vSphere host % Processor Time that occurred after performing the optimization techniques described later in this article.

% Processor Time is one of the metrics that can be used to measure server processor utilization within vSphere. The statistics in the preceding graph were captured using the vSphere ESXTOP command-line utility, which provides a number of performance statistics that the vCenter performance tabs do not offer, in a raw format that is better suited for independent analysis. The optimized desktop required between 5 and 10 percent less processor time during the user workload simulation. As was the case with the IOPS reduction, the savings are significant when multiplied by large numbers of desktops.

Virtual Desktop hardware configuration

The Virtual Desktop hardware configuration should provide only what is required based on the desktop needs and the performance analysis. This section will examine the different virtual machine configuration settings that you may wish to customize, and explain their purpose.

Disabling virtual machine logging

Every time a virtual machine is powered on, and while it is running, it logs diagnostic information within the datastore that hosts its VMDK file. For environments that have a large number of Virtual Desktops, this can generate a noticeable amount of storage I/O. The following steps outline how to disable virtual machine logging:

In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window.
In the Virtual Machine Properties window, select the Options tab.
Under Settings, highlight General.
Clear Enable logging, as shown in the following screenshot, which sets the logging = "FALSE" option in the virtual machine's VMX file.

While disabling logging does reduce disk IO, it also removes log files that may be used for advanced troubleshooting or auditing purposes. The implications of this change should be considered before placing the desktop into production.

Removing unneeded devices

By default, a virtual machine contains several devices that may not be required in a Virtual Desktop environment. If these devices are not required, they should be removed to free up server resources. The following steps outline how to remove the unneeded devices:

In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window.
In the Virtual Machine Properties window, under Hardware, highlight Floppy drive 1, as shown in the following screenshot, and click on Remove.
In the Virtual Machine Properties window, select the Options tab.
Under Settings, highlight Boot Options.
Check the checkbox under the Force BIOS Setup section, as shown in the following screenshot.
Click on OK to close the Virtual Machine Properties window.
Power on the virtual machine; it will boot into the PhoenixBIOS Setup Utility.
The PhoenixBIOS Setup Utility menu defaults to the Main tab. Use the down arrow key to move down to Legacy Diskette A, and then press the Space bar until the option changes to Disabled.
Use the right arrow key to move to the Advanced tab.
Use the down arrow key to select I/O Device Configuration and press Enter to open the I/O Device Configuration window.
Disable the serial ports, the parallel port, and the floppy disk controller, as shown in the following screenshot. Use the up and down arrow keys to move between devices, and the Space bar to disable or enable each as required.
Press the F10 key to save the configuration and exit the PhoenixBIOS Setup Utility.

Do not remove the virtual CD-ROM device, as it is used by vSphere when performing an automated installation or upgrade of the VMware Tools software.

Customizing the Windows desktop OS cluster size

Microsoft Windows uses a default cluster size, also known as the allocation unit size, of 4 KB when creating the boot volume during a new installation of Windows. The cluster size is the smallest amount of disk space that can be used to hold a file, which affects how many disk writes must be made to commit a file to disk. For example, when a file is 12 KB in size and the cluster size is 4 KB, it takes three write operations to write the file to disk.

The default 4 KB cluster size will work with any storage option that you choose to use with your environment, but that does not mean it is the best option. Storage vendors frequently do performance testing to determine which cluster size is optimal for their platforms, and it is possible that some of them will recommend that the Windows cluster size be changed to ensure optimal performance. The following steps outline how to change the Windows cluster size during the installation process; the process is the same for both Windows 7 and Windows 8. In this example we will use an 8 KB cluster size, although any size can be used based on the recommendation from your storage vendor. The cluster size can only be changed during the Windows installation, not after. If your storage vendor recommends the 4 KB Windows cluster size, the default Windows settings are acceptable.

Boot from the Windows OS installer ISO image or physical CD and proceed through the install steps until the Where do you want to install Windows? dialog box appears.
Press Shift + F10 to bring up a command window.
In the command window, enter the following commands:

diskpart
select disk 0
create partition primary size=100
active
format fs=ntfs label="System Reserve" quick
create partition primary
format fs=ntfs label=OS_8k unit=8192 quick
assign
exit

Click on Refresh to refresh the Where do you want to install Windows? window.
Select Drive 0 Partition 2: OS_8k, as shown in the following screenshot, and click on Next to begin the installation.

The System Reserve partition is used by Windows to store files critical to the boot process and will not be visible to the end user.
These files must reside on a volume that uses a 4 KB cluster size, so we created a small partition solely for that purpose. Windows automatically detects this partition and uses it when performing the installation. In the event that your storage vendor recommends a cluster size different from the one shown in the previous example, replace the 8192 in the sample command in step 3 with whatever value the vendor recommends, in bytes, without any punctuation.

Windows OS pre-deployment tasks

The following tasks are unrelated to the other optimization tasks described in this article, but they should be completed prior to placing the desktop into production.

Installing VMware Tools

VMware Tools should be installed prior to the installation of the View Agent software. To ensure that the master image has the latest version of the VMware Tools software, apply the latest updates to the host vSphere server prior to installing the tools package on the desktop. The same applies if you are updating your VMware Tools software: the View Agent software should be reinstalled after the VMware Tools software is updated, to ensure that the appropriate View drivers are installed in place of the versions included with VMware Tools.

Cleaning up and defragmenting the desktop hard disk

To minimize the space required by the Virtual Desktop master image and ensure optimal performance, the Virtual Desktop hard disks should be cleaned of nonessential files and optimized prior to deployment into production. The following actions should be taken once the Virtual Desktop master image is ready for deployment:

Use the Windows Disk Cleanup utility to remove any unnecessary files.
Use the Windows Defragment utility to defragment the virtual hard disk.

If the desktop virtual hard disks are thinly provisioned, you may wish to shrink them after the defragmentation completes. This can be performed with utilities from your storage vendor, if available, by using the vSphere vmkfstools utility, or by using the vSphere Storage vMotion feature to move the virtual machine to a different datastore. Visit your storage vendor or the VMware vSphere documentation (http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html) for instructions on how to shrink virtual hard disks or perform a Storage vMotion.
HTML5 Canvas

Packt
11 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Setting up your HTML5 canvas (Should know)

This recipe will show you, first of all, how to set up your own HTML5 canvas. With the canvas set up, we can then look at some of the basic elements the canvas has to offer and how we would go about implementing them. For this task we will be creating a series of primitives such as circles and rectangles. Modern video games make use of these types of primitives in many different forms; for example, both circles and rectangles are commonly used within collision-detection algorithms such as bounding circles or bounding boxes.

How to do it...

As previously mentioned, we will begin by creating our own HTML5 canvas, starting with a blank HTML file. To do this, you will need some form of text editor, such as Microsoft Notepad (available for Windows) or the TextEdit application (available on Mac OS). Once you have a basic webpage set up, all that is left to do in order to create a canvas is to place the following between the body tags:

<canvas id="canvas" width="800" height="600"></canvas>

As previously mentioned, we will be implementing a number of basic elements within the canvas. In order to do this we must first link a JavaScript file to our webpage. This file will be responsible for the initialization, loading, and drawing of objects on the canvas. In order for our scripts to have any effect on our canvas we must create a separate file called canvas example. Create this new file within your text editor and then insert the following code declarations:

var canvas = document.getElementById("canvas"),
    context = canvas.getContext("2d");

These declarations are responsible for retrieving both the canvas element and its context. Using the canvas context, we can begin to draw primitives and text, and load textures into our canvas. We will begin by drawing a rectangle in the top-left corner of our canvas. In order to do this, place the following code below our previous JavaScript declarations:

context.fillStyle = "#FF00FF";
context.fillRect(15, 15, 150, 75);

If you were now to view the original webpage we created, you would see the rectangle being drawn in the top-left corner at position X: 15, Y: 15. Now that we have a rectangle, we can look at how we would go about drawing a circle onto our canvas. This can be achieved by means of the following code:

context.beginPath();
context.arc(350, 150, 40, 0, 2 * Math.PI);
context.stroke();

How it works...

The first code extract represents the basic framework required to produce a blank webpage and is necessary for a browser to read and display the webpage in question. With a basic webpage created, we then declare a new HTML5 canvas. This is done by assigning an id attribute, which we use to refer to the canvas within our scripts. The canvas declaration also takes width and height attributes, both of which are necessary to specify the size of the canvas, that is, the number of pixels wide and pixels high.

Before any objects can be drawn on the canvas, we first need to get the canvas element. This is done by means of the getElementById method that you can see in our canvas example. When retrieving the canvas element, we are also required to get the canvas context by calling a built-in HTML5 method known as getContext. This object gives access to many different properties and methods for drawing edges, circles, rectangles, external images, and so on. This can be seen when we draw a rectangle to our canvas.
This was done using the fillStyle property, which takes a hexadecimal value and in return specifies the color of an element. The next line makes use of the fillRect method, which requires a minimum of four values to be passed to it. These values include the X and Y position of the rectangle, as well as its width and height. As a result, a rectangle is drawn on the canvas with the color, position, width, and height specified.

We then move on to drawing a circle on the canvas, which is done by first calling a built-in HTML5 canvas method known as beginPath. This method is used either to begin a new path or to reset the current path. With a new path set up, we then take advantage of the arc method, which allows for the creation of arcs or curves and can therefore be used to create circles. This method requires that we pass an X and Y position, a radius, and a starting angle measured in radians. This angle lies between 0 and 2 * Pi, where both 0 and 2 * Pi are located at the 3 o'clock position of the arc's circle. We must also pass an ending angle, which is likewise measured in radians. The following figure is taken directly from the W3C HTML canvas reference, which you can find at the following link: http://bit.ly/UCVPY1

Summary

In this article we saw how to set up our own HTML5 canvas. With the canvas set up, we then looked at some of the basic elements the canvas has to offer and how we would go about implementing them.

Resources for Article: Further resources on this subject: Building HTML5 Pages from Scratch [Article], HTML5 Presentations - creating our initial presentation [Article], HTML5: Generic Containers [Article]
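As a final aside, the primitives we have just drawn map directly onto the collision-detection techniques mentioned at the start of this recipe. A minimal, purely illustrative bounding-box test between two rectangles might look like the following; the object shapes are assumptions for the sketch, not part of the recipe:

// each box is described by its top-left corner plus its width and height
function boxesIntersect(a, b) {
    return a.x < b.x + b.width &&
           b.x < a.x + a.width &&
           a.y < b.y + b.height &&
           b.y < a.y + a.height;
}

// e.g. test our drawn rectangle against another hypothetical one
var player = { x: 15, y: 15, width: 150, height: 75 };
var enemy  = { x: 100, y: 50, width: 40, height: 40 };
console.log(boxesIntersect(player, enemy)); // logs true, they overlap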