
How-To Tutorials - Web Development

1802 Articles
Building a Chat Application

Packt
27 Jun 2013
4 min read
The following is a screenshot of our chat application:

Creating a project

To begin developing our chat application, we need to create an Opa project using the following Opa command:

```
opa create chat
```

This command will create an empty Opa project and automatically generate the required directories and files, with the structure shown in the following screenshot. Let's have a brief look at what these source code files do:

- controller.opa: This file serves as the entry point of the chat application; we start the web server here
- view.opa: This file serves as the user interface
- model.opa: This is the model of the chat application; it defines the message, the network, and the chat room
- style.css: This is an external stylesheet file
- Makefile: This file is used to build the application

As we do not need database support in the chat application, we can remove --import-package stdlib.database.mongo from the FLAG option in the Makefile. Type make and make run to run the empty application.

Launching the web server

Let's begin with controller.opa, the entry point of our chat application, where we launch the web server. We have already discussed the function Server.start in the Server module section. In our chat application, we will use a handlers group to handle users' requests:

```
Server.start(Server.http, [
    {resources: @static_resource_directory("resources")},
    {register: [{css: ["/resources/css/style.css"]}]},
    {title: "Opa Chat", page: View.page}
])
```

So, what exactly are the arguments that we are passing to the Server.start function? The line {resources: @static_resource_directory("resources")} registers a resource handler that will serve the files in the resources directory. Next, the line {register: [{css: ["/resources/css/style.css"]}]} registers an external CSS file, style.css; this permits us to use the styles defined in style.css throughout the application. Finally, the line {title: "Opa Chat", page: View.page} registers a single-page handler that dispatches all other requests to the function View.page. The server uses the default configuration Server.http and will run on port 8080.

Designing the user interface

When the application starts, all requests (except requests for resources) are dispatched to the function View.page, which displays the chat page in the browser. Let's take a look at the view part; we define a module named View in view.opa:

```
import stdlib.themes.bootstrap.css

module View {
    function page(){
        user = Random.string(8)
        <div id=#title class="navbar navbar-inverse navbar-fixed-top">
            <div class=navbar-inner>
                <div id=#logo />
            </div>
        </div>
        <div id=#conversation class=container-fluid
            onready={function(_){Model.join(updatemsg)}} />
        <div id=#footer class="navbar navbar-fixed-bottom">
            <div class=input-append>
                <input type=text id=#entry class=input-xxlarge onnewline={broadcast(user)}/>
                <button class="btn btn-primary" onclick={broadcast(user)}>Post</button>
            </div>
        </div>
    }
    ...
}
```

The module View contains the functions that display the page in the browser. In the first line, import stdlib.themes.bootstrap.css, we import the Bootstrap styles; this permits us to use Bootstrap markup in our code, such as navbar, navbar-fixed-top, and btn-primary. We also registered the external file style.css, so we can use its styles, such as conversation and footer. As we can see, the code in the function page follows almost the same syntax as HTML. As discussed earlier, we can use HTML freely in Opa code; HTML values have the predefined type xhtml in Opa. Note that the view calls two functions from the model, Model.join and broadcast, which this excerpt does not show; a rough sketch follows.
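The following is a minimal sketch of what model.opa could contain, patterned on the standard Opa chat example. The message type and the Network.* calls are my assumptions about the Opa standard library rather than code from this article:

```
// Hypothetical sketch of model.opa, not shown in the original excerpt.
type message = { string user, string text }

module Model {
    // A cloud network shared by all clients; chat messages are broadcast on it.
    private Network.network(message) room = Network.cloud("room")

    // Publish a message to every client observing the room.
    exposed function broadcast(message msg) {
        Network.broadcast(msg, room)
    }

    // Register a callback (such as the view's updatemsg) to be invoked
    // whenever a message arrives on the room network.
    function join(callback) {
        Network.add_callback(callback, room)
    }
}
```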
Summary

In this article, we started by creating a project and launching the web server.

Selecting DOM Elements using MooTools 1.2: Part 1

Packt
07 Jan 2010
7 min read
In order to successfully and effortlessly write unobtrusive JavaScript, we must have a way to point to the Document Object Model (DOM) elements that we want to manipulate. The DOM is a representation of the objects in our HTML and a way to access and manipulate them. In traditional JavaScript, this involves a lot (like, seriously, a lot) of code authoring, and in many instances, a lot of head-banging-against-wall-and-pulling-out-hair as you discover a browser quirk that you must solve. Let me save you some bumps, bruises, and hair by showing you how to select DOM elements the MooTools way. This article will cover how you can utilize MooTools to select/match everything from simple elements (like "all div elements") up to the most complex and specific ones (like "all links that are children of a span that has a class of awesomeLink and point to http://superAwesomeSite.com").

MooTools and CSS selectors

MooTools selects an element (or a set of elements) in the DOM using CSS selector syntax. As a quick refresher on CSS terminology, a CSS style rule consists of:

```
selector { property: property value; }
```

- selector: indicates which elements will be affected by the style rule.
- property: the CSS property (also referred to as attribute). For example, color is a CSS property, and so is font-style. You can have multiple property declarations in one style rule.
- property value: the value you want assigned to the property. For example, bold is a possible CSS property value of the font-weight CSS property.

For example, if you wanted to select a paragraph element with an ID of awesomeParagraph to give it a red color (in hexadecimal, this is #f00), in CSS syntax you'd write:

```
#awesomeParagraph { color: #f00; }
```

Also, if you wanted to increase the specificity and make sure that only paragraph elements with an ID of awesomeParagraph are selected:

```
p#awesomeParagraph { color: #f00; }
```

You'll be happy to know that this same syntax ports over to MooTools. What's more, you'll be able to take advantage of all of CSS3's more complex selection syntax, because even though CSS3 isn't supported by all browsers yet, MooTools supports it already. So you don't have to learn another syntax for writing selectors; you can use your existing knowledge of CSS - awesome, isn't it?

Working with the $() and $$() functions

The $() and $$() functions are the bread and butter of MooTools. When you're working with unobtrusive JavaScript, you need to specify which elements you want to operate on, and the dollar and dollars functions allow you to specify exactly that. Notice that the dollar sign $ is shaped like an S, which we can interpret to mean "select".

The $() dollar function

The dollar function is used for getting an element by its ID. It returns a single element that is extended with MooTools Element methods, or null if nothing matches. Let's go back to awesomeParagraph in the earlier example. If I wanted to select awesomeParagraph, this is what I would write:

```
$('awesomeParagraph')
```

By doing so, we can now operate on it by calling methods on the selection. For example, if you wanted to change its style to have a red color, you can use the .setStyle() method, which allows you to specify a CSS property and its matching property value, like so:

```
$('awesomeParagraph').setStyle('color', '#f00');
```

The $$() dollars function

The $$() function is the $() function's big brother (that's why it gets an extra dollar sign).
The $$() function can do more robust and complex selections, can select a group or groups of elements, and always returns an array object, with or without selected elements in it. It can also be used interchangeably with the dollar function. If we wanted to select awesomeParagraph using the dollars function, we would write:

```
$$('#awesomeParagraph')
```

Note that you have to use the pound sign (#) in this case, just as if you were writing a CSS selector.

When to use which

If you need to select just one element that has an ID, you should use the $() function, because it performs better in terms of speed than the $$() function. Use the $$() function to select multiple elements. In fact, when in doubt, use the $$() function, because it can do everything the $() function can do (and more).

A note on single quotes (') versus double quotes ("): the examples above would work even if we used double quotes, such as $("awesomeParagraph") or $$("#awesomeParagraph"), but many MooTools developers prefer single quotes so they don't have to escape characters as much (since the double quote is often used in HTML attributes, you'd have to escape it as \" in order not to prematurely end your strings). It's highly recommended that you use single quotes - but hey, it's your life!

Now, let's see these functions in action, starting with the HTML markup. Put the following block of code in an HTML document:

```
<body>
    <ul id="superList">
        <li>List item</li>
        <li><a href="#">List item link</a></li>
        <li id="listItem">List item with an ID.</li>
        <li class="lastItem">Last list item.</li>
    </ul>
</body>
```

What we have here is an unordered list. We'll use it to explore the dollar and dollars functions. If you view this in your web browser, you should see something like this:

Time for action – selecting an element with the dollar function

Let's select the list item with an ID of listItem and give it a red color using the .setStyle() MooTools method. Inside your window.addEvent('domready') handler, use the following code:

```
$('listItem').setStyle('color', 'red');
```

Save the HTML document and view it in your web browser. You should see that the third list item is now red. Now let's select the entire unordered list (it has an ID of superList) and give it a black border (in hexadecimal, this is #000000). Place the following code right below the line we wrote in step 1:

```
$('superList').setStyle('border', '1px solid #000000');
```

If you didn't close your HTML document, hit your browser's refresh button. You should now see something like this:

Time for action – selecting elements with the dollars function

We'll be using the same HTML document, but this time, let's explore the dollars function. We're going to select the last list item (it has a class of lastItem). Using .getFirst(), we select the first element from the array that $$() returned. Then, we're going to use the .get() method to get its text. To show us what it is, we'll pass it to the alert() function. The code to accomplish this is:

```
alert( $$('.lastItem').getFirst().get('text') );
```

Save the HTML document and view it in your web browser (or just hit your browser's refresh button if you still have the HTML document from the previous Time for action open). You should now see the following:

Time for action – selecting multiple sets of elements with the dollars function

What if we wanted to select multiple sets of elements and run the same method (or methods) on them? Let's do that now.
Let's select the list item that has an <a> element inside it, together with the last list item (class="lastItem"), and then animate them to the right by transitioning their margin-left CSS property using the .tween() method. Right below the line we wrote previously, place the following line:

```
$$('li a, .lastItem').tween('margin-left', '50px');
```

View your work in your web browser. You should see the second and last list items move to the right by 50 pixels.

What just happened?

We just explored the dollar and dollars functions to see how to select different elements and apply methods to them. You just learned to select:

- An element with an ID (listItem and superList) using the dollar $() function
- An element with a class (.lastItem) using the dollars $$() function
- Multiple elements, by separating the selectors with commas (li a and .lastItem)

You also saw how you can execute methods on your selected elements. In the examples, we used the .setStyle(), .getFirst(), .get(), and .tween() MooTools methods. Because DOM selection is imperative to writing MooTools code, before we go any further, let's talk about some important points to note about what we just did. A complete page putting all of these pieces together is sketched below.
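To tie all of the snippets above together, here is a minimal, self-contained sketch of the complete page. The mootools.js path is an assumption; point the src attribute at your local copy of MooTools 1.2:

```
<!DOCTYPE html>
<html>
<head>
    <!-- Assumed path to your MooTools 1.2 build -->
    <script type="text/javascript" src="mootools.js"></script>
    <script type="text/javascript">
        // Run the selections once the DOM is ready.
        window.addEvent('domready', function() {
            // Dollar function: select single elements by ID.
            $('listItem').setStyle('color', 'red');
            $('superList').setStyle('border', '1px solid #000000');
            // Dollars function: select by class, then read the element's text.
            alert( $$('.lastItem').getFirst().get('text') );
            // Multiple sets at once, animated with .tween().
            $$('li a, .lastItem').tween('margin-left', '50px');
        });
    </script>
</head>
<body>
    <ul id="superList">
        <li>List item</li>
        <li><a href="#">List item link</a></li>
        <li id="listItem">List item with an ID.</li>
        <li class="lastItem">Last list item.</li>
    </ul>
</body>
</html>
```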

Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box

Packt
13 Sep 2013
7 min read
Sizing the servers

There are a number of tools and guidelines to help you size Citrix VIAB appliances. Essentially, the guides cover the following topics:

- CPU
- Memory
- Disk IO
- Storage

In their sizing guides, Citrix classifies users into the following two groups:

- Task workers
- Knowledge workers

Therefore, the first thing to determine is how many of your proposed VIAB users are task workers, and how many are knowledge workers.

Task workers

Citrix defines task workers as users who run a small set of simple applications that are not very graphical in nature or CPU- or memory-intensive, for example, Microsoft Office and a simple line-of-business application.

Knowledge workers

Citrix defines knowledge workers as users who run multimedia and CPU- and memory-intensive applications. These may include large spreadsheet files, graphics packages, video playback, and so on.

CPU

Citrix offers recommendations based on CPU cores, such as the following:

- 3 x desktops per core for knowledge workers
- 6 x desktops per core for task workers
- 1 x core for the hypervisor

These figures can be increased slightly if the CPUs have hyper-threading. You should also add another 15 percent if delivering personal desktops. The sizing information has been gathered from the Citrix VIAB sizing guide PDF.

Example 1: If you wanted to size a server appliance to support 50 task-based users running pooled desktops, you would require 50 / 6 = 8.3 cores, + 1 for the hypervisor = 9.3, rounded up to 10 cores. Therefore, a dual-socket server with six-core CPUs would provide 12 CPU cores for this requirement.

Example 2: If you wanted to size a server appliance to support 15 task workers and 10 knowledge workers, you would require (15 / 6 = 2.5) + (10 / 3 = 3.3) + 1 (for the hypervisor) = 6.8, rounded up to 7 cores. Therefore, a dual-socket server with quad-core CPUs would provide 8 CPU cores for this requirement.

Memory

The memory required depends on the desktop OS that you are running and also on the amount of optimization that you have done to the image. Citrix recommends the following guidelines:

- Task worker on Windows 7: 1.5 GB
- Task worker on Windows XP: 0.5 GB
- Knowledge worker on Windows 7: 2 GB
- Knowledge worker on Windows XP: 1 GB

It is also important to allocate memory for the hypervisor and the VIAB virtual appliance. This can vary with the number of users, so we would recommend using the sizing spreadsheet calculator available in the Resources section of the VIAB website. As a guide, however, we would allocate 3 GB of memory (based on 50 users) for the hypervisor and 1 GB for VIAB. The amount of memory required by the hypervisor will grow as the number of users on the server grows. Citrix also recommends adding 10 percent more memory for server operations.

Example 1: If you wanted to size a server appliance to support 50 task-based users on Windows 7, you would require 50 x 1.5 GB = 75 GB, + 4 GB (for VIAB and the hypervisor) = 79 GB, + 10% = 86.9 GB, rounded up to 87 GB. Therefore, 96 GB of memory would be an ideal configuration for this requirement.

Example 2: If you wanted to size a server appliance to support 15 task workers and 10 knowledge workers on Windows 7, you would require (15 x 1.5 GB) + (10 x 2 GB) = 42.5 GB, + 4 GB (for VIAB and the hypervisor) = 46.5 GB, + 10% = 51.2 GB. Therefore, 64 GB of memory would be an ideal configuration for this requirement.
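Before moving on to disk IO, note that these rules of thumb are easy to script. The following quick helper is my own sketch, not a Citrix tool; it simply encodes the Windows 7 ratios quoted above, and results should always be confirmed against the official sizing spreadsheet:

```python
import math

def size_appliance(task_users, knowledge_users):
    # 6 task users or 3 knowledge users per core, plus 1 core for the hypervisor.
    cores = math.ceil(task_users / 6 + knowledge_users / 3 + 1)
    # 1.5 GB per task user and 2 GB per knowledge user (Windows 7),
    # plus 4 GB for VIAB and the hypervisor, plus 10% for server operations.
    ram_gb = (task_users * 1.5 + knowledge_users * 2.0 + 4) * 1.10
    return cores, ram_gb

cores, ram_gb = size_appliance(15, 10)
print(cores, round(ram_gb, 1))  # 7 cores and roughly 51 GB, matching Example 2
```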
Disk IO

As multiple Windows images run on the appliances, disk IO becomes very important and can often become the first bottleneck for VIAB. Citrix calculates IOPS with a 40-60 split between read and write operations during end-user desktop access. Citrix doesn't recommend using slow disks for VIAB and provides statistics for 10K and 15K SAS disks and SSDs. The following table shows the IOPS delivered by these disks:

Hard drive RPM    IOPS (RAID 0)    IOPS (RAID 1)
SSD               6000             -
15000             175              122.5
10000             125              87.7

The following table shows the IOPS required for task and knowledge workers on Windows XP and Windows 7:

Desktop IOPS      Windows XP       Windows 7
Task user         5 IOPS           10 IOPS
Knowledge user    10 IOPS          20 IOPS

Some organizations decide to implement RAID 1 or 10 on the appliances to reduce the chance of an appliance failure. However, this requires many more disks and significantly increases the cost of the solution.

SSD

SSD is becoming an attractive proposition for organizations that want to run a larger number of users on each appliance. SSD is roughly 30 times faster than 15K SAS drives, so it will eliminate desktop IO bottlenecks completely. SSD prices continue to come down, so SSDs can be well worth considering at the start of a VIAB project. SSDs have no moving mechanical components. Compared with electromechanical disks, SSDs are typically less susceptible to physical shock, run more quietly, and have lower access times and less latency. However, while the price of SSDs has continued to decline, SSDs are still about 7 to 8 times more expensive per unit of storage than HDDs. A further option to consider would be Fusion-io, which is based on NAND flash memory technology and can deliver an exceptional number of IOPS.

Example 1: If you wanted to size a server appliance to support 50 task workers on Windows 7 using 15K SAS drives, you would require 175 / 10 = 17.5 users per disk; therefore, 50 / 17.5 = 2.9, that is, 3 x 15K SAS disks.

Example 2: If you wanted to size a server appliance to support 15 task workers and 10 knowledge workers on Windows 7, you would require the following:

- 175 / 10 = 17.5 task users per disk; therefore, 15 / 17.5 = 0.8 x 15K SAS disks
- 175 / 20 = 8.75 knowledge users per disk; therefore, 10 / 8.75 = 1.1 x 15K SAS disks

Therefore, 2 x 15K SAS drives would be required.

Storage

Storage capacity is determined by the number of images, the number of desktops, and the types of desktops. It is best practice to store user profile information and data elsewhere. Citrix uses the following formula to determine the storage capacity requirement:

- 2 x golden image x number of images (assume 20 GB for an image)
- 70 GB for VDI-in-a-Box
- 15 percent of the size of the image per desktop (achieved with linked clone technology)

Example 1: If you wanted to size a server appliance to support 50 task-based users with two golden Windows 7 images, you would require the following:

- Space for the golden images: 2 x 20 GB x 2 = 80 GB
- VIAB appliance space: 70 GB
- Image space per desktop: 15% x 20 GB x 50 = 150 GB
- Extra room for swap and transient activity: 100 GB
- Total: 400 GB
- Recommended: 500 GB to 1 TB per server

We have already specified 3 x 15K SAS drives for our IO requirements. If those were 300 GB disks, they would provide enough storage. The next section of the article provides a step-by-step guide to help you build and configure a VIAB solution, starting with the hypervisor install.
It then goes on to cover adding an SSL certificate, the benefits of using the GRID IP Address feature, and how you can use Kiosk mode to deliver a standard desktop to public access areas. It then covers adding a license file and provides details on the useful features contained within Citrix Profile Management. It then highlights how VIAB can integrate with other Citrix products, such as NetScaler VPX, to enable secure connections across the Internet, and GoToAssist, a support and monitoring package that is very useful if you are supporting a number of VIAB appliances across multiple sites. ShareFile can again be a very useful tool for enabling data files to follow the user, whether they are connecting to a local device or a virtual desktop; this can avoid the problem of files being copied across the network and delaying users. We then move on to a discussion of the options available for connecting to VIAB, including existing PCs, thin clients, and other devices, including mobile devices. The chapter finishes with some useful information on support for VIAB, including the support services included with subscription and the knowledge forums.

Installing the hypervisor

All the hypervisors have two elements: the bare-metal hypervisor that installs on the server, and its management tools, which you would typically install on the IT administrators' workstations.

Bare-metal hypervisor    Management tool
Citrix XenServer         XenCenter
Microsoft Hyper-V        Hyper-V Manager
VMware ESXi              vSphere Client

It is relatively straightforward to install the hypervisor. Make sure you enable linked clones in XenServer, because this is required for the linked clone technology. Give the hypervisor a static IP address and make a note of the administrator's username and password. You will need to download ISO images for the installation media; if you don't already have them, they can be found on the Internet.

Bug Tracking

Packt
04 Jan 2017
11 min read
In this article by Eduardo Freitas, the author of the book Building Bots with Node.js, we will learn about Internet Relay Chat (IRC). IRC enables us to communicate in real time in the form of text. It runs over the TCP protocol in a client-server model. IRC supports group messaging through what are called channels, and also supports private messages.

IRC is organized into many networks with different audiences. Because IRC follows a client-server model, users need IRC clients to connect to IRC servers. IRC client software comes as installable packages as well as web-based clients, and some browsers also provide IRC clients as add-ons. Users can install a client on their systems and use it to connect to IRC servers or networks. When connecting to these servers, users have to provide a unique nick, or nickname, and either choose an existing channel for communication or start a new channel. In this article, we are going to develop one such IRC bot, for bug tracking purposes. This bug tracking bot will provide information about bugs as well as details about a particular bug, all seamlessly within the IRC channel itself. It's going to be a one-window operation for a team when it comes to knowing about their bugs or defects. Great!!

IRC client and server

As mentioned in the introduction, to initiate IRC communication, we need an IRC client and a server or network to which our client will connect. We will be using the freenode network for our client to connect to. Freenode is the largest free and open source software focused IRC network.

IRC web-based client

I will be using the web-based IRC client at https://webchat.freenode.net/. After opening the URL, you will see the following screen. As mentioned earlier, while connecting, we need to provide Nickname: and Channels:. I have provided Nickname: as Madan and Channels: as #BugsChannel. In IRC, channels are always identified with #, so I provided # for my bugs channel. This is the new channel that we will be starting for communication. All the developers or team members can similarly provide their nicknames and this channel name to join in. Now let's ensure Humanity: by selecting I'm not a robot and clicking the Connect button. Once connected, you will see the following screen. With this, our IRC client is connected to the freenode network. You can also see the username on the right-hand side as @Madan within #BugsChannel. Whoever joins this channel using this channel name and network will be shown on the right-hand side. In the next section, we will ask our bot to join this channel on the same network, and we will see how it appears within the channel.

IRC bots

An IRC bot is a program that connects to IRC as one of the clients and appears as one of the users in IRC channels. IRC bots are used to provide IRC services or to host chat-based custom implementations that help teams collaborate efficiently.

Creating our first IRC bot using IRC and NodeJS

Let's start by creating a folder on our local drive from the command prompt, in order to store our bot program:

```
mkdir ircbot
cd ircbot
```

Assuming we have Node.js and NPM installed, let's create and initialize our package.json, which will store our bot's dependencies and definitions:

```
npm init
```

Once you go through the npm init options (which are very easy to follow), you'll see something similar to the following. In your project folder, you'll find the result: your package.json file.
Let's install the irc package from NPM. It can be found at https://www.npmjs.com/package/irc. In order to install it, run this npm command:

```
npm install --save irc
```

You should then see something similar to this. Having done that, the next thing to do is to update your package.json to include the "engines" attribute. Open the package.json file with a text editor and update it as follows:

```
"engines": {
    "node": ">=5.6.0"
}
```

Your package.json should then look like this. Let's create our app.js file, which will be the entry point to our bot, as mentioned while setting up our node package. Our app.js should look like this:

```
var irc = require('irc');

var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', {
    autoConnect: false
});

client.connect(5, function(serverReply) {
    console.log("Connected!\n", serverReply);
    client.join('#BugsChannel', function(input) {
        console.log("Joined #BugsChannel");
        client.say('#BugsChannel', "Hi, there. I am an IRC Bot which tracks bugs or defects for your team.\n" +
            "I can help you using the following commands.\n BUGREPORT \n BUG # <BUG. NO>");
    });
});
```

Now let's run our Node.js program and first see how our console looks. If everything works well, our console should show our bot as connected to the required network and having joined a channel. The console can be seen as the following. Now, if you look at our channel #BugsChannel in our web client, you should see that our bot has joined and sent a welcome message as well. Refer to the following screen: if you look at the preceding screen, our bot program has executed successfully. Our bot BugTrackerIRCBot has joined the channel #BugsChannel and sent an introduction message to everyone on the channel. If you look at the right side of the screen under usernames, we can see BugTrackerIRCBot below @Madan.

Code understanding of our basic bot

After seeing how our bot looks in the IRC client, let's look at the basic code implementation from app.js. We used the irc library with the following line:

```
var irc = require('irc');
```

Using the irc library, we instantiated a client to connect to one of the IRC networks using the following code snippet:

```
var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', {
    autoConnect: false
});
```

Here we connected to the network irc.freenode.net and provided the nickname BugTrackerIRCBot. This name was chosen because I would like my bot to track and report bugs in future. Now we ask the client to connect and join a specific channel using the following code snippet:

```
client.connect(5, function(serverReply) {
    console.log("Connected!\n", serverReply);
    client.join('#BugsChannel', function(input) {
        console.log("Joined #BugsChannel");
        client.say('#BugsChannel', "Hi, there. ...");
    });
});
```

In the preceding code snippet, once the client is connected, we get a reply from the server, which we show on the console. Once successfully connected, we ask the bot to join a channel using the following line:

```
client.join('#BugsChannel', function(input) {
```

Remember, #BugsChannel is what we joined from the web client at the start. Now, using client.join(), I am asking my bot to join the same channel. Once the bot has joined, it says a welcome message in the same channel using the function client.say(). Hopefully this has given you a basic understanding of our bot and its code implementation.
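As written, the bot only announces itself. To actually answer the BUGREPORT and BUG # commands promised in its welcome message, it needs a listener for channel messages. Here is a minimal sketch using the irc package's message event; the command parsing and the canned replies are illustrative assumptions, not code from this article:

```
// Hypothetical sketch: respond to commands posted in the channel.
// The irc package emits 'message' with the sender, the target, and the text.
client.addListener('message', function(from, to, message) {
    if (message === 'BUGREPORT') {
        // The full bot would query the bug store here; reply with a stub for now.
        client.say(to, 'Open bugs: 2 (use BUG # <no> for details)');
    } else if (message.indexOf('BUG #') === 0) {
        var bugNo = message.replace('BUG #', '').trim();
        client.say(to, 'Details for bug ' + bugNo + ' would be fetched here.');
    }
});
```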
In the next section, we will enhance our bot so that our teams can have an effective communication experience while chatting.

Enhancing our BugTrackerIRCBot

Having built a very basic IRC bot, let's enhance our BugTrackerIRCBot. As developers, we always want to know how our programs or systems are functioning. To this end, our testing teams typically test a system or a program and log their bugs or defects into a bug tracking system. Developers can later take a look at those bugs and address them as part of the development life cycle. During this journey, developers collaborate and communicate over messaging platforms like IRC, and we would like to provide a unique experience during development by leveraging IRC bots. So here is exactly what we are doing: we are creating a channel for communication that all the team members will join, and our bot will also be there. In this channel, bugs will be reported and communicated based on developers' requests. Also, if developers need some additional information about a bug, the chat bot can help them by providing a URL from the bug tracking system. Awesome!! Before going into details, let me summarize how we are going to do this:

- Enhance our basic bot program for a more conversational experience
- Set up a bug tracking system, or bug storage, where bugs will be stored and tracked for developers

Here we mentioned a bug storage system. In this article, I would like to explain DocumentDB, a NoSQL, JSON-based cloud storage system.

What is DocumentDB?

I have already explained NoSQL databases; DocumentDB is one such NoSQL store, in which data is stored as JSON documents. It is offered by the Microsoft Azure platform, and its details can be found at https://azure.microsoft.com/en-in/services/documentdb/.

Setting up a DocumentDB for our BugTrackerIRCBot

Assuming you already have a Microsoft Azure subscription, follow these steps to configure DocumentDB for your bot.

Create an account ID for DocumentDB

Let's create a new account called botdb using the Azure portal, as shown in the following screenshot. Select the NoSQL API of type DocumentDB. Select the appropriate subscription and resources; I am using existing resources for this account, but you can also create a new dedicated resource for it. Once you have entered all the required information, hit the Create button at the bottom to create the new DocumentDB account. The newly created account botdb can then be seen in the account list.

Create a collection and database

Select the botdb account from the account list shown previously. This will show various menu options such as Properties, Settings, and Collections. Under this account, we need to create a collection to store bug data. To create a new collection, click on the Add Collection option shown in the following screenshot. On clicking the Add Collection option, the following screen will be shown on the right side. Please enter the details as shown in the screenshot: we are creating a new collection called Bugs along with a new database, which will be named BugDB. Once this database is created, we can add other bug-related collections to the same database in future, using the Use existing option on the same screen. Once you have entered all the relevant data, click OK to create the database and the collection. The COLLECTION ID and DATABASE shown will be used when enhancing our bot.
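The next step asks you to enter a JSON document through Document Explorer. Since the screenshots are not reproduced here, a hypothetical bug document using the attributes listed in the next section might look like this (all values are invented for illustration):

```
{
    "id": "1",
    "status": "open",
    "title": "Login page throws 500 on empty password",
    "description": "Submitting the login form with an empty password returns HTTP 500.",
    "priority": "high",
    "assignedto": "Madan",
    "url": "https://bugtracker.example.com/bugs/1"
}
```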
Create data for our BugTrackerIRCBot

Now we have BugDB with a Bugs collection, which will hold all the bug data. Let's add some data to our collection. To add data, let's use the menu option Document Explorer shown in the following screenshot. This will open up a screen showing the list of databases and collections created so far. Select our database BugDB and the collection Bugs from the available list. Refer to the following screenshot: to create a JSON document for our Bugs collection, click on the Create option. This will open up a New Document screen for entering JSON-based data. Please enter data as per the preceding example. We will be storing the id, status, title, description, priority, assignedto, and url attributes for each bug document stored in the Bugs collection. To save the JSON document in our collection, click the Save button. Refer to the following screenshot: this way, we can create sample records in the Bugs collection, which will later be wired up in our NodeJS program. A sample list of bugs can be seen in the following screenshot:

Summary

Every development team needs bug tracking and reporting tools. There are typical needs of bug reporting and bug assignment, and in critical projects these needs become critical to project timelines. This article showed how we can provide a seamless experience to developers while they are communicating with peers within a channel. To summarize so far, we understood how to use DocumentDB from Microsoft Azure: using DocumentDB, we created a new collection along with a new database to store bug data, and we added some sample JSON documents to the Bugs collection. In today's world of collaboration, development teams who use such integrations and automations will be more efficient and effective while delivering their quality products.

Creating and Using Composer Packages

Packt
29 Oct 2013
7 min read
Using bundles

One of the great features of Laravel is the ease with which we can include the class libraries that others have made using bundles. On the Laravel site, there are already many useful bundles, some of which automate certain tasks while others easily integrate with third-party APIs. A recent addition to the PHP world is Composer, which allows us to use libraries (or packages) that aren't specific to Laravel. In this article, we'll get up and running with using bundles, and we'll even create our own bundle that others can download. We'll also see how to incorporate Composer into our Laravel installation to open up a wide range of PHP libraries that we can use in our application.

Downloading and installing packages

One of the best features of Laravel is how modular it is. Most of the framework is built using libraries, or packages, that are well tested and widely used in other projects. By using Composer for dependency management, we can easily include other packages and seamlessly integrate them into our Laravel app. For this recipe, we'll be installing two popular packages into our app: Jeffrey Way's Laravel 4 Generators and the Imagine image processing package.

Getting ready

For this recipe, we need a standard installation of Laravel using Composer.

How to do it...

For this recipe, we will follow these steps:

1. Go to https://packagist.org/.
2. In the search box, search for way generator as shown in the following screenshot.
3. Click on the link for way/generators.
4. View the details at https://packagist.org/packages/way/generators and take note of the require line to get the package's version. For our purposes, we'll use "way/generators": "1.0.*".
5. In our application's root directory, open up the composer.json file and add the package to the require section so it looks like this:

```
"require": {
    "laravel/framework": "4.0.*",
    "way/generators": "1.0.*"
},
```

6. Go back to http://packagist.org and perform a search for imagine, as shown in the following screenshot.
7. Click on the link to imagine/imagine and copy the require code for dev-master.
8. Go back to our composer.json file and update the require section to include the imagine package. It should now look similar to the following code:

```
"require": {
    "laravel/framework": "4.0.*",
    "way/generators": "1.0.*",
    "imagine/imagine": "dev-master"
},
```

9. Open the command line and, in the root of our application, run the Composer update as follows:

```
php composer.phar update
```

10. Finally, we'll add the Generators service provider, so open the app/config/app.php file and, in the providers array, add the following line:

```
'Way\Generators\GeneratorsServiceProvider'
```

How it works...

To get our package, we first go to packagist.org and search for the package we want. We could also click on the Browse packages link, which displays a list of the most recent packages as well as the most popular ones. After clicking on the package we want, we'll be taken to the detail page, which lists various links including the package's repository and home page. We could also click on the package's maintainer link to see other packages they have released. Underneath, we'll see the various versions of the package. If we open a version's detail page, we'll find the code we need to use in our composer.json file. We could either choose to use a strict version number, add a wildcard to the version, or use dev-master, which will install whatever is updated on the package's master branch.
For the Generators package, we'll only use version 1.0, but allow any minor fixes to that version. For the imagine package, we'll use dev-master, so whatever is in their repository's master branch will be downloaded, regardless of version number. We then run update on Composer and it will automatically download and install all of the packages we chose. Finally, to use Generators in our app, we need to register the service provider in our app's config file.

Using the Generators package to set up an app

Generators is a popular Laravel package that automates quite a bit of file creation. In addition to controllers and models, it can also generate views, migrations, seeds, and more, all through a command-line interface.

Getting ready

For this recipe, we'll be using the Laravel 4 Generators package maintained by Jeffrey Way that was installed in the Downloading and installing packages recipe. We'll also need a properly configured MySQL database.

How to do it…

Follow these steps for this recipe:

1. Open the command line in the root of our app and, using the generator, create a scaffold for our cities as follows:

```
php artisan generate:scaffold cities --fields="city:string"
```

2. In the command line, create a scaffold for our superheroes as follows:

```
php artisan generate:scaffold superheroes --fields="name:string, city_id:integer:unsigned"
```

3. In our project, look in the app/database/seeds directory and find a file named CitiesTableSeeder.php. Open it and add some data to the $cities array as follows:

```
<?php

class CitiesTableSeeder extends Seeder {

    public function run()
    {
        DB::table('cities')->delete();

        $cities = array(
            array(
                'id'         => 1,
                'city'       => 'New York',
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'id'         => 2,
                'city'       => 'Metropolis',
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'id'         => 3,
                'city'       => 'Gotham',
                'created_at' => date('Y-m-d g:i:s', time())
            )
        );

        DB::table('cities')->insert($cities);
    }
}
```

4. In the app/database/seeds directory, open SuperheroesTableSeeder.php and add some data to it:

```
<?php

class SuperheroesTableSeeder extends Seeder {

    public function run()
    {
        DB::table('superheroes')->delete();

        $superheroes = array(
            array(
                'name'       => 'Spiderman',
                'city_id'    => 1,
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'name'       => 'Superman',
                'city_id'    => 2,
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'name'       => 'Batman',
                'city_id'    => 3,
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'name'       => 'The Thing',
                'city_id'    => 1,
                'created_at' => date('Y-m-d g:i:s', time())
            )
        );

        DB::table('superheroes')->insert($superheroes);
    }
}
```

5. In the command line, run the migration and then seed the database as follows:

```
php artisan migrate
php artisan db:seed
```

6. Open up a web browser and go to http://{your-server}/cities. We will see our data as shown in the following screenshot. Now navigate to http://{your-server}/superheroes, and we will see our data as shown in the following screenshot.

How it works...

We begin by running the scaffold generator for our cities and superheroes tables. Using the --fields tag, we can determine which columns we want in our table and also set options such as data type. For our cities table, we only need the name of the city. For our superheroes table, we want the name of the hero as well as the ID of the city where they live. When we run the generator, many files are automatically created for us.
For example, with cities, we'll get City.php in our models, CitiesController.php in controllers, and a cities directory in our views with the index, show, create, and edit views. We then get a migration named Create_cities_table.php, a CitiesTableSeeder.php seed file, and CitiesTest.php in our tests directory. We'll also have our DatabaseSeeder.php file and our routes.php file updated to include everything we need. To add some data to our tables, we opened the CitiesTableSeeder.php file and updated our $cities array with arrays that represent each row we want to add. We did the same thing for our SuperheroesTableSeeder.php file. Finally, we ran the migrations and the seeder, and our database was created with all the data inserted. The Generators package has already created the views and controllers we need to manipulate the data, so we can easily go to our browser and see all of our data. We can also create new rows, update existing rows, and delete rows. One thing the scaffold does not write for us is the Eloquent relationship between the two models; a sketch of that follows.
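If you want to traverse from a city to its heroes in code, you can add the relationship to the generated model by hand. A minimal sketch, assuming the generated classes are named City and Superhero:

```
<?php
// app/models/City.php - a hand-written addition, not generated code

class City extends Eloquent {

    // A city has many superheroes (superheroes.city_id references cities.id).
    public function superheroes()
    {
        return $this->hasMany('Superhero');
    }
}
```

With that in place, City::find(2)->superheroes would return Superman from the seed data above.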

First Steps with Selenium RC

Packt
23 Nov 2010
4 min read
Selenium 1.0 Testing Tools: Beginner's Guide

Important preliminary points

To complete the examples of this article, you will need to make sure that you have at least the Java JRE installed. You can download it from http://java.sun.com. Selenium Remote Control has been written in Java to allow it to be cross-platform, so we can test on Mac, Linux, and Windows.

What is Selenium Remote Control?

Selenium IDE only works with Firefox, so we have only been checking a small subsection of the browsers that our users prefer. We, as web developers and testers, know that unfortunately our users do not use just one browser. Some may use Internet Explorer, others may use Mozilla Firefox, not to mention the growth of browsers such as Google Chrome and Opera. Selenium Remote Control was initially developed by Patrick Lightbody as a way to test all of these different web browsers without having to install Selenium Core on the web server. It was developed to act as a proxy between the application under test and the test scripts. Selenium Core is bundled with Selenium Remote Control instead of being installed on the server. This change to the way that Selenium tests are run allowed developers to interact with the proxy directly, giving developers and testers a chance to use one of the most prominent programming languages to send commands to the browser. Java and C# have been the main languages used by developers to create Selenium tests, since most web applications are created in one of those languages. We have seen language bindings for dynamic languages being created and supported as more developers move their web applications to those languages; Ruby and Python are the most popular languages that people are moving to. Using programming languages to write your tests, instead of using the HTML-style tests of Selenium IDE, allows you, as a developer or tester, to make your tests more robust and to take advantage of the setup and teardown facilities that are common in most testing frameworks. Now that we understand how Selenium Remote Control works, let us have a look at setting it up.

Setting up Selenium Remote Control

Selenium Remote Control is required on all machines that will be used to run tests. It is good practice to limit the number of Selenium Remote Control instances to one per CPU core. This is because web applications are becoming more "chatty" as we use more AJAX in them. Limiting the Selenium instances to one per core makes sure that the browsers load cleanly and Selenium runs as quickly as possible.

Time for action – setting up Selenium Remote Control

1. Download Selenium Remote Control from http://seleniumhq.org/download.
2. Extract the ZIP file.
3. Start a Command Prompt or a console window and navigate to where the ZIP file was extracted.
4. Run the command java -jar selenium-server-standalone.jar, and the output should appear similar to the following screenshot.

What just happened?

We have successfully set up Selenium Remote Control. This is the proxy that our tests will communicate with. It works by the language bindings sending commands to Selenium Remote Control, which then passes them on to the relevant browser. It does this by attaching a unique ID to each browser it keeps track of, and each command needs to carry that ID in its request. Now that we have finished setting up Selenium Remote Control, we can have a look at running our first set of tests in a number of different browsers.
Pop quiz – setting up Selenium Remote Control

- Where can you download Selenium Remote Control from?
- Once you have placed Selenium Remote Control somewhere accessible, how do you start it?

Running Selenium IDE tests with Selenium Remote Control

The tests we created with Selenium IDE have only been run against applications in Firefox, which means the test coverage you are offering is very limited. Users will use a number of different browsers to interact with your application. Browser and operating system combinations can mean that a developer or tester will have to run the tests more than nine times, to make sure that all the popular browser and operating system combinations are covered. Now let's have a look at running the IDE tests with Selenium Remote Control.
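The excerpt ends before showing any client code, but to give a flavor of what driving the proxy looks like, here is a hedged Java sketch using the Selenium 1.0 client's DefaultSelenium class; the target URL is a placeholder, and the class name and page logic are illustrative rather than taken from this article:

```java
import com.thoughtworks.selenium.DefaultSelenium;

public class FirstRcTest {
    public static void main(String[] args) {
        // Connect to the Selenium RC proxy started earlier on port 4444,
        // asking it to drive Firefox against the site under test.
        DefaultSelenium selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "http://www.example.com/");
        selenium.start();                 // opens the browser session
        selenium.open("/");               // navigates relative to the base URL
        System.out.println(selenium.getTitle());
        selenium.stop();                  // closes the browser session
    }
}
```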
Writing Custom Spring Boot Starters

Packt
16 Sep 2015
10 min read
In this article by Alex Antonov, author of the book Spring Boot Cookbook, we will cover the following topics:

- Understanding Spring Boot autoconfiguration
- Creating a custom Spring Boot autoconfiguration starter

Introduction

It's time to take a look behind the scenes, find out the magic behind Spring Boot autoconfiguration, and write some starters of our own as well. This is a very useful capability to possess, especially for large software enterprises where the presence of proprietary code is inevitable, and where it is very helpful to be able to create internal custom starters that automatically add configuration or functionality to applications. Some likely candidates are custom configuration systems, libraries, and configurations that deal with connecting to databases, using custom connection pools, HTTP clients, servers, and so on. We will go through the internals of Spring Boot autoconfiguration, take a look at how new starters are created, explore conditional initialization and wiring of beans based on various rules, and see that annotations can be a powerful tool that gives the consumers of a starter more control over dictating which configurations should be used and where.

Understanding Spring Boot autoconfiguration

Spring Boot has a lot of power when it comes to bootstrapping an application and configuring it with exactly the things that are needed, all without much of the glue code that is otherwise required of us, the developers. The secret behind this power actually comes from Spring itself, or rather from the Java Configuration functionality that it provides. As we add more starters as dependencies, more and more classes appear in our classpath. Spring Boot detects the presence or absence of specific classes and, based on this information, makes some decisions that are fairly complicated at times, and automatically creates and wires the necessary beans into the application context. Sounds simple, right?

How to do it…

Conveniently, Spring Boot provides us with the ability to get the AUTO-CONFIGURATION REPORT by simply starting the application with the debug flag. This can be passed to the application either as an environment variable, DEBUG, as a system property, -Ddebug, or as an application property, --debug.

Start the application by running DEBUG=true ./gradlew clean bootRun. Now, if you look at the console logs, you will see a lot more information printed there, marked with the DEBUG log level. At the end of the startup log sequence, we will see the AUTO-CONFIGURATION REPORT, as follows:

```
=========================
AUTO-CONFIGURATION REPORT
=========================

Positive matches:
-----------------
…
   DataSourceAutoConfiguration
      - @ConditionalOnClass classes found: javax.sql.DataSource,org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType (OnClassCondition)
…

Negative matches:
-----------------
…
   GsonAutoConfiguration
      - required @ConditionalOnClass classes not found: com.google.gson.Gson (OnClassCondition)
…
```

How it works…

As you can see, the amount of information printed in the debug mode can be somewhat overwhelming, so I've selected only one example each of a positive and a negative match. For each line of the report, Spring Boot tells us why certain configurations have been selected for inclusion, what they have been positively matched on, or, for the negative matches, what was missing that prevented a particular configuration from being included in the mix.
Let's look at the positive match for DataSourceAutoConfiguration: the @ConditionalOnClass classes found line tells us that Spring Boot has detected the presence of particular classes, specifically two classes in our case: javax.sql.DataSource and org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType. The OnClassCondition indicates the kind of matching that was used. This is supported by the @ConditionalOnClass and @ConditionalOnMissingClass annotations. While OnClassCondition is the most common kind of detection, Spring Boot also uses many other conditions. For example, OnBeanCondition is used to check the presence or absence of specific bean instances, OnPropertyCondition is used to check the presence, absence, or a specific value of a property, and any number of custom conditions can be defined using the @Conditional annotation and Condition interface implementations.

The negative matches show us a list of configurations that Spring Boot has evaluated, which means that they do exist in the classpath and were scanned by Spring Boot, but didn't pass the conditions required for their inclusion. GsonAutoConfiguration, while available in the classpath as part of the imported spring-boot-autoconfigure artifact, was not included because the required com.google.gson.Gson class was not detected in the classpath, thus failing the OnClassCondition. The implementation of the GsonAutoConfiguration file looks as follows:

```
@Configuration
@ConditionalOnClass(Gson.class)
public class GsonAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean
    public Gson gson() {
        return new Gson();
    }
}
```

After looking at the code, it is very easy to make the connection between the conditional annotations and the report information that is provided by Spring Boot at start time.

Creating a custom Spring Boot autoconfiguration starter

We now have a high-level idea of the process by which Spring Boot decides which configurations to include in the formation of the application context. Now, let's take a stab at creating our own Spring Boot starter artifact, which we can include as an autoconfigurable dependency in our build. Let's build a simple starter that will create another CommandLineRunner that will take the collection of all the Repository instances and print out the count of the total entries for each. We will start by adding a child Gradle project to our existing project that will house the codebase for the starter artifact. We will call it db-count-starter.

How to do it…

1. We will start by creating a new directory named db-count-starter in the root of our project.

2. As our project has now become what is known as a multiproject build, we will need to create a settings.gradle configuration file in the root of our project with the following content:

```
include 'db-count-starter'
```

3. We should also create a separate build.gradle configuration file for our subproject in the db-count-starter directory in the root of our project, with the following content:

```
apply plugin: 'java'

repositories {
    mavenCentral()
    maven { url "https://repo.spring.io/snapshot" }
    maven { url "https://repo.spring.io/milestone" }
}

dependencies {
    compile("org.springframework.boot:spring-boot:1.2.3.RELEASE")
    compile("org.springframework.data:spring-data-commons:1.9.2.RELEASE")
}
```

4. Now we are ready to start coding. So, the first thing is to create the directory structure, src/main/java/org/test/bookpubstarter/dbcount, in the db-count-starter directory in the root of our project.
5. In the newly created directory, let's add our implementation of the CommandLineRunner file, named DbCountRunner.java, with the following content (the imports are added here for completeness):

```
package org.test.bookpubstarter.dbcount;

import java.util.Collection;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.data.repository.CrudRepository;

public class DbCountRunner implements CommandLineRunner {

    protected final Log logger = LogFactory.getLog(getClass());

    private Collection<CrudRepository> repositories;

    public DbCountRunner(Collection<CrudRepository> repositories) {
        this.repositories = repositories;
    }

    @Override
    public void run(String... args) throws Exception {
        repositories.forEach(crudRepository ->
            logger.info(String.format("%s has %s entries",
                getRepositoryName(crudRepository.getClass()),
                crudRepository.count())));
    }

    private static String getRepositoryName(Class crudRepositoryClass) {
        for (Class repositoryInterface : crudRepositoryClass.getInterfaces()) {
            if (repositoryInterface.getName().startsWith("org.test.bookpub.repository")) {
                return repositoryInterface.getSimpleName();
            }
        }
        return "UnknownRepository";
    }
}
```

6. With the actual implementation of DbCountRunner in place, we will now need to create the configuration object that will declaratively create an instance during the configuration phase. So, let's create a new class file called DbCountAutoConfiguration.java with the following content:

```
package org.test.bookpubstarter.dbcount;

import java.util.Collection;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.repository.CrudRepository;

@Configuration
public class DbCountAutoConfiguration {

    @Bean
    public DbCountRunner dbCountRunner(Collection<CrudRepository> repositories) {
        return new DbCountRunner(repositories);
    }
}
```

7. We will also need to tell Spring Boot that our newly created JAR artifact contains the autoconfiguration classes. For this, we will need to create a resources/META-INF directory in the db-count-starter/src/main directory in the root of our project.

8. In this newly created directory, we will place a file named spring.factories with the following content:

```
org.springframework.boot.autoconfigure.EnableAutoConfiguration=org.test.bookpubstarter.dbcount.DbCountAutoConfiguration
```

9. For the purpose of our demo, we will add the dependency on our starter artifact in the main project's build.gradle by adding the following entry in the dependencies section:

```
compile project(':db-count-starter')
```

10. Start the application by running ./gradlew clean bootRun. Once the application has compiled and started, we should see the following in the console logs:

```
2015-04-05 INFO org.test.bookpub.StartupRunner      : Welcome to the Book Catalog System!
2015-04-05 INFO o.t.b.dbcount.DbCountRunner         : AuthorRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner         : PublisherRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner         : BookRepository has 1 entries
2015-04-05 INFO o.t.b.dbcount.DbCountRunner         : ReviewerRepository has 0 entries
2015-04-05 INFO org.test.bookpub.BookPubApplication : Started BookPubApplication in 8.528 seconds (JVM running for 9.002)
2015-04-05 INFO org.test.bookpub.StartupRunner      : Number of books: 1
```

How it works…

Congratulations! You have now built your very own Spring Boot autoconfiguration starter. First, let's quickly walk through the changes that we made to our Gradle build configuration, and then we will examine the starter setup in detail. As the Spring Boot starter is a separate, independent artifact, just adding more classes to our existing project source tree would not really demonstrate much. To make this separate artifact, we had a few choices: making a separate Gradle configuration in our existing project, or creating a completely separate project altogether.
The ideal solution, however, was to convert our build into a Gradle multi-project build by adding a nested project directory and a subproject dependency to the build.gradle of the root project. By doing this, Gradle actually creates a separate JAR artifact for us, but we don't have to publish it anywhere; we only include it as a compile project(':db-count-starter') dependency. For more information about Gradle multi-project builds, you can check out the manual at http://gradle.org/docs/current/userguide/multi_project_builds.html.

A Spring Boot autoconfiguration starter is nothing more than a regular Spring Java configuration class annotated with @Configuration, plus a spring.factories file in the META-INF directory on the classpath with the appropriate configuration entries. During application startup, Spring Boot uses SpringFactoriesLoader, which is a part of Spring Core, to get a list of the Spring Java configurations that are registered under the org.springframework.boot.autoconfigure.EnableAutoConfiguration property key. Under the hood, this call collects all the spring.factories files located in the META-INF directories of all the JARs or other entries in the classpath and builds a composite list to be added as application context configurations. In addition to the EnableAutoConfiguration key, we can declare other automatically initialized startup implementations in a similar fashion:

org.springframework.context.ApplicationContextInitializer
org.springframework.context.ApplicationListener
org.springframework.boot.SpringApplicationRunListener
org.springframework.boot.env.PropertySourceLoader
org.springframework.boot.autoconfigure.template.TemplateAvailabilityProvider
org.springframework.test.context.TestExecutionListener

Ironically enough, a Spring Boot starter does not need to depend on the Spring Boot library as a compile-time dependency. If we look at the list of class imports in the DbCountAutoConfiguration class, we will not see anything from the org.springframework.boot package. The only reason we have declared a dependency on Spring Boot at all is that our implementation of DbCountRunner implements the org.springframework.boot.CommandLineRunner interface.
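If we wanted the starter to behave like Spring Boot's own autoconfigurations and back off when the classpath or the application context doesn't call for it, we could guard the configuration class with the same conditional annotations examined at the start of this article. A possible sketch follows; the choice of conditions here is our own illustration, not part of the original recipe, and it would require adding spring-boot-autoconfigure to the starter's dependencies:

package org.test.bookpubstarter.dbcount;

import java.util.Collection;

import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.repository.CrudRepository;

@Configuration
// Only apply when Spring Data repositories are on the classpath
@ConditionalOnClass(CrudRepository.class)
public class DbCountAutoConfiguration {

  @Bean
  // Back off if the application has already defined its own DbCountRunner bean
  @ConditionalOnMissingBean(DbCountRunner.class)
  public DbCountRunner dbCountRunner(Collection<CrudRepository> repositories) {
    return new DbCountRunner(repositories);
  }

}

With these two annotations in place, the starter's runner is created only when it can do useful work, and it never overrides a bean that the application supplies itself.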
Resources for Article: Further resources on this subject: Welcome to the Spring Framework [article], Time Travelling with Spring [article], Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging [article]

The Bootstrap grid system

Packt
19 Aug 2014
3 min read
This article is written by Pieter van der Westhuizen, the author of Bootstrap for ASP.NET MVC. Many websites are reporting an increasing amount of mobile traffic, and this trend is expected to continue over the coming years. The Bootstrap grid system is mobile-first, which means it is designed to target devices with smaller displays first and then grow as the display size increases. Fortunately, this is not something you need to be too concerned about, as Bootstrap takes care of most of the heavy lifting.

(For more resources related to this topic, see here.)

Bootstrap grid options

Bootstrap 3 introduced a number of predefined grid classes in order to specify the sizes of columns in your design:

- col-xs-*: phones (less than 768 px); container width auto, column width auto
- col-sm-*: tablets (768 px and wider); container width 750 px, column width 60 px
- col-md-*: desktops (992 px and wider); container width 970 px, column width 78 px
- col-lg-*: high-resolution desktops (1200 px and wider); container width 1170 px, column width 95 px

The Bootstrap grid is divided into 12 columns. When laying out your web page, keep in mind that the column spans in a row should add up to 12. To illustrate this, consider the following HTML code:

<div class="container">
  <div class="row">
    <div class="col-md-3" style="background-color:green;">
      <h3>green</h3>
    </div>
    <div class="col-md-6" style="background-color:red;">
      <h3>red</h3>
    </div>
    <div class="col-md-3" style="background-color:blue;">
      <h3>blue</h3>
    </div>
  </div>
</div>

In the preceding code, we have a container <div> element with one child <div> element, the row. The row div element in turn has three columns. You will notice that two of the columns have a class name of col-md-3 and one of the columns has a class name of col-md-6. When combined, they add up to 12.

The preceding code will work well on all devices with a resolution of 992 pixels or higher. To preserve the layout on devices with smaller resolutions, you'll need to combine the various CSS grid classes. For example, to allow our layout to work on tablets, phones, and medium-sized desktop displays, change the HTML to the following code:

<div class="container">
  <div class="row">
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:green;">
      <h3>green</h3>
    </div>
    <div class="col-xs-6 col-sm-6 col-md-6" style="background-color:red;">
      <h3>red</h3>
    </div>
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:blue;">
      <h3>blue</h3>
    </div>
  </div>
</div>

By adding the col-xs-* and col-sm-* class names to the div elements, we ensure that our layout will appear the same across a wide range of device resolutions.

Bootstrap HTML elements

Bootstrap provides a host of different HTML elements that are styled and ready to use, including tables, buttons, forms, and images; a short illustration follows.
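A minimal sketch of what this looks like in practice, using stock Bootstrap 3 class names (the content itself is illustrative):

<table class="table table-striped table-hover">
  <thead>
    <tr><th>Product</th><th>Price</th></tr>
  </thead>
  <tbody>
    <tr><td>Widget</td><td>$10</td></tr>
  </tbody>
</table>

<button type="button" class="btn btn-primary">Buy now</button>

<img src="product.png" class="img-responsive" alt="Product" />

Simply applying these class names is enough to pick up Bootstrap's styling; no extra CSS is required.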

Creating Your Own Theme - A Wordpress Tutorial

Packt
09 Oct 2009
10 min read
WordPress is the most widely used content management system amongst bloggers for many reasons. Not only does it make site management seem like a walk in the park, but it also runs on inexpensive shared hosting, which means that most users can afford it. It has plug-ins for any occasion and desire and, finally, it has themes. For many WordPress users, finding the right theme is a long process that often leads to endless tweaking of the code and stylesheets. However, only a few ever consider learning how to create their own. If you are one of them, this tutorial by Brian Franklin will help you learn how to build your own theme.

Template files and DIV tags

The WordPress theme we will create in this tutorial consists of one stylesheet, one functions file, one comments file, and a number of template files. Each template file represents a separate part of the site; together they form the outline of the website's overall look. It is in the stylesheet, however, that the design of each template file is decided. DIV tags are used to define, design, and format the values of each template file as well as structure its content and elements. The <div> is often described as an invisible box that is used to separate site content in a structural manner. These boxes are later defined in the stylesheet, where size, position, form, and so on are assigned.

Template files that will be used in this tutorial are:

Index
Header
Footer
Sidebar
Single
Page

And the files that define specific functions for the theme are:

Functions
Comments

DIV tags that will be used in this tutorial are:

<div id="wrapper"></div>
<div id="header"></div>
<div id="container"></div>
<div id="footer"></div>
<div class="post"></div>
<div class="entry"></div>
<div class="navigation"></div>
<div class="sidebar"></div>

Using tags requires proper opening and closing. If any tag is not properly closed (</div>), it may affect the presentation of the entire site. The difference between a DIV ID and a class lies in what they are used for: unique site features are defined by an ID, while values shared by several elements are defined as a class. In the stylesheet, a DIV ID is identified with a # and a DIV class with a . (dot).

Install WordPress

Before you can start building a theme, you need to decide how to work. It is recommended to install WordPress on your local computer. This will allow you to save your changes on a local server rather than dealing with remote server access and uploading. Follow the link for further instructions. The other alternative is installing WordPress through your hosting provider. WordPress hosting is a popular plan offered by most hosts. Either way, you need to have WordPress installed, remote or local, and create a folder for your theme under the directory /wp-content/themes/yourtheme.

Create the template files

Use Dreamweaver, Notepad, EditPlus, or any other text editor of your choice and create the template files listed above, including the functions files. Leave them blank for now and save them as PHP files (.php) in your theme folder.
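One detail worth adding here: WordPress only recognizes a theme if its style.css begins with a comment header describing the theme. A minimal header looks like the following; the name, URI, and author values are placeholders to replace with your own:

/*
Theme Name: My First Theme
Theme URI: http://example.com/my-first-theme
Description: A hand-built WordPress theme.
Author: Your Name
Version: 1.0
*/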
index.php

Open index.php, add this piece of code, and save the file:

<?php get_header(); ?>

<div id="container">
  <?php if(have_posts()) : ?>
    <?php while(have_posts()) : the_post(); ?>
      <div class="post" id="post-<?php the_ID(); ?>">
        <h2><a href="<?php the_permalink(); ?>" title="<?php the_title(); ?>"><?php the_title(); ?></a></h2>
        <div class="entry">
          <?php the_content(); ?>
          <p class="postmetadata">
            <?php _e('Filed under&#58;'); ?> <?php the_category(', ') ?> <?php _e('by'); ?> <?php the_author(); ?><br />
            <?php comments_popup_link('No Comments &#187;', '1 Comment &#187;', '% Comments &#187;'); ?>
            <?php edit_post_link('Edit', ' &#124; ', ''); ?>
          </p>
        </div>
      </div>
    <?php endwhile; ?>
    <div class="navigation">
      <?php posts_nav_link(); ?>
    </div>
  <?php endif; ?>
</div>

<?php get_sidebar(); ?>
<?php get_footer(); ?>

The index file now contains the code that calls the header, sidebar, and footer template files. Since there is no code to define these template files yet, you will not see them when previewing the site. The index file is the first file the web browser will call when requesting your site. This file defines the site's front page and thus also includes the DIV ID container and the DIV classes post and entry. These elements have been included to structure your front page content: posts and post entries.

If you preview your WordPress site, you should see your three latest blog posts, including Next and Previous links to the remaining ones. To shorten the length of the displayed front page posts, simply log in to the WordPress admin and insert a page break where you want the post to display a Read more link. You will come back to index.php in a moment, but for now you can save and close the file. It is now time to create the remaining template files.

header.php

Open header.php, add this code, and save the file:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
<head profile="http://gmpg.org/xfn/11">
  <title><?php bloginfo('name'); ?><?php wp_title(); ?></title>
  <meta http-equiv="Content-Type" content="<?php bloginfo('html_type'); ?>; charset=<?php bloginfo('charset'); ?>" />
  <meta name="generator" content="WordPress <?php bloginfo('version'); ?>" /> <!-- leave this for stats please -->
  <link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); ?>" type="text/css" media="screen" />
  <link rel="alternate" type="application/rss+xml" title="RSS 2.0" href="<?php bloginfo('rss2_url'); ?>" />
  <link rel="alternate" type="text/xml" title="RSS .92" href="<?php bloginfo('rss_url'); ?>" />
  <link rel="alternate" type="application/atom+xml" title="Atom 0.3" href="<?php bloginfo('atom_url'); ?>" />
  <link rel="pingback" href="<?php bloginfo('pingback_url'); ?>" />
  <?php wp_get_archives('type=monthly&format=link'); ?>
  <?php //comments_popup_script(); // off by default ?>
  <?php wp_head(); ?>
</head>
<body>
<div id="wrapper">
<div id="header">
  <h1><a href="<?php bloginfo('url'); ?>"><?php bloginfo('name'); ?></a></h1>
  <?php bloginfo('description'); ?>
</div>

The header template file now has the opening <html> and <body> tags. Inside the <html> tag is the <head> tag, which holds the meta tags and stylesheet links. The body tag includes two DIV IDs, wrapper and header, which define the boxes that hold the overall position of the site and the header content.
footer.php

Open footer.php, add the code below, and save the file:

<div id="footer">
  <p>Copyright 2007 <a href="<?php bloginfo('url'); ?>"><?php bloginfo('name'); ?></a></p>
</div>
</div>
</body>
</html>

The footer is the template that defines the bottom part of the website. The footer for this theme holds a copyright notice and a PHP snippet that adds the name of the blog as a permalink. As it is the last template file to be called for a site, it also closes the <body> and <html> tags.

sidebar.php

Open sidebar.php, add the code below, and save the file:

<div class="sidebar">
<ul>
  <?php if ( function_exists('dynamic_sidebar') && dynamic_sidebar() ) : else : ?>
  <?php wp_list_pages('depth=3&title_li=<h2>Pages</h2>'); ?>
  <li><h2><?php _e('Categories'); ?></h2>
    <ul>
      <?php wp_list_cats('sort_column=name&optioncount=1&hierarchical=0'); ?>
    </ul>
  </li>
  <li><h2><?php _e('Archives'); ?></h2>
    <ul>
      <?php wp_get_archives('type=monthly'); ?>
    </ul>
  </li>
  <?php get_links_list(); ?>
  <?php endif; ?>
</ul>
</div>

The sidebar template is now defined and includes the site's pages, categories, archives, and blogroll. Study the code and see the changes in the web browser. The sidebar DIV class itself is left to be designed in the stylesheet.

single.php

Open single.php, add the code below, and save the file:

<?php get_header(); ?>

<div id="container">
  <?php if(have_posts()) : ?>
    <?php while(have_posts()) : the_post(); ?>
      <div class="post" id="post-<?php the_ID(); ?>">
        <h2><a href="<?php the_permalink(); ?>" title="<?php the_title(); ?>"><?php the_title(); ?></a></h2>
        <div class="entry">
          <?php the_content(); ?>
          <p class="postmetadata">
            <?php _e('Filed under&#58;'); ?> <?php the_category(', ') ?> <?php _e('by'); ?> <?php the_author(); ?><br />
            <?php comments_popup_link('No Comments &#187;', '1 Comment &#187;', '% Comments &#187;'); ?>
            <?php edit_post_link('Edit', ' &#124; ', ''); ?>
          </p>
        </div>
        <div class="comments-template"><?php comments_template(); ?></div>
      </div>
    <?php endwhile; ?>
    <div class="navigation">
      <?php previous_post_link('%link') ?> <?php next_post_link(' %link') ?>
    </div>
  <?php endif; ?>
</div>

<?php get_sidebar(); ?>
<?php get_footer(); ?>

The single.php template file specifically defines the elements of the single post page that differ from both the front page post listing and pages. The code above is basically a copy of the index file, with minor changes to the Next and Previous links.
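The template list at the beginning of this tutorial also mentions a Page template, which this excerpt stops short of showing. A minimal page.php consistent with the markup above could look like this; it follows the same loop pattern as single.php, minus the post metadata and comments (a sketch, not the author's exact code):

<?php get_header(); ?>

<div id="container">
  <?php if(have_posts()) : while(have_posts()) : the_post(); ?>
    <div class="post" id="post-<?php the_ID(); ?>">
      <h2><?php the_title(); ?></h2>
      <div class="entry">
        <?php the_content(); ?>
      </div>
    </div>
  <?php endwhile; endif; ?>
</div>

<?php get_sidebar(); ?>
<?php get_footer(); ?>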

Searching Data using phpMyAdmin and MySQL

Packt
09 Oct 2009
4 min read
In this article by Marc Delisle, we present mechanisms that can be used to find the data we are looking for, instead of just browsing tables page by page and sorting them. This article covers single-table and whole-database searches.

Single-Table Searches

This section describes the Search sub-page, where single-table search is available.

Daily Usage of phpMyAdmin

For some users, the main use of the tool is the Search mode, for finding and updating data. For this, the phpMyAdmin team has made it possible to define which sub-page is the starting page in Table view, with the $cfg['DefaultTabTable'] parameter. Setting it to 'tbl_select.php' makes Search the default sub-page. With this mode, application developers can look for data in ways not expected by the interface they are building, adjusting and sometimes repairing data.

Entering the Search Sub-Page

The Search sub-page can be accessed by clicking the Search link in the Table view. This has been done here for the book table:

Selection of Display Fields

The first panel lets us select the fields to be displayed in the results:

All fields are selected by default, but we can control-click other fields to make the necessary selections. Mac users would use command-click to select or unselect the fields. Here are the fields of interest to us in this example:

We can also specify the number of rows per page in the textbox just next to the field selection. The Add search conditions box will be explained in the Applying a WHERE Clause section later in this article.

Ordering the Results

The Display order dialog permits us to specify an initial sorting order for the results to come. In this dialog, a drop-down menu contains all the table's columns; it's up to us to select the one on which we want to sort. By default, the sorting will be in Ascending order, but a choice of Descending order is available. It should be noted that on the results page, we can also change the sort order.

Search Criteria by Field: Query by Example

The main usage of the Search panel is to enter criteria for some fields so as to retrieve only the data in which we are interested. This is called Query by example, because we give an example of what we are looking for. Our first retrieval will concern finding the book with ISBN 1-234567-89-0. We simply enter this value in the isbn box and choose the = operator:

Clicking on Go gives the results shown in the following screenshot. The four fields displayed are those selected in the Select fields dialog:

This is a standard results page. If the results ran to several pages, we could navigate through them, and edit and delete data for the subset we chose during the process. Another feature of phpMyAdmin is that the fields used as criteria are highlighted by changing the border color of the columns, to better reflect their importance on the results page. It isn't necessary to specify that the isbn column be displayed; we could have selected only the title column for display and still used the isbn column as a criterion.

Print View

We see the Print view and Print view (with full texts) links on the results page. These links produce a more formal report of the results (without the navigation interface), sent directly to the printer. In our case, using Print view would produce the following:

This report contains information about the server, database, time of generation, version of phpMyAdmin, version of MySQL, and the SQL query used. The other link, Print view (with full texts), would print the contents of TEXT fields in their entirety.
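Under the hood, the criteria entered in the Search panel are turned into an ordinary SELECT statement, which phpMyAdmin also displays on the results page. For the ISBN search above, the generated query would look roughly like this (the exact column list depends on the fields selected for display):

SELECT `isbn`, `title`
FROM `book`
WHERE `isbn` = '1-234567-89-0';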

RabbitMQ Acknowledgements

Packt
16 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Acknowledgements (Intermediate)

This task will examine reliable message delivery from the RabbitMQ server to a consumer.

Getting ready

If a consumer takes a message/order from our queue and the consumer dies, our unprocessed message/order will die with it. In order to make sure a message is never lost, RabbitMQ supports message acknowledgments, or acks. When a consumer has received and processed a message, an acknowledgment is sent to RabbitMQ from the consumer, informing RabbitMQ that it is free to delete the message. If a consumer dies without sending an acknowledgment, RabbitMQ will redeliver the message to another consumer.

How to do it...

Let's navigate to our source code examples folder and locate the folder Message-Acknowledgement. Take a look at the consumer.js script and examine the changes we have made to support acks.

We pass the {ack:true} option to the q.subscribe function, which tells the queue that messages should be acknowledged before being removed:

q.subscribe({ack:true}, function(message) {

When our message has been processed, we call q.shift, which informs RabbitMQ that the message can now be removed from the queue:

q.shift();

You can also use the prefetchCount option to increase the window of how many messages the server will send you before you need to send an acknowledgement. {ack:true, prefetchCount:1} is the default and will only send you one message before you acknowledge. Setting prefetchCount to 0 will make that window unlimited. A low value will impact performance, so it may be worth considering a higher value.

Let's demonstrate this concept. Edit the consumer.js script located in the folder Message-Acknowledgement. Simply comment out the line q.shift(), which will stop the consumer from acknowledging messages.

Open a command-line console and start RabbitMQ:

rabbitmq-server

Now open a command-line console, navigate to our source code examples folder, and locate the folder Message-Acknowledgement. Execute the following command:

Message-Acknowledgement> node producer

Let the producer create several message/orders; press Ctrl + C while on the command-line console to stop the producer creating orders. Now execute the following to begin consuming messages:

Message-Acknowledgement> node consumer

Let's open another command-line console and list the queues:

rabbitmqctl list_queues name messages_ready messages_unacknowledged

The response should display our shop queue; the details include the name, the number of messages ready to be processed, and one message that has not been acknowledged:

Listing queues ...
shop.queue 9 1
...done.

If you press Ctrl + C to stop the consumer script and then list the queues again, you will notice that the message has returned to the queue:

Listing queues ...
shop.queue 10 0
...done.

If you undo the change we made to the consumer.js script and re-run these steps, the application will work correctly, consuming messages one at a time and sending an acknowledgment to RabbitMQ when each message has been processed.
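For reference, the fragments above fit together into a consumer along the following lines. This is a sketch assuming the node-amqp client that the examples appear to use, with the queue name taken from the rabbitmqctl output:

var amqp = require('amqp');

var connection = amqp.createConnection({ host: 'localhost' });

connection.on('ready', function() {
  connection.queue('shop.queue', function(q) {
    // ack:true defers deletion until we explicitly acknowledge
    q.subscribe({ack: true, prefetchCount: 1}, function(message) {
      console.log('processing order', message);
      // ... process the order here ...
      q.shift(); // acknowledge, allowing RabbitMQ to delete the message
    });
  });
});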
Summary

This article explained reliable message delivery in RabbitMQ using acknowledgements, and walked through the steps that add acknowledgements to a messaging application using the RabbitMQ scripts.

Resources for Article: Further resources on this subject: Getting Started with Oracle Information Integration [Article], Messaging with WebSphere Application Server 7.0 (Part 1) [Article], Using Virtual Destinations (Advanced) [Article]

Scaling your Application Across Nodes with Spring Python's Remoting

Packt
24 May 2010
5 min read
(For more resources on Spring, see here.)

With the explosion of the Internet into e-commerce in recent years, companies are under pressure to support lots of simultaneous customers. With users wanting richer interfaces that perform more business functions, this constantly leads to a need for more computing power than ever before, whether the application is web-based or a thick client. Seeing the slowdown in the growth of single-CPU horsepower, people are looking to multi-core CPUs, 64-bit chips, and adding more servers to their enterprise in order to meet their growing needs.

Developers face the challenge of designing applications in the simple environment of a desktop scaled back for cost savings, and must then be able to deploy into multi-core, multi-server environments in order to meet their companies' business demands. Different technologies have been developed to support this. Different protocols have been drafted to help communicate between nodes. The debate rages on about whether talking across the network should be visible in the API or abstracted away. Technologies that remotely connect client processes with server processes are under constant development.

Introduction to Pyro (Python Remote Objects)

Pyro is an open source project (pyro.sourceforge.net) that provides an object-oriented form of RPC. As stated on the project's site, it resembles Java's Remote Method Invocation (RMI). It is less similar to CORBA (http://www.corba.org), a technology-neutral wire protocol used to link multiple processes together, because it doesn't require an interface definition language, nor is it oriented towards linking different languages together. Pyro supports Python-to-Python communications, and thanks to the power of Jython, it is easy to link Java to Python and vice versa.

Python Remote Objects is not to be confused with the Python Robotics open source project (also named Pyro).

Pyro is very easy to use out of the box with existing Python applications, and the ability to publish services isn't hard to add. Pyro uses its own protocol for RPC communication. Fundamentally, a Pyro-based application involves launching a Pyro daemon thread and then registering your server component with this thread. From that point on, the thread, along with your server code, is in stand-by mode, waiting to process client calls. The next step involves creating a Pyro client proxy that is configured to find the daemon thread and then forward client calls to the server. From a high-level perspective, this is very similar to what Java RMI and CORBA offer. However, thanks to the dynamic nature of Python, the configuration steps are much easier, and there is no requirement to extend any classes or implement any interfaces.

As simple as it is to use Pyro, there is still the requirement to write some minimal code to instantiate your objects and then register them. You must also code the clients, making them aware of Pyro as well. Since the intent of this article is to dive into using Spring Python, we will skip writing a pure Pyro application. Instead, let's see how to use Spring Python's out-of-the-box Pyro-based components, eliminating the need to write any Pyro glue code. This lets us delegate everything to our IoC container so that it can do all the integration steps by itself. This reduces the cost of making our application distributed to zero.
Converting a simple application into a distributed one on the same machine

For this example, let's develop a simple service that processes some data and produces a response, and then convert it into a distributed service. First, let's create the service. For this example, it returns an array of strings representing the Happy Birthday song with someone's name embedded in it:

class Service(object):
    def happy_birthday(self, name):
        results = []
        for i in range(4):
            if i == 2:
                results.append("Happy Birthday Dear %s!" % name)
            else:
                results.append("Happy Birthday to you!")
        return results

Our service isn't too elaborate. Instead of printing the data directly to the screen, it collects the lines together and returns them to the caller. This allows the caller to print it, test it, store it, or do whatever it wants with the result. In the following screen text, we see a simple client taking the results and printing them with a little formatting inside the Python shell.

As we can see, we have defined a simple service and can call it directly. In our case, we are simply joining the list together with a newline character and printing it to the screen.

Fetching the service from an IoC container

from springpython.config import *
from simple_service import *

class HappyBirthdayContext(PythonConfig):
    def __init__(self):
        PythonConfig.__init__(self)

    @Object
    def service(self):
        return Service()

Creating a client to call the service

Now let's write a client script that will create an instance of this IoC container, fetch the service, and use it:

from springpython.context import *
from simple_service_ctx import *

if __name__ == "__main__":
    ctx = ApplicationContext(HappyBirthdayContext())
    s = ctx.get_object("service")
    print "\n".join(s.happy_birthday("Greg"))

Running this client script neatly creates an instance of our IoC container, fetches the service, and calls it with the same arguments shown earlier.
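The natural next step is to export the same service over Pyro using Spring Python's remoting classes, with no Pyro glue code of our own. A sketch of what the server-side context might look like, assuming the PyroServiceExporter class from springpython.remoting.pyro and an illustrative service name:

from springpython.config import *
from springpython.remoting.pyro import PyroServiceExporter
from simple_service import *

class HappyBirthdayServerContext(PythonConfig):
    def __init__(self):
        PythonConfig.__init__(self)

    @Object
    def service(self):
        return Service()

    @Object
    def service_exporter(self):
        # Publishes the plain service object over Pyro
        exporter = PyroServiceExporter()
        exporter.service = self.service()
        exporter.service_name = "HappyBirthdayService"
        return exporter

On the client side, a PyroProxyFactory pointed at the matching URL would stand in for the real service, so the client code above would only need to fetch the proxy instead (the host and port here assume Pyro's defaults):

from springpython.remoting.pyro import PyroProxyFactory

proxy = PyroProxyFactory()
proxy.service_url = "PYROLOC://localhost:7766/HappyBirthdayService"
print "\n".join(proxy.happy_birthday("Greg"))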

Creating New Characters with Morphs

Packt
09 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Understanding morphs

The word morph comes from metamorphosis, which means a change of the form or nature of a thing or person into a completely different one. Good old Franz Kafka had a field day with metamorphosis when he imagined poor Gregor Samsa waking up and finding himself changed into a giant cockroach.

This concept applies to 3D modeling very well. As we are dealing with polygons, which are defined by groups of vertices, it's very easy to morph one shape into something different. All that we need to do is move those vertices around, and the polygons will stretch and squeeze accordingly.

To get a better visualization of this process, let's bring the Basic Female figure into the scene and show it with the wireframe turned on. To do so, after you have added the Basic Female figure, click on the DrawStyle widget on the top-right portion of the 3D Viewport. From that menu, select Wire Texture Shaded. This operation changes how Studio draws the objects in the scene during preview. It doesn't change anything else about the scene. In fact, if you try to render the image at this point, the wireframe will not show up in the render.

The wireframe is a great help in working with objects because it gives us a visual representation of the structure of a model. The type of wireframe that I selected in this case is superimposed on the normal texture used with the figure. This is not the only visualization mode available. Feel free to experiment with all the options in the DrawStyle menu; most of them have their uses. The most useful, in my opinion, are the Hidden Line, Lit Wireframe, Wire Shaded, and Wire Texture Shaded options. Try the Wire Shaded option as well; it shows the wireframe with a solid gray color. This is, again, just for display purposes. It doesn't remove the texture from the figure. In fact, you can switch back to Texture Shaded to see Genesis fully textured.

Switching the view to the simple wireframe or the shaded wireframe is a great way of speeding up your workflow. When Studio doesn't have to render the textures, the Viewport becomes more responsive and all operations take less time. If you have a slow computer, using the wireframe mode is a good way of getting a faster response time.

Here are the Wire Texture Shaded and Wire Shaded styles side by side:

Now that we have the wireframe visible, the concept of morphing should be simpler to understand. If we pick any vertex in the geometry and move it somewhere, the geometry is still the same, with the same number of polygons and the same number of vertices, but the shape has shifted.

Here is a practical example that shows Genesis loaded in Blender. Blender is a free, fully featured 3D modeling program. It has extremely advanced features that compete with commercial programs sold for thousands of dollars per license. You can find more information about Blender at http://www.blender.org. Be aware that Blender is a very advanced program with a rather difficult UI.

In this image, I have selected a single polygon and pulled it away from the face:

In a similar way, we can use programs such as modo or ZBrush to modify the basic geometry and come up with all kinds of different shapes. For example, there are people who specialize in reproducing the faces of celebrities as morphs for DAZ V4 or Genesis. What is important to understand about morphs is that they cannot add or remove any portion of the geometry. A morph only moves things around, sometimes to extreme degrees.
Morphs for Genesis or Gen4 figures can be purchased from several websites specializing in selling content for Poser and DAZ Studio. In particular, Genesis makes it very easy to apply morphs and even to mix them together.

Combining premade morphs to create new faces

The standard installation of Genesis provides some interesting ways of changing its shape. Let's start a new Studio scene and add our old friend, the Basic Female figure. Once Genesis is in the scene, double-click on it to select it. Now let's take a look at a new tool, the Shaping tab. It should be visible in the right-hand side pane. Click on the Shaping tab; it should show a list of shapes available. The list should be something like this:

As we can see, the Basic Female shape is unsurprisingly dialed all the way to the top. The value of each slider goes from zero (no influence) to one (full influence of the morph). Morphs are not exclusive, so, for example, you can add a bit of Body Builder (scroll the list to the bottom if you don't see it) to be used in conjunction with the Basic Female morph. This will give us a muscular woman.

This exercise also gives us an insight into the Basic Female figure that we have used up to this time. The figure is basically the raw Genesis figure with the Basic Female morph applied as a preset.

If we continue exploring the Shaping Editor, we can see that the various shapes are grouped by major body section. We have morphs for the shape of the head, the components of the face, the nose, eyes, mouth, and so on.

Let's click on the head of Genesis and use the Camera: Frame tool to frame the head in the view. Move the camera a bit so that the face is visible frontally. We will apply a few morphs to the head to see how it can be transformed. Here is the starting point:

Now let's click on the Head category in the Shaping tab. In there, we can see a slider labeled Alien Humanoid. Move the slider until it gets to 0.83. The difference is dramatic.

Now let's click on the Eyes category. In there, we find two values: Eyes Height and Eyes Width. To create an out-of-this-world creature, we need to break the rules of proportions a little bit, which means removing the limits for a couple of parameters. Click on the gear button for the Eyes Height parameter and uncheck the Use Limits checkbox. Confirm by clicking on the Accept button. Once this is done, dial a value of 1.78 for the eyes height. The eyes should move dramatically up, toward the eyebrows.

Lastly, let's change the neck; it's much too thick for an alien. Also in this case, we will need to disable the use of limits. Click on the Neck category and disable the limits for the Neck Size parameter. Once that is done, set the neck size to -1.74.

Here is the result, side by side, of the transformation. This is quite a dramatic change for something that is done with just dials, without using a 3D modeling program. It gets even better, as we will see shortly.

Saving your morphs

If you want to save a morph to re-use it later, you can navigate to File | Save As | Shaping Preset…. To re-use a saved morph, simply select the target figure and navigate to File | Merge… to load the previously saved preset/morph.

Why is Studio using the rather confusing term Merge for loading its own files? Nobody knows for sure; it's one of those weird decisions that DAZ's developers made long ago and never changed. You can merge two different Studio scenes, but it is rather confusing to think of loading a morph or a pose preset as a scene merge.
Try to mentally replace File | Merge with File | Load. This is the meaning of that menu option.
  • 7699
Why CoffeeScript?

Packt
31 Jan 2013
9 min read
(For more resources related to this topic, see here.)

CoffeeScript

CoffeeScript compiles to JavaScript and follows its idioms closely. It's quite possible to rewrite any CoffeeScript code in JavaScript, and it won't look drastically different. So why would you want to use CoffeeScript?

As an experienced JavaScript programmer, you might think that learning a completely new language is simply not worth the time and effort. But ultimately, code is for programmers. The compiler doesn't care how the code looks or how clear its meaning is; either it will run or it won't. As programmers, we aim to write expressive code so that we can read, reference, understand, modify, and rewrite it. If the code is too complex or filled with needless ceremony, it will be harder to understand and maintain. CoffeeScript gives us an advantage in clarifying our ideas and writing more readable code.

It's a misconception to think that CoffeeScript is very different from JavaScript. There might be some drastic syntax differences here and there, but in essence, CoffeeScript was designed to polish the rough edges of JavaScript to reveal the beautiful language hidden beneath. It steers programmers towards JavaScript's so-called "good parts" and holds strong opinions of what constitutes good JavaScript. One of the mantras of the CoffeeScript community is: "It's just JavaScript", and I have found that the best way to truly comprehend the language is to look at how it generates its output, which is actually quite readable and understandable code.

Throughout this article, we'll highlight some of the differences between the two languages, often focusing on the things in JavaScript that CoffeeScript tries to improve. In this way, I would not only like to give you an overview of the major features of the language, but also prepare you to debug your CoffeeScript from its generated code once you start using it more often, as well as to convert existing JavaScript.

Let's start with some of the things CoffeeScript fixes in JavaScript.

CoffeeScript syntax

One of the great things about CoffeeScript is that you tend to write much shorter and more succinct programs than you normally would in JavaScript. Some of this is because of the powerful features added to the language, but it also makes a few tweaks to the general syntax of JavaScript to transform it into something quite elegant. It does away with all the semicolons, braces, and other cruft that usually contributes to a lot of the "line noise" in JavaScript.

To illustrate this, here is a CoffeeScript function followed by the JavaScript it generates:

CoffeeScript:

fibonacci = (n) ->
  return 0 if n == 0
  return 1 if n == 1
  (fibonacci n-1) + (fibonacci n-2)

alert fibonacci 10

JavaScript:

var fibonacci;

fibonacci = function(n) {
  if (n === 0) {
    return 0;
  }
  if (n === 1) {
    return 1;
  }
  return (fibonacci(n - 1)) + (fibonacci(n - 2));
};

alert(fibonacci(10));

To run the code examples in this article, you can use the great Try CoffeeScript online tool at http://coffeescript.org. It allows you to type in CoffeeScript code, which will then display the equivalent JavaScript in a side pane. You can also run the code right from the browser (by clicking the Run button in the upper-left corner).

At first, the two languages might appear to be quite drastically different, but hopefully, as we go through the differences, you'll see that it's all still JavaScript with some small tweaks and a lot of nice syntactical sugar.
Semicolons and braces

As you might have noticed, CoffeeScript does away with all the trailing semicolons at the end of a line. You can still use a semicolon if you want to put two expressions on a single line. It also does away with enclosing braces (also known as curly brackets) for code blocks such as if statements, switch, and the try..catch block.

Whitespace

You might be wondering how the parser figures out where your code blocks start and end. The CoffeeScript compiler does this by using syntactical whitespace. This means that indentation is used to delimit code blocks instead of braces. This is perhaps one of the most controversial features of the language. If you think about it, in almost all languages, programmers already tend to indent code blocks to improve readability, so why not make it part of the syntax? This is not a new concept, and it was mostly borrowed from Python. If you have any experience with a significant-whitespace language, you will not have any trouble with CoffeeScript indentation.

If you don't, it might take some getting used to, but it makes for code that is wonderfully readable and easy to scan, while shaving off quite a few keystrokes. I'm willing to bet that if you do take the time to get over some initial reservations you might have, you might just grow to love block indentation.

Blocks can be indented with tabs or spaces, but be careful to use one or the other consistently, or CoffeeScript will not be able to parse your code correctly.

Parentheses

You'll see that the clause of the if statement does not need to be enclosed within parentheses. The same goes for the alert function; you'll see that the single string parameter follows the function call without parentheses as well. In CoffeeScript, parentheses are optional in function calls with parameters, in clauses of if..else statements, and in while loops.

Although functions with arguments do not need parentheses, it is still a good idea to use them in cases where ambiguity might exist. The CoffeeScript community has come up with a nice idiom: wrapping the whole function call in parentheses. Two uses of the alert function in CoffeeScript, with their JavaScript equivalents:

CoffeeScript: alert square 2 * 2.5 + 1
JavaScript: alert(square(2 * 2.5 + 1));

CoffeeScript: alert (square 2 * 2.5) + 1
JavaScript: alert((square(2 * 2.5)) + 1);

Functions are first-class objects in JavaScript. This means that when you refer to a function without parentheses, it will return the function itself as a value. Thus, in CoffeeScript you still need to add parentheses when calling a function with no arguments.

By making these few tweaks to the syntax of JavaScript, CoffeeScript arguably already improves the readability and succinctness of your code by a big factor, and also saves you quite a lot of keystrokes. But it has a few other tricks up its sleeve. Most programmers who have written a fair amount of JavaScript would probably agree that one of the phrases that gets typed most frequently is the function definition function(){}. Functions are really at the heart of JavaScript, yet they are not without their warts.

CoffeeScript has great function syntax

The fact that you can treat functions as first-class objects, as well as create anonymous functions, is one of JavaScript's most powerful features. However, the syntax can be very awkward and make the code hard to read (especially if you start nesting functions). But CoffeeScript has a fix for this.
Have a look at the following snippets, CoffeeScript first, generated JavaScript second:

CoffeeScript:

-> alert 'hi there!'
square = (n) -> n * n

JavaScript:

var square;

(function() {
  return alert('hi there!');
});

square = function(n) {
  return n * n;
};

Here, we are creating two anonymous functions; the first just displays a dialog and the second returns the square of its argument. You've probably noticed the funny -> symbol and might have figured out what it does. Yes, that is how you define a function in CoffeeScript. I have come across a couple of different names for the symbol, but the most accepted term seems to be a thin arrow, or just an arrow.

Notice that the first function definition has no arguments, so we can drop the parentheses. The second function does have a single argument, which is enclosed in parentheses placed in front of the -> symbol. With what we now know, we can formulate a few simple substitution rules to convert a JavaScript function declaration to CoffeeScript. They are as follows:

Replace the function keyword with ->
If the function has no arguments, drop the parentheses
If it has arguments, move the whole argument list with parentheses in front of the -> symbol
Make sure that the function body is properly indented and then drop the enclosing braces

Return isn't required

You might have noted that in both functions, we left out the return keyword. By default, CoffeeScript will return the last expression in your function, and it will try to do this in all the paths of execution. CoffeeScript will try to turn any statement (a fragment of code that returns nothing) into an expression that returns a value. CoffeeScript programmers often refer to this feature of the language by saying that everything is an expression. This means you don't need to type return anymore, but keep in mind that this can subtly alter your code in many cases, because you will always return something. If you need to return a value from a function before the last statement, you can still use return.

Function arguments

Function arguments can also take an optional default value. In the following code snippet, you'll see that the specified default value is assigned in the body of the generated JavaScript:

CoffeeScript:

square = (n=1) ->
  alert(n * n)

JavaScript:

var square;

square = function(n) {
  if (n == null) {
    n = 1;
  }
  return alert(n * n);
};

In JavaScript, each function has an array-like structure called arguments, with an indexed property for each argument that was passed to the function. You can use arguments to pass a variable number of parameters to a function. Each parameter will be an element in arguments, and thus you don't have to refer to parameters by name.

Although the arguments object acts somewhat like an array, it is not in fact a "real" array and lacks most of the standard array methods. Often, you'll find that arguments doesn't provide the functionality needed to inspect and manipulate its elements the way you would with an array.
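CoffeeScript sidesteps the arguments object with splats, which collect a variable number of arguments into a real array. A quick sketch; the function name and values here are our own:

# 'scores...' gathers however many arguments are passed into a real array
average = (scores...) ->
  total = 0
  total += s for s in scores
  total / scores.length

alert average 80, 90, 100   # displays 90

Because scores is a genuine array here, all the standard array methods are available on it.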
Summary

We saw how CoffeeScript can help you write shorter, cleaner, and more elegant code than you normally would in JavaScript, and avoid many of its pitfalls. We came to realize that even though CoffeeScript's syntax seems quite different from JavaScript, it actually maps pretty closely to its generated output.

Resources for Article: Further resources on this subject: ASP.Net Site Performance: Improving JavaScript Loading [Article], Build iPhone, Android and iPad Applications using jQTouch [Article], An Overview of the Node Package Manager [Article]

Indexes

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.)

As a database administrator (DBA) or developer, one of your most important goals is to ensure that query times are consistent with the service-level agreement (SLA) and meet user expectations. Along with other performance enhancement techniques, creating indexes for your queries on underlying tables is one of the most effective and common ways to achieve this objective. The indexes of underlying relational tables are very similar in purpose to the index section at the back of a book. For example, instead of flipping through each page of the book, you use the index section at the back to quickly find particular information or a topic within the book. In the same way, instead of scanning each individual row on the data page, SQL Server uses indexes to quickly find the data for a qualifying query. Therefore, by indexing an underlying relational table, you can significantly enhance the performance of your database. Indexing affects the processing speed of both OLTP and OLAP workloads and helps you achieve optimum query performance and response time.

The cost associated with indexes

SQL Server uses indexes to optimize overall query performance. However, there is also a cost associated with indexes: they slow down insert, update, and delete operations. Therefore, it is important to weigh the costs and benefits of indexes when you plan your indexing strategy.

How SQL Server uses indexes

A table that doesn't have a clustered index is stored in a set of data pages called a heap. Initially, the data in a heap is stored in the order in which the rows are inserted into the table. However, the SQL Server Database Engine moves the data around the heap to store the rows efficiently. Therefore, you cannot predict the order of the rows in a heap, because data pages are not sequenced in any particular order. The only way to guarantee the order of the rows from a heap is to use the SELECT statement with the ORDER BY clause.

Access without an index

When you access the data, SQL Server first determines whether there is a suitable index available for the submitted SELECT statement. If no suitable index is found, SQL Server retrieves the data by scanning the entire table. The database engine begins scanning at the physical beginning of the table and scans through the full table, page by page and row by row, to look for the qualifying data specified in the statement. Then, it extracts and returns the rows that meet the criteria, in the format specified in the statement.

Access with an index

The process is improved when indexes are present. If an appropriate index is available, SQL Server uses it to locate the data. An index improves the search process by sorting data on key columns. The database engine begins scanning from the first page of the index and only scans those pages that potentially contain qualifying data, based on the index structure and key columns. Finally, it retrieves the data rows, or pointers that contain the locations of the data rows, to allow direct row retrieval.

The structure of indexes

In SQL Server, all indexes (except full-text, XML, in-memory optimized, and columnstore indexes) are organized as a balanced tree (B-tree).
This is because full-text indexes use their own engine to manage and query full-text catalogs, XML indexes are stored as internal SQL Server tables, in-memory optimized indexes use the Bw-tree structure, and columnstore indexes utilize SQL Server in-memory technology.

In the B-tree structure, each page is called a node. The top page of the B-tree structure is called the root node. Non-leaf nodes, also referred to as intermediate levels, are hierarchical tree nodes that comprise the index sort order. Non-leaf nodes point to other non-leaf nodes one step below them in the B-tree hierarchy, until the leaf nodes are reached. Leaf nodes are at the bottom of the B-tree hierarchy. The following diagram illustrates the typical B-tree structure:

Index types

In SQL Server 2014, you can create several types of indexes. They are explored in the next sections.

Clustered indexes

A clustered index sorts table or view rows based on the clustered index key column values. In short, the leaf nodes of a clustered index contain the data pages themselves, so scanning them returns the actual data rows. Therefore, a table can have only one clustered index. Unless explicitly specified as nonclustered, SQL Server automatically creates a clustered index when you define a PRIMARY KEY constraint on a table.

When should you have a clustered index on a table?

Although it is not mandatory to have a clustered index per table, according to the TechNet article, Clustered Index Design Guidelines, with few exceptions, every table should have a clustered index defined on the column or columns that are used as follows:

- The table is large and does not have a nonclustered index. The presence of a clustered index improves performance, because without it, all rows of the table have to be read to find any row.
- A column or columns are frequently queried and data is returned in sorted order. A clustered index on the sorting column or columns avoids a separate sort operation and returns the data in sorted order.
- A column or columns are frequently queried and data is grouped together. As data must be sorted before it is grouped, a clustered index on the sorting column or columns avoids the sort operation.
- A column or columns are frequently used in queries to search ranges of data in the table. A clustered index on the range column avoids sorting the entire table's data.

Nonclustered indexes

Nonclustered indexes do not sort or store the data of the underlying table. This is because the leaf nodes of a nonclustered index are index pages that contain pointers to data rows. SQL Server automatically creates nonclustered indexes when you define a UNIQUE KEY constraint on a table. A table can have up to 999 nonclustered indexes.

You can use the CREATE INDEX statement to create both clustered and nonclustered indexes. A detailed discussion of the CREATE INDEX statement and its parameters is beyond the scope of this article. For help with this, refer to the CREATE INDEX (Transact-SQL) article at http://msdn.microsoft.com/en-us/library/ms188783.aspx.

SQL Server 2014 also supports new inline index creation syntax for standard, disk-based database tables, temp tables, and table variables. For more information, refer to the CREATE TABLE (SQL Server) article at http://msdn.microsoft.com/en-us/library/ms174979.aspx.

Single-column indexes

As the name implies, single-column indexes are based on a single key column.
You can define them as either clustered or nonclustered. You cannot drop the index key column or change the data type of the underlying table column without dropping the index first. Single-column indexes are useful for queries that search data based on a single column value.

Composite indexes

Composite indexes include two or more columns from the same table. You can define composite indexes as either clustered or nonclustered. You can use composite indexes when you have two or more columns that need to be searched together. You typically place the most selective key (the key with the highest degree of selectivity) first in the key list. For example, examine the following query, which returns a list of account numbers and names from the Purchasing.Vendor table where both the name and the account number start with the character A:

USE [AdventureWorks2012];

SELECT [AccountNumber], [Name]
FROM [Purchasing].[Vendor]
WHERE [AccountNumber] LIKE 'A%'
  AND [Name] LIKE 'A%';
GO

If you look at the execution plan of this query without modifying the existing indexes of the table, you will notice that the SQL Server query optimizer uses the table's clustered index to retrieve the query result, as shown in the following screenshot:

As our search is based on the Name and AccountNumber columns, the presence of the following composite index will improve the query execution time significantly:

USE [AdventureWorks2012];
GO

CREATE NONCLUSTERED INDEX [AK_Vendor_AccountNumber_Name]
ON [Purchasing].[Vendor] ([AccountNumber] ASC, [Name] ASC)
ON [PRIMARY];
GO

Now, examine the execution plan of this query once again, after creating the preceding composite index on the Purchasing.Vendor table, as shown in the following screenshot:

As you can see, SQL Server performs a seek operation on this composite index to retrieve the qualifying data.
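As an aside, the inline index creation syntax mentioned earlier would let us declare a comparable nonclustered index directly inside a table definition. A sketch with an illustrative table that is not part of the AdventureWorks sample:

CREATE TABLE dbo.VendorContact
(
    ContactID     int          NOT NULL PRIMARY KEY,
    AccountNumber nvarchar(15) NOT NULL,
    Name          nvarchar(50) NOT NULL,
    -- SQL Server 2014 inline index declaration
    INDEX IX_VendorContact_AccountNumber_Name
        NONCLUSTERED (AccountNumber, Name)
);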
Summary

Thus, we have learned what indexes are, how SQL Server uses indexes, the structure of indexes, and some of the types of indexes.

Resources for Article: Further resources on this subject: Easily Writing SQL Queries with Spring Python [article], Manage SQL Azure Databases with the Web Interface 'Houston' [article], VB.NET Application with SQL Anywhere 10 database: Part 1 [article]