
How-To Tutorials


Getting Started – An Introduction to GML

Packt
21 Apr 2014
7 min read
Creating GML scripts

Before diving into any actual code, we should address the various places in which scripts can appear in GameMaker, as well as the reasoning behind placing scripts in one area versus another.

Creating GML scripts within an event

Within an object, each event added can either contain a script or call one. This is the only time drag-and-drop is required, as the goal of scripting is to eliminate the need for it. To add a script to an event within an object, go to the control tab of the Object Properties menu of the object being edited. Under the Code label, the first two icons deal with scripts. Displayed in the following screenshot, the leftmost icon, which looks like a piece of paper, creates a script that is unique to that object type; the middle icon, which looks like a piece of paper with a green arrow, allows a script resource to be selected and then called during the respective event.

Creating scripts within events is most useful when the scripts within those events perform actions that are very specific to the object instance triggering the event. The following screenshot shows these object instances:

Creating scripts as resources

Navigating to Resources | Create Script or using the keyboard shortcut Shift + Ctrl + C will create a script resource. Once created, a new script should appear under the Scripts folder on the left side of the project, where resources are located. Creating a script as a resource is most useful in the following conditions:

When many different objects utilize this functionality
When a function requires multiple input values or arguments
When global actions such as saving and loading are utilized
When implementing complex logic and algorithms

Scripting a room's creation code

Room resources are specific resources where objects are placed and gameplay occurs. Room resources can be created by navigating to Resources | Create room or using Shift + Ctrl + R. Rooms can also contain scripts. When editing a room, navigate to the settings tab within the Room Properties panel and you should see a button labeled Creation code, as seen in the following screenshot. When clicked, this opens a blank GML script. This script is executed as soon as the player loads the specified room, before any objects trigger their own events. Using Creation code is essentially the same as having a script in the Create event of an object.

Understanding parts of GML scripts

GML scripts are made up of many different parts. The following section goes over these different parts and their syntax, formatting, and usage.

Programs

A program is a set of instructions that are followed in a specific order. One way to think of it is that every script written in GML is essentially a program. Programs in GML are usually enclosed within braces, { }, as shown in the following example:

{
    // Defines an instanced string variable.
    str_text = "Hello World";

    // Every frame, 10 units are added to x, a built-in variable.
    x += 10;

    // If x is greater than 200 units, the string changes.
    if (x > 200)
    {
        str_text = "Hello Mars";
    }
}

The previous code example contains two assignment expressions followed by a conditional statement, followed by another program with an assignment expression. If the preceding script were an actual GML script, the initial set of braces enclosing the program would not be required.

Each instruction or line of code ends with a semicolon (;). This is not strictly required, as a line break or return is sufficient, but the semicolon is a common symbol used in many other programming languages to indicate the end of an instruction. Using it is a good habit that improves the overall readability of your code.

snake_case

Before continuing with this overview of GML, it's very important to note that the formatting used in GML programs is snake case. Though it is not necessary to use this formatting, the built-in methods and constants of GML use it; so, for the sake of readability and consistency, it is recommended that you use snake casing, which has the following requirements:

No capital letters are used
All words are separated by underscores

Variables

Variables are the main working units within GML scripts and are used to represent values. Variables are unique in GML in that, unlike some programming languages, they are not strictly typed, which means that a variable does not have to represent a specific data structure. Instead, variables can represent either of the following types:

A number, also known as a real, such as 100 or 2.0312. Integers can also correspond to a particular instance of an object, room, script, or another type of resource.
A string, which represents a collection of alphanumeric characters commonly used to display text, encased in either single or double quotation marks, for example, "Hello World".

Variable prefixes

As previously mentioned, the same variable can be assigned any of the mentioned variable types, which can cause a variety of problems. To combat this, the prefix of a variable name usually identifies the type of data stored within the variable, such as str_player_name (which represents a string). The following are the common prefixes that will be used:

str: String
spr: Sprites
snd: Sounds
bg: Backgrounds
pth: Paths
scr: Scripts
fnt: Fonts
tml: Timeline
obj: Object
rm: Room
ps: Particle System
pe: Particle Emitter
pt: Particle Type
ev: Event

Variable names cannot start with numbers or most other non-alphanumeric characters, so it is best to stick with basic letters.

Variable scope

Within GML scripts, variables have different scopes. This means that the way in which the values of variables are accessed and set varies. The following are the different scopes:

Instance: These variables are unique to the instances or copies of each object. They can be accessed and set by themselves or by other game objects and are the most common variables in GML.
Local: Local variables are those that exist only within a function or script. They are declared using the var keyword and can be accessed only within the scripts in which they've been created.
Global: A global variable can be accessed by any object through scripting. It belongs to the game and not to an individual object instance. There cannot be multiple global variables of the same name.
Constants: Constants are variables whose values can only be read, not altered. They can be instanced or global variables. Instanced constants are, for example, object_index or sprite_width. The true and false variables are examples of global constants. Additionally, any created resource can be thought of as a global constant representing its ID, which cannot be assigned a new value.

The following example demonstrates the assignment of different variable types:

// Local variable assignment.
var a = 1;

// Global variable declaration and assignment.
globalvar b;
b = 2;

// Alternate global variable declaration and assignment.
global.c = 10;

// Instanced variable assignment through the use of "self".
self.x = 10;

/* Instanced variable assignment without the use of "self".
   Works identically to using "self". */
y = 10;

Built-in variables

Some global variables and instanced variables are already provided by GameMaker: Studio for each game and object. Variables such as x, sprite_index, and image_speed are examples of built-in instanced variables. Meanwhile, some built-in variables are also global, such as health, score, and lives. The use of these in a game is really up to personal preference, but their appropriate names do make them easier to remember. When any type of built-in variable is used in scripting, it will appear in a different color, the default being a light, pinkish red. All built-in variables are documented in GameMaker: Studio's help contents, which can be accessed by navigating to Help | Contents... | Reference or by pressing F1.

Creating custom constants

Custom constants can be defined by going to Resources | Define Constants... or by pressing Shift + Ctrl + N. In this dialog, first a variable name and then a correlating value are set. By default, constants will appear in the same color as built-in variables when written in GML code. The following screenshot shows this interface with some custom constants created:


Skinning a character

Packt
21 Apr 2014
6 min read
Our world in 5000 AD is incomplete without our mutated human being, Mr. Green. Our Mr. Green is a rigged model, exported from Blender. All famous 3D games, from Counter Strike to World of Warcraft, use skinned models to deliver the most impressive real-world model animations and kinematics. Hence, our learning now has to evolve to load Mr. Green and add the same quality of animation to our game. We will start our study of character animation by discussing the skeleton, which is the base of character animation, upon which a body and its motion are built. Then, we will learn about skinning, how the bones of the skeleton are attached to the vertices, and then understand its animations. In this article, we will cover the basics of a character's skeleton, the basics of skinning, and some aspects of loading a rigged JSON model.

Understanding the basics of a character's skeleton

A character's skeleton is a posable framework of bones. These bones are connected by articulated joints, arranged in a hierarchical data structure. The skeleton is generally not rendered; it is used as an invisible armature to position and orient a character's skin.

The joints are used for relative movement within the skeleton. They are represented by 4 x 4 linear transformation matrices (a combination of rotation, translation, and scale). The character skeleton is set up using only simple rotational joints, as they are sufficient to model the joints of real animals.

Every joint has limited degrees of freedom (DOFs). DOFs are the possible ranges of motion of an object. For instance, an elbow joint has one rotational DOF and a shoulder joint has three DOFs, as the shoulder can rotate along three perpendicular axes. Individual joints usually have one to six DOFs. Refer to http://en.wikipedia.org/wiki/Six_degrees_of_freedom to understand the different degrees of freedom.

A joint local matrix is constructed for each joint. This matrix defines the position and orientation of each joint and is relative to the joint above it in the hierarchy. The local matrices are used to compute the world space matrices of the joints, using the process of forward kinematics. The world space matrix is used to render the attached geometry and is also used for collision detection.

The digital character skeleton is analogous to the real-world skeleton of vertebrates. However, the bones of our digital human character do not have to correspond to the actual bones; it depends on the level of detail of the character you require. For example, you may or may not require cheek bones to animate facial expressions. Skeletons are not just used to animate vertebrates but also mechanical parts such as doors or wheels.

Comprehending the joint hierarchy

The topology of a skeleton is a tree or an open directed graph. The joints are connected in a hierarchical fashion to the selected root joint. The root joint has no parent and is represented in the model JSON file with a parent value of -1. All skeletons are kept as open trees without any closed loops. This restriction, though, does not prevent kinematic loops.

Each node of the tree represents a joint, also called a bone; we use both terms interchangeably. For example, the shoulder is a joint and the upper arm is a bone, but the transformation matrix of both objects is the same. So, mathematically, we would represent them as a single component with three DOFs. The amount of rotation of the shoulder joint will be reflected by the upper arm's bone. The following figure shows a simple robotic bone hierarchy:

Understanding forward kinematics

Kinematics is a mathematical description of a motion without the underlying physical forces. Kinematics describes the position, velocity, and acceleration of an object. We use kinematics to calculate the position of an individual bone of the skeleton structure (the skeleton pose). Hence, we will limit our study to position and orientation. The skeleton is purely a kinematic structure. Forward kinematics is used to compute the world space matrix of each bone from its DOF values. Inverse kinematics is used to calculate the DOF values from the position of the bone in the world.

Let's dive a little deeper into forward kinematics and study a simple case of a bone hierarchy that starts from the shoulder, moves to the elbow, and finally to the wrist. Each bone/joint has a local transformation matrix, this.modelMatrix. This local matrix is calculated from the bone's position and rotation. Let's say the model matrices of the wrist, elbow, and shoulder are this.modelMatrix_wrist, this.modelMatrix_elbow, and this.modelMatrix_shoulder respectively. The world matrix is the transformation matrix that will be used by shaders as the model matrix, as it denotes the position and rotation in world space.

The world matrix for the wrist will be:

this.worldMatrix_wrist = this.worldMatrix_elbow * this.modelMatrix_wrist

The world matrix for the elbow will be:

this.worldMatrix_elbow = this.worldMatrix_shoulder * this.modelMatrix_elbow

If you look at the preceding equations, you will realize that to calculate the exact location of the wrist in world space, we need to calculate the position of the elbow in world space first. To calculate the position of the elbow, we first need to calculate the position of the shoulder. We need to calculate the world space coordinates of the parent first in order to calculate those of its children. Hence, we use a depth-first tree traversal to traverse the complete skeleton tree, starting from its root node. A depth-first traversal begins by calculating the modelMatrix of the root node and traverses down through each of its children. A child node is visited and subsequently all of its children are traversed. After all of a node's children are visited, control is transferred back to its parent. We calculate the world matrix by concatenating the joint's parent's world matrix and its local matrix. The computation of calculating a local matrix from the DOFs and then its world matrix from the parent's world matrix is defined as forward kinematics.

Let's now define some important terms that we will often use:

Joint DOFs: A movable joint's movement can generally be described by six DOFs (three for position and three for rotation). DOF is a general term:

this.position = vec3.fromValues(x, y, z);
this.quaternion = quat.fromValues(x, y, z, w);
this.scale = vec3.fromValues(1, 1, 1);

We use quaternion rotations to store rotational transformations to avoid issues such as gimbal lock. The quaternion holds the DOF values for rotation around the x, y, and z axes.

Joint offset: Joints have a fixed offset position in the parent node's space. When we skin a joint, we change the position of each joint to match the mesh. This new fixed position acts as a pivot point for the joint's movement. The pivot point of the elbow is at a fixed location relative to the shoulder joint. This position is denoted by a vector position in the joint local matrix and is stored in the m31, m32, and m33 indices of the matrix. The offset matrix also holds the initial rotational values.
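The book's own implementation is JavaScript using the glMatrix types shown above. Purely as an illustrative sketch of the depth-first traversal just described (the joint names and offsets here are made up, and rotations are omitted for brevity), the same forward-kinematics step can be written in a few lines of Python with NumPy:

import numpy as np

class Joint:
    def __init__(self, name, local_matrix):
        self.name = name
        self.local_matrix = local_matrix   # 4 x 4 transform relative to the parent joint
        self.world_matrix = np.eye(4)
        self.children = []

def translation(x, y, z):
    # Homogeneous 4 x 4 translation matrix (stand-in for a full rotation + translation).
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def update_world_matrices(joint, parent_world=np.eye(4)):
    # Forward kinematics: concatenate the parent's world matrix with the local matrix,
    # then recurse depth-first so every child sees an up-to-date parent matrix.
    joint.world_matrix = parent_world @ joint.local_matrix
    for child in joint.children:
        update_world_matrices(child, joint.world_matrix)

# Shoulder -> elbow -> wrist chain with made-up offsets.
shoulder = Joint("shoulder", translation(0.0, 1.4, 0.0))
elbow = Joint("elbow", translation(0.3, 0.0, 0.0))
wrist = Joint("wrist", translation(0.25, 0.0, 0.0))
shoulder.children.append(elbow)
elbow.children.append(wrist)

update_world_matrices(shoulder)
print(wrist.world_matrix[:3, 3])   # world-space position of the wrist: [0.55 1.4  0. ]

Each call multiplies the parent's world matrix by the joint's local matrix before recursing, which is exactly the concatenation order used in the wrist and elbow equations above.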


Monte Carlo Simulation and Options

Packt
21 Apr 2014
10 min read
In this article, we will cover the following topics:

Generating random numbers from a standard normal distribution and a normal distribution
Generating random numbers from a uniform distribution
A simple application: estimating pi by Monte Carlo simulation
Generating random numbers from a Poisson distribution
Bootstrapping with/without replacement
The lognormal distribution and simulation of stock price movements
Simulating terminal stock prices
Simulating an efficient portfolio and an efficient frontier

Generating random numbers from a standard normal distribution

Normal distributions play a central role in finance. A major reason is that many finance theories, such as option theory and its applications, are based on the assumption that stock returns follow a normal distribution. It is quite common that we need to generate n random numbers from a standard normal distribution. For this purpose, we have the following two lines of code:

>>>import scipy as sp
>>>x=sp.random.standard_normal(size=10)

The basic random numbers in SciPy/NumPy are created by the Mersenne Twister PRNG in numpy.random. The random number generators for the distributions in numpy.random are written in Cython/Pyrex and are pretty fast. To print the first few observations, we use print as follows:

>>>print x[0:5]
[-0.55062594 -0.51338547 -0.04208367 -0.66432268 0.49461661]
>>>

Alternatively, we could use the following code:

>>>import scipy as sp
>>>x=sp.random.normal(size=10)

This program is equivalent to the following one:

>>>import scipy as sp
>>>x=sp.random.normal(0,1,10)

The first input is the mean, the second input is the standard deviation, and the last one is the number of random numbers, that is, the size of the dataset. The default settings for the mean and standard deviation are 0 and 1. We could use the help() function to find out the input variables. To save space, we show only the first few lines:

>>>help(sp.random.normal)
Help on built-in function normal:
normal(...)
    normal(loc=0.0, scale=1.0, size=None)
    Drawing random samples from a normal (Gaussian) distribution

The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently, is often called the bell curve because of its characteristic shape; refer to the following graph.

Again, the density function for a standard normal distribution is defined as follows:

f(x) = exp(-x²/2) / √(2π)    (1)

Generating random numbers with a seed

Sometimes, we like to produce the same random numbers repeatedly. For example, when a professor is explaining how to estimate the mean, standard deviation, skewness, and kurtosis of five random numbers, it is a good idea for students to be able to generate exactly the same values as their instructor. Another example would be that when we are debugging our Python program to simulate a stock's movements, we might prefer to have the same intermediate numbers. For such cases, we use the seed() function as follows:

>>>import scipy as sp
>>>sp.random.seed(12345)
>>>x=sp.random.normal(0,1,20)
>>>print x[0:5]
[-0.20470766 0.47894334 -0.51943872 -0.5557303 1.96578057]
>>>

In this program, we use 12345 as our seed. The value of the seed itself is not important. The key is that the same seed leads to the same random values.
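As a quick illustration of that point (this snippet is not from the book; it simply re-seeds NumPy's generator directly), the same seed reproduces exactly the same draws:

import numpy as np

np.random.seed(12345)
first = np.random.normal(0, 1, 5)

np.random.seed(12345)                  # re-seed with the same value
second = np.random.normal(0, 1, 5)

print(np.array_equal(first, second))   # True: the two sequences are identical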
Generating n random numbers from a normal distribution

To generate n random numbers from a normal distribution, we have the following code:

>>>import scipy as sp
>>>sp.random.seed(12345)
>>>x=sp.random.normal(0.05,0.1,50)
>>>print x[0:5]
[ 0.02952923 0.09789433 -0.00194387 -0.00557303 0.24657806]
>>>

The difference between this program and the previous one is that the mean is 0.05 instead of 0, while the standard deviation is 0.1 instead of 1. The density of a normal distribution is defined by the following equation, where μ is the mean and σ is the standard deviation. Obviously, the standard normal distribution is just a special case of the normal distribution:

f(x) = exp(-(x-μ)²/(2σ²)) / (σ√(2π))    (2)

Histogram for a normal distribution

A histogram is used intensively in the process of analyzing the properties of datasets. To generate a histogram for a set of random values drawn from a normal distribution with a specified mean and standard deviation, we have the following code:

>>>import scipy as sp
>>>import matplotlib.pyplot as plt
>>>sp.random.seed(12345)
>>>x=sp.random.normal(0.08,0.2,1000)
>>>plt.hist(x, 15, normed=True)
>>>plt.show()

The resultant graph is presented as follows:

Graphical presentation of a lognormal distribution

When returns follow a normal distribution, prices follow a lognormal distribution. The definition of a lognormal distribution is as follows:

f(x) = exp(-(ln x - μ)²/(2σ²)) / (xσ√(2π)), for x > 0    (3)

The following code shows three different lognormal distributions with three pairs of parameters: (0, 0.25), (0, 0.5), and (0, 1.0). The first parameter is the mean (μ), while the second one is the standard deviation (σ):

import scipy.stats as sp
import numpy as np
import matplotlib.pyplot as plt
x=np.linspace(0.001,3,200)     # start just above zero to avoid log(0)
mu=0
sigma0=[0.25,0.5,1]
color=['blue','red','green']
target=[(1.2,1.3),(1.7,0.4),(0.18,0.7)]
start=[(1.8,1.4),(1.9,0.6),(0.18,1.6)]
for i in range(len(sigma0)):
    sigma=sigma0[i]
    y=1/(x*sigma*np.sqrt(2*np.pi))*np.exp(-(np.log(x)-mu)**2/(2*sigma*sigma))
    plt.annotate('mu='+str(mu)+', sigma='+str(sigma),xy=target[i],
        xytext=start[i], arrowprops=dict(facecolor=color[i],shrink=0.01),)
    plt.plot(x,y,color[i])
plt.title('Lognormal distribution')
plt.xlabel('x')
plt.ylabel('lognormal density distribution')
plt.show()

The corresponding three graphs are put together to illustrate their similarities and differences:

Generating random numbers from a uniform distribution

When we plan to randomly choose m stocks from n available stocks, we can draw a set of random numbers from a uniform distribution. To generate 10 random numbers between 1 and 100 from a uniform distribution, we have the following code. To guarantee that we generate the same set of random numbers, we use the seed() function as follows:

>>>import scipy as sp
>>>sp.random.seed(123345)
>>>x=sp.random.uniform(low=1,high=100,size=10)

Again, low, high, and size are the keywords for the three input variables. The first one specifies the minimum, the second one specifies the high end, and size gives the number of random numbers we intend to generate. The first five numbers are shown as follows:

>>>print x[0:5]
[ 30.32749021 20.58006409 2.43703988 76.15661293 75.06929084]
>>>

Using simulation to estimate the pi value

It is a good exercise to estimate pi by Monte Carlo simulation. Let's draw a square with 2R as its side. If we put the largest circle inside the square, its radius will be R. In other words, the areas of those two shapes have the following equations:

S_square = (2R)² = 4R²    (4)
S_circle = πR²    (5)

By dividing equation (4) by equation (5), we have the following result:

S_square / S_circle = 4/π

In other words, the value of pi will be 4 * S_circle / S_square. When running the simulation, we generate n pairs of x and y from a uniform distribution with a range of zero to 0.5. Then we estimate a distance that is the square root of the sum of the squared x and y, that is, d = √(x² + y²). Obviously, when d is less than 0.5 (the value of R), the point falls inside the circle. We can imagine throwing a dart that falls into the circle. The value of pi will take the following form:

π ≈ 4 * (number of darts falling inside the circle) / n    (6)

The following graph illustrates these random points within a circle and within a square. The Python program to estimate the value of pi is presented as follows; note that the code draws x and y between 0 and 1 and uses R = 1, which is equivalent to the 0.5 setup described above:

import scipy as sp
import numpy as np
n=100000
x=sp.random.uniform(low=0,high=1,size=n)
y=sp.random.uniform(low=0,high=1,size=n)
dist=np.sqrt(x**2+y**2)
in_circle=dist[dist<=1]
our_pi=len(in_circle)*4./n
print ('pi=',our_pi)
print('error (%)=', (our_pi-np.pi)/np.pi)

The estimated pi value changes whenever we run the preceding code, as shown in the following output, and the accuracy of the estimate depends on the number of trials, that is, n:

('pi=', 3.15)
('error (%)=', 0.0026761414789406262)
>>>

Generating random numbers from a Poisson distribution

To investigate the impact of private information, Easley, Kiefer, O'Hara, and Paperman (1996) designed a probability of informed trading (PIN) measure that is derived from the daily number of buyer-initiated trades and the number of seller-initiated trades. The fundamental assumption of their model is that order arrivals follow a Poisson distribution. The following code shows how to generate n random numbers from a Poisson distribution:

import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
x=sp.random.poisson(lam=1, size=100)
#plt.plot(x,'o')
a = 5. # shape
n = 1000
s = np.random.power(a, n)   # the remainder of the snippet plots a power-distribution histogram
count, bins, ignored = plt.hist(s, bins=30)
x = np.linspace(0, 1, 100)
y = a*x**(a-1.)
normed_y = n*np.diff(bins)[0]*y
plt.plot(x, normed_y)
plt.show()

Selecting m stocks randomly from n given stocks

Based on the preceding program, we could easily choose 20 stocks from 500 available securities. This is an important step if we intend to investigate the impact of the number of randomly selected stocks on portfolio volatility, as shown in the following code:

import scipy as sp
import numpy as np
n_stocks_available=500
n_stocks=20
x=sp.random.uniform(low=1,high=n_stocks_available,size=n_stocks)
y=[]
for i in range(n_stocks):
    y.append(int(x[i]))
#print y
final=np.unique(y)
print final
print len(final)

In the preceding program, we select 20 numbers from 500 numbers. Since we have to choose integers, we might end up with fewer than 20 values; that is, some integers appear more than once after we convert real numbers into integers. One solution is to pick more than we need and then choose the first 20 integers. An alternative is to use the randrange() and randint() functions (a short sketch of this approach appears at the end of this section). In the next program, we choose n stocks from all available stocks. First, we download a dataset from http://canisius.edu/~yany/yanMonthly.pickle:

import scipy as sp
import numpy as np
import pandas as pd

n_stocks=10
x=pd.read_pickle('c:/temp/yanMonthly.pickle')   # the dataset is a pandas pickle
x2=np.unique(np.array(x.index))
x3=x2[x2<'ZZZZ']               # remove all market indices (their IDs start with ^)
sp.random.seed(1234567)
nonStocks=['GOLDPRICE','HML','SMB','Mkt_Rf','Rf','Russ3000E_D','US_DEBT',
           'Russ3000E_X','US_GDP2009dollar','US_GDP2013dollar']
x4=list(x3)
for i in range(len(nonStocks)):
    x4.remove(nonStocks[i])
k=sp.random.uniform(low=1,high=len(x4),size=n_stocks)
y,s=[],[]
for i in range(n_stocks):
    index=int(k[i])
    y.append(index)
    s.append(x4[index])
final=np.unique(y)
print final
print s

In the preceding program, we remove non-stock data items. These non-stock items are part of the data items. First, we load a dataset called yanMonthly.pickle that includes over 200 stocks, the gold price, GDP, the unemployment rate, SMB (Small Minus Big), HML (High Minus Low), the risk-free rate, the price rate, the market excess rate, and Russell indices. The .pickle extension means that the dataset was saved with pandas. Since x.index presents the indices of all observations, we need to use the unique() function to select all unique IDs. Since we only consider stocks to form our portfolio, we have to remove all market indices and other non-stock securities, such as HML and US_DEBT. Because all stock market indices start with a caret (^), we use "less than ZZZZ" to remove them. For the other IDs that are between A and Z, we have to remove them one after another. For this purpose, we use the remove() function available for a list variable. The final output is shown as follows:
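Returning to the duplicate-index problem noted earlier: truncating uniform draws to integers can produce repeated values. A simpler alternative, sketched here with Python's standard random module (the text mentions randrange() and randint(); random.sample is a related convenience, and this snippet is only illustrative, not the book's code), is to sample the indices without replacement so that exactly m distinct stocks are returned:

import random

n_stocks_available = 500
n_stocks = 20

random.seed(12345)
# sample() draws without replacement, so all 20 picks are distinct.
picked = random.sample(range(n_stocks_available), n_stocks)
print(picked)
print(len(picked))   # always exactly 20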


Building Mobile Apps

Packt
21 Apr 2014
6 min read
As mobile apps get closer to becoming the de facto channel for doing business on the move, more and more software vendors are providing easy-to-use mobile app development platforms for developers to build powerful HTML5, CSS, and JavaScript based apps. Most of these mobile app development platforms provide the ability to build native, web, and hybrid apps. There are several very feature-rich and popular mobile app development toolkits available in the market today. Some of them worth mentioning are:

Appery (http://appery.io)
AppBuilder (http://www.telerik.com/appbuilder)
PhoneGap (http://phonegap.com/)
AppMachine (http://www.appmachine.com/)
AppMakr (http://www.appmakr.com/) (AppMakr is currently not starting new apps on their existing legacy platform. Any customers with existing apps still have full access to the editor and their apps.)
Codiqa (https://codiqa.com)
Conduit (http://mobile.conduit.com/)
Apache Cordova (http://cordova.apache.org/)

And there are more. This is only a partial list of the amazing tools currently in the market for building and deploying mobile apps quickly. The Heroku platform integrates with the Appery.io (http://appery.io) mobile app development platform to provide a seamless app development experience.

With the Appery.io mobile app development platform, the process of developing a mobile app is very straightforward. You build the user interface (UI) of your app by dragging and dropping from an available palette. The palette contains a rich set of user interface controls. You create the navigation flow between the different screens of the app and link the actions to be taken when certain events occur, such as clicking a button. Voila! You are done. You save the app and test it right there using the Test menu option. Once you are done with testing the app, you can host it using Appery's own hosting service or the Heroku hosting service. Mobile app development was never this easy.

Introducing Appery.io

Appery.io (http://www.appery.io) is a cloud-based mobile app development platform. With Appery.io, you can build powerful mobile apps leveraging easy-to-use drag-and-drop tools combined with the ability to use client-side JavaScript to provide custom functionality. Appery.io enables you to create real-world mobile apps using built-in support for backend data stores, push notifications, and server code, besides plugins to consume third-party REST APIs and help you integrate with a plethora of external services. Appery.io is an innovative and intuitive way to develop mobile apps for any device, be it iOS, Windows, or Android.

Appery.io takes enterprise-level data integration to the next level by exposing your enterprise data to mobile apps in a secure and straightforward way. It uses Exadel's (Appery.io's parent company) RESTExpress API to enable sharing your enterprise data with mobile apps through a REST interface. Appery.io's mobile UI development framework provides a rich toolkit to design the UI using the many types of visual components required for mobile apps, including Google Maps, Vimeo, and YouTube integration. You can build really powerful mobile apps and deploy them effortlessly using the drag-and-drop functionality inside the Appery.io app builder.

What is of particular interest to Heroku developers is Appery.io's integration with mobile backend services, with the option to deploy on the Heroku platform at the click of a button. This is a powerful feature wherein you do not need to install any software on your local machine and can build and deploy real-world mobile apps on cloud-based platforms such as Appery.io and Heroku. In this section, we create a simple mobile app and deploy it on Heroku. In doing so, we will also learn:

How to create a mobile UI form
How to configure your backend services (REST or database)
How to integrate your UI with backend services
How to deploy the app to Heroku
How to test your mobile app

We will also review the salient features of the Appery.io mobile app development platform, focus on the ease of development of these apps, and see how one could easily leverage web services to deploy apps and consume data from any system.

Getting Appery.io

The cool thing about Appery.io is that it is a cloud-based mobile app development toolkit that can be accessed from any popular web browser. To get started, create an account at http://appery.io and you are all set. You can sign up for a basic starter version, which provides the ability to develop one app per account, and go all the way up to the paid Premium and Enterprise grade subscriptions.

Introducing the Appery.io app builder

The Appery.io app builder is a cloud-based mobile application development kit that can be used to build mobile apps for any platform. The Appery.io app builder consists of intuitive tooling and a rich controls palette to help developers drag and drop controls onto the device and design robust mobile apps. The Appery.io app builder has the following sections:

Device layout section: This section contains the mock layout of the device onto which the developer can drag and drop visual controls to create a user interface.
Palette: Contains a rich list of visual controls such as buttons, text boxes, Google Map controls, and more that developers can use to build the user experience.
Project explorer: This section consists of many things, including project files, application-level settings/configuration, available themes for the device, custom components, available CSS and JavaScript code, templates, pop-ups, and one of the key elements: the available backend services.
Key menu options: Save and Test for the app being designed.
Page properties: This section consists of the design-level properties for the page being designed. Modifying these properties changes the user interface labels or the underlying properties of the page elements.
Events: This is another very important section of the app builder that contains the event-to-action associations for the various elements of the page. For example, it can contain the action to be taken when a click event happens on a button on this page.

The following Appery.io app builder screenshot highlights the various sections of the rich toolkit available for mobile app developers to build apps quickly:

Creating your first Appery.io mobile app

Building a mobile app is quite straightforward using Appery.io's feature-rich app development platform. To get started, perform the following steps:

1. Create an Appery.io account at http://appery.io and log in using valid credentials.
2. Click on the Create new app link in the left section of your Appery.io account page.
3. Enter the new app name, for example, herokumobile app, and click on Create.
4. Enter the name of the first/launch page of your mobile app and click on Create page.

This creates the new Appery.io app and points the user to the Appery.io app builder to design the start page of the new mobile app.


Ink Slingers

Packt
21 Apr 2014
17 min read
The role of inking

Inking, sometimes called rendering or embellishment, was used to make the pencil sketches the artist drew easier to reproduce using the photographic reproduction and presses that were used to make comics back in the 30s and 40s. This was because comics were printed using presses that could only print a limited number of colors and couldn't print grayscale or full color.

Adding inks to a penciled page is the next step in creating what the reader will see in our comic. This is where we make our drawings vivid, with clear lines and pools of shadows. By recreating the pencils with only solid black lines and fills, we end up with a page that is ready for toning or coloring. One of the most important things to keep in mind while inking is that if the inks aren't solid, clear, and understandable, we cannot expect to make them better with colors. Without good inks, we won't have a good page. We can fix some penciling errors with good inks; however, once we commit to our inks, we must be happy with them.

Another important thing to keep in mind while inking is that we're interpreting 3D objects using black lines. Any shading at this point is done by varying the thickness of lines, crosshatching, and feathering. If those terms sound unfamiliar, we'll get to their definitions as we proceed with the exercises.

It's all about the lines

Before we examine the kinds of tools, we need to have an idea of the kinds of lines we need to create. When we are penciling the page, we are interested in composition, lighting, perspective, and anatomy. When we're inking, we are interested in how the lines work in the context of the rest of the drawing, as shown in the following figure:

In the preceding figure, the right-hand sphere was rendered using the default marker tool. It creates dead-weight lines. This means that the thickness of the line doesn't vary unless we go back over the lines. The sphere on the left was inked using the default G-Pen. With the pen tool, we can vary the pressure on the stylus to get lines of varying thickness. This results in a feathering effect, as shown in the following figure:

This looks simple and easy to do, but it requires much practice. The example shown here took less than a minute to do. Feathering is a technique that we can do quickly and accurately once our hand becomes confident. Crosshatching is another tool we can use while inking. The preceding figure shows an example of crosshatching. Here, we can also provide the illusion of one color blending into another. Even though the example is made up of lines that are at a 90 degree angle from each other, we can alter the angle to give us different looks. In each of these two examples, we can see how the thickness of the lines and the spacing of the lines give us a graphical illusion of shading.

For those of us who are wondering what we meant when we talked about blending one color into another, it's simple. Instead of having only black ink, as in analog work, we can use black, white, and even transparent colors in our digital inks. By using color effects, we can give an inked layer a uniform color instead of black. Now, before we get into more esoteric aspects of inking, we need to examine the tools we have at our disposal within Manga Studio.

Inking tools in Manga Studio

In the following INKING TOOLS diagram, there are explanations of the tools we'll be focusing on in this article:

At this point in our learning about Manga Studio, we should know how to make the tool size larger or smaller and change to different subtools and main tools. In the downloaded files, there's a file named ch 07 inking tools.lip that we'll use in the Time for Action exercises. If you want, use your own file. Just keep in mind that this is all practice; we will be deleting layers and experimenting with making different kinds of lines.

In the next part, we'll cover the various marking tools in Manga Studio and learn how to create new subtools of each kind. The new tools will be compared with other tools, and the differences and similarities will be examined. After creating new tools, we'll render the objects in the ch 07 inking tools.lip file. First, we will outline the objects, and next we'll render in shading for the light source (from the upper-right side, just above our head). As with penciling, it's assumed that you are familiar with the basics of the subject at hand. Now let's get on to the tools!

Markers

We've all used these tools in real (analog) life. From Sharpies to Microns, these marvels of marking have aided artists in many ways, and hindered them too. For our purposes, markers have a uniform width. They do not get thicker or thinner no matter how hard we bear down on them with our stylus. We'll be using tools such as the technical pen. These are the pens that have a metal barrel and are refilled with ink cartridges. In our digital world, the ink never needs to be refilled and the tip never dries out. Even though markers give us a dead-weight line, they do have uses that are invaluable. Backgrounds, machines, and many other objects are best drawn first with a marker. We can go back and add shading, thicken the lines, or break the lines after we lay down the basic linework.

Time for action – making a technical pen subtool

In this section, we'll create a technical pen subtool that will be able to step through predetermined sizes and give us consistent widths quickly. Let's get started and perform the following steps:

1. Choose the Marker tool. In the Sub tool palette menu, choose the Marker pen.
2. Go to the Sub tool palette menu. Choose Duplicate sub tool and name it Technical Pen in the dialog box that pops up. Then click OK.
3. Click on the Wrench icon to call up the Tool Settings palette.
4. Make the following settings in the categories that follow. If a category is not mentioned, it should remain at the default:
   Brush Size: Set the brush size to 5
   Ink: Set the opacity to 100 and set the combine mode to normal
   Anti-Aliasing: Set this to None (the first or leftmost dot)
   Brush tip: Set the shape to circle
   Hardness, Thickness, and Brush Density: Set these to 100
   Direction: Set this to zero
   Correction: Make corner pointed should be checked
   Stabilization: Set this to 6; the according-to-speed option should be unchecked
   Possible to snap: Make sure that this is checked
   We will want to click on the eye-con button to show Possible to snap in the Tool Property palette. This way we can turn off snapping for the tool and not for the entire document.
5. Click on the Register all Settings to Default button on the Tool Settings palette.
6. In the Tool Settings palette, click on the tall rectangular button. Choose Settings of Indicator... from the pop-up menu.
7. We have five positions in which we can enter specific sizes. Let's start by entering the following in each of the indicator entry boxes from left to right: 5, 10, 20, 30, and 50 in the last box. Once that's done, click on the OK button.
8. The slider will be replaced by the selection indicator. Each square contains the respective value we just entered. For example, the first square on the left will set our pen size to 5, the third one to 20, and the fifth one to 50. Notice that when we click on the squares to the right of the one we pressed last, the squares get filled with a darker gray color if they are further to the right.

What just happened?

We duplicated a marker pen, named it Technical Pen, and set it up with specific sizes, which we choose by clicking on a selection indicator. This pen will ignore pressure from the stylus, so the line thickness will be consistent. We made sure that it had a stabilization setting to help smooth out some of the jitter that our hand can introduce.

It's nice to have the indicator for the technical pen; however, we must realize that this feature is tool-wide, which means that it's either on for all tools or off for all tools. The settings we entered are also for all tools. So, we must decide whether we want to have the indicators on for every tool or not. Once we're finished with the technical pen, we'll set the indicators back to our friend the slider.

Indicators, as implemented in this version (5.0.2) of Manga Studio, are part of what I think of as an all-or-nothing feature: a feature that would be great for a few tools but affects all tools. The indicator values we entered are used for all the tools when we choose indicators, a one-size-fits-all kind of thing. It's a bother to turn on and off because we have to right-click on each tool setting category that uses it and choose Show Slider on the contextual menu. It can't be set via a keyboard command or by an action. It's included here for those who may like it. I think that indicators are poorly thought out and need to be active and customizable on a per-tool basis, maybe in a future update. Now we're going to test out this new technical pen!

Time for action – inking basic shapes with the technical pen

With a new pen tool, let's see what we can do. Open a file to ink, create a new layer, and then begin inking by performing the following steps:

1. Open up the ch 07 inking tools.lip file. We'll be working with a ruler layer, so make sure Snap to rulers is turned on.
2. On the Layers palette, choose the Outlines layer. This is the layer on which we will be creating our outlines. Shading will be done on the layer below it. This way, we can easily correct wayward lines without erasing our outlines. While you're working on the inks for a page, don't hesitate to use more than one layer. It may seem confusing at first, but once we get used to it, it will make our digital inking go much faster. For instance, we can have one layer for background inks and another for foreground inks. This makes cleaning up overlapping lines so much easier. When we're finished with our inks, we can combine the different layers into a single layer.
3. We will treat each of these four objects as foreground objects, so we'll want a thick outline on them. Choose the 30 or 50 size and outline each one. Be careful with the cube; it's made from two separate rulers and we don't want to have any interior lines inked in yet.
4. Keeping in mind that our light source is coming from the upper-left side, we want to have thinner interior lines for the edges on the top two objects: the cube and the pyramid. We'll use the size 30 to outline the side of the cube that's farthest from the light and the size 20 for the horizontal edge and the edge of the pyramid.
5. In the image for the finished outlines of the cube and pyramid, notice how sharp the corners are. That's because we turned on the Make Corners Pointed option in the Correction category of the Tool Settings palette.
6. We have a few choices for inking the cube. We could render in the shadowed areas by hand. That would be okay, but since this is a mechanical object, let's use a ruler to draw straight lines.
7. Select the Figure tool (it's the second tool below the paint bucket tool). In the subtool palette, choose the Special Ruler subtool.
8. In the Tool Property palette, select the Parallel Line Ruler option from the top drop-down menu.
9. In the Layer palette, select the shading layer, as we're going to do our shading inks on that layer.
10. On the canvas, click and drag the ruler to adjust it to the angle of the top-right side of the cube (the edge that's going away from us).
11. Choose the technical pen tool. Make sure that Snap to special ruler is on in the command bar and snapping is on for the tool itself. Now, lay down some of the horizontal receding lines. We can vary the distance between each line a bit.
12. Once we are happy with the horizontal receding lines, we can turn on the parallel ruler for the vertical lines. Choose the Object Selection tool and click on the ruler, as shown in the following figure:

As shown in the preceding figure, we can now click and drag a hollow dot to adjust the angle of the ruler. Holding down the Shift key while dragging will snap the ruler to the angles we've set up in the Preferences menu. While adjusting the ruler, be careful that we don't accidentally click on the dreaded diamond! This diamond will toggle the snap of the ruler. The only indication of it being on or off is the lines turning from purple to green. So, if the marking tool has snapping on and we have snapping on in the command bar but we aren't snapping to the ruler, then click once on the diamond with the object selection tool, switch back to the marking tool, and see if we're getting the snapping happiness we want. If the diamonds or the dots aren't visible, then zoom out and see if they are off the canvas area of the document. Use the Object tool to move the entire ruler so the dots are within the canvas area.

Now, we can lay down the lines at the sides and front of the cube. Use the same ruler and ink for the pyramid and cone. The sphere will be done freehand, because straight lines will flatten out the sphere and make it look like a disc.

When inking, we should be aware that the minds behind the eyes that gaze on our work are unconscious artists and will fill in the blanks. That's why the vertical lines in the figure are interrupted. This gives the impression of a vague reflection of the cube's environment, which adds interest to the final work.

What just happened?

We used rulers as an inking aid for the shape outlines and for inking the interior shadowed areas. By using interrupted lines, we can make our shading look more random and interesting.

Have a go hero

Before we leave markers, there's an issue with the sphere. One thing that technical pens excel at is rendering mechanical objects. Unless our hand is supernaturally steady, our poor sphere may not look too good. There needs to be a way to create a curved ruler that would radiate from a single point. There is, and it's in the same menu where we got the parallel rulers: the special rulers. It's named Focus Curve. We can select it from the Special Ruler menu. Then, in the shading layer, we click on the area where the sphere is highlighted (the brightest area). We also click on a curve of the sphere. Usually two more clicks is enough to get a good curve. Press Enter to commit to the ruler. A focused type of special ruler has a single point that is the origin point for all the lines that are drawn. Now, we will use what we know about interrupted lines and crosshatching to ink that puppy. The difference between the focus curve ruler and the parallel line ruler is that with the focus curve ruler we can have curving lines coming from a point, while the parallel line ruler only allows straight lines.

Pens

These are the workhorses of inking. With a practiced (steady) hand, there's nothing we cannot ink with these tools. Unlike the metal and hair pens and brushes of the analog world, we don't have to worry about the sharpness of the points or ink clogging up the works. However, we need to learn to work with our graphics tablets and stylus pens.

There are some things we need to know while using our tablets and stylus. First, go to your tablet maker's website and make sure your drivers are current. Many times, an odd behavior can be attributed to an out-of-date driver. That being said, always have the previous installation file of the driver at hand. Sometimes, an update can cause more problems than it solves. Having the previous version at hand is a great insurance policy.

As for the stylus, I use a Wacom Intuos tablet and it's served me very well over the years. The biggest upgrade I did for it was to purchase a second stylus pen and extra nibs for it. While the solid plastic nibs that come with the tablets are fine to use, I purchased the felt nibs and the stroke nibs. The felt nibs add a bit of friction to the stylus without having to put a sheet of paper on the tablet itself. They tend to get blunt quickly, but they're still usable. I find these excellent to use with the marker tools. Because of the friction, they give a smoother line without adding extra correction to the pen in Manga Studio. The stroke nibs have a pointed end and a spring in the middle of the nib. That spring adds a bit of resistance (almost feeling like a real dip pen nib) to pressure, and the smoothness allows for long strokes that can be very nice looking. Neither of these nibs seems to interfere with the pressure sensitivity of the stylus. All stylus nibs will wear out eventually, so don't hesitate to replace them as you would replace a dip pen that doesn't have a sharp point. I bought a large quantity of nibs (the felt, stroke, and default plastic) a few years ago, and that's been good, since I've had many 2 a.m. nib changes that saved me a lot of waiting. If you've lost the nib changer that came with your tablet, a good pair of tweezers will suffice just as well. In fact, I like them better.

Our plan for the pens is twofold. First, we'll set up the basic settings for the pen in Manga Studio, and then we'll adjust the pressure settings for the specific pen. As mentioned a few times earlier, pressure settings are very personal. Everyone's style and method of inking is unique. Consider the settings we'll be looking at here as a starting point. Never be afraid to change the pressure settings. If one doesn't work, just reset the pen to the initial state and try again. There is nothing wrong with experimenting with settings. Once we create a pen, we'll ink in the four shapes using the pen tool. We'll experiment (there's that word again!) with using transparency to ink in solid blacks for a unique look.

Before we embark on our adventure in inking, take a moment to visit http://smudgeguard.com. I've used these for years and still have my initial two pairs. I use one for analog and the other for digital. They are fingerless gloves, except for the pinky and ring finger. For analog penciling and inking, they protect the paper from the oils on our skin and keep our hand from getting all smudged up with graphite from the pencil lead. If we are using one with the Wacom tablet or a display tablet, our hand won't stick to the plastic of the tablet's surface. My hand just glides across the tablet. They've made a big difference in my digital inking. I didn't want to risk washing them in a machine, so I hand wash them every couple of months or so, and they've held up amazingly well. In a pinch, a bandana wrapped around the palm and the heel of the hand can also work to keep the heel of the palm from sticking to the plastic surface.


Creating a Responsive Magento Theme with Bootstrap 3

Packt
21 Apr 2014
13 min read
In this article, by Andrea Saccà, the author of Mastering Magento Theme Design, we will learn how to integrate the Bootstrap 3 framework and how to develop the main theme blocks. The following topics will be covered in this article:

An introduction to Bootstrap
Downloading Bootstrap (the current Version 3.1.1)
Downloading and including jQuery
Integrating the files into the theme
Defining the main layout design template

An introduction to Bootstrap 3

Bootstrap is a sleek, intuitive, powerful, mobile-first frontend framework that enables faster and easier web development, as shown in the following screenshot:

Bootstrap 3 is the most popular frontend framework used to create mobile-first websites. It includes a free collection of buttons, CSS components, and JavaScript to create websites or web applications; it was created by the Twitter team.

Downloading Bootstrap (the current Version 3.1.1)

First, you need to download the latest version of Bootstrap. The current version is 3.1.1. You can download the framework from http://getbootstrap.com/. The fastest way to download Bootstrap 3 is to download the precompiled and minified versions of the CSS, JavaScript, and fonts. So, click on the Download Bootstrap button and unzip the file you downloaded. Once the archive is unzipped, you will see the following files:

We need to take only the minified versions of the files, that is, bootstrap.min.css from css, bootstrap.min.js from js, and all the files from fonts. For development, you can use bootstrap.css so that you can inspect the code and learn, and then switch to bootstrap.min.css when you go live. Copy all the selected files (the CSS files inside the css folder, the .js files inside the js folder, and the font files inside the fonts folder) into the theme skin folder at skin/frontend/bookstore/default.

Downloading and including jQuery

Bootstrap depends on jQuery, so we have to download and include it before including bootstrap.min.js. So, download jQuery from http://jquery.com/download/. The preceding URL takes us to the following screenshot:

We will use the compressed production Version 1.10.2. Once you download jQuery, rename the file to jquery.min.js and copy it into the js skin folder at skin/frontend/bookstore/default/js/. In the same folder, also create the jquery.scripts.js file, where we will insert our custom scripts.

Magento uses Prototype as its main JavaScript library. To make jQuery work correctly without conflicts, you need to insert the no conflict code in the jquery.scripts.js file, as shown in the following code:

// This is important!
jQuery.noConflict();
jQuery(document).ready(function() {
    // Insert your scripts here
});

The following is a quick recap of the CSS and JS files:

Integrating the files into the theme

Now that we have all the files, we will see how to integrate them into the theme. To declare the new JavaScript and CSS files, we have to insert the action in the local.xml file located at app/design/frontend/bookstore/default/layout. In particular, the file declaration needs to be done in the default handle to make it accessible to the whole theme. The default handle is defined by the following tags:

<default>
. . .
</default>

The action to insert the JavaScript and CSS files must be placed inside the reference head block. So, open the local.xml file and first create the following block that will define the reference:

<reference name="head">
…
</reference>

Declaring the .js files in local.xml

The action tag used to declare a new .js file located in the skin folder is as follows:

<action method="addItem">
    <type>skin_js</type><name>js/myjavascript.js</name>
</action>

In our skin folder, we copied the following three .js files:

jquery.min.js
jquery.scripts.js
bootstrap.min.js

Let's declare them as follows:

<action method="addItem">
    <type>skin_js</type><name>js/jquery.min.js</name>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/bootstrap.min.js</name>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/jquery.scripts.js</name>
</action>

Declaring the CSS files in local.xml

The action tag used to declare a new CSS file located in the skin folder is as follows:

<action method="addItem">
    <type>skin_css</type><name>css/mycss.css</name>
</action>

In our skin folder, we have copied the following three .css files:

bootstrap.min.css
styles.css
print.css

So let's declare these files as follows:

<action method="addItem">
    <type>skin_css</type><name>css/bootstrap.min.css</name>
</action>
<action method="addItem">
    <type>skin_css</type><name>css/styles.css</name>
</action>
<action method="addItem">
    <type>skin_css</type><name>css/print.css</name>
</action>

Repeat this action for all the additional CSS files. All the JavaScript and CSS files that you insert into the local.xml file will go after the files declared in the base theme.

Removing and adding the style.css file

By default, the base theme includes a CSS file called styles.css, which is hierarchically placed before bootstrap.min.css. One of the best practices to overwrite the Bootstrap CSS classes in Magento is to remove the default CSS file declared by the base theme of Magento and declare it after Bootstrap's CSS files. Thus, the styles.css file loads after Bootstrap, and all the classes defined in it will overwrite the bootstrap.min.css file. To do this, we need to remove the styles.css file by adding the following action tag in the XML, just before all the CSS declarations we have already made:

<action method="removeItem">
    <type>skin_css</type> <name>css/styles.css</name>
</action>

Hence, we removed the styles.css file and added it again just after adding Bootstrap's CSS file (bootstrap.min.css):

<action method="addItem">
    <type>skin_css</type> <stylesheet>css/styles.css</stylesheet>
</action>

If it seems a little confusing, the following is a quick view of the CSS declarations:

<!-- Removing the styles.css declared in the base theme -->
<action method="removeItem">
    <type>skin_css</type> <name>css/styles.css</name>
</action>
<!-- Adding Bootstrap Css -->
<action method="addItem">
    <type>skin_css</type> <stylesheet>css/bootstrap.min.css</stylesheet>
</action>
<!-- Adding the styles.css again -->
<action method="addItem">
    <type>skin_css</type> <stylesheet>css/styles.css</stylesheet>
</action>

Adding conditional JavaScript code

If you check the Bootstrap documentation, you can see that in the HTML5 boilerplate template, the following conditional JavaScript code is added to make Internet Explorer (IE) HTML5 compliant:

<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js"></script>
<![endif]-->

To integrate them into the theme, we can declare them in the same way as the other script tags, but with conditional parameters. To do this, we need to perform the following steps:

Download the files at https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js and https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js.
Move the downloaded files into the js folder of the theme.
Always integrate JavaScript through the .xml file, but with the conditional parameters as follows:

<action method="addItem">
    <type>skin_js</type><name>js/html5shiv.js</name>
    <params/><if>lt IE 9</if>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/respond.min.js</name>
    <params/><if>lt IE 9</if>
</action>

A quick recap of our local.xml file

Now, after we insert all the JavaScript and CSS files in the .xml file, the final local.xml file should look as follows:

<?xml version="1.0" encoding="UTF-8"?>
<layout version="0.1.0">
    <default translate="label" module="page">
        <reference name="head">
            <!-- Adding Javascripts -->
            <action method="addItem">
                <type>skin_js</type>
                <name>js/jquery.min.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/bootstrap.min.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/jquery.scripts.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/html5shiv.js</name>
                <params/><if>lt IE 9</if>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/respond.min.js</name>
                <params/><if>lt IE 9</if>
            </action>
            <!-- Removing the styles.css -->
            <action method="removeItem">
                <type>skin_css</type><name>css/styles.css</name>
            </action>
            <!-- Adding Bootstrap Css -->
            <action method="addItem">
                <type>skin_css</type>
                <stylesheet>css/bootstrap.min.css</stylesheet>
            </action>
            <!-- Adding the styles.css -->
            <action method="addItem">
                <type>skin_css</type>
                <stylesheet>css/styles.css</stylesheet>
            </action>
        </reference>
    </default>
</layout>

Defining the main layout design template

A quick tip for our theme is to define the main template for the site in the default handle. To do this, we have to define the template into the most important reference, root.
In a few words, the root reference is the block that defines the structure of a page. Let's suppose that we want to use a main structure having two columns with the left sidebar for the theme To change it, we should add the setTemplate action in the root reference as follows: <reference name="root"> <action method="setTemplate"> <template>page/2columns-left.phtml</template> </action> </reference> You have to insert the reference name "root" tag with the action inside the default handle, usually before every other reference. Defining the HTML5 boilerplate for main templates After integrating Bootstrap and jQuery, we have to create our HTML5 page structure for the entire base template. The following are the structure files that are located at app/design/frontend/bookstore/template/page/: 1column.phtml 2columns-left.phtml 2columns-right.phtml 3columns.phtml The Twitter Bootstrap uses scaffolding with containers, a row, and 12 columns. So, its page layout would be as follows: <div class="container"> <div class="row"> <div class="col-md-3"></div> <div class="col-md-9"></div> </div> </div> This structure is very important to create responsive sections of the store. Now we will need to edit the templates to change to HMTL5 and add the Bootstrap scaffolding. Let's look at the following 2columns-left.phtml main template file: <!DOCTYPE HTML> <html><head> <?php echo $this->getChildHtml('head') ?> </head> <body <?php echo $this->getBodyClass()?' class="'.$this->getBodyClass().'"':'' ?>> <?php echo $this->getChildHtml('after_body_start') ?> <?php echo $this->getChildHtml('global_notices') ?> <header> <?php echo $this->getChildHtml('header') ?> </header> <section id="after-header"> <div class="container"> <?php echo $this->getChildHtml('slider') ?> </div> </section> <section id="maincontent"> <div class="container"> <div class="row"> <?php echo $this->getChildHtml('breadcrumbs') ?> <aside class="col-left sidebar col-md-3"> <?php echo $this->getChildHtml('left') ?> </aside> <div class="col-main col-md-9"> <?php echo $this->getChildHtml('global_messages') ?> <?php echo $this->getChildHtml('content') ?> </div> </div> </div> </section> <footer id="footer"> <div class="container"> <?php echo $this->getChildHtml('footer') ?> </div> </footer> <?php echo $this->getChildHtml('before_body_end') ?> <?php echo $this->getAbsoluteFooter() ?> </body> </html> You will notice that I removed the Magento layout classes col-main, col-left, main, and so on, as these are being replaced by the Bootstrap classes. I also added a new section, after-header, because we will need it after we develop the home page slider. Don't forget to replicate this structure on the other template files 1column.phtml, 2columns-right.phtml, and 3columns.phtml, changing the columns as you need. Summary We've seen how to integrate Bootstrap and start the development of a Magento theme with the most famous framework in the world. Bootstrap is very neat, flexible, and modular, and you can use it as you prefer to create your custom theme. However, please keep in mind that it can be a big drawback on the loading time of the page. Following these techniques by adding the JavaScript and CSS classes via XML, you can allow Magento to minify them to speed up the loading time of the site. Resources for Article: Further resources on this subject: Integrating Twitter with Magento [article] Magento : Payment and shipping method [article] Magento: Exploring Themes [article]
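As a reference for replicating the scaffolding, the following is a minimal sketch of how the main content section of 3columns.phtml might look; the 3/6/3 column split and the 'right' child block name are assumptions to adapt to your own layout, while the header, after-header, and footer sections stay the same as in the 2columns-left.phtml file shown earlier.

<section id="maincontent">
  <div class="container">
    <div class="row">
      <?php echo $this->getChildHtml('breadcrumbs') ?>
      <aside class="col-left sidebar col-md-3">
        <?php echo $this->getChildHtml('left') ?>
      </aside>
      <div class="col-main col-md-6">
        <?php echo $this->getChildHtml('global_messages') ?>
        <?php echo $this->getChildHtml('content') ?>
      </div>
      <aside class="col-right sidebar col-md-3">
        <?php echo $this->getChildHtml('right') ?>
      </aside>
    </div>
  </div>
</section>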

Getting Started with Microsoft Dynamics CRM 2013 Marketing

Packt
21 Apr 2014
11 min read
(For more resources related to this topic, see here.) Present day marketing Marketing is the process of engaging with the target customers to communicate the value of a product or service in order to sell them. Marketing is used to attract new customers, nurture prospects, up-sell, and cross-sell to the existing customers. Companies spend at least five percent of their revenue on marketing efforts to maintain the market share. Any company that wants to grow its market share will spend more than 10 percent of its revenue on marketing. In competitive sectors, such as consumer products and services, the marketing expenditure can go up to 50 percent of the revenue, especially with new products and service offerings. Marketing happens over various channels such as print media, radio, television, and the Internet. Successful marketing strategies target specific audience with targeted messages at high frequency, which is very effective. Before the era of the Internet and social networks, buyers were less informed and the seller had better control over the sales pipeline by exploiting this ignorance. However, in this digital age, buyers are able to research beforehand to get enough information about the products they want and, ultimately, they control the process of buying. Social media has turned out to be a great marketing platform for companies, and it hugely impacts a company's reputation with respect to its products and customer services. Marketing with social media is about creating quality content that attracts the attention of the social platform users, who then share the content with their connections creating the same effect as word of mouth marketing. The target customers learn about the company and its products from their trusted connections. The promotional message is received from user's social circle and not the company itself. Social platforms such as Facebook and Twitter are constantly working towards delivering targeted ads to the users based on their interests and behaviors. Business Insider reports that Facebook generates 1 billion in revenue each quarter from advertisements, and Twitter is estimated to have generated more than 500 million in advertisement revenue in the year 2013, which clearly shows the impact of social media on marketing today. Buyers are able to make well-informed decisions and often choose to engage with a salesperson after due diligence. For example, when buying a new mobile phone, most of us know which model to buy, what the specifications are, and what the best price is in the market before we even go to the retailer. Marketing is now a revenue process that is not about broadcasting the product information to all, it's about targeting and nurturing relationships with the prospects from an early stage until they become ready to buy. Marketing is not just throwing bait and expecting people to buy it. The prospects in today's information age learn at their own pace and want to be reached only when they need more information or are ready to buy. Let's now explore some of the challenges of present day marketing. Marketing automation with Microsoft Dynamics CRM 2013 CRM has been passively used for a long time by marketers as a customer data repository and as a mining source for business intelligence reports as they perceived CRM to be a more sales focused customer data application. 
The importance of collaboration between the sales and marketing teams has inevitably evolved CRM into a Revenue Performance Management (RPM) platform with marketing features transforming it into a proactive platform. It can not only record data effectively, but also synthesize, correlate, and display the data with the powerful visualization techniques that can identify patterns, relationships, and new sales opportunities. The common steps involved in marketing with Microsoft Dynamics CRM are shown in the following figure: Important marketing steps in Microsoft Dynamcis CRM Targeting CRM can help in filtering and selecting a well-defined target population using advanced filtering and segmentation features on a clean and up-to-date data repository. It can select the prospects based on demographic data such as purchase history and responses to the previous campaigns, which will profile campaign distribution and significantly improve campaign performance. Microsoft Dynamics CRM 2013 can be easily integrated with other lines of business applications, which can help create intelligent marketing lists in CRM from various sources. For example, it can integrate with ERP and other financial software to segment customers into various marketing lists that target very specific customers and prospects. Work flows and automations supported by most of the CRM platforms can be used to build the logic for segmentation and creation of qualified lists. Targeting with Microsoft Dynamics CRM 2013 can help create groups that are likely to respond to certain types of campaigns and help marketers target customers with right campaign types. Automation and execution The CRM applications can help create, manage, and measure your marketing campaigns. It can track current status, messages sent, and responses received against each member of the list and measure real-time performance with reports and dashboards. The Microsoft Dynamics CRM 2013 systems can be used to plan and establish the budget for a campaign, track the expenses, and measure ROI. The steps involved in campaign execution and message distribution can be defined along with the schedule. Message distribution and response capture can be automated with CRM, which can help in running multiple promotions or performing nurture campaigns. Microsoft Dynamics CRM 2013 can help perform marketing tasks in parallel and track which prospect is responding to which campaign to establish the effectiveness of a campaign. With powerful integration with other marketing automation platforms, marketers can create and customize the message, create landing pages for the campaign within CRM, and then use the built-in e-mail marketing engine to distribute the message, which can embed tracking tokens into the e-mail to capture and relate the incoming response. Integration of CRM with popular e-mail clients avoids switching applications and errors in copying data back and forth. The Microsoft Dynamics CRM 2013 system can capture preferences, advise on the best time and channel to engage with the customer, and provide feedback on products and services. Close looping Close loop marketing is a practice of capturing and relating the responses to marketing messages in order to measure the effectiveness, constantly optimize the process, and refine your message to improve its relevancy. This, in turn, increases the rate of conversion and ROI. 
This also involves an inherent close looping between the marketing and sales teams who collaborate to provide a single view of progression from prospect to sale. The division between the marketing and sales departments leads to lack of visibility and efficiency as they are unable to support each other and cannot measure what works and what does not, eventually reducing the overall efficiency of both teams put together. Close loop marketing has gained great importance because companies have started perceiving the sales and marketing teams together as revenue teams who are jointly responsible to increase revenue. Close looping enables us to compare the outcome of multiple campaigns by multiple factors such as the campaign type, number of responses, type of respondents, and response time. Microsoft Dynamics CRM 2013 can track various parameters such as the types of messages and the frequency of marketing, which can be compared against prior marketing campaigns to identify trends and predict customer behavior. In order to achieve close loop marketing, we need to centralize data. This can bring together the customer's profile, customer's behavioral data, marketing activities, and the sales interactions in one place, so we can use automation to make this data actionable and continuously evolve the marketing processes for the targeting and nurturing of customers. CRM can be the centralized repository for data and can also automate the interactions between the sales and marketing teams. Also, the social CRM features allow users to follow specific records and create connections with unrelated records, which will enable free flow of information between the teams. This elicits great details about the customer and supports actionable use of information to increase revenue efficiently without resorting to marketing myths and assumptions. Revenue management by collaboration The marketing and sales teams together are the revenue team for an organization and are responsible to generate and increase revenue. It is imperative to align the sales and marketing teams for collaboration as the marketing team owns the message and the sales team owns the relationship. Microsoft Dynamics CRM 2013 offers an integrated approach where the lead can be passed from the marketing team to the sales team based on a threshold lead score or other qualification criteria agreed upon by both the sales and marketing teams. This qualification of the lead by the marketing team to the sales team retains all the previous interactions that the marketing team had with the lead, which helps the sales team understand the buyer's interests and motivation better by getting a 360 degree view of the customer. CRM tracks the status, qualification, and activities performed against the lead. This provides a comprehensive history of all the touch points with a lead and brings in transparency and accountability to both the marketing and sales teams. This ensures that only fully qualified leads are sent to sales, resulting in shorter sales cycle and improved efficiency. This strategic collaboration between the sales and marketing teams provides valuable feedback on the effectiveness of the marketing campaigns as well as the sales process. Microsoft Dynamics CRM 2013 can enable interdependence between the marketing and sales teams to share a common revenue goal and receive joint credits for achievement to become the organization's RPM system. 
To summarize, as a marketing automation platform, Microsoft Dynamics CRM 2013 can create marketing campaigns, identify target customers to create marketing lists, associate relevant products and promotional offers to the lists, develop tailored messages, distribute messages by various channels as per schedule, establish campaign budget and ROI forecast, capture the responses and inquiries while routing them to the right team, track progress and outcome of the sale, and report the campaign ROI. CRM has evolved from being a passive data repository and status tracking system to a tactical and strategic decision support system that provides more than just a 360 degree view of the customer, which is not limited to just tracking opportunities, managing account and contacts, and capturing call notes. CRM can be one of the key applications for an active marketing and revenue performance management that can help relationship building with customer by personalized communications and behavioral tracking, enable automation of marketing programs, measure marketing performance and ROI, and connect the sales and marketing teams to let them function as one accountable revenue team. We will now explore the stages involved in the progression of a lead to customer using a lead funnel. Lead scoring and conversion The sales and marketing teams together come up with a methodology for lead scoring to determine if the lead is sales ready. Scoring can be a manual or automated process that takes into consideration the interest shown by the lead in your product to assign points to a prospect and ranks them as cold, warm, and hot.  When the prospect rank reaches an agreed threshold, it is considered to be qualified and is assigned to the sales team after acceptance by sales. The process of lead scoring can vary from company to company, but some of the general criteria used for scoring are the demography, expense budget, company size, industry, role and designation of the lead contact, and profile completeness. In addition, scoring also take into consideration various behavioral characteristics to measure the frequency and quality of engagement, such as the response to e-mail and contacts, number of visits to website, the pages visited, app downloads, and following on social media. Lead scoring is a critical process that helps align the sales and the marketing teams within the organization by passing quality leads to the sales team and making the sales effective. Summary In this article, we saw the present day marketing and common steps involved in marketing with Microsoft Dynamics CRM 2013, such as targeting, automation and execution, close looping and revenue management by collaboration. Resources for Article: Further resources on this subject: Microsoft Dynamics CRM 2011 Overview [Article] Introduction to Reporting in Microsoft Dynamics CRM [Article] Overview of Microsoft Dynamics CRM 2011 [Article]

Bootstrap 3 and other applications

Packt
21 Apr 2014
10 min read
(For more resources related to this topic, see here.) Bootstrap 3 Bootstrap 3, formerly known as Twitter's Bootstrap, is a CSS and JavaScript framework for building application frontends. The third version of Bootstrap has important changes over the earlier versions of the framework. Bootstrap 3 is not compatible with the earlier versions. Bootstrap 3 can be used to build great frontends. You can download the complete framework, including CSS and JavaScript, and start using it right away. Bootstrap also has a grid. The grid of Bootstrap is mobile-first by default and has 12 columns. In fact, Bootstrap defines four grids: the extra-small grid up to 768 pixels (mobile phones), the small grid between 768 and 992 pixels (tablets), the medium grid between 992 and 1200 pixels (desktop), and finally, the large grid of 1200 pixels and above for large desktops. The grid, all other CSS components, and JavaScript plugins are described and well documented at http://getbootstrap.com/. Bootstrap's default theme looks like the following screenshot: Example of a layout built with Bootstrap 3 The time when all Bootstrap websites looked quite similar is far behind us now. Bootstrap will give you all the freedom you need to create innovative designs. There is much more to tell about Bootstrap, but for now, let's get back to Less. Working with Bootstrap's Less files All the CSS code of Bootstrap is written in Less. You can download Bootstrap's Less files and recompile your own version of the CSS. The Less files can be used to customize, extend, and reuse Bootstrap's code. In the following sections, you will learn how to do this. To download the Less files, follow the links at http://getbootstrap.com/ to Bootstrap's GitHub pages at https://github.com/twbs/bootstrap. On this page, choose Download Zip on the right-hand side column. Building a Bootstrap project with Grunt After downloading the files mentioned earlier, you can build a Bootstrap project with Grunt. Grunt is a JavaScript task runner; it can be used for the automation of your processes. Grunt helps you when performing repetitive tasks such as minifying, compiling, unit testing, and linting your code. Grunt runs on node.js and uses npm, which you saw while installing the Less compiler. Node.js is a standalone JavaScript interpreter built on Google's V8 JavaScript runtime, as used in Chrome. Node.js can be used for easily building fast, scalable network applications. When you unzip the files from the downloaded file, you will find Gruntfile.js and package.json among others. The package.json file contains the metadata for projects published as npm modules. The Gruntfile.js file is used to configure or define tasks and load Grunt plugins. The Bootstrap Grunt configuration is a great example to show you how to set up automation testing for projects containing HTML, Less (CSS), and JavaScript. The parts that are interesting for you as a Less developer are mentioned in the following sections. In package.json file, you will find that Bootstrap compiles its Less files with grunt-contrib-less. At the time of writing this article, the grunt-contrib-less plugin compiles Less with less.js Version 1.7. In contrast to Recess (another JavaScript build tool previously used by Bootstrap), grunt-contrib-less also supports source maps. Apart from grunt-contrib-less, Bootstrap also uses grunt-contrib-csslint to check the compiled CSS for syntax errors. 
The grunt-contrib-csslint plugin also helps improve browser compatibility, performance, maintainability, and accessibility. The plugin's rules are based on the principles of object-oriented CSS (http://www.slideshare.net/stubbornella/object-oriented-css). You can find more information by visiting https://github.com/stubbornella/csslint/wiki/Rules. Bootstrap makes heavy use of Less variables, which can be set by the customizer. Whoever has studied the source of Gruntfile.js may very well also find a reference to the BsLessdocParser Grunt task. This Grunt task is used to build Bootstrap's customizer dynamically based on the Less variables used by Bootstrap. Though the process of parsing Less variables to build, for instance, documentation will be very interesting, this task is not discussed here further. This section ends with the part of Gruntfile.js that does the Less compiling. The following code from Gruntfile.js should give you an impression of how this code will look: less: { compileCore: { options: { strictMath: true, sourceMap: true, outputSourceFiles: true, sourceMapURL: '<%= pkg.name %>.css.map', sourceMapFilename: 'dist/css/<%= pkg.name %>.css.map' }, files: { 'dist/css/<%= pkg.name %>.css': 'less/bootstrap.less' } } Last but not least, let's have a look at the basic steps to run Grunt from the command line and build Bootstrap. Grunt will be installed with npm. Npm checks Bootstrap's package.json file and automatically installs the necessary local dependencies listed there. To build Bootstrap with Grunt, you will have to enter the following commands on the command line: > npm install -g grunt-cli > cd /path/to/extracted/files/bootstrap After this, you can compile the CSS and JavaScript by running the following command: > grunt dist This will compile your files into the /dist directory. The > grunt test command will also run the built-in tests. Compiling your Less files Although you can build Bootstrap with Grunt, you don't have to use Grunt. You will find the Less files in a separate directory called /less inside the root /bootstrap directory. The main project file is bootstrap.less; other files will be explained in the next section. You can include bootstrap.less together with less.js into your HTML for the purpose of testing as follows: <link rel="bootstrap/less/bootstrap.less"type="text/css" href="less/styles.less" /> <script type="text/javascript">less = { env: 'development' };</script> <script src = "less.js" type="text/javascript"></script> Of course, you can compile this file server side too as follows: lessc bootstrap.less > bootstrap.css Dive into Bootstrap's Less files Now it's time to look at Bootstrap's Less files in more detail. The /less directory contains a long list of files. You will recognize some files by their names. You have seen files such as variables.less, mixins.less, and normalize.less earlier. Open bootstrap.less to see how the other files are organized. The comments inside bootstrap.less tell you that the Less files are organized by functionality as shown in the following code snippet: // Core variables and mixins // Reset // Core CSS // Components Although Bootstrap is strongly CSS-based, some of the components don't work without the related JavaScript plugins. The navbar component is an example of this. Bootstrap's plugins require jQuery. You can't use the newest 2.x version of jQuery because this version doesn't have support for Internet Explorer 8. To compile your own version of Bootstrap, you have to change the variables defined in variables.less. 
When using the last declaration wins and lazy loading rules, it will be easy to redeclare some variables. Creating a custom button with Less By default, Bootstrap defines seven different buttons, as shown in the following screenshot: The seven different button styles of Bootstrap 3 Please take a look at the following HTML structure of Bootstrap's buttons before you start writing your Less code: <!-- Standard button --> <button type="button" class="btn btn-default">Default</button> A button has two classes. Globally, the first .btn class only provides layout styles, and the second .btn-default class adds the colors. In this example, you will only change the colors, and the button's layout will be kept intact. Open buttons.less in your text editor. In this file, you will find the following Less code for the different buttons: // Alternate buttons // -------------------------------------------------- .btn-default { .button-variant(@btn-default-color; @btn-default-bg; @btn-default-border); } The preceding code makes it clear that you can use the .button-variant() mixin to create your customized buttons. For instance, to define a custom button, you can use the following Less code: // Customized colored button // -------------------------------------------------- .btn-colored { .button-variant(blue;red;green); } In the preceding case, you want to extend Bootstrap with your customized button, add your code to a new file, and call this file custom.less. Appending @import custom.less to the list of components inside bootstrap.less will work well. The disadvantage of doing this will be that you will have to change bootstrap.less again when updating Bootstrap; so, alternatively, you could create a file such as custombootstrap.less which contains the following code: @import "bootstrap.less"; @import "custom.less"; The previous step extends Bootstrap with a custom button; alternatively, you could also change the colors of the default button by redeclaring its variables. To do this, create a new file, custombootstrap.less again, and add the following code into it: @import "bootstrap.less"; //== Buttons // //## For each of Bootstrap's buttons, define text,background and border color. @btn-default-color: blue; @btn-default-bg: red; @btn-default-border: green; In some situations, you will, for instance, need to use the button styles without everything else of Bootstrap. In these situations, you can use the reference keyword with the @import directive. You can use the following Less code to create a Bootstrap button for your project: @import (reference) "bootstrap.less"; .btn:extend(.btn){}; .btn-colored { .button-variant(blue;red;green); } You can see the result of the preceding code by visiting http://localhost/index.html in your browser. Notice that depending on the version of less.js you use, you may find some unexpected classes in the compiled output. Media queries or extended classes sometimes break the referencing in older versions of less.js. Use CSS source maps for debugging When working with large LESS code bases finding the original source can be become complex when viewing your results in the browsers. Since version 1.5 LESS offers support for CSS source maps. CSS source maps enable developer tools to map calls back to their location in original source files. This also works for compressed files. The latest versions of Google's Chrome Developers Tools offer support for these sources files. 
Currently, CSS source map debugging won't work for client-side compiling as used for the examples in this book. The server-side lessc compiler can generate useful CSS source maps. After installing the lessc compiler, you can run:

>> lessc --source-map=styles.css.map styles.less > styles.css

The preceding command will generate two files: styles.css.map and styles.css. The last line of styles.css now contains an extra comment that refers to the source map:

/*# sourceMappingURL=styles.css.map */

In your HTML, you only have to include styles.css as you used to:

<link href="styles.css" rel="stylesheet">

When using CSS source maps as described earlier and inspecting your HTML with Google Chrome's Developer Tools, you will see something like the following screenshot: Inspect source with Google Chrome's Developer Tools and source maps. As you can see, styles now have a reference to their original Less file, such as grid.less, including the line number, which helps you in the process of debugging. The styles.css.map file should be in the same directory as the styles.css file. You don't have to include your Less files in this directory.

Summary

This article has covered the concept of Bootstrap, how to use Bootstrap's Less files, and how the files can be modified to be used according to your convenience.

Resources for Article: Further resources on this subject: Getting Started with Bootstrap [Article] Bootstrap 3.0 is Mobile First [Article] Downloading and setting up Bootstrap [Article]
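If you prefer to automate compilation and source map generation with Grunt instead of running lessc by hand, a task modeled on Bootstrap's own compileCore target could compile the custombootstrap.less file described earlier. The following is only a sketch: the compileCustom target name, the dist/css output path, and the file names are assumptions to adapt to your project.

less: {
  compileCustom: {
    options: {
      strictMath: true,
      sourceMap: true,
      outputSourceFiles: true,
      sourceMapURL: 'custombootstrap.css.map',
      sourceMapFilename: 'dist/css/custombootstrap.css.map'
    },
    files: {
      // custombootstrap.less imports bootstrap.less plus our custom.less overrides
      'dist/css/custombootstrap.css': 'less/custombootstrap.less'
    }
  }
}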

Indexing the Data

Packt
18 Apr 2014
10 min read
(For more resources related to this topic, see here.) Elasticsearch indexing We have our Elasticsearch cluster up and running, and we also know how to use the Elasticsearch REST API to index our data, delete it, and retrieve it. We also know how to use search to get our documents. If you are used to SQL databases, you might know that before you can start putting the data there, you need to create a structure, which will describe what your data looks like. Although Elasticsearch is a schema-less search engine and can figure out the data structure on the fly, we think that controlling the structure and thus defining it ourselves is a better way. In the following few pages, you'll see how to create new indices (and how to delete them). Before we look closer at the available API methods, let's see what the indexing process looks like. Shards and replicas The Elasticsearch index is built of one or more shards and each of them contains part of your document set. Each of these shards can also have replicas, which are exact copies of the shard. During index creation, we can specify how many shards and replicas should be created. We can also omit this information and use the default values either defined in the global configuration file (elasticsearch.yml) or implemented in Elasticsearch internals. If we rely on Elasticsearch defaults, our index will end up with five shards and one replica. What does that mean? To put it simply, we will end up with having 10 Lucene indices distributed among the cluster. Are you wondering how we did the calculation and got 10 Lucene indices from five shards and one replica? The term "replica" is somewhat misleading. It means that every shard has its copy, so it means there are five shards and five copies. Having a shard and its replica, in general, means that when we index a document, we will modify them both. That's because to have an exact copy of a shard, Elasticsearch needs to inform all the replicas about the change in shard contents. In the case of fetching a document, we can use either the shard or its copy. In a system with many physical nodes, we will be able to place the shards and their copies on different nodes and thus use more processing power (such as disk I/O or CPU). To sum up, the conclusions are as follows: More shards allow us to spread indices to more servers, which means we can handle more documents without losing performance. More shards means that fewer resources are required to fetch a particular document because fewer documents are stored in a single shard compared to the documents stored in a deployment with fewer shards. More shards means more problems when searching across the index because we have to merge results from more shards and thus the aggregation phase of the query can be more resource intensive. Having more replicas results in a fault tolerance cluster, because when the original shard is not available, its copy will take the role of the original shard. Having a single replica, the cluster may lose the shard without data loss. When we have two replicas, we can lose the primary shard and its single replica and still everything will work well. The more the replicas, the higher the query throughput will be. That's because the query can use either a shard or any of its copies to execute the query. Of course, these are not the only relationships between the number of shards and replicas in Elasticsearch. So, how many shards and replicas should we have for our indices? That depends. 
We believe that the defaults are quite good but nothing can replace a good test. Note that the number of replicas is less important because you can adjust it on a live cluster after index creation. You can remove and add them if you want and have the resources to run them. Unfortunately, this is not true when it comes to the number of shards. Once you have your index created, the only way to change the number of shards is to create another index and reindex your data. Creating indices When we created our first document in Elasticsearch, we didn't care about index creation at all. We just used the following command: curl -XPUT http://localhost:9200/blog/article/1 -d '{"title": "New   version of Elasticsearch released!", "content": "...", "tags":["announce", "elasticsearch", "release"] }' This is fine. If such an index does not exist, Elasticsearch automatically creates the index for us. We can also create the index ourselves by running the following command: curl -XPUT http://localhost:9200/blog/ We just told Elasticsearch that we want to create the index with the blog name. If everything goes right, you will see the following response from Elasticsearch: {"acknowledged":true} When is manual index creation necessary? There are many situations. One of them can be the inclusion of additional settings such as the index structure or the number of shards. Altering automatic index creation Sometimes, you can come to the  conclusion that automatic index creation is a bad thing. When you have a big system with many processes sending data into Elasticsearch, a simple typo in the index name can destroy hours of script work. You can turn off automatic index creation by adding the following line in the elasticsearch.yml configuration file: action.auto_create_index: false Note that action.auto_create_index is more complex than it looks. The value can be set to not only false or true. We can also use index name patterns to specify whether an index with a given name can be created automatically if it doesn't exist. For example, the following definition allows automatic creation of indices with the names beginning with a, but disallows the creation of indices starting with an. The other indices aren't allowed and must be created manually (because of -*). action.auto_create_index: -an*,+a*,-* Note that the order of pattern definitions matters. Elasticsearch checks the patterns up to the first pattern that matches, so if you move -an* to the end, it won't be used because of +a* , which will be checked first. Settings for a newly created index The manual creation of an index is also necessary when you want to set some configuration options, such as the number of shards and replicas. Let's look at the following example: curl -XPUT http://localhost:9200/blog/ -d '{     "settings" : {         "number_of_shards" : 1,         "number_of_replicas" : 2     } }' The preceding command will result in the creation of the blog index with one shard and two replicas, so it makes a total of three physical Lucene indices. Also, there are other values that can be set in this way. So, we already have our new, shiny index. But there is a problem; we forgot to provide the mappings, which are responsible for describing the index structure. What can we do? Since we have no data at all, we'll go for the simplest approach – we will just delete the index. To do that, we will run a command similar to the preceding one, but instead of using the PUT HTTP method, we use DELETE. 
So the actual command is as follows:

curl -XDELETE http://localhost:9200/blog

And the response will be the same as the one we saw earlier, as follows: {"acknowledged":true} Now that we know what an index is, how to create it, and how to delete it, we are ready to create indices with the mappings we have defined. It is a very important part because data indexing will affect the search process and the way in which documents are matched.

Mappings configuration

If you are used to SQL databases, you may know that before you can start inserting the data in the database, you need to create a schema, which will describe what your data looks like. Although Elasticsearch is a schema-less search engine and can figure out the data structure on the fly, we think that controlling the structure and thus defining it ourselves is a better way. In the following few pages, you'll see how to create new indices (and how to delete them) and how to create mappings that suit your needs and match your data structure.

Type determining mechanism

Before we start describing how to create mappings manually, we wanted to write about one thing. Elasticsearch can guess the document structure by looking at the JSON that defines the document. In JSON, strings are surrounded by quotation marks, Booleans are defined using specific words, and numbers are just a few digits. This is a simple trick, but it usually works. For example, let's look at the following document:

{ "field1": 10, "field2": "10" }

The preceding document has two fields. The field1 field will be determined as a number (to be precise, as the long type), but field2 will be determined as a string, because it is surrounded by quotation marks. Of course, this can be the desired behavior, but sometimes the data source may omit the information about the data type and everything may be present as strings. The solution to this is to enable more aggressive text checking in the mapping definition by setting the numeric_detection property to true. For example, we can execute the following command during the creation of the index:

curl -XPUT http://localhost:9200/blog/?pretty -d '{
  "mappings" : {
    "article": {
      "numeric_detection" : true
    }
  }
}'

Unfortunately, the problem still exists if we want the Boolean type to be guessed. There is no option to force the guessing of Boolean types from the text. In such cases, when a change of source format is impossible, we can only define the field directly in the mappings definition. Another type that causes trouble is a date-based one. Elasticsearch tries to guess dates given as timestamps or strings that match the date format. We can define the list of recognized date formats using the dynamic_date_formats property, which allows us to specify the formats array. Let's look at the following command for creating the index and type:

curl -XPUT 'http://localhost:9200/blog/' -d '{
  "mappings" : {
    "article" : {
      "dynamic_date_formats" : ["yyyy-MM-dd hh:mm"]
    }
  }
}'

The preceding command will result in the creation of an index called blog with a single type called article. We've also used the dynamic_date_formats property with a single date format that will result in Elasticsearch using the date core type for fields matching the defined format. Elasticsearch uses the joda-time library to define date formats, so please visit http://joda-time.sourceforge.net/api-release/org/joda/time/format/DateTimeFormat.html if you are interested in finding out more about them. 
Remember that the dynamic_date_format property accepts an array of values. That means that we can handle several date formats simultaneously.
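For example, a single index creation command that combines the numeric_detection setting shown earlier with several recognized date formats might look like the following sketch; the additional formats listed here are only illustrative.

curl -XPUT 'http://localhost:9200/blog/' -d '{
  "mappings" : {
    "article" : {
      "numeric_detection" : true,
      "dynamic_date_formats" : ["yyyy-MM-dd hh:mm", "yyyy-MM-dd", "dd-MM-yyyy"]
    }
  }
}'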

Skeuomorphic versus flat

Packt
18 Apr 2014
8 min read
(For more resources related to this topic, see here.) Skeuomorphism is defined as an element of design or structure that serves little or no purpose in the artifact fashioned from the new material but was essential to the object made from the original material (courtesy: Wikipedia — http://en.wikipedia.org/wiki/Skeuomorph). Apple created several skeuomorphic interfaces for their desktop and mobile apps; apps such as iCal, iBooks, Find My Friends, Podcast apps, and several others. This kind of interface was both loved and hated among the design community and users. It was a style that focused a lot on the detail and texture, making the interface heavier and often more complex, but interesting because of the clear connection to the real objects depicted here. It was an enjoyable and rich experience for the user due to the high detail and interaction that a skeuomorphic interface presented, which served to attract the eye to the detail and care put into these designs; for example, the page flip in iBooks, visually representing the swipe of a page as in a traditional book. But this style also had its downsides. Besides being a harsh transition from the traditional interfaces (as in the case of Apple, in which it meant coming from its famous glassy and clean looking Aqua interface), several skeuomorphic applications on the desktop didn't seem to fit in the overall OS look. Apart from stylistic preferences and incoherent looks, skeuomorphic design is also a bad design choice because the style in itself is a limitation to innovation. By replicating the traditional and analogical designs, the designer doesn't have the option or the freedom to imagine, create, and design new interfaces and interactions with the user. Flat design, being the extremely simple and clear style that it is, gives all the freedom to the designer by ignoring any kind of limitations and effects. But both styles have a place and time to be used, and skeuomorphic is great for applications such as Propellerheads that are directly replacing hardware, such as audio mixers. Using these kinds of interfaces makes it easier for new users to learn how to use the real hardware counterpart, while at the same time previous users of the hardware will already know how to use the interface with ease. Regardless of the style, a good designer must be ready to create an interface that is adapted to the needs of the user and the market. To exemplify this and to better learn the basic differences between flat and skeuomorphic, let's do a quick exercise. Exercise – the skeuomorphic and flat buttons In this exercise, we'll create a simple call to an action button, the copy of Buy Now. We'll create this element twice; first we'll take a look at the skeuomorphic approach by creating a realistic looking button with texture, shadow, and depth. Next, we will simply convert it to its flat counterpart by removing all those extra elements and adapting it to a minimalistic style. You should have all the materials you'll need for this exercise. We will use the typeface Lato, also available for free on Google Fonts, and the image wood.jpg for the texture on the skeuomorphic button. We'll just need Photoshop for this exercise, so let's open it up and use the following steps: Create a new Photoshop document with 800 x 600 px. This is where we will create our buttons. Let's start by creating the skeuomorphic one. We start by creating a rectangle with the rounded rectangle tool, with a radius of 20 px. This will be the face of our button. 
To make it easier to visualize the element while we create it, let's make it gray (#a2a2a2). Now that we have our button face created, let's give some depth to this button. Just duplicate the layer (command + J on Mac or Ctrl + J on Windows) and pull it down to 10 or 15 px, whichever you prefer. Let's make this new rectangle a darker shade of gray (#393939) and make sure that this layer is below the face layer. You should now have a simple gray button with some depth. The side layer simulates the depth of the button by being pulled down for just a couple of pixels, and since we made it darker, it resembles a shadow. Now for the call to action. Create a textbox on top of the button face, set its width to that of the button, and center the text. In there, write Buy Now, and set the text to Lato, weight to Black, and size to 50 pt. Center it vertically just by looking at the screen, until you find that it sits correctly in the center of the button. Now to make this button really skeuomorphic, let's get our image wood.jpg, and let's use it as our texture. Create a new layer named wood-face and make sure it's above our face layer. Now to define the layer as a texture and use our button as a mask, we're going to right-click on the layer and click on Create clipping mask. This will mask our texture to overlay the button face. For the side texture, duplicate the wood-face layer, rename it to wood-side and repeat the preceding instructions for the side layer. After that, and to have a different look, move the wood-face layer around and look for a good area of the texture to use on the side, ideally something with some up strips to make it look more realistic. To finish the side, create a new layer style in the side layer, gradient overlay, and make a gradient from black to transparent and change the settings as shown in the following screenshot. This will make a shadow effect on top of the wood, making it look a lot better. To finish our skeuomorphic button, let's go back to the text and define the color as #7b3201 (or another shade of brown; try to pick from the button and make it slightly darker until you find that it looks good), so that it looks like the text is carved in the wood. The last touch will be to add an Inner Shadow layer style in the text with the settings shown. Group all the layers and name it Skeuomorphic and we're done. And now we have our skeuomorphic button. It's a really simple way of doing it but we recreated the look of a button made out of wood just by using shapes, texture, and some layer styles. Now for our flat version: Duplicate the group we just created and name it flat. Move it to the other half of the workspace. Delete the following layers: wood-face, wood-side, and side. This button will not have any depth, so we do not need the side layer as well as the textures. To keep the button in the same color scheme as our previous one, we'll use the color #7b3201 for our text and face. Your document should look like what is shown in the following screenshot: Create a new layer style and choose Stroke with the following settings. This will create the border of our button. To make the button transparent, let's reduce the Layer Fill option to 0 percent, which will leave only the layer styles applied. Let's remove the layer styles from our text to make it flat, reduce the weight of the font to Bold to make it thinner and roughly the same weight of the border, and align it visually, and our flat button is done! 
This type of a transparent button is great for flat interfaces, especially when used over a blurred color background. This is because it creates an impactful button with very few elements to it, creating a transparent control and making great use of the white space in the design. In design, especially when designing flat, remember that less is more. With this exercise, you were able to build a skeuomorphic element and deconstruct it down to its flat version, which is as simple as a rounded rectangle with border and text. The font we chose is frequently used for flat design layouts; it's simple but rounded and it works great with rounded-corner shapes such as the ones we just created. Summary Flat design is a digital style of design that has been one of the biggest trends in recent years in web and user interface design. It is famous for its extremely minimalistic style. It has appeared at a time when skeuomorphic, a style of creating realistic interfaces, was considered to be the biggest and most famous trend, making this a really rough and extreme transition for both users and designers. We covered how to design in skeuomorphic and in flat, and what their main differences are. Resources for Article: Further resources on this subject: Top Features You Need to Know About – Responsive Web Design [Article] Web Design Principles in Inkscape [Article] Calendars in jQuery 1.3 with PHP using jQuery Week Calendar Plugin: Part 2 [Article]
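If you want to reproduce the flat version of this button on the web rather than in Photoshop, a rough CSS equivalent might look like the following sketch; the color, corner radius, and typeface come from the exercise above, while the class name, padding, border width, and font size are arbitrary choices.

.btn-flat {
  display: inline-block;
  padding: 10px 40px;
  font-family: 'Lato', sans-serif;
  font-weight: 700;            /* Bold, the reduced weight used for the flat text */
  font-size: 28px;
  color: #7b3201;              /* the brown used for the text and border */
  background: transparent;     /* 0 percent fill; only the border and text remain */
  border: 3px solid #7b3201;
  border-radius: 20px;         /* the 20 px radius of the original rounded rectangle */
  text-align: center;
}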

Load balancing MSSQL

Packt
18 Apr 2014
5 min read
NetScaler is the only certified load balancer that can load balance the MySQL and MSSQL services. It can be quite complex and there are many requirements that need to be in place in order to set up a proper load-balanced SQL server. Let us go through how to set up a load-balanced Microsoft SQL Server running on 2008 R2. Now, it is important to remember that using load balancing between the end clients and SQL Server requires that the databases on the SQL server are synchronized. This is to ensure that the content that the user is requesting is available on all the backend servers. Microsoft SQL Server supports different types of availability solutions, such as replication. You can read more about it at http://technet.microsoft.com/en-us/library/ms152565(v=sql.105).aspx. Using transactional replication is recommended, as this replicates changes to different SQL servers as they occur. As of now, the load balancing solution for MSSQL, also called DataStream, supports only certain versions of SQL Server. They can be viewed at http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-dbproxy-reference-protocol-con.html. Also, only certain authentication methods are supported. As of now, only SQL authentication is supported for MSSQL. The steps to set up load balancing are as follows: We need to add the backend SQL servers to the list of servers. Next, we need to create a custom monitor that we are going to use against the backend servers. Before we create the monitor, we can create a custom database within SQL Server that NetScaler can query. Open Object Explorer in the SQL Management Studio, and right-click on the Database folder. Then, select New Database, as shown in the following screenshot: We can name it ns and leave the rest at their default values, and then click on OK. After that is done, go to the Database folder in Object Explorer. Then, right-click on Tables, and click on Create New Table. Here, we need to enter a column name (for example, test), and choose nchar(10) as the data type. Then, click on Save Table and we are presented with a dialog box, which gives us the option to change the table name. Here, we can type test again. We have now created a database called ns with a table called test, which contains a column also called test. This is an empty database that NetScaler will query to verify connectivity to the SQL server. Now, we can go back to NetScaler and continue with the set up. First, we need to add a DBA user. This can be done by going to System | User Administration | Database Users, and clicking on Add. Here, we need to enter a username and password for a SQL user who is allowed to log in and query the database. After that is done, we can create a monitor. Go to Traffic Management | Load Balancing | Monitors, and click on Add. As the type, choose MSSQL-ECV, and then go to the Special Parameters pane. Here, we need to enter the following information: Database: This is ns in this example. Query: This is a SQL query, which is run against the database. In our example, we type select * from test. User Name: Here we need to enter the name of the DBA user we created earlier. In my case, it is sa. Rule: Here, we enter an expression that defines how NetScaler will verify whether the SQL server is up or not. In our example, it is MSSQL.RES. ATLEAST_ROWS_COUNT(0), which means that when NetScaler runs the query against the database, it should return zero rows from that table. 
Now, we can go back to NetScaler and continue with the setup. First, we need to add a DBA user. This can be done by going to System | User Administration | Database Users and clicking on Add. Here, we need to enter a username and password for a SQL user who is allowed to log in and query the database.

After that is done, we can create a monitor. Go to Traffic Management | Load Balancing | Monitors, and click on Add. As the type, choose MSSQL-ECV, and then go to the Special Parameters pane. Here, we need to enter the following information:

Database: The database to query; ns in this example.
Query: A SQL query that is run against the database. In our example, we type select * from test.
User Name: The name of the DBA user we created earlier. In my case, it is sa.
Rule: An expression that defines how NetScaler verifies whether the SQL server is up. In our example, it is MSSQL.RES.ATLEAST_ROWS_COUNT(0), which means that when NetScaler runs the query against the database, the query should succeed and return at least zero rows from that table.
Protocol Version: The version that matches the SQL Server release we are running. In my case, I have SQL Server 2012.

The monitor now looks like the following screenshot. It is important that the database we created earlier is also created on all the SQL servers we are going to load balance using NetScaler.

Now that we are done with the monitor, we can bind it to a service. When setting up the services against the SQL servers, remember to choose MSSQL as the protocol and 1433 as the port, and then bind the custom-made monitor to it.

After that, we need to create a virtual load-balanced server (vServer). An important point to note here is that we again choose MSSQL as the protocol and use the same port number as before, 1433.

We can also use NetScaler to proxy connections between different versions of SQL Server. If our client applications are not set up to connect to the SQL Server 2012 version running on the backend, we can present the vServer as a 2008 server. For example, if we have an application that runs against SQL Server 2008, we can make some custom changes to the vServer. When creating the load-balanced vServer, go to Advanced | MSSQL | Server Options. Here, we can choose between the different versions, as shown in the following screenshot.

After we are done with the creation of the vServer, we can test it by opening a connection to the VIP address using SQL Server Management Studio. We can verify whether connections are being load balanced properly by running the following CLI command:

stat lb vserver nameofvserver

Summary

In this article, we followed the steps for setting up a load-balanced Microsoft SQL Server running on 2008 R2, remembering that load balancing between the end clients and SQL Server requires the databases on the SQL servers to be synchronized, so that the content the user requests is available on all the backend servers.

Resources for Article:

Understanding Citrix Provisioning Services 7.0
Getting Started with the Citrix Access Gateway Product Family
Managing Citrix Policies


Using a Persistent Connection

Packt
17 Apr 2014
16 min read
(For more resources related to this topic, see here.)

In our journey through all the features exposed by SignalR, so far we have been analyzing and dissecting how we can use the Hubs API to deliver real-time, bidirectional messaging inside our applications. The Hubs API offers a high-level abstraction over the underlying connection, effectively introducing a Remote Procedure Call (RPC) model on top of it. SignalR also offers a low-level API, which is the foundation Hubs is built on top of. It is available to be used directly from inside our applications, and it's called the Persistent Connection API.

The Persistent Connection API is a more basic representation of what the concrete network connection really is, and it exposes a simpler and rawer API to interact with. We hook into this API by inheriting from a base class (PersistentConnection) in order to add our own custom persistent connection manager. It is the same approach we have been using when defining our custom Hubs by deriving from the Hub class. Through a registration step, we'll then let SignalR know about our custom type in order to expose it to our clients. The PersistentConnection type exposes a few properties supplying entry points from where we send messages to the connected clients, and a set of overridable methods we can use to hook into the connection and messaging pipeline in order to handle the incoming traffic.

This article is an introduction to the Persistent Connection API, hence we'll be looking at just a portion of it in order to deliver some basic functionality. We will also try to highlight the peculiar attributes of this API, which characterize it as a low-level one: still delivering powerful features, but without some of the nice mechanisms we get for free when using the Hubs API. For the next four recipes, we will always start with empty web application projects, and we'll use simple HTML pages for the client portions. However, for the last one, we'll be building a console application.

Adding and registering a persistent connection

In this first recipe of the article, we'll introduce the PersistentConnection type, and we'll go into the details of how it works, how to use it, and how to register it into SignalR to expose it to the connected clients. Our goal for the sample code is to trace any connection attempt from a client towards the exposed endpoint. We will also write a minimal JavaScript client in order to actually test its behavior.

Getting ready

Before writing the code of this recipe, we need to create a new empty web application that we'll call Recipe25.

How to do it...

Let's run through the steps necessary to accomplish our goal:

We first need to add a new class derived from PersistentConnection. We can navigate to Add | New Item... in the context menu of the project from the Solution Explorer window, and from there look for the SignalR Persistent Connection Class (v2) template and use it to create our custom type, which we'll name EchoConnection. Visual Studio will create the related file for us, adding some plumbing code to it. For our recipe, we want to keep it simple and see how things work step by step, so we'll remove both methods added by the wizard and leave the code as follows:

using Microsoft.AspNet.SignalR;

namespace Recipe25
{
    public class EchoConnection : PersistentConnection
    {
    }
}

We could reach the same result manually, but in that case we would have to take care to add the necessary package references from NuGet first.
The simplest way in this case would be to reference the Microsoft ASP.NET SignalR package, which will then bring down any other related and necessary modules. The process of doing that has already been described earlier, and it should already be well known. With that reference in place, we could then add our class, starting with a simple empty file, with no need for any wizard.

We then need to add an OWIN Startup class and set it up so that the Configuration() method calls app.MapSignalR<...>(...); in order to properly initiate the persistent connection class we just added. We should end up with a Startup.cs file that looks like the following code:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Recipe25.Startup))]

namespace Recipe25
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR<EchoConnection>("/echo");
        }
    }
}

The usual Configuration() method calls MapSignalR(), but this time it uses a different overload, supplying EchoConnection as the type of our connection class and /echo as the name of the related endpoint. Behind the scenes, what MapSignalR() does is quite similar to what was happening when we were using the Hubs API; however, in this case, there is a little bit less magic because we have to explicitly specify both the type of the connection class and the URL from where it will be exposed. If we had more than one connection class to expose, we would have to call the MapSignalR() method once for each of them, and we would have to specify a different URL address each time. It's worth noting that the endpoint name must begin with a / character. If we omit it, SignalR will raise an exception.

Let's go back to our EchoConnection class and write some code to be executed when a new connection is performed. In order to achieve that, we have to override the OnConnected() method from PersistentConnection as shown in the following code:

protected override Task OnConnected(IRequest request, string connectionId)
{
    System.Diagnostics.Trace.WriteLine("Connected");
    return null;
}

OnConnected() is one of the overridable methods mentioned in the Introduction section of the article, and its role is to give us a point from where we can handle every incoming connection. The first argument of the method is an instance of an object implementing the IRequest interface, which is a SignalR interface whose role is to describe the incoming HTTP request. It's similar to the very well-known ASP.NET Request object, but this version is much lighter and focused on what's needed in the context of SignalR, without exposing anything that could harm its efficiency, such as the (somewhat infamous) Session object or a read-write Cookies collection. The second argument is of type string and contains the connection identifier generated during the handshaking steps performed while setting up the connection.

OnConnected() returns a Task instance, and in this way it declares its asynchronous nature. But, in our case, we do not really have anything special to send back to the caller, so finalizing the code of the method with return null; will be just fine. The only relevant line of code in our method simply sends a diagnostic message to any Trace listener available.

We are done, and it's been pretty straightforward. Now any client hitting the /echo endpoint will be handled by this class, and at every new connection, the OnConnected() method will be called.
When we say new connection, we actually mean whenever SignalR performs a full handshaking cycle, selecting and applying a connection strategy. If a specific client (for example, a browser window) loses its connection and is only able to reconnect after the reconnection retry timeout has expired, a new full-blown connection will be built, a new connectionId value will be assigned to it, and OnConnected() will be called again.

We need a client to test if everything is working properly. To achieve this goal, let's add an HTML page called index.html, which we'll use to build our client. In this page, we'll link all the necessary JavaScript files that were already added to the project when the Microsoft ASP.NET SignalR package was referenced. Let's proceed.

We first reference jQuery using the following code:

<script src="Scripts/jquery-2.1.0.js"></script>

Then we need to link the SignalR JavaScript library with the following line of code:

<script src="Scripts/jquery.signalR-2.0.2.js"></script>

At this point, we used to add a reference to the dynamic hubs endpoint in the previous recipes, but this is not the case here. We do not need it because we are not using the Hubs API, and therefore we don't need the dynamic proxies that are typical of that way of using SignalR. The SignalR JavaScript library that we just added contains all that's necessary to use the Persistent Connection API.

We finally add a simple script to connect to the server as follows:

<script>
    $(function() {
        var c = $.connection('echo');
        c.start()
            .done(function(x) {
                console.log(c);
                console.log(x); // x and c are the same!
            });
    });
</script>

Here we first interact with the $.connection member we already saw when describing the Hubs API, but in this case we do not use it to access the hubs property. Instead, we use it as a function to ask SignalR to create a connection object pointing at the endpoint we specify as its only argument (echo). The returned object has a similar role to the one the hubs member has; in fact, we can call start() on it to launch the actual connection process, and the returned value is again a JavaScript promise object whose completion we can wait for, thanks to the done() method, in order to ensure that a connection has been fully established. Our simple code prints out both the connection object we obtained (the c variable) and the argument supplied to the done() callback (the x variable) just to show that they are actually the same object, and we are free to pick either of them when a connection reference is needed.

To test what we did, we just need to run the project from Visual Studio and open the index.html page, using the Developer Tools of the browser of choice to check the messages printed on the browser console.

How it works...

The SignalR $.connection object exposes the low-level connection object, which is the real subject of this article. Its server-side counterpart can be any type derived from PersistentConnection that has been previously registered under the same endpoint address targeted when calling the $.connection() function. Any major event happening on the connection is then exposed on the server through events that can be intercepted by overriding a handful of protected methods. In this recipe, we saw how we can be notified about new connections just by overriding the OnConnected() method. We'll see a few more of those in future recipes.
The rest of the client-side code is very simple, and it's very similar to what we have been doing when starting connections with the Hubs API; the only big difference so far is the fact that we do not use the dynamically generated hubs member anymore.

Sending messages from the server

Now that we have a good idea about how PersistentConnection can be registered and then used in our applications, let's go deeper and start sending some data with it. As usual, in SignalR, we need a client and a server to establish a connection, and both parts can send and receive data. Here we'll see how a server can push messages to any connected client, and we'll analyze what a message looks like. We already mentioned the fact that any communication from client to server could, of course, be performed using plain old HTTP, but pushing information from a server to any connected client without any specific client request for that data is not normally possible, and that's where SignalR really helps. We'll also start appreciating the fact that this API stands at a lower level when compared to the Hubs API because its features are more basic, but at the same time we'll see that it's still a very useful, powerful, and easy-to-use API.

Getting ready

Before writing the code for this recipe, we need to create a new empty web application, which we'll call Recipe26.

How to do it...

The following are the steps to build the server part:

We first need to add a new class derived from PersistentConnection, which we'll name EchoConnection. We can go back to the previous recipe to see the options we have to accomplish that, always paying attention to every detail in order to have the right package references in place. We want to end up with an EchoConnection.cs file containing an empty class with the following code:

using Microsoft.AspNet.SignalR;

namespace Recipe26
{
    public class EchoConnection : PersistentConnection
    {
    }
}

We then need to add the usual OWIN Startup class and set it up so that the Configuration() method calls app.MapSignalR<EchoConnection>("/echo"); in order to properly wire the persistent connection class we just added to our project. This is actually the same step we took in the previous recipe, and we will be doing the same throughout the remainder of this article. The final code will look like the following:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Recipe26.Startup))]

namespace Recipe26
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR<EchoConnection>("/echo");
        }
    }
}

Back in our EchoConnection class, we want to redefine the OnConnected() overridable method, whose function we already described earlier, and from there send a message to the client that just connected. This is as simple as coding the following:

protected override Task OnConnected(IRequest request, string connectionId)
{
    return Connection.Send(connectionId, "Welcome!");
}

The PersistentConnection class exposes several useful members, and one of the most important is for sure the Connection property, of type IConnection, which returns an object that allows us to communicate with the connected clients. A bunch of methods on the IConnection interface let us send data to specific sets of connections (Send()), or target all the connected clients at the same time (Broadcast()). In our example, we use the simplest overload of Send() to push a string payload down to the client that just connected.
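As a small aside, these IConnection methods can also be combined in one event handler. The following is only a sketch, not part of the original recipe; it assumes the same Recipe26 classes and shows a hypothetical variation of OnConnected() that greets the new client and, at the same time, broadcasts a small anonymous-type payload (the type and id property names are made up for this example):

protected override Task OnConnected(IRequest request, string connectionId)
{
    // Greet the client that just connected...
    var welcome = Connection.Send(connectionId, "Welcome!");
    // ...and notify everybody with an anonymous-type payload,
    // which is serialized automatically before going on the wire.
    var notice = Connection.Broadcast(new { type = "joined", id = connectionId });
    return Task.WhenAll(welcome, notice);
}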
Both Send() and Broadcast() run asynchronously and return a Task instance, which we can directly use as the return value of the OnConnected() method. Another relevant member inherited from PersistentConnection is the Groups member, which exposes a couple of Send() overloads to push messages down to connections belonging to a specific group. Groups also exposes a set of capabilities to manage the members of specific groups the same way the Hubs API does. There is no conceptual difference here from what we explained earlier, therefore we'll skip any further explanation about it.

All the methods we just mentioned expect a last parameter of type object, which is the actual payload of the call. This value is automatically serialized before going on the wire to reach the clients. Once there, the involved client library will deserialize it, giving it back its original shape using the best data type available according to the actual client library used (JavaScript or .NET). In our example, we used a simple string type, but any serializable type would do, even an anonymous type.

Back to this sample, any client connecting to the /echo endpoint will be handled by the OnConnected() method exposed by EchoConnection, whose body will send back the message Welcome! using the Send() call.

Let's build an HTML page called index.html to host our client code. In this page, we'll link all the necessary JavaScript files as we already did in the previous recipe, and then we'll add a few lines of code to enable the client to receive and log whatever the server sends.

We first reference jQuery and SignalR with the following code:

<script src="Scripts/jquery-2.1.0.js"></script>
<script src="Scripts/jquery.signalR-2.0.2.js"></script>

We then add the following simple piece of script:

<script>
    $(function () {
        var c = $.connection('echo');
        c.received(function(message) {
            console.log(message);
        })
        .start();
    });
</script>

Here, we first interact with the $.connection member in the same way we did in the previous recipe, in order to create a connection object towards the endpoint we specify as its argument (echo). We can then call start() on the returned object to perform the actual connection, and again the returned value is a JavaScript promise object whose completion we could wait for in order to ensure that a connection has been fully established. Before starting up the connection, we use the received() method to set up a callback that SignalR will trigger whenever a message is pushed from the type derived from PersistentConnection down to this client, regardless of the actual method used on the server side (Send() or Broadcast()). Our sample code will just log the received string onto the console. We'll dig more into the received() method later in this article.

We can test our code by running the project from Visual Studio and opening the index.html page, using the Developer Tools of the browser to see the received message printed onto the console.

How it works...

Any type derived from PersistentConnection that has been correctly registered behind an endpoint can be used to send and receive messages. The underlying established connection is used to move bits back and forth, leveraging a low-level and highly optimized application protocol used by SignalR to correctly represent every call, regardless of the underlying networking protocol. As usual, everything happens in an asynchronous manner.
We have been mentioning the fact that this is a low-level API compared to Hubs; the reason is that we don't have a concept of methods that we can call on both sides. In fact, what we are allowed to exchange are just data structures; we can do that using a pair of general-purpose methods to send and receive them. We are missing a higher abstraction that will allow us to give more semantic meaning to what we exchange, and we have to add that ourselves in order to coordinate any complex custom workflow we want to implement across the involved parts. Nevertheless, this is the only big difference between the two APIs, and almost every other characteristic or capability is exposed by both of them, apart from a couple of shortcomings when dealing with serialization, which we'll illustrate later in this article.

You might think that there is no real need to use the Persistent Connection API because Hubs is more powerful and expressive, but that's not always true. For example, you might imagine a scenario where you want your clients to be in charge of dynamically deciding which Hubs to load among a set of available ones, and for that they might need to contact the server anyway to get some relevant information. Hubs are loaded all at once when calling start(), so you would not be able to use a Hub to do the first handshaking. But a persistent connection would be just perfect for the job, because that can be made available and used before starting any hub.
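Although the recipes shown here only push data from the server, the same pair of general-purpose entry points works in the other direction. The following is a hedged sketch, not taken from the original recipes, of how EchoConnection could handle incoming client data by overriding OnReceived() and simply broadcasting whatever arrives; it assumes the same Recipe26 setup shown earlier:

protected override Task OnReceived(IRequest request, string connectionId, string data)
{
    // The payload sent by a client arrives here as a plain string;
    // echo it back to every connected client.
    return Connection.Broadcast(data);
}

On the client side, once start() has completed, data can be pushed to the server with a call such as c.send('some text'); it will then come back to every connected client through the received() callback we wired up earlier.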


Designing a XenDesktop® Site

Packt
17 Apr 2014
10 min read
(For more resources related to this topic, see here.)

The core components of a XenDesktop® Site

Before we get started with the designing of the XenDesktop Site, we need to understand the core components that go into building it. XenDesktop can support all types of workers: from task workers who run Microsoft Office applications, to knowledge users who host business applications, to mobile workshifting users, and to high-end 3D application users. It scales from small businesses that support five to ten users to large enterprises that support thousands of users.

Please follow the steps in the guide in the order in which they are presented; do not skip steps or topics for a successful implementation of XenDesktop.

The following is a simple diagram to illustrate the components that make up the XenDesktop architecture:

If you have the experience of using XenDesktop and XenApp, you will be pleased to learn that XenDesktop and XenApp now share management and delivery components to give you a unified management experience. Now that you have a visual of how a simple Site will look when it is completed, let's take a look at each individual component so that you can understand their roles.

Terminology and concepts

We will cover some commonly used terminology and concepts used with XenDesktop.

Server side

It is important to understand the terminology and concepts as they apply to the server side of the XenDesktop architecture, so we will cover them.

Hypervisor

A Hypervisor is an operating system that hosts multiple instances of other operating systems. XenDesktop is supported by three Hypervisors: Citrix XenServer, VMware ESX, and Microsoft Hyper-V.

Database

In XenDesktop, we use the Microsoft SQL Server. The database is sometimes referred to as the data store. Almost everything in XenDesktop is database driven, and the SQL database holds all state information in addition to the session and configuration information. The XenDesktop Site is only available if the database is available. If the database server fails, existing connections to virtual desktops will continue to function until the user either logs off or disconnects from their virtual desktop; new connections cannot be established if the database server is unavailable. There is no caching in XenDesktop 7.x, so Citrix recommends that you implement SQL mirroring and clustering for High Availability. The IMA data store is no longer used, and everything is now done in the SQL database for both session and configuration information. The data collector is shared evenly across XenDesktop controllers.

Delivery Controller

The Delivery Controller distributes desktops and applications, manages user access, and optimizes connections to applications. Each Site has one or more Delivery Controllers.

Studio

Studio is the management console that enables you to configure and manage your XenDesktop and XenApp deployment, eliminating the need for two separate management consoles to manage the delivery of desktops and applications. Studio provides you with various wizards to guide you through the process of setting up your environment, creating your workloads to host and assign applications and desktops, and assigning applications and desktops to users. Citrix Studio replaces the Delivery Services Console and the Citrix AppCenter from previous XenDesktop versions.

Director

Director is used to monitor and troubleshoot the XenDesktop deployment.
StoreFront

StoreFront authenticates users to Site(s) hosting the XenApp and XenDesktop resources and manages the stores of desktops and applications that users access.

Virtual machines

A virtual machine (VM) is a software-implemented version of the hardware. For example, Windows Server 2012 R2 is installed as a virtual machine running in XenServer. In fact, every server and desktop will be installed as a VM with the exception of the Hypervisor, which obviously needs to be installed on the server hardware before we can install any VMs.

The Virtual Desktop Agent

The Virtual Desktop Agent (VDA) has to be installed on the VM to which users will connect. It enables the machines to register with controllers and manages the ICA/HDX connection between the machines and the user devices. The VDA is installed on the desktop operating system VM, such as Windows 7 or Windows 8, which is served to the client. The VDA maintains a heartbeat with the Delivery Controller, updates policies, and registers the controllers with the Delivery Controller.

Server OS machines

VMs or physical machines based on the Windows Server operating system are used to deliver applications or host shared desktops to users.

Desktop OS machines

VMs or physical machines based on the Windows desktop operating system are used to deliver personalized desktops to users or applications from desktop operating systems.

Active Directory

Microsoft Active Directory is required for authentication and authorization. Active Directory can also be used for controller discovery by desktops to discover the controllers within a Site. Desktops determine which controllers are available by referring to information that controllers publish in Active Directory. Active Directory's built-in security infrastructure is used by desktops to verify whether communication between controllers comes from authorized controllers in the appropriate Site. Active Directory's security infrastructure also ensures that the data exchanged between desktops and controllers is confidential.

Installing XenDesktop or SQL Server on the domain controller is not supported; in fact, it is not even possible.

Desktop

A desktop is the instantiation of a complete Windows operating system, typically Windows 7 or Windows 8. In XenDesktop, we install the Windows 7 or Windows 8 desktop in a VM and add the VDA to it so that it can work with XenDesktop and can be delivered to clients. This will be the end user's virtual desktop.

XenApp®

Citrix XenApp is an on-demand application delivery solution that enables any Windows application to be virtualized, centralized, and managed in the data center and instantly delivered as a service. Prior to XenDesktop 7.x, XenApp delivered applications and XenDesktop delivered desktops. Now, with the release of XenDesktop 7.x, XenApp delivers both desktops and applications.

Edgesight®

Citrix Edgesight is a performance and availability management solution for XenDesktop, XenApp, and endpoint systems. Edgesight monitors applications, devices, sessions, license usage, and the network in real time. Edgesight will be phased out as a product.

FlexCast®

Don't let the term FlexCast confuse you. FlexCast is just a marketing term designed to encompass all of the different architectures that XenDesktop can be deployed in. FlexCast allows you to deliver virtual desktops and applications according to the needs of diverse performance, security, and flexibility requirements of every type of user in your organization.
FlexCast is a way of describing the different ways to deploy XenDesktop. For example, task workers who use low-end thin clients in remote offices will use a different FlexCast model than a group of HDX 3D high-end graphics users. The following table lists the FlexCast models you may want to consider; these are available at http://flexcast.citrix.com:

FlexCast model: Local VM
Use case: Local VM desktops extend the benefit of a centralized, single-instance management to mobile workers who need to use their laptops offline. Changes to the OS, apps, and data are synchronized when they connect to the network.
Citrix products used: XenClient

FlexCast model: Streamed VHD
Use case: Streamed VHDs leverage the local processing power of rich clients, which provides a centralized, single-image management of the desktop. It is an easy, low-cost way to get started with desktop virtualization (rarely used).
Citrix products used: Receiver, XenApp

FlexCast model: Hosted VDI
Use case: Hosted VDI desktops offer a personalized Windows desktop experience typically required by office workers, which can be delivered to any device. This combines the central management of the desktop with complete user personalization. The user's desktop runs in a virtual machine. Users get the same high-definition experience that they had with a local PC but with a centralized management. The VDI approach provides the best combination of security and customization. Personalization is stored in the Personal vDisk. VDI desktops can be accessed from any device, such as thin clients, laptops, PCs, and mobile devices (most common).
Citrix products used: Receiver, XenDesktop, Personal vDisk

FlexCast model: Hosted shared
Use case: Hosted shared desktops provide a locked-down, streamlined, and standardized environment with a core set of applications. This is ideal for task workers where personalization is not required. All the users share a single desktop image. These desktops cannot be modified, except by the IT personnel. It is not appropriate for mobile workers or workers who need personalization, but it is appropriate for task workers who use thin clients.
Citrix products used: Receiver, XenDesktop

FlexCast model: On-demand applications
Use case: This allows any Windows application to be centralized and managed in the data center, which is hosted on either multiuser terminal servers or virtual machines, and delivered as a service to physical and virtual desktops.
Citrix products used: Receiver, XenApp and XenDesktop App Edition

Storage

All of the XenDesktop components use storage. Storage is managed by the Hypervisor, such as Citrix XenServer. There is a personalization feature to store personal data from virtual desktops called the Personal vDisk (PvD).

The client side

For a complete end-to-end solution, an important part of the architecture that needs to be mentioned is the end user device or client. There isn't much to consider here; however, the client devices can range from a high-powered Windows desktop to low-end thin clients and to mobile devices.

Receiver

Citrix Receiver is a universal software client that provides a secure, high-performance delivery of virtual desktops and applications to any device anywhere. Receiver is platform agnostic. Citrix Receiver is device agnostic, meaning that there is a Receiver for just about every device out there, from Windows to Linux-based thin clients and to mobile devices including iOS and Android. In fact, some thin-client vendors have performed a close integration with the Citrix Ready program to embed the Citrix Receiver code directly into their homegrown operating system for seamless operation with XenDesktop.
Citrix Receiver must be installed on the end user client device in order to receive the desktop and applications from XenDesktop. It must also be installed on the virtual desktop in order to receive applications from the application servers (XenApp or XenDesktop), and this is taken care of for you automatically when you install the VDA on the virtual desktop machine.

System requirements

Each component has its requirements in terms of operating system and licensing. You will need to build these operating systems on VMs before installing each component. For help in creating VMs, look at the relevant Hypervisor documentation. We have used Citrix XenServer as the Hypervisor.

Receiver

Citrix Receiver is a universal software client that provides a secure, high-performance delivery of virtual desktops and applications. Receiver is available for Windows, Mac, mobile devices such as iOS and Android, HTML5, Chromebook, and Java 10.1.

You will need to install the Citrix Receiver twice for a complete end-to-end connection to be made: once on the end user's client device (there are many supported devices, including iOS and Android), and once on the Windows virtual desktop that you will serve to your users. The latter is done automatically when you install the Virtual Desktop Agent (VDA) on the Windows virtual desktop. You need this Receiver to access the applications that are running on a separate application server (XenApp or XenDesktop).

StoreFront 2.1

StoreFront replaces the Web Interface. StoreFront 2.1 can also be used with XenApp and XenDesktop 5.5 and above.

The operating systems that are supported are as follows:

Windows Server 2012 R2, Standard or Datacenter
Windows Server 2012, Standard or Datacenter
Windows Server 2008 R2 SP1, Standard or Enterprise

System requirements are as follows:

RAM: 2 GB
Microsoft Internet Information Services (IIS)
Microsoft Internet Information Services Manager
.NET Framework 4.0

Firewall ports – external: As StoreFront is the gateway to the Site, you will need to open specific ports on the firewall to allow connections in, as follows:

Ports: 80 (HTTP) and 443 (HTTPS)

Firewall ports – internal: By default, StoreFront communicates with the internal XenDesktop Delivery Controller servers using the following ports:

80 (for StoreFront servers) and 8080 (for HTML5 clients)

You can specify different ports. For more information on StoreFront and how to plug it into the architecture, refer to http://support.citrix.com/article/CTX136547.
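If the StoreFront host runs Windows Server 2012 or later, the ports listed above can be opened with the built-in NetSecurity PowerShell cmdlets. This is only a sketch and not an official Citrix procedure; the rule names are arbitrary, and you should adjust the ports and scope to your own deployment (on Windows Server 2008 R2, use the Windows Firewall with Advanced Security console instead):

# Allow inbound HTTP/HTTPS to StoreFront from external clients
New-NetFirewallRule -DisplayName "StoreFront HTTP/HTTPS (external)" -Direction Inbound -Protocol TCP -LocalPort 80,443 -Action Allow

# Allow the default internal ports (80 for StoreFront servers, 8080 for HTML5 clients)
New-NetFirewallRule -DisplayName "StoreFront internal" -Direction Inbound -Protocol TCP -LocalPort 80,8080 -Action Allow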

Introduction to Veeam® Backup & Replication for VMware

Packt
16 Apr 2014
9 min read
(For more resources related to this topic, see here.)

Veeam Backup & Replication v7 for VMware is a modern solution for data protection and disaster recovery for virtualized VMware vSphere environments of any size. Veeam Backup & Replication v7 for VMware supports VMware vSphere and VMware Infrastructure 3 (VI3), including the latest version VMware vSphere 5.5, and Microsoft Windows Server 2012 R2 as the management server(s). Its modular approach and scalability make it an obvious choice regardless of the environment size or complexity. As your data center grows, Veeam Backup & Replication grows with it to provide complete protection for your environment.

Remember, your backups aren't really that important, but your restore is!

Backup strategies

A common train of thought when dealing with backups is to follow the 3-2-1 rule:

3: Keep three copies of your data: one primary and two backups
2: Store the data in two different media types
1: Store at least one copy offsite

This simple approach ensures that no matter what happens, you will be able to have a recoverable copy of your data. Veeam Backup & Replication lets you accomplish this goal by utilizing the backup copy jobs. Back up your production environment once, then use the backup copy jobs to copy the backed-up data to a secondary location, utilizing the Built-in WAN Acceleration features, and to tape for long-term archival. You can even "daisy-chain" these jobs to each other, which ensures that as soon as the backup job is finished, the copy jobs are fired automatically. This allows you to easily accomplish the 3-2-1 rule without the need for complex configurations that make it hard to manage.

Combining this with a Grandfather-Father-Son (GFS) backup media rotation scheme for tape-based archiving ensures that you always have recoverable media available. In such a scheme, there are three, or more, backup cycles: daily, weekly, and monthly. The following table shows how you might create a GFS rotation schedule:

Monday    Tuesday    Wednesday    Thursday    Friday
-         -          -            -           WEEK 1
MON       TUE        WED          THU         WEEK 2
MON       TUE        WED          THU         WEEK 3
MON       TUE        WED          THU         WEEK 4
MON       TUE        WED          THU         MONTH 1

"Grandfather" tapes are kept for a year, "Father" tapes for a month, and "Son" tapes for a week. In addition, quarterly, half-yearly, and/or annual backups could also be separately retained if required.

Recovery point objective and recovery time objective

Both these terms come into play when defining your backup strategy.

The recovery point objective (RPO) is a definition of how much data you can afford to lose. If you run backups every 24 hours, you have, in effect, defined that you can afford to lose up to a day's worth of data for a given application or infrastructure. If that is not the case, you need to have a look at how often you back up that particular application.

The recovery time objective (RTO) is a measure of the amount of time it should take to restore your data and return the application to a steady state. How long can your business afford to be without a given application? 2 hours? 24 hours? A week?

It all depends, and it is very important that you as a backup administrator have a clear understanding of the business you are supporting to evaluate these important parameters. Basically, it boils down to this: if there is a disaster, how much downtime can your business afford? If you don't know, talk to the people in your organization who know. Gather information from the various business units in order to assist in determining what they consider acceptable.
Odds are that your views as an IT professional might not coincide with the views of the business units; determine their RPO and RTO values, and define a backup strategy based on that.

Native tape support

By popular demand, native tape support was introduced in Veeam Backup & Replication v7. While the most effective method of backup might be disk based, lots and lots of customers still want to make use of their existing investment in tape technology. Standalone drives, tape libraries, and Virtual Tape Libraries (VTL) are all supported and make it possible to use tape-based solutions for long-term archival of backup data. Basically, any tape device recognized by the Microsoft Windows server on which Backup & Replication is installed is also supported by Veeam. If Microsoft Windows recognizes the tape device, so will Backup & Replication. It is recommended that customers check the user guide and the Veeam Forums (http://forums.veeam.com) for more information on native tape support.

Veeam Backup & Replication architecture

Veeam Backup & Replication consists of several components that together make up the complete architecture required to protect your environment. This distributed backup architecture leaves you in full control over the deployment, and the licensing options make it easy to scale the solution to fit your needs.

Since it works at the VM layer, it uses advanced technologies such as VMware vSphere Changed Block Tracking (CBT) to ensure that only the data blocks that have changed since the last run are backed up, ensuring that the backup is performed as quickly as possible and that the least amount of data needs to be transferred each time. By talking directly to the VMware vStorage APIs for Data Protection (VADP), Veeam Backup & Replication can back up VMs without the need to install agents or otherwise touch the VMs directly. It simply tells the vSphere environment that it wants to take a backup of a given VM; vSphere then creates a snapshot of the VM, and the VM is read from the snapshot to create the backup. Once the backup is finished, the snapshot is removed, and changes that happened to the VM while it was being backed up are rolled back into the production VM. By integrating with VMware Tools and Microsoft Windows VSS, application-consistent backups are provided if available in the VMs that are being backed up. For Linux-based VMs, VMware Tools are required and their native quiescence option is used.

Not only does it let you back up your VMs and restore them if required, but you can also use it to replicate your production environment to a secondary location. If your secondary location has a different network topology, it helps you remap and re-IP your VMs in case there is a need to fail over a specific VM or even an entire datacenter. Of course, failback is also available once the reason for the failover is rectified and normal operations can resume.

Veeam Backup & Replication components

The Veeam Backup & Replication suite consists of several components which, in combination, make up the backup and replication architecture.

Veeam backup server: This is installed on a physical or virtual Microsoft Windows server. The Veeam backup server is the core component of an implementation, and it acts as the configuration and control center that coordinates backup, replication, recovery verification, and restore tasks. It also controls job scheduling and resource allocation, and is the main entry point for configuring the global settings for the backup infrastructure.
The backup server uses the following services and components:

Veeam Backup Service: This is the main component that coordinates all operations, such as backup, replication, recovery verification, and restore tasks.
Veeam Backup Shell: This is the application user interface.
Veeam Backup SQL Database: This is used by the other components to store data about the backup infrastructure, backup and restore jobs, and component configuration. This database instance can be installed locally or on a remote server.
Veeam Backup PowerShell Snap-in: These are extensions to Microsoft Windows PowerShell that add a set of cmdlets for management of backup, replication, and recovery tasks through automation.

Backup proxy

Backup proxies are used to offload the Veeam backup server and are essential as you scale your environment. Backup proxies can be seen as data movers, physical or virtual, that run a subset of the components required on the Veeam backup server. These components, which include the Veeam transport service, can be installed in a matter of seconds and are fully automated from the Veeam backup server. You can deploy and remove proxy servers as you see fit, and Veeam Backup & Replication will distribute the backup workload between available backup proxies, thus reducing the load on the backup server itself and increasing the number of simultaneous backup jobs that can be performed.

Backup repository

A backup repository is just a location where Veeam Backup & Replication can store backup files, copies of VMs, and metadata. Simply put, it's nothing more than a folder on the assigned disk-based backup storage. Just as you can offload the backup server with multiple proxies, you can add multiple repositories to your infrastructure and direct backup jobs to them to balance the load. The following repository types are supported:

Microsoft Windows or Linux server with local or directly attached storage: Any storage that is seen as local or directly attached storage on a Microsoft Windows or Linux server can be used as a repository. That means there is great flexibility when it comes to selecting repository storage; it can be locally installed storage, iSCSI/FC SAN LUNs, or even locally attached USB drives. When a server is added as a repository, Veeam Backup & Replication deploys and starts the Veeam transport service, which takes care of the communication between the source-side transport service on the Veeam backup server (or proxy) and the repository. This ensures efficient data transfer over both LAN and WAN connections.

Common Internet File System (CIFS) shares: CIFS (also known as Server Message Block (SMB)) shares are a bit different, as Veeam cannot deploy transport services to a network share directly. To work around this, the transport service installed on a Microsoft Windows proxy server handles the connection between the repository and the CIFS share.

Summary

In this article, we learned about various backup strategies and went through some components of Veeam® Backup & Replication.

Resources for Article:

Further resources on this subject:

VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [article]
Use Of ISO Image for Installation of Windows8 Virtual Machine [article]
An Introduction to VMware Horizon Mirage [article]


The Software Task Management Tool - Rake

Packt
16 Apr 2014
5 min read
(For more resources related to this topic, see here.)

Installing Rake

As Rake is a Ruby library, you should first install Ruby on the system if you don't have it installed already. The installation process is different for each operating system; however, we will see the installation example only for the Debian operating system family. Just open the terminal and write the following installation command:

$ sudo apt-get install ruby

If you have an operating system that doesn't contain the apt-get utility, or if you have problems with the Ruby installation, please refer to the official instructions at https://www.ruby-lang.org/en/installation. There are a lot of ways to install Ruby, so please choose your operating system from the list on this page and select your desired installation method.

Rake is included in the Ruby core as of Ruby 1.9, so you don't have to install it as a separate gem. However, if you still use Ruby 1.8 or an older version, you will have to install Rake as a gem. Use the following command to install the gem:

$ gem install rake

The Ruby release cycle is slower than that of Rake, and sometimes you need to install Rake as a gem to work around some special issues. So you can still install Rake as a gem, and in some cases this is a requirement even for Ruby version 1.9 and higher.

To check if you have installed it correctly, open your terminal and type the following command:

$ rake --version

This should return the installed Rake version. The next sign that Rake is installed and is working correctly is an error that you see after typing the rake command in the terminal:

$ mkdir ~/test-rake
$ cd ~/test-rake
$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Introducing rake tasks

From the previous error message, it's clear that first you need to have a Rakefile. As you can see, there are four variants of its name: rakefile, Rakefile, rakefile.rb, and Rakefile.rb. The most popularly used variant is Rakefile; Rails also uses it. However, you can choose any variant for your project. There is no convention that prohibits the user from using any of the four suggested variants.

A Rakefile is required for any Rake-based project. Apart from the fact that its content usually contains DSL, it's also a general Ruby file, so you can write any Ruby code in it. Perform the following steps to get started:

Let's create a Rakefile in the current folder, which will just say Hello Rake, using the following commands:

$ echo "puts 'Hello Rake'" > Rakefile
$ cat Rakefile
puts 'Hello Rake'

Here, the first line creates a Rakefile with the content puts 'Hello Rake', and the second line just shows us its content to make sure that we've done everything correctly. Now, run rake as we tried it before, using the following command:

$ rake
Hello Rake
rake aborted!
Don't know how to build task 'default'
(See full trace by running task with --trace)

The message has changed and it says Hello Rake. Then, it gets aborted because of another error message. At this moment, we have made the first step in learning Rake.
Now, we have to define a default rake task that will be executed when you try to start Rake without any arguments. To do so, open your editor and change the created Rakefile to the following content:

task :default do
  puts 'Hello, Rake'
end

Now, run rake again:

$ rake
Hello, Rake

The output that says Hello, Rake demonstrates that the task works correctly.

The command-line arguments

The most commonly used rake command-line argument is -T. It shows a list of the rake tasks that you have already defined. We have defined the default rake task, and if we try to show the list of all rake tasks, it should be there. However, take a look at what happens in real life using the following command:

$ rake -T

The list is empty. Why? The answer lies within Rake. Run the rake command with the -h option to get the whole list of arguments, and pay attention to the description of the -T option, as shown in the following command-line output:

-T, --tasks [PATTERN]  Display the tasks (matching optional PATTERN) with descriptions, then exit.

You can get more information on Rake in its GitHub repository at https://github.com/jimweirich/rake.

The word description is the cornerstone here. It's a new term that we should know. The description of a rake task is optional; however, it's recommended that you define one, because otherwise the task won't appear in the list of defined rake tasks, as we've just seen. It will be inconvenient for you to read your Rakefile every time you try to run some rake task. Just accept it as a rule: always leave a description for the defined rake tasks.

Now, add a description to your rake task with the desc method call, as shown in the following lines of code:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake.'
end

As you see, it's rather easy. Run the rake -T command again and you will see an output as shown:

$ rake -T
rake default  # Says 'Hello, Rake'

If you want to list all the tasks even if they don't have descriptions, you can pass an -A option along with the -T option to the rake command. The resulting command will look like this: rake -T -A.
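To see the difference in practice, here is a small illustrative Rakefile; the task names greet and secret are made up for this example:

desc 'Prints a greeting'
task :greet do
  puts 'Hello!'
end

# No desc call here, so this task is hidden from a plain rake -T
task :secret do
  puts 'You found me.'
end

Running rake -T lists only greet with its description, while rake -T -A lists both greet and secret.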