
Advanced Less Coding

Packt
09 Feb 2015
40 min read
In this article by Bass Jobsen, author of the book Less Web Development Cookbook, you will learn: Giving your rules importance with the !important statement Using mixins with multiple parameters Using duplicate mixin names Building a switch leveraging argument matching Avoiding individual parameters to leverage the @arguments variable Using the @rest... variable to use mixins with a variable number of arguments Using mixins as functions Passing rulesets to mixins Using mixin guards (as an alternative for the if…else statements) Building loops leveraging mixin guards Applying guards to the CSS selectors Creating color contrasts with Less Changing the background color dynamically Aggregating values under a single property (For more resources related to this topic, see here.) Giving your rules importance with the !important statement The !important statement in CSS can be used to get some style rules always applied no matter where that rule appears in the CSS code. In Less, the !important statement can be applied with mixins and variable declarations too. Getting ready You can write the Less code for this recipe with your favorite editor. After that, you can use the command-line lessc compiler to compile the Less code. Finally, you can inspect the compiled CSS code to see where the !important statements appear. To see the real effect of the !important statements, you should compile the Less code client side, with the client-side compiler less.js, and watch the effect in your web browser. How to do it… Create an important.less file that contains code like the following snippet: .mixin() { color: red; font-size: 2em; } p { &.important {    .mixin() !important; } &.unimportant {    .mixin(); } } After compiling the preceding Less code with the command-line lessc compiler, you will find the following code output produced in the console: p.important { color: red !important; font-size: 2em !important; } p.unimportant { color: red; font-size: 2em; } You can, for instance, use the following snippet of the HTML code to see the effect of the !important statements in your browser: <p class="important"   style="color:green;font-size:4em;">important</p> <p class="unimportant"   style="color:green;font-size:4em;">unimportant</p> Your HTML document should also include the important.less and less.js files, as follows: <link rel="stylesheet/less" type="text/css"   href="important.less"> <script src="less.js" type="text/javascript"></script> Finally, the result will look like that shown in the following screenshot: How it works… In Less, you can use the !important statement not only for properties, but also with mixins. When !important is set for a certain mixin, all properties of this mixin will be declared with the !important statement. You can easily see this effect when inspecting the properties of the p.important selector: both the color and font-size properties get the !important statement after compiling the code. There's more… You should use the !important statements with care as the only way to overrule an !important statement is to use another !important statement. The !important statement overrules the normal CSS cascading, specificity rules, and even the inline styles. Any incorrect or unnecessary use of the !important statements in your Less (or CSS) code will make your code messy and difficult to maintain. In most cases where you try to overrule a style rule, you should give preference to selectors with a higher specificity and not use the !important statements at all. 
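As a quick, hypothetical illustration of that advice (this sketch is not part of the original recipe, and the class names are assumed), the override below relies on a more specific selector instead of the !important statement:

.alert { color: red; }
/* the more specific selector wins without !important */
.sidebar .alert { color: darkred; }

Because .sidebar .alert is more specific than .alert, alerts inside the sidebar get the second color while the normal cascade stays intact and can still be overruled later.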
With Less V2, you can also use the !important statement when declaring your variables. A declaration with the !important statement can look like the following code: @main-color: darkblue !important; Using mixins with multiple parameters In this section, you will learn how to use mixins with more than one parameter. Getting ready For this recipe, you will have to create a Less file, for instance, mixins.less. You can compile this mixins.less file with the command-line lessc compiler. How to do it… Create the mixins.less file and write down the following Less code into it: .mixin(@color; @background: black;) { background-color: @background; color: @color; } div { .mixin(red; white;); } Compile the mixins.less file by running the following command in the console: lessc mixins.less Inspect the CSS code output on the console, and you will find that it looks as follows: div { background-color: #ffffff; color: #ff0000; } How it works… In Less, parameters are either semicolon-separated or comma-separated. Using a semicolon as a separator is preferred because the comma can be ambiguous: the comma separator is not only used to separate parameters, but also to define a csv list, which can be an argument itself. The mixin in this recipe accepts two arguments. The first parameter sets the @color variable, while the second parameter sets the @background variable and has a default value that has been set to black. In the argument list, the default values are defined by writing a colon behind the variable's name, followed by the value. Parameters with a default value are optional when calling the mixins. So the .mixin mixin in this recipe can also be called with the following line of code: .mixin(red); Because the second argument has a default value set to black, the .mixin(red); call also matches the .mixin(@color; @background:black){} mixin, as described in the Building a switch leveraging argument matching recipe. Only variables set as parameters of a mixin are set inside the scope of the mixin. You can see this when compiling the following Less code: .mixin(@color:blue){ color2: @color; } @color: red; div { color1: @color; .mixin; } The preceding Less code compiles into the following CSS code: div { color1: #ff0000; color2: #0000ff; } As you can see in the preceding example, setting @color inside the mixin to its default value does not influence the value of @color assigned in the main scope. So lazy loading applies only to variables inside the same scope; nevertheless, you will have to note that variables assigned in a mixin will leak into the caller. The leaking of variables can be used to use mixins as functions, as described in the Using mixins as functions recipe. There's more… Consider the mixin definition in the following Less code: .mixin(@font-family: "Helvetica Neue", Helvetica, Arial,   sans-serif;) { font-family: @font-family; } The semicolon added at the end of the list prevents the fonts after the "Helvetica Neue" font name in the csv list from being read as arguments for this mixin. If the argument list contains any semicolon, the Less compiler will use semicolons as a separator. In the CSS3 specification, among others, the border and background shorthand properties accept csv values. Also, note that the Less compiler allows you to use the named parameters when calling mixins. 
This can be seen in the following Less code that uses the @color variable as a named parameter: .mixin(@width:50px; @color: yellow) { width: @width; color: @color; } span { .mixin(@color: green); } The preceding Less code will compile into the following CSS code: span { width: 50px; color: #008000; } Note that in the preceding code, #008000 is the hexadecimal representation for the green color. When using the named parameters, their order does not matter. Using duplicate mixin names When your Less code contains one or more mixins with the same name, the Less compiler compiles them all into the CSS code. If the mixin has parameters (see the Building a switch leveraging argument matching recipe) the number of parameters will also match. Getting ready Use your favorite text editor to create and edit the Less files used in this recipe. How to do it… Create a file called mixins.less that contains the following Less code: .mixin(){ height:50px; } .mixin(@color) { color: @color; }   .mixin(@width) { color: green; width: @width; }   .mixin(@color; @width) { color: @color; width: @width; }   .selector-1 { .mixin(red); } .selector-2 { .mixin(red; 500px); } Compile the Less code from step 1 by running the following command in the console: lessc mixins.less After running the command from the previous step, you will find the following Less code output on the console: .selector-1 { color: #ff0000; color: green; width: #ff0000; } .selector-2 { color: #ff0000; width: 500px; } How it works… The .selector-1 selector contains the .mixin(red); call. The .mixin(red); call does not match the .mixin(){}; mixin as the number of arguments does not match. On the other hand, both .mixin(@color){}; and .mixin(@width){}; match the color. For this reason, these mixins will compile into the CSS code. The .mixin(red; 500px); call inside the .selector-2 selector will match only the .mixin(@color; @width){}; mixin, so all other mixins with the same .mixin name will be ignored by the compiler when building the .selector-2 selector. The compiled CSS code for the .selector-1 selector also contains the width: #ff0000; property value as the .mixin(@width){}; mixin matches the call too. Setting the width property to a color value makes no sense in CSS as the Less compiler does not check for this kind of errors. In this recipe, you can also rewrite the .mixin(@width){}; mixin, as follows: .mixin(@width) when (ispixel(@width)){};. There's more… Maybe you have noted that the .selector-1 selector contains two color properties. The Less compiler does not remove duplicate properties unless the value also is the same. The CSS code sometimes should contain duplicate properties in order to provide a fallback for older browsers. Building a switch leveraging argument matching The Less mixin will compile into the final CSS code only when the number of arguments of the caller and the mixins match. This feature of Less can be used to build switches. Switches enable you to change the behavior of a mixin conditionally. In this recipe, you will create a mixin, or better yet, three mixins with the same name. Getting ready Use the command-line lessc compiler to evaluate the effect of this mixin. The compiler will output the final CSS to the console. You can use your favorite text editor to edit the Less code. This recipe makes use of browser-vendor prefixes, such as the -ms-transform prefix. CSS3 introduced vendor-specific rules, which offer you the possibility to write some additional CSS, applicable for only one browser. 
These rules allow browsers to implement proprietary CSS properties that would otherwise have no working standard (and might never actually become the standard). To find out which prefixes should be used for a certain property, you can consult the Can I use database (available at http://caniuse.com/). How to do it… Create a switch.less Less file, and write down the following Less code into it: @browserversion: ie9; .mixin(ie9; @degrees){ transform:rotate(@degrees); -ms-transform:rotate(@degrees); -webkit-transform:rotate(@degrees); } .mixin(ie10; @degrees){ transform:rotate(@degrees); -webkit-transform:rotate(@degrees); } .mixin(@_; @degrees){ transform:rotate(@degrees); } div { .mixin(@browserversion; 70deg); } Compile the Less code from step 1 by running the following command in the console: lessc switch.less Inspect the compiled CSS code that has been output to the console, and you will find that it looks like the following code: div { -ms-transform: rotate(70deg); -webkit-transform: rotate(70deg); transform: rotate(70deg); } Finally, run the following command and you will find that the compiled CSS wll indeed differ from that of step 2: lessc --modify-var="browserversion=ie10" switch.less Now the compiled CSS code will look like the following code snippet: div { -webkit-transform: rotate(70deg); transform: rotate(70deg); } How it works… The switch in this recipe is the @browserversion variable that can easily be changed just before compiling your code. Instead of changing your code, you can also set the --modify-var option of the compiler. Depending on the value of the @browserversion variable, the mixins that match will be compiled, and the other mixins will be ignored by the compiler. The .mixin(ie10; @degrees){} mixin matches the .mixin(@browserversion; 70deg); call only when the value of the @browserversion variable is equal to ie10. Note that the first ie10 argument of the mixin will be used only for matching (argument = ie10) and does not assign any value. You will note that the .mixin(@_; @degrees){} mixin will match each call no matter what the value of the @browserversion variable is. The .mixin(ie9,70deg); call also compiles the .mixin(@_; @degrees){} mixin. Although this should result in the transform: rotate(70deg); property output twice, you will find only one. Since the property got exactly the same value twice, the compiler outputs the property only once. There's more… Not only switches, but also mixin guards, as described in the Using mixin guards (as an alternative for the if…else statements) recipe, can be used to set some properties conditionally. Current versions of Less also support JavaScript evaluating; JavaScript code put between back quotes will be evaluated by the compiler, as can be seen in the following Less code example: @string: "example in lower case"; p { &:after { content: "`@{string}.toUpperCase()`"; } } The preceding code will be compiled into CSS, as follows: p:after { content: "EXAMPLE IN LOWER CASE"; } When using client-side compiling, JavaScript evaluating can also be used to get some information from the browser environment, such as the screen width (screen.width), but as mentioned already, you should not use client-side compiling for production environments. Because you can't be sure that future versions of Less still support JavaScript evaluating, and alternative compilers not written in JavaScript cannot evaluate the JavaScript code, you should always try to write your Less code without JavaScript. 
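If you need environment-dependent output without resorting to JavaScript evaluation, the argument-matching switch shown in this recipe is usually sufficient on its own. The following minimal sketch is not from the book; the @target variable, the legacy keyword, and the .user-select mixin are assumed names that apply the same pattern to another prefixed property:

@target: modern;
.user-select(legacy; @value) {
  -webkit-user-select: @value;
  -moz-user-select: @value;
  -ms-user-select: @value;
  user-select: @value;
}
.user-select(@_; @value) {
  user-select: @value;
}
.toolbar {
  .user-select(@target; none);
}

Compiling with lessc --modify-var="target=legacy" switches the vendor-prefixed declarations on, just as the ie9 and ie10 arguments do for the transform property in this recipe.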
Avoiding individual parameters to leverage the @arguments variable In the Less code, the @arguments variable has a special meaning inside mixins. The @arguments variable contains all arguments passed to the mixin. In this recipe, you will use the @arguments variable together with the the CSS url() function to set a background image for a selector. Getting ready You can inspect the compiled CSS code in this recipe after compiling the Less code with the command-line lessc compiler. Alternatively, you can inspect the results in your browser using the client-side less.js compiler. When inspecting the result in your browser, you will also need an example image that can be used as a background image. Use your favorite text editor to create and edit the Less files used in this recipe. How to do it… Create a background.less file that contains the following Less code: .background(@color; @image; @repeat: no-repeat; @position:   top right;) { background: @arguments; }   div { .background(#000; url("./images/bg.png")); width:300px; height:300px; } Finally, inspect the compiled CSS code, and you will find that it will look like the following code snippet: div { background: #000000 url("./images/bg.png") no-repeat top     right; width: 300px; height: 300px; } How it works… The four parameters of the .background() mixin are assigned as a space-separated list to the @arguments variable. After that, the @arguments variable can be used to set the background property. Also, other CSS properties accept space-separated lists, for example, the margin and padding properties. Note that the @arguments variable does not contain only the parameters that have been set explicit by the caller, but also the parameters set by their default value. You can easily see this when inspecting the compiled CSS code of this recipe. The .background(#000; url("./images/bg.png")); caller doesn't set the @repeat or @position argument, but you will find their values in the compiled CSS code. Using the @rest... variable to use mixins with a variable number of arguments As you can also see in the Using mixins with multiple parameters and Using duplicate mixin names recipes, only matching mixins are compiled into the final CSS code. In some situations, you don't know the number of parameters or want to use mixins for some style rules no matter the number of parameters. In these situations, you can use the special ... syntax or the @rest... variable to create mixins that match independent of the number of parameters. Getting ready You will have to create a file called rest.less, and this file can be compiled with the command-line lessc compiler. You can edit the Less code with your favorite editor. How to do it… Create a file called rest.less that contains the following Less code: .mixin(@a...) { .set(@a) when (iscolor(@a)) {    color: @a; } .set(@a) when (length(@a) = 2) {    margin: @a; } .set(@a); } p{ .mixin(red); } p { .mixin(2px;4px); } Compile the rest.less file from step 1 using the following command in the console: lessc rest.less Inspect the CSS code output to the console that will look like the following code: p { color: #ff0000; } p { margin: 2px 4px; } How it works… The special ... syntax (three dots) can be used as an argument for a mixin. Mixins with the ... syntax in their argument list match any number of arguments. When you put a variable name starting with an @ in front of the ... syntax, all parameters are assigned to that variable. You will find a list of examples of mixins that use the special ... 
syntax as follows: .mixin(@a; ...){}: This mixin matches 1-N arguments .mixin(...){}: This mixin matches 0-N arguments; note that .mixin() without any argument matches only 0 arguments .mixin(@a: 1; @rest...){}: This mixin matches 0-N arguments; note that the first argument is assigned to the @a variable, and all other arguments are assigned as a space-separated list to @rest Because the @rest... variable contains a space-separated list, you can use the Less built-in list functions. Using mixins as functions People who are used to functional programming expect a mixin to change or return a value. In this recipe, you will learn to use mixins as a function that returns a value. In this recipe, the value of the width property inside the div.small and div.big selectors will be set to the length of the longest side of a right-angled triangle, calculated from the lengths of the two shorter sides of this triangle with the Pythagorean theorem. Getting ready The best and easiest way to inspect the results of this recipe will be compiling the Less code with the command-line lessc compiler. You can edit the Less code with your favorite editor. How to do it… Create a file called pythagoras.less that contains the following Less code: .longestSide(@a,@b) { @length: sqrt(pow(@a,2) + pow(@b,2)); } div { &.small {    .longestSide(3,4);    width: @length; } &.big {    .longestSide(6,7);    width: @length; } } Compile the pythagoras.less file from step 1 using the following command in the console: lessc pythagoras.less Inspect the CSS code output on the console after compilation and you will see that it looks like the following code snippet: div.small { width: 5; } div.big { width: 9.21954446; } How it works… Variables set inside a mixin become available inside the scope of the caller. This specific behavior of the Less compiler was used in this recipe to set the @length variable and to make it available in the scope of the div.small and div.big selectors, the callers of the mixin. As you can see, you can use the mixin in this recipe more than once. With every call, a new scope is created and both selectors get their own value of @length. Also, note that variables set inside the mixin do not overwrite variables with the same name that are set in the caller itself. Take, for instance, the following code: .mixin() { @variable: 1; } .selector { @variable: 2; .mixin; property: @variable; } The preceding code will compile into the CSS code, as follows: .selector { property: 2; } There's more… Note that variables won't leak from the mixins to the caller in the following two situations: Inside the scope of the caller, a variable with the same name has already been defined (lazy loading will be applied) The variable has been previously defined by another mixin call (lazy loading will not be applied) Passing rulesets to mixins Since Version 1.7, Less allows you to pass complete rulesets as an argument for mixins. Rulesets, including the Less code, can be assigned to variables and passed into mixins, which also allows you to wrap blocks of the CSS code defined inside mixins. In this recipe, you will learn how to do this. Getting ready For this recipe, you will have to create a Less file called keyframes.less, for instance. You can compile this keyframes.less file with the command-line lessc compiler. Finally, inspect the CSS code output to the console. 
How to do it… Create the keyframes.less file, and write down the following Less code into it: // Keyframes .keyframe(@name; @roules) { @-webkit-keyframes @name {    @roules(); } @-o-keyframes @name {    @roules(); } @keyframes @name {    @roules(); } } .keyframe(progress-bar-stripes; { from { background-position: 40px 0; } to   { background-position: 0 0; } }); Compile the keyframes.less file by running the following command shown in the console: lessc keyframes.less Inspect the CSS code output on the console and you will find that it looks like the following code: @-webkit-keyframes progress-bar-stripes { from {    background-position: 40px 0; } to {    background-position: 0 0; } } @-o-keyframes progress-bar-stripes { from {    background-position: 40px 0; } to {    background-position: 0 0; } } @keyframes progress-bar-stripes { from {    background-position: 40px 0; } to {    background-position: 0 0; } } How it works… Rulesets wrapped between curly brackets are passed as an argument to the mixin. A mixin's arguments are assigned to a (local) variable. When you assign the ruleset to the @ruleset variable, you are enabled to call @ruleset(); to "mixin" the ruleset. Note that the passed rulesets can contain the Less code, such as built-in functions too. You can see this by compiling the following Less code: .mixin(@color; @rules) { @othercolor: green; @media (print) {    @rules(); } }   p { .mixin(red; {color: lighten(@othercolor,20%);     background-color:darken(@color,20%);}) } The preceding Less code will compile into the following CSS code: @media (print) { p {    color: #00e600;    background-color: #990000; } } A group of CSS properties, nested rulesets, or media declarations stored in a variable is called a detached ruleset. Less offers support for the detached rulesets since Version 1.7. There's more… As you could see in the last example in the previous section, rulesets passed as an argument can be wrapped in the @media declarations too. This enables you to create mixins that, for instance, wrap any passed ruleset into a @media declaration or class. Consider the example Less code shown here: .smallscreens-and-olderbrowsers(@rules) { .lt-ie9 & {    @rules(); } @media (min-width:768px) {    @rules(); } } nav { float: left; width: 20%; .smallscreens-and-olderbrowsers({    float: none;    width:100%; }); } The preceding Less code will compile into the CSS code, as follows: nav { float: left; width: 20%; } .lt-ie9 nav { float: none; width: 100%; } @media (min-width: 768px) { nav {    float: none;    width: 100%; } } The style rules wrapped in the .lt-ie9 class can, for instance, be used with Paul Irish's <html> conditional classes technique or Modernizr. Now you can call the .smallscreens-and-olderbrowsers(){} mixin anywhere in your code and pass any ruleset to it. All passed rulesets get wrapped in the .lt-ie9 class or the @media (min-width: 768px) declaration now. When your requirements change, you possibly have to change only these wrappers once. Using mixin guards (as an alternative for the if…else statements) Most programmers are used to and familiar with the if…else statements in their code. Less does not have these if…else statements. Less tries to follow the declarative nature of CSS when possible and for that reason uses guards for matching expressions. In Less, conditional execution has been implemented with guarded mixins. Guarded mixins use the same logical and comparison operators as the @media feature in CSS does. 
Getting ready You can compile the Less code in this recipe with the command-line lessc compiler. Also, check the compiler options; you can find them by running the lessc command in the console without any argument. In this recipe, you will have to use the --modify-var option. How to do it… Create a Less file named guards.less, which contains the following Less code: @color: white; .mixin(@color) when (luma(@color) >= 50%) { color: black; } .mixin(@color) when (luma(@color) < 50%) { color: white; }   p { .mixin(@color); } Compile the Less code in the guards.less file using the command-line lessc compiler with the following command entered in the console: lessc guards.less Inspect the output written on the console, which will look like the following code: p { color: black; } Compile the Less code with different values set for the @color variable and see how the output changes. You can use the command as follows: lessc --modify-var="color=green" guards.less The preceding command will produce the following CSS code: p {   color: white;   } Now, refer to the following command: lessc --modify-var="color=lightgreen" guards.less With the color set to light green, it will again produce the following CSS code: p {   color: black;   }   How it works… The use of guards to build an if…else construct can easily be compared with the switch expression, which can be found in programming languages such as PHP, C#, and pretty much any other object-oriented programming language. Guards are written with the when keyword followed by one or more conditions. When the condition(s) evaluate to true, the code will be mixed in. Also note that the arguments should match, as described in the Building a switch leveraging argument matching recipe, before the mixin gets compiled. The syntax and logic of guards is the same as that of the CSS @media feature. A condition can contain the following comparison operators: >, >=, =, =<, and <. Additionally, the keyword true is the only value that evaluates as true. Two or more conditions can be combined with the and keyword, which is equivalent to the logical and operator, or, alternatively, with a comma, which acts as the logical or operator. The following code will show you an example of the combined conditionals: .mixin(@a; @color) when (@a<10) and (luma(@color) >= 50%) { } The following code contains the not keyword that can be used to negate conditions: .mixin(@a; @color) when not (luma(@color) >= 50%) { } There's more… Inside the guard conditions, (global) variables can also be compared. The following Less code example shows you how to use variables inside guards: @a: 10; .mixin() when (@a >= 10) {} The preceding code will also enable you to compile different CSS versions from the same code base when using the --modify-var option of the compiler. The effect of the guarded mixin described in the preceding code will be very similar to that of the mixins built in the Building a switch leveraging argument matching recipe. Note that in the preceding example, variables in the mixin's scope overwrite variables from the global scope, as can be seen when compiling the following code: @a: 10; .mixin(@a) when (@a < 10) {property: @a;} selector { .mixin(5); } The preceding Less code will compile into the following CSS code: selector { property: 5; } When you compare guarded mixins with the if…else constructs or switch expressions in other programming languages, you will also need a way to create a conditional for the default situation. 
The built-in Less default() function can be used to create such a default conditional that is functionally equal to the else statement in the if…else constructs or the default statement in the switch expressions. The default() function returns true when no other mixins match (matching also takes the guards into account) and can be evaluated as the guard condition. Building loops leveraging mixin guards Mixin guards, as described, among others, in the Using mixin guards (as an alternative for the if…else statements) recipe, can also be used to dynamically build a set of CSS classes. In this recipe, you will learn how to do this. Getting ready You can use your favorite editor to create the Less code in this recipe. How to do it… Create a shadesofblue.less Less file, and write down the following Less code into it: .shadesofblue(@number; @blue:100%) when (@number > 0) {   .shadesofblue(@number - 1, @blue - 10%);   @classname: e(%(".color-%a",@number)); @{classname} {    background-color: rgb(0, 0, @blue);    height:30px; } } .shadesofblue(10); You can, for instance, use the following snippet of the HTML code to see the effect of the compiled Less code from the preceding step: <div class="color-1"></div> <div class="color-2"></div> <div class="color-3"></div> <div class="color-4"></div> <div class="color-5"></div> <div class="color-6"></div> <div class="color-7"></div> <div class="color-8"></div> <div class="color-9"></div> <div class="color-10"></div> Your HTML document should also include the shadesofblue.less and less.js files, as follows: <link rel="stylesheet/less" type="text/css"   href="shadesofblue.less"> <script src="less.js" type="text/javascript"></script> Finally, the result will look like that shown in this screenshot: How it works… The CSS classes in this recipe are built with recursion. The recursion here is achieved by the .shadesofblue(){} mixin calling itself with different parameters. The loop starts with the .shadesofblue(10); call. When the compiler reaches the .shadesofblue(@number - 1, @blue - 10%); line of code, it suspends the current call and starts compiling the .shadesofblue(){} mixin again with @number decreased by one and @blue decreased by 10 percent. The process will be repeated until @number < 1. Finally, when the @number variable becomes equal to 0, the compiler tries to call the .shadesofblue(0,0); mixin, which does not match the when (@number > 0) guard. When no matching mixin is found, the compiler stops, compiles the rest of the code, and writes the first class into the CSS code, as follows: .color-1 { background-color: #00001a; height: 30px; } Then, the compiler starts again where it stopped before, at the .shadesofblue(2,20); call, and writes the next class into the CSS code, as follows: .color-2 { background-color: #000033; height: 30px; } This process is repeated up to the tenth class. There's more… When inspecting the compiled CSS code, you will find that the height property has been repeated ten times, too. This kind of code repetition can be prevented using the :extend Less pseudo-class. The following code will show you an example of the usage of the :extend Less pseudo-class: .baseheight { height: 30px; } .mixin(@i: 2) when(@i > 0) { .mixin(@i - 1); .class@{i} {    width: 10*@i;    &:extend(.baseheight); } } .mixin(); Alternatively, in this situation, you can create a more generic selector, which sets the height property as follows: div[class^="color-"] { height: 30px; } Recursive loops are also useful when iterating over a list of values. 
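As a brief illustration of this point, the following sketch (not part of the recipe; the @palette list, the .swatches mixin, and the .swatch- class prefix are assumed names) walks a space-separated list with a guarded recursive mixin and the built-in length() and extract() functions:

@palette: #ff6b6b #feca57 #48dbfb #1dd1a1;
.swatches(@list; @i: 1) when (@i =< length(@list)) {
  .swatch-@{i} {
    background-color: extract(@list, @i);
  }
  .swatches(@list; (@i + 1));
}
.swatches(@palette);

Each recursion step emits one .swatch-N class, and the loop stops as soon as @i exceeds the number of items in the list.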
Max Mikhailov, one of the members of the Less core team, wrote a wrapper mixin for recursive Less loops, which can be found at https://github.com/seven-phases-max. This wrapper contains the .for and .-each mixins that can be used to build loops. The following code will show you how to write a nested loop: @import "for"; #nested-loops { .for(3, 1); .-each(@i) {    .for(0, 2); .-each(@j) {      x: (10 * @i + @j);    } } } The preceding Less code will produce the following CSS code: #nested-loops { x: 30; x: 31; x: 32; x: 20; x: 21; x: 22; x: 10; x: 11; x: 12; } Finally, you can use a list of mixins as your data provider in some situations. The following Less code gives an example about using mixins to avoid recursion: .data() { .-("dark"; black); .-("light"; white); .-("accent"; pink); }   div { .data(); .-(@class-name; @color){    @class: e(@class-name);    &.@{class} {      color: @color;    } } } The preceding Less code will compile into the CSS code, as follows: div.dark { color: black; } div.light { color: white; }   div.accent { color: pink; } Applying guards to the CSS selectors Since Version 1.5 of Less, guards can be applied not only on mixins, but also on the CSS selectors. This recipe will show you how to apply guards on the CSS selectors directly to create conditional rulesets for these selectors. Getting ready The easiest way to inspect the effect of the guarded selector in this recipe will be using the command-line lessc compiler. How to do it… Create a Less file named darkbutton.less that contains the following code: @dark: true; button when (@dark){ background-color: black; color: white; } Compile the darkbutton.less file with the command-line lessc compiler by entering the following command into the console: lessc darkbutton.less Inspect the CSS code output on the console, which will look like the following code: button { background-color: black; color: white; } Now try the following command and you will find that the button selector is not compiled into the CSS code: lessc --modify-var="dark=false" darkbutton.less How it works… The guarded CSS selectors are ignored by the compiler and so not compiled into the CSS code when the guard evaluates false. Guards for the CSS selectors and mixins leverage the same comparison and logical operators. You can read in more detail how to create guards with these operators in Using mixin guards (as an alternative for the if…else statements) recipe. There's more… Note that the true keyword will be the only value that evaluates true. So the following command, which sets @dark equal to 1, will not generate the button selector as the guard evaluates false: lessc --modify-var="dark=1" darkbutton.less The following Less code will give you another example of applying a guard on a selector: @width: 700px; div when (@width >= 600px ){ border: 1px solid black; } The preceding code will output the following CSS code: div {   border: 1px solid black;   } On the other hand, nothing will be output when setting @width to a value smaller than 600 pixels. You can also rewrite the preceding code with the & feature referencing the selector, as follows: @width: 700px; div { & when (@width >= 600px ){    border: 1px solid black; } } Although the CSS code produced of the latest code does not differ from the first, it will enable you to add more properties without the need to repeat the selector. 
You can also add the code in a mixin, as follows: .conditional-border(@width: 700px) {    & when (@width >= 600px ){    border: 1px solid black; } width: @width; } Creating color contrasts with Less Color contrasts play an important role in the first impression of your website or web application. Color contrasts are also important for web accessibility. Using high contrasts between background and text will help the visually disabled, color blind, and even people with dyslexia to read your content more easily. The contrast() function returns a light (white by default) or dark (black by default) color depending on the input color. The contrast function can help you to write a dynamical Less code that always outputs the CSS styles that create enough contrast between the background and text colors. Setting your text color to white or black depending on the background color enables you to meet the highest accessibility guidelines for every color. A sample can be found at http://www.msfw.com/accessibility/tools/contrastratiocalculator.aspx, which shows you that either black or white always gives enough color contrast. When you use Less to create a set of buttons, for instance, you don't want some buttons with white text while others have black text. In this recipe, you solve this situation by adding a stroke to the button text (text shadow) when the contrast ratio between the button background and button text color is too low to meet your requirements. Getting ready You can inspect the results of this recipe in your browser using the client-side less.js compiler. You will have to create some HTML and Less code, and you can use your favorite editor to do this. You will have to create the following file structure: How to do it… Create a Less file named contraststrokes.less, and write down the following Less code into it: @safe: green; @danger: red; @warning: orange; @buttonTextColor: white; @ContrastRatio: 7; //AAA, small texts   .setcontrast(@backgroundcolor) when (luma(@backgroundcolor)   =< luma(@buttonTextColor)) and     (((luma(@buttonTextColor)+5)/     (luma(@backgroundcolor)+5)) < @ContrastRatio) { color:@buttonTextColor; text-shadow: 0 0 2px black; } .setcontrast(@backgroundcolor) when (luma(@backgroundcolor)   =< luma(@buttonTextColor)) and     (((luma(@buttonTextColor)+5)/     (luma(@backgroundcolor)+5)) >= @ContrastRatio) { color:@buttonTextColor; }   .setcontrast(@backgroundcolor) when (luma(@backgroundcolor)   >= luma(@buttonTextColor)) and     (((luma(@backgroundcolor)+5)/     (luma(@buttonTextColor)+5)) < @ContrastRatio) { color:@buttonTextColor; text-shadow: 0 0 2px white; } .setcontrast(@backgroundcolor) when (luma(@backgroundcolor)   >= luma(@buttonTextColor)) and     (((luma(@backgroundcolor)+5)/     (luma(@buttonTextColor)+5)) >= @ContrastRatio) { color:@buttonTextColor; }   button { padding:10px; border-radius:10px; color: @buttonTextColor; width:200px; }   .safe { .setcontrast(@safe); background-color: @safe; }   .danger { .setcontrast(@danger); background-color: @danger; }   .warning { .setcontrast(@warning); background-color: @warning; } Create an HTML file, and save this file as index.html. 
Write down the following HTML code into this index.html file: <!DOCTYPE html> <html> <head>    <meta charset="utf-8">    <title>High contrast buttons</title>    <link rel="stylesheet/less" type="text/css"       href="contraststrokes.less">    <script src="less.min.js"       type="text/javascript"></script> </head> <body>    <button style="background-color:green;">safe</button>    <button class="safe">safe</button><br>    <button style="background-color:red;">danger</button>    <button class="danger">danger</button><br>    <button style="background-color:orange;">     warning</button>    <button class="warning">warning</button> </body> </html> Now load the index.html file from step 2 in your browser. When all has gone well, you will see something like what's shown in the following screenshot: On the left-hand side of the preceding screenshot, you will see the original colored buttons, and on the right-hand side, you will find the high-contrast buttons. How it works… The main purpose of this recipe is to show you how to write dynamical code based on the color contrast ratio. Web Content Accessibility Guidelines (WCAG) 2.0 covers a wide range of recommendations to make web content more accessible. They have defined the following three conformance levels: Conformance Level A: In this level, all Level A success criteria are satisfied Conformance Level AA: In this level, all Level A and AA success criteria are satisfied Conformance Level AAA: In this level, all Level A, AA, and AAA success criteria are satisfied If you focus only on the color contrast aspect, you will find the following paragraphs in the WCAG 2.0 guidelines. 1.4.1 Use of Color: Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (Level A) 1.4.3 Contrast (Minimum): The visual presentation of text and images of text has a contrast ratio of at least 4.5:1 (Level AA) 1.4.6 Contrast (Enhanced): The visual presentation of text and images of text has a contrast ratio of at least 7:1 (Level AAA) The contrast ratio can be calculated with a formula that can be found at http://www.w3.org/TR/WCAG20/#contrast-ratiodef: (L1 + 0.05) / (L2 + 0.05) In the preceding formula, L1 is the relative luminance of the lighter of the colors, and L2 is the relative luminance of the darker of the colors. In Less, the relative luminance of a color can be found with the built-in luma() function. In the Less code of this recipe are the four guarded .setcontrast(){} mixins. The guard conditions, such as (luma(@backgroundcolor) =< luma(@buttonTextColor)) are used to find which of the @backgroundcolor and @buttonTextColor colors is the lighter one. Then the (((luma({the lighter color})+5)/(luma({the darker color})+5)) < @ContrastRatio) condition can, according to the preceding formula, be used to determine whether the contrast ratio between these colors meets the requirement (@ContrastRatio) or not. When the value of the calculated contrast ratio is lower than the value set by the @ContrastRatio, the text-shadow: 0 0 2px {color}; ruleset will be mixed in, where {color} will be white or black depending on the relative luminance of the color set by the @buttonTextColor variable. There's more… In this recipe, you added a stroke to the web text to improve the accessibility. First, you will have to bear in mind that improving the accessibility by adding a stroke to your text is not a proven method. 
Also, automatic testing of the accessibility (by calculating the color contrast ratios) cannot be done. Other options to solve this issue are to increase the font size or change the background color itself. You can read how to change the background color dynamically based on color contrast ratios in the Changing the background color dynamically recipe. When you read the exceptions of the 1.4.6 Contrast (Enhanced) paragraph of the WCAG 2.0 guidelines, you will find that large-scale text requires a color contrast ratio of 4.5 instead of 7.0 to meet the requirements of the AAA Level. Large-scaled text is defined as at least 18 point or 14 point bold or font size that would yield the equivalent size for Chinese, Japanese, and Korean (CJK) fonts. To try this, you could replace the text-shadow properties in the Less code of step 1 of this recipe with the font-size, 14pt, and font-weight, bold; declarations. After this, you can inspect the results in your browser again. Depending on, among others, the values you have chosen for the @buttonTextColor and @ContrastRatio variables, you will find something like the following screenshot: On the left-hand side of the preceding screenshot, you will see the original colored buttons, and on the right-hand side, you will find the high-contrast buttons. Note that when you set the @ContrastRatio variable to 7.0, the code does not check whether the larger font indeed meets the 4.5 contrast ratio requirement. Changing the background color dynamically When you define some basic colors to generate, for instance, a set of button elements, you can use the built-in contrast() function to set the font color. The built-in contrast() function provides the highest possible contrast, but does not guarantee that the contrast ratio is also high enough to meet your accessibility requirements. In this recipe, you will learn how to change your basic color automatically to meet the required contrast ratio. Getting ready You can inspect the results of this recipe in your browser using the client-side less.js compiler. Use your favorite editor to create the HTML and Less code in this recipe. 
You will have to create the following file structure: How to do it… Create a Less file named backgroundcolors.less, and write down the following Less code into it: @safe: green; @danger: red; @warning: orange; @ContrastRatio: 7.0; //AAA @precision: 1%; @buttonTextColor: black; @threshold: 43;   .setcontrastcolor(@startcolor) when (luma(@buttonTextColor)   < @threshold) { .contrastcolor(@startcolor) when (luma(@startcolor) < 100     ) and (((luma(@startcolor)+5)/     (luma(@buttonTextColor)+5)) < @ContrastRatio) {    .contrastcolor(lighten(@startcolor,@precision)); } .contrastcolor(@startcolor) when (@startcolor =     color("white")),(((luma(@startcolor)+5)/     (luma(@buttonTextColor)+5)) >= @ContrastRatio) {    @contrastcolor: @startcolor; } .contrastcolor(@startcolor); }   .setcontrastcolor(@startcolor) when (default()) { .contrastcolor(@startcolor) when (luma(@startcolor) < 100     ) and (((luma(@buttonTextColor)+5)/     (luma(@startcolor)+5)) < @ContrastRatio) {    .contrastcolor(darken(@startcolor,@precision)); } .contrastcolor(@startcolor) when (luma(@startcolor) = 100     ),(((luma(@buttonTextColor)+5)/(luma(@startcolor)+5))       >= @ContrastRatio) {    @contrastcolor: @startcolor; } .contrastcolor(@startcolor); }   button { padding:10px; border-radius:10px; color:@buttonTextColor; width:200px; }   .safe { .setcontrastcolor(@safe); background-color: @contrastcolor; }   .danger { .setcontrastcolor(@danger); background-color: @contrastcolor; }   .warning { .setcontrastcolor(@warning); background-color: @contrastcolor; } Create an HTML file and save this file as index.html. Write down the following HTML code into this index.html file: <!DOCTYPE html> <html> <head>    <meta charset="utf-8">    <title>High contrast buttons</title>      <link rel="stylesheet/less" type="text/css"       href="backgroundcolors.less">    <script src="less.min.js"       type="text/javascript"></script> </head> <body>    <button style="background-color:green;">safe</button>    <button class="safe">safe</button><br>    <button style="background-color:red;">danger</button>    <button class="danger">danger</button><br>    <button style="background-color:orange;">warning     </button>    <button class="warning">warning</button> </body> </html> Now load the index.html file from step 2 in your browser. When all has gone well, you will see something like the following screenshot: On the left-hand side of the preceding figure, you will see the original colored buttons, and on the right-hand side, you will find the high contrast buttons. How it works… The guarded .setcontrastcolor(){} mixins are used to determine the color set depending upon whether the @buttonTextColor variable is a dark color or not. When the color set by @buttonTextColor is a dark color, with a relative luminance below the threshold value set by the @threshold variable, the background colors should be made lighter. For light colors, the background colors should be made darker. Inside each .setcontrastcolor(){} mixin, a second set of mixins has been defined. These guarded .contrastcolor(){} mixins construct a recursive loop, as described in the Building loops leveraging mixin guards recipe. In each step of the recursion, the guards test whether the contrast ratio that is set by the @ContrastRatio variable has been reached or not. When the contrast ratio does not meet the requirements, the @startcolor variable will darken or lighten by the number of percent set by the @precision variable with the built-in darken() and lighten() functions. 
When the required contrast ratio has been reached or the color defining the @startcolor variable has become white or black, the modified color value of @startcolor will be assigned to the @contrastcolor variable. The guarded .contrastcolor(){} mixins are used as functions, as described in the Using mixins as functions recipe, to assign the @contrastcolor variable that will be used to set the background-color property of the button selectors. There's more… A small value of the @precision variable will increase the number of recursions needed to find the required colors, as there will be more and smaller steps. With the number of recursions, the compilation time will also increase. When you choose a bigger value for @precision, the contrast color found might differ from the start color more than needed to meet the contrast ratio requirement. When you choose, for instance, a dark button text color, which is not black, all or some base background colors will be set to white. The chances of ending up at white increase for high values of the @ContrastRatio variable. The recursions will stop when white (or black) has been reached, as you cannot make white any lighter. When the recursion stops on reaching white or black, the colors set by the mixins in this recipe don't meet the required color contrast ratios. Aggregating values under a single property The merge feature of Less enables you to merge property values into a list under a single property. Each list can be either space-separated or comma-separated. The merge feature can be useful to define a property that accepts a list as a value. For instance, the background property accepts a comma-separated list of backgrounds. Getting ready For this recipe, you will need a text editor and a Less compiler. How to do it… Create a file called defaultfonts.less that contains the following Less code: .default-fonts() { font-family+: Helvetica, Arial, sans-serif; } p { font-family+: "Helvetica Neue"; .default-fonts(); } Compile the defaultfonts.less file from step 1 using the following command in the console: lessc defaultfonts.less Inspect the CSS code output on the console after compilation and you will see that it looks like the following code: p { font-family: "Helvetica Neue", Helvetica, Arial, sans-   serif; } How it works… When the compiler finds the plus sign (+) before the assignment sign (:), it will merge the values into a csv list under a single property instead of outputting a separate declaration for each value. There's more… Since Version 1.7 of Less, you can also merge the property's values separated by a space instead of a comma. For space-separated values, you should use the +_ sign instead of a + sign, as can be seen in the following code: .text-overflow(@text-overflow: ellipsis) { text-overflow+_ : @text-overflow; } p, .text-overflow { .text-overflow(); text-overflow+_ : ellipsis; } The preceding Less code will compile into the CSS code, as follows: p, .text-overflow { text-overflow: ellipsis ellipsis; } Note that the text-overflow property doesn't force an overflow to occur; you will have to explicitly set, for instance, the overflow property to hidden for the element. Summary This article walks you through the use of parameterized mixins and shows you how to use guards. A guard can be used as an if-else statement and makes it possible to construct iterative loops in Less. 

Ceph Instant Deployment

Packt
09 Feb 2015
14 min read
In this article by Karan Singh, author of the book, Learning Ceph, we will cover the following topics: Creating a sandbox environment with VirtualBox From zero to Ceph – deploying your first Ceph cluster Scaling up your Ceph cluster – monitor and OSD addition (For more resources related to this topic, see here.) Creating a sandbox environment with VirtualBox We can test deploy Ceph in a sandbox environment using Oracle VirtualBox virtual machines. This virtual setup can help us discover and perform experiments with Ceph storage clusters as if we are working in a real environment. Since Ceph is an open source software-defined storage deployed on top of commodity hardware in a production environment, we can imitate a fully functioning Ceph environment on virtual machines, instead of real commodity hardware, for our testing purposes. Oracle VirtualBox is free software available at http://www.virtualbox.org for Windows, Mac OS X, and Linux. We must fulfil the system requirements for the VirtualBox software so that it can function properly during our testing. We assume that your host operating system is a Unix variant; for Microsoft Windows host machines, use an absolute path to run the VBoxManage command, which is by default C:\Program Files\Oracle\VirtualBox\VBoxManage.exe. The system requirements for VirtualBox depend upon the number and configuration of virtual machines running on top of it. Your VirtualBox host will require an x86-type processor (Intel or AMD), a few gigabytes of memory (to run three Ceph virtual machines), and a couple of gigabytes of hard drive space. To begin with, we must download VirtualBox from http://www.virtualbox.org/ and then follow the installation procedure once this has been downloaded. We will also need to download the CentOS 6.4 Server ISO image from http://vault.centos.org/6.4/isos/. To set up our sandbox environment, we will create a minimum of three virtual machines; you can create even more machines for your Ceph cluster based on the hardware configuration of your host machine. We will first create a single VM and install the OS on it; after this, we will clone this VM twice. This will save us a lot of time and increase our productivity. Let's begin by performing the following steps to create the first virtual machine: The VirtualBox host machine used throughout this demonstration is a Mac OS X machine, which is a UNIX-type host. If you are performing these steps on a non-UNIX machine, that is, on a Windows-based host, then keep in mind that the VirtualBox host-only adapter name will be something like VirtualBox Host-Only Ethernet Adapter #<adapter number>. Please run these commands with the correct adapter names. On Windows-based hosts, you can check VirtualBox networking options in Oracle VM VirtualBox Manager by navigating to File | VirtualBox Settings | Network | Host-only Networks. After the installation of the VirtualBox software, a network adapter is created that you can use, or you can create a new adapter with a custom IP: For UNIX-based VirtualBox hosts # VBoxManage hostonlyif remove vboxnet1 # VBoxManage hostonlyif create # VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.1 --netmask 255.255.255.0 For Windows-based VirtualBox hosts # VBoxManage.exe hostonlyif remove "VirtualBox Host-Only Ethernet Adapter" # VBoxManage.exe hostonlyif create # VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.57.1 --netmask 255.255.255.0 VirtualBox comes with a GUI manager. 
If your host is running Linux OS, it should have the X-desktop environment (Gnome or KDE) installed. Open Oracle VM VirtualBox Manager and create a new virtual machine with the following specifications using GUI-based New Virtual Machine Wizard, or use the CLI commands mentioned at the end of every step: 1 CPU 1024 MB memory 10 GB X 4 hard disks (one drive for OS and three drives for Ceph OSD) 2 network adapters CentOS 6.4 ISO attached to VM The following is the step-by-step process to create virtual machines using CLI commands: Create your first virtual machine: # VBoxManage createvm --name ceph-node1 --ostype RedHat_64 --register # VBoxManage modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet1 For Windows VirtualBox hosts: # VBoxManage.exe modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter" Create CD-Drive and attach CentOS ISO image to first virtual machine: # VBoxManage storagectl ceph-node1 --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on # VBoxManage storageattach ceph-node1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium CentOS-6.4-x86_64-bin-DVD1.iso Make sure you execute the preceding command from the same directory where you have saved CentOS ISO image or you can specify the location where you saved it. Create SATA interface, OS hard drive and attach them to VM; make sure the VirtualBox host has enough free space for creating vm disks. If not, select the host drive which have free space: # VBoxManage storagectl ceph-node1 --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on # VBoxManage createhd --filename OS-ceph-node1.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium OS-ceph-node1.vdi Create SATA interface, first ceph disk and attach them to VM: # VBoxManage createhd --filename ceph-node1-osd1.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium ceph-node1-osd1.vdi Create SATA interface, second ceph disk and attach them to VM: # VBoxManage createhd --filename ceph-node1-osd2.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium ceph-node1-osd2.vdi Create SATA interface, third ceph disk and attach them to VM: # VBoxManage createhd --filename ceph-node1-osd3.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium ceph-node1-osd3.vdi Now, at this point, we are ready to power on our ceph-node1 VM. You can do this by selecting the ceph-node1 VM from Oracle VM VirtualBox Manager, and then clicking on the Start button, or you can run the following command: # VBoxManage startvm ceph-node1 --type gui As soon as you start your VM, it should boot from the ISO image. After this, you should install CentOS on VM. If you are not already familiar with Linux OS installation, you can follow the documentation at https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/index.html. 
Once you have successfully installed the operating system, edit the network configuration of the machine: Edit /etc/sysconfig/network and change the hostname parameter: HOSTNAME=ceph-node1 Edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and add: ONBOOT=yes BOOTPROTO=dhcp Edit the /etc/sysconfig/network-scripts/ifcfg-eth1 file and add: ONBOOT=yes BOOTPROTO=static IPADDR=192.168.57.101 NETMASK=255.255.255.0 Edit the /etc/hosts file and add: 192.168.57.101 ceph-node1 192.168.57.102 ceph-node2 192.168.57.103 ceph-node3 Once the network settings have been configured, restart the VM and log in via SSH from your host machine. Also, test the Internet connectivity on this machine, which is required to download Ceph packages: # ssh root@192.168.57.101 Once the network setup has been configured correctly, you should shut down your first VM so that we can make two clones of it. If you do not shut down your first VM, the cloning operation might fail. Create a clone of ceph-node1 as ceph-node2: # VBoxManage clonevm --name ceph-node2 ceph-node1 --register Create a clone of ceph-node1 as ceph-node3: # VBoxManage clonevm --name ceph-node3 ceph-node1 --register After the cloning operation is complete, you can start all three VMs: # VBoxManage startvm ceph-node1 # VBoxManage startvm ceph-node2 # VBoxManage startvm ceph-node3 Set up VM ceph-node2 with the correct hostname and network configuration: Edit /etc/sysconfig/network and change the hostname parameter: HOSTNAME=ceph-node2 Edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add: DEVICE=<correct device name of your first network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=dhcp HWADDR=<correct MAC address of your first network interface, check ifconfig -a> Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add: DEVICE=<correct device name of your second network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=static IPADDR=192.168.57.102 NETMASK=255.255.255.0 HWADDR=<correct MAC address of your second network interface, check ifconfig -a> Edit the /etc/hosts file and add: 192.168.57.101 ceph-node1 192.168.57.102 ceph-node2 192.168.57.103 ceph-node3 After performing these changes, you should restart your virtual machine to bring the new hostname into effect. The restart will also update your network configurations. Set up VM ceph-node3 with the correct hostname and network configuration: Edit /etc/sysconfig/network and change the hostname parameter: HOSTNAME=ceph-node3 Edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add: DEVICE=<correct device name of your first network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=dhcp HWADDR=<correct MAC address of your first network interface, check ifconfig -a> Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add: DEVICE=<correct device name of your second network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=static IPADDR=192.168.57.103 NETMASK=255.255.255.0 HWADDR=<correct MAC address of your second network interface, check ifconfig -a> Edit the /etc/hosts file and add: 192.168.57.101 ceph-node1 192.168.57.102 ceph-node2 192.168.57.103 ceph-node3 After performing these changes, you should restart your virtual machine to bring the new hostname into effect; the restart will also update your network configurations. At this point, we have prepared three virtual machines and made sure that each VM can communicate with the others; a quick way to verify this is sketched below.
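This is an optional helper, not part of the original steps; run it from ceph-node1, which already has the /etc/hosts entries added above, and note that the Internet check simply pings a public IP, which some networks block:
#!/bin/bash
## check-nodes.sh -- verify that the private network and Internet access work
for NODE in ceph-node1 ceph-node2 ceph-node3; do
  if ping -c 1 -W 2 "$NODE" > /dev/null 2>&1; then
    echo "$NODE: reachable on the host-only network"
  else
    echo "$NODE: NOT reachable -- recheck /etc/hosts and the ifcfg files" >&2
  fi
done
## Internet connectivity is needed later to download the Ceph packages
ping -c 1 -W 2 8.8.8.8 > /dev/null 2>&1 && echo "Internet: reachable" || echo "Internet: NOT reachable" >&2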
They should also have access to the Internet to install Ceph packages. From zero to Ceph – deploying your first Ceph cluster To deploy our first Ceph cluster, we will use the ceph-deploy tool to install and configure Ceph on all three virtual machines. The ceph-deploy tool is a part of the Ceph software-defined storage suite and is used for easier deployment and management of your Ceph storage cluster. Since we created three virtual machines that run CentOS 6.4 and have connectivity with the Internet as well as private network connections, we will configure these machines as a Ceph storage cluster, as shown in the following diagram: Configure ceph-node1 for an SSH passwordless login to the other nodes. Execute the following commands from ceph-node1: While configuring SSH, leave the passphrase empty and proceed with the default settings: # ssh-keygen Copy the SSH key IDs to ceph-node2 and ceph-node3 by providing their root passwords. After this, you should be able to log in on these nodes without a password: # ssh-copy-id ceph-node2 # ssh-copy-id ceph-node3 Installing and configuring EPEL on all Ceph nodes: Install EPEL, which is the repository for installing extra packages for your Linux system, by executing the following command on all Ceph nodes: # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm Make sure the baseurl parameter is enabled under the /etc/yum.repos.d/epel.repo file. The baseurl parameter defines the URL for extra Linux packages. Also make sure that the mirrorlist parameter is disabled (commented) under this file. Problems have been observed during installation if the mirrorlist parameter is enabled in the epel.repo file. Perform this step on all three nodes. Install ceph-deploy on the ceph-node1 machine by executing the following command from ceph-node1: # yum install ceph-deploy Next, we will create a Ceph cluster using ceph-deploy by executing the following commands from ceph-node1: ## Create a directory for ceph # mkdir /etc/ceph # cd /etc/ceph # ceph-deploy new ceph-node1 The new subcommand of ceph-deploy deploys a new cluster with ceph as the cluster name, which is the default; it generates a cluster configuration file and a monitor keyring file. List the present working directory; you will find the ceph.conf and ceph.mon.keyring files. In this testing, we will intentionally install the Emperor release (v0.72) of the Ceph software, which is not the latest release. Later in this book, we will demonstrate the upgrade from the Emperor to the Firefly release of Ceph. To install the Ceph software binaries on all the machines using ceph-deploy, execute the following command from ceph-node1: # ceph-deploy install --release emperor ceph-node1 ceph-node2 ceph-node3 The ceph-deploy tool will first install all the dependencies followed by the Ceph Emperor binaries. Once the command completes successfully, check the Ceph version and Ceph health on all the nodes, as follows: # ceph -v Create your first monitor on ceph-node1: # ceph-deploy mon create-initial Once monitor creation is successful, check your cluster status. Your cluster will not be healthy at this stage: # ceph status Create an object storage device (OSD) on the ceph-node1 machine, and add it to the Ceph cluster by executing the following steps: List the disks on the VM: # ceph-deploy disk list ceph-node1 From the output, carefully identify the disks (other than the OS-partition disks) on which we should create Ceph OSDs. In our case, the disk names will ideally be sdb, sdc, and sdd.
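Because the next step wipes disks, it can be worth cross-checking the device names from inside the VM before going any further. This is an optional check and not part of the original procedure:
## list the partition tables; sda carries the OS partitions and must be left alone,
## while the three empty 10 GB disks (sdb, sdc, sdd) are the OSD candidates
# ssh ceph-node1 fdisk -l
## double-check which devices hold mounted filesystems
# ssh ceph-node1 "mount | grep '^/dev/sd'"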
The disk zap subcommand will destroy the existing partition table and content from the disk. Before running the following command, make sure you use the correct disk device names. # ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd The osd create subcommand will first prepare the disk, that is, erase the disk and create a filesystem on it, which is xfs by default. Then, it will activate the disk's first partition as the data partition and the second partition as the journal: # ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd Check the cluster status for new OSD entries: # ceph status At this stage, your cluster will not be healthy. We need to add a few more nodes to the Ceph cluster so that it can set up a distributed, replicated object storage, and hence become healthy. Scaling up your Ceph cluster – monitor and OSD addition Now we have a single-node Ceph cluster. We should scale it up to make it a distributed, reliable storage cluster. To scale up a cluster, we should add more monitor nodes and OSDs. As per our plan, we will now configure the ceph-node2 and ceph-node3 machines as monitor as well as OSD nodes. Adding the Ceph monitor A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on an odd number of monitors greater than one, for example, 3 or 5, to form a quorum. It uses the Paxos algorithm to maintain quorum majority. Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster: The firewall rules should not block communication between Ceph monitor nodes. If they do, you need to adjust the firewall rules in order to let monitors form a quorum. Since this is our test setup, let's disable the firewall on all three nodes. We will run these commands from the ceph-node1 machine, unless otherwise specified: # service iptables stop # chkconfig iptables off # ssh ceph-node2 service iptables stop # ssh ceph-node2 chkconfig iptables off # ssh ceph-node3 service iptables stop # ssh ceph-node3 chkconfig iptables off Deploy a monitor on ceph-node2 and ceph-node3: # ceph-deploy mon create ceph-node2 # ceph-deploy mon create ceph-node3 The deploy operation should be successful; you can then check your newly added monitors in the Ceph status: You might encounter warning messages related to clock skew on the new monitor nodes. To resolve this, we need to set up Network Time Protocol (NTP) on the new monitor nodes: # chkconfig ntpd on # ssh ceph-node2 chkconfig ntpd on # ssh ceph-node3 chkconfig ntpd on # ntpdate pool.ntp.org # ssh ceph-node2 ntpdate pool.ntp.org # ssh ceph-node3 ntpdate pool.ntp.org # /etc/init.d/ntpd start # ssh ceph-node2 /etc/init.d/ntpd start # ssh ceph-node3 /etc/init.d/ntpd start Adding the Ceph OSD At this point, we have a running Ceph cluster with three monitors and three OSDs. Now we will scale our cluster and add more OSDs. To accomplish this, we will run the following commands from the ceph-node1 machine, unless otherwise specified. We will follow the same method for OSD addition as before: # ceph-deploy disk list ceph-node2 ceph-node3 # ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd # ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd # ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd # ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd # ceph status Check the cluster status for the new OSDs.
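Instead of re-running ceph status by hand, you can poll the cluster until it settles. The following is a small sketch, not part of ceph-deploy, that assumes the ceph CLI and the admin keyring are available on ceph-node1:
#!/bin/bash
## wait-for-health.sh -- poll the cluster until it reports HEALTH_OK
## run on ceph-node1; press Ctrl+C to stop early
while true; do
  STATUS=$(ceph health 2>/dev/null)
  echo "$(date '+%H:%M:%S')  $STATUS"
  [ "$STATUS" = "HEALTH_OK" ] && break
  sleep 10
done
ceph status
## confirm that all nine OSDs are reported as up and in
ceph osd tree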
At this stage, your cluster will be healthy with nine OSDs in and up: Summary The software-defined nature of Ceph provides a great deal of flexibility to its adopters. Unlike other proprietary storage systems, which are hardware dependent, Ceph can be easily deployed and tested on almost any computer system available today. Moreover, if getting physical machines is a challenge, you can use virtual machines to install Ceph, as mentioned in this article, but keep in mind that such a setup should only be used for testing purposes. In this article, we learned how to create a set of virtual machines using the VirtualBox software, followed by Ceph deployment as a three-node cluster using the ceph-deploy tool. We also added a couple of OSDs and monitor machines to our cluster in order to demonstrate its dynamic scalability. We recommend you deploy a Ceph cluster of your own using the instructions mentioned in this article. Resources for Article: Further resources on this subject: Linux Shell Scripting - various recipes to help you [article] GNU Octave: Data Analysis Examples [article] What is Kali Linux [article]

Part 3: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
09 Feb 2015
9 min read
Part 3: Migrating WordPress blog content and deploying to production In parts 1 and 2 of this series, we created middleman-demo, a basic Middleman-based blog, imported content from WordPress, and deployed middleman-demo to Amazon S3. Now that middleman-demo has been deployed to production, let’s design a continuous integration workflow that automates builds and deployments. In part 3, we’ll cover the following: Testing middleman-demo with RSpec and Capybara Integrating with GitHub and Travis CI Configuring automated builds and deployments from Travis CI If you didn’t follow parts 1 and 2, or you no longer have your original middleman-demo code, you can clone mine and check out the part3 branch:  $ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part3 Create some automated tests In software development, the practice of continuous delivery serves to frequently deploy iterative software bug fixes and enhancements, such that users enjoy an ever-improving product. Automated processes, such as tests, assist in rapidly validating quality with each change. middleman-demo is a relatively simple codebase, though much of its build and release workflow can still be automated via continuous delivery. Let’s write some automated tests for middleman-demo using RSpec and Capybara. These tests can assert that the site continues to work as expected with each change. Add the gems to the middleman-demo Gemfile:  gem 'rspec'gem 'capybara' Install the gems: $ bundle install Create a spec directory to house tests: $ mkdir spec As is the convention in RSpec, create a spec/spec_helper.rb file to house the RSpec configuration: $ touch spec/spec_helper.rb Add the following configuration to spec/spec_helper.rb to run middleman-demo during test execution: require "middleman" require "middleman-blog" require 'rspec' require 'capybara/rspec' Capybara.app = Middleman::Application.server.inst do set :root, File.expand_path(File.join(File.dirname(__FILE__), '..')) set :environment, :development end Create a spec/features directory to house the middleman-demo RSpec test files: $ mkdir spec/features Create an RSpec spec file for the homepage: $ touch spec/features/index_spec.rb Let’s create a basic test confirming that the Middleman Demo heading is present on the homepage. Add the following to spec/features/index_spec.rb: require "spec_helper" describe 'index', type: :feature do before do visit '/' end it 'displays the correct heading' do expect(page).to have_selector('h1', text: 'Middleman Demo') end end Run the test and confirm that it passes: $ rspec You should see output like the following: Finished in 0.03857 seconds (files took 6 seconds to load) 1 example, 0 failures Next, add a test asserting that the first blog post is listed on the homepage; confirm it passes by running the rspec command: it 'displays the "New Blog" blog post' do expect(page).to have_selector('ul li a[href="/blog/2014/08/20/new-blog/"]', text: 'New Blog') end As an example, let’s add one more basic test, this time asserting that the New Blog text properly links to the corresponding blog post. Add the following to spec/features/index_spec.rb and confirm that the test passes: it 'properly links to the "New Blog" blog post' do click_link 'New Blog' expect(page).to have_selector('h2', text: 'New Blog') end middleman-demo can be further tested in this fashion. The extent to which the specs test every element of the site’s functionality is up to you. 
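To keep failing specs from ever reaching the repository, the same command can be wired into a git pre-commit hook. This is an optional sketch and not part of the original series; it assumes you run it from the root of the middleman-demo repository:
$ cat > .git/hooks/pre-commit << 'EOF'
#!/bin/sh
# Run the RSpec suite before every commit and abort the commit if any spec fails
bundle exec rspec || {
  echo "Specs failed -- commit aborted" >&2
  exit 1
}
EOF
$ chmod +x .git/hooks/pre-commit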
At what point can it be confidently asserted that the site looks good, works as expected, and can be publicly deployed to users? Push to GitHub Next, push your middleman-demo code to GitHub. If you forked my original github.com/mdb/middleman-demo repository, skip this section. 1. Create a GitHub repositoryIf you don’t already have a GitHub account, create one. Create a repository through GitHub’s web UI called middleman-demo. 2. What should you do if your version of middleman-demo is not a git repository?If your middleman-demo is already a git repository, skip to step 3. If you started from scratch and your code isn’t already in a git repository, let’s initialize one now. I’m assuming you have git installed and have some basic familiarity with it. Make a middleman-demo git repository: $ git init && git add . && git commit -m 'initial commit' Declare your git origin, where <your_git_url_from_step_1> is your GitHub middleman-demo repository URL: $ git remote add origin <your_git_url_from_step_1> Push to your GitHub repository: $ git push origin master You’re done; skip step 3 and move on to Integrate with Travis CI. 3. If you cloned my mdb/middleman-demo repository…If you cloned my middleman-demo git repository, you’ll need to add your newly created middleman-demo GitHub repository as an additional remote: $ git remote add my_origin <your_git_url_from_step_1> If you are working in a branch, merge all your changes to master. Then push to your GitHub repository: $ git push -u my_origin master Integrate with Travis CI Travis CI is a distributed continuous integration service that integrates with GitHub. It’s free for open source projects. Let’s configure Travis CI to run the middleman-demo tests when we push to the GitHub repository. Log in to Travis CI First, sign in to Travis CI using your GitHub credentials. Visit your profile. Find your middleman-demo repository in the “Repositories” list. Activate Travis CI for middleman-demo; click the toggle button “ON.” Create a .travis.ymlfile Travis CI looks for a .travis.yml YAML file in the root of a repository. YAML is a simple, human-readable markup language; it’s a popular option in authoring configuration files. The .travis.yml file informs Travis how to execute the project’s build. Create a .travis.yml file in the root of middleman-demo: $ touch .travis.yml Configure Travis CI to use Ruby 2.1 when building middleman-demo. Add the following YAML to the .travis.yml file: language: rubyrvm: 2.1 Next, declare how Travis CI can install the necessary gem dependencies to build middleman-demo; add the following: install: bundle install Let’s also add before_script, which runs the middleman-demo tests to ensure all tests pass in advance of a build: before_script: bundle exec rspec Finally, add a script that instructs Travis CI how to build middleman-demo: script: bundle exec middleman build At this point, the .travis.yml file should look like the following: language: ruby rvm: 2.1 install: bundle install before_script: bundle exec rspec script: bundle exec middleman build Commit the .travis.yml file: $ git add .travis.yml && git commit -m "added basic .travis.yml file" Now, after pushing to GitHub, Travis CI will attempt to install middleman-demo dependencies using Ruby 2.1, run its tests, and build the site. 
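Because YAML is whitespace sensitive, it can be worth validating the file locally before pushing. The following is a quick, optional check; the Ruby one-liner only needs the Ruby you already have, while travis lint requires the travis command-line gem (installed in the next section) and may vary between gem versions:
$ ruby -ryaml -e "puts YAML.load_file('.travis.yml').inspect"
$ travis lint .travis.yml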
Travis CI’s command build output can be seen here: https://travis-ci.org/<your_github_username>/middleman-demo Add a build status badge Assuming the build passes, you should see a green build passing badge near the top right corner of the Travis CI UI on your Travis CI middleman-demo page. Let’s add this badge to the README.md file in middleman-demo, such that a build status badge reflecting the status of the most recent Travis CI build displays on the GitHub repository’s README. If one does not already exist, create a README.md file: $ touch README.md Add the following markdown, which renders the Travis CI build status badge: [![Build Status](https://travis-ci.org/<your_github_username>/middleman-demo.svg?branch=master)](https://travis-ci.org/<your_github_username>/middleman-demo) Configure continuous deployments Through continuous deployments, code is shipped to users as soon as a quality-validated change is committed. Travis CI can be configured to deploy a middleman-demo build with each successful build. Let’s configure Travis CI to continuously deploy middleman-demo to the S3 bucket created in part 2 of this tutorial series. First, install the travis command-line tools: $ gem install travis Use the travis command-line tools to set S3 deployments. Enter the following; you’ll be prompted for your S3 details (see the example below if you’re unsure how to answer): $ travis setup s3 An example response is: Access key ID: <your_aws_access_key_id> Secret access key: <your_aws_secret_access_key_id> Bucket: <your_aws_bucket> Local project directory to upload (Optional): build S3 upload directory (Optional): S3 ACL Settings (private, public_read, public_read_write, authenticated_read, bucket_owner_read, bucket_owner_full_control): |private| public_read Encrypt secret access key? |yes| yes Push only from <your_github_username>/middleman-demo? |yes| yes This automatically edits the .travis.yml file to include the following deploy information: deploy: provider: s3 access_key_id: <your_aws_access_key_id> secret_access_key: secure: <your_encrypted_aws_secret_access_key_id> bucket: <your_s3_bucket> local-dir: build acl: !ruby/string:HighLine::String public_read on: repo: <your_github_username>/middleman-demo Add one additional option, informing Travis to preserve the build directory for use during the deploy process: skip_cleanup: true The final .travis.yml file should look like the following: language: ruby rvm: 2.1 install: bundle install before_script: bundle exec rspec script: bundle exec middleman build deploy: provider: s3 access_key_id: <your_aws_access_key> secret_access_key: secure: <your_encrypted_aws_secret_access_key> bucket: <your_aws_bucket> local-dir: build skip_cleanup: true acl: !ruby/string:HighLine::String public_read on: repo: <your_github_username>/middleman-demo Confirm that your continuous integration works Commit your changes: $ git add .travis.yml && git commit -m "added travis deploy configuration" Push to GitHub and watch the build output on Travis CI: https://travis-ci.org/<your_github_username>/middleman-demo If all works as expected, Travis CI will run the middleman-demo tests, build the site, and deploy to the proper S3 bucket. Recap Throughout this series, we’ve examined the benefits of static site generators and covered some basics regarding Middleman blogging. We’ve learned how to use the wp2middleman gem to migrate content from a WordPress blog, and we’ve learned how to deploy Middleman to Amazon’s cloud-based Simple Storage Service (S3). 
We’ve configured Travis CI to run automated tests, produce a build, and automate deployments. Beyond what’s been covered within this series, there’s an extensive Middleman ecosystem worth exploring, as well as numerous additional features. Middleman’s custom extensions seek to extend basic Middleman functionality through third-party gems. Read more about Middleman at Middlemanapp.com. About this author Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications. 

Jenkins Continuous Integration

Packt
09 Feb 2015
17 min read
This article by Alan Mark Berg, the author of Jenkins Continuous Integration Cookbook Second Edition, outlines the main themes surrounding the correct use of a Jenkins server. Overview Jenkins (http://jenkins-ci.org/) is a Java-based Continuous Integration (CI) server that supports the discovery of defects early in the software cycle. Thanks to over 1,000 plugins, Jenkins communicates with many types of systems building and triggering a wide variety of tests. CI involves making small changes to software and then building and applying quality assurance processes. Defects do not only occur in the code, but also appear in the naming conventions, documentation, how the software is designed, build scripts, the process of deploying the software to servers, and so on. CI forces the defects to emerge early, rather than waiting for software to be fully produced. If defects are caught in the later stages of the software development life cycle, the process will be more expensive. The cost of repair radically increases as soon the bugs escape to production. Estimates suggest it is 100 to 1,000 times cheaper to capture defects early. Effective use of a CI server, such as Jenkins, could be the difference between enjoying a holiday and working unplanned hours to heroically save the day. And as you can imagine, in my day job as a senior developer with aspirations to quality assurance, I like long boring days, at least for mission-critical production environments. Jenkins can automate the building of software regularly and trigger tests pulling in the results and failing based on defined criteria. Failing early via build failure lowers the costs, increases confidence in the software produced, and has the potential to morph subjective processes into an aggressive metrics-based process that the development team feels is unbiased. Jenkins is: A proven technology that is deployed at large scale in many organizations. Open source, so the code is open to review and has no licensing costs. Has a simple configuration through a web-based GUI. This speeds up job creation, improves consistency, and decreases the maintenance costs. A master slave topology that distributes the build and testing effort over slave servers with the results automatically accumulated on the master. This topology ensures a scalable, responsive, and stable environment. Has the ability to call slaves from the cloud. Jenkins can use Amazon services or an Application Service Provider (ASP) such as CloudBees. (http://www.cloudbees.com/). No fuss installation. Installation is as simple as running only a single downloaded file named jenkins.war. Has many plugins. Over 1,000 plugins supporting communication, testing, and integration to numerous external applications (https://wiki.jenkins-ci.org/display/JENKINS/Plugins). A straightforward plugin framework. For Java programmers, writing plugins is straightforward. Jenkins plugin framework has clear interfaces that are easy to extend. The framework uses XStream (http://xstream.codehaus.org/) for persisting configuration information as XML and Jelly (http://commons.apache.org/proper/commons-jelly/) for the creation of parts of the GUI. Runs Groovy scripts. Jenkins has the facility to support running Groovy scripts both in the master and remotely on slaves. This allows for consistent scripting across operating systems. However, you are not limited to scripting in Groovy. 
Many administrators like to use Ant, Bash, or Perl scripts and statisticians and developers with complex analytics requirements the R language. Not just Java. Though highly supportive of Java, Jenkins also supports other languages. Rapid pace of improvement. Jenkins is an agile project; you can see numerous releases in the year, pushing improvements rapidly at http://jenkins-ci.org/changelog.There is also a highly stable Long-Term Support Release for the more conservative. Jenkins pushes up code quality by automatically testing within a short period after code commit and then shouting loudly if build failure occurs. The importance of continuous testing In 2002, NIST estimated that software defects were costing America around 60 billion dollars per year (http://www.abeacha.com/NIST_press_release_bugs_cost.htm). Expect the cost to have increased considerably since. To save money and improve quality, you need to remove defects as early in the software lifecycle as possible. The Jenkins test automation creates a safety net of measurements. Another key benefit is that once you have added tests, it is trivial to develop similar tests for other projects. Jenkins works well with best practices such as Test Driven Development (TDD) or Behavior Driven Development (BDD). Using TDD, you write tests that fail first and then build the functionality needed to pass the tests. With BDD, the project team writes the description of tests in terms of behavior. This makes the description understandable to a wider audience. The wider audience has more influence over the details of the implementation. Regression tests increase confidence that you have not broken code while refactoring software. The more coverage of code by tests, the more confidence. There are a number of good introductions to software metrics. These include a wikibook on the details of the metrics (http://en.wikibooks.org/wiki/Introduction_to_Software_Engineering/Quality/Metrics). And a well written book is by Diomidis Spinellis Code Quality: The Open Source Perspective. Remote testing through Jenkins considerably increases the number of dependencies in your infrastructure and thus the maintenance effort. Remote testing is a problem that is domain specific, decreasing the size of the audience that can write tests. You need to make test writing accessible to a large audience. Embracing the largest possible audience improves the chances that the tests defend the intent of the application. The technologies highlighted in the Jenkins book include: Fitnesse: It is a wiki with which you can write different types of tests. Having a WIKI like language to express and change tests on the fly gives functional administrators, consultants, and the end-user a place to express their needs. You will be shown how to run Fitnesse tests through Jenkins. Fitnesse is also a framework where you can extend Java interfaces to create new testing types. The testing types are called fixtures; there are a number of fixtures available, including ones for database testing, running tools from the command line and functional testing of web applications. JMeter: It is a popular open source tool for stress testing. It can also be used to functionally test through the use of assertions. JMeter has a GUI that allows you to build test plans. The test plans are then stored in XML format. Jmeter is runnable through a Maven or Ant scripts. JMeter is very efficient and one instance is normally enough to hit your infrastructure hard. 
However, for super high load scenarios Jmeter can trigger an array of JMeter instances. Selenium: It is the de-facto industrial standard for functional testing of web applications. With Selenium IDE, you can record your actions within Firefox, saving them in HTML format to replay later. The tests can be re-run through Maven using Selenium Remote Control (RC). It is common to use Jenkins slaves with different OS's and browser types to run the tests. The alternative is to use Selenium Grid (https://code.google.com/p/selenium/wiki/Grid2). Selenium Webdriver and TestNG unit tests: A programmer-specific approach to functional testing is to write unit tests using the TestNG framework. The unit tests apply the Selenium WebDriver framework. Selenium RC is a proxy that controls the web browser. In contrast, the WebDriver framework uses native API calls to control the web browser. You can even run the HtmlUnit framework removing the dependency of a real web browser. This enables OS independent testing, but removes the ability to test for browser specific dependencies. WebDriver supports many different browser types. SoapUI: It simplifies the creation of functional tests for web services. The tool can read Web Service Definition Language (WSDL) files publicized by web services, using the information to generate the skeleton for functional tests. The GUI makes it easy to understand the process. Plugins Jenkins is not only a CI server, it is also a platform to create extra functionality. Once a few concepts are learned, a programmer can adapt available plugins to their organization's needs. If you see a feature that is missing, it is normally easier to adapt an existing one than to write from scratch. If you are thinking of adapting then the plugin tutorial (https://wiki.jenkins-ci.org/display/JENKINS/Plugin+tutorial) is a good starting point. The tutorial is relevant background information on the infrastructure you use daily. There is a large amount of information available on plugins. Here are some key points: There are many plugins and more will be developed. To keep up with these changes, you will need to regularly review the available section of the Jenkins plugin manager. Work with the community: If you centrally commit your improvements then they become visible to a wide audience. Under the careful watch if the community, the code is more likely to be reviewed and further improved. Don't reinvent the wheel: With so many plugins, in the majority of situations, it is easier to adapt an already existing plugin than write from scratch. Pinning a plugin occurs when you cannot update the plugin to a new version through the Jenkins plugin manager. Pinning helps to maintain a stable Jenkins environment. Most plugin workflows are easy to understand. However, as the number of plugins you use expands, the likelihood of an inadvertent configuration error increases. The Jenkins Maven Plugin allows you to run a test Jenkins server from within a Maven build without risk. Conventions save effort: The location of files in plugins matters. For example, you can find the description of a plugin displayed in Jenkins at the file location /src/main/resources/index.jelly. By keeping to Jenkins conventions, the amount of source code you write is minimized and the readability is improved. 
The three frameworks that are heavily used in Jenkins are as follows: Jelly for the creation of the GUI Stapler for the binding of the Java classes to the URL space XStream for persistence of configuration into XML Maintenance In the Jenkins' book, we will also provide recipes that support maintenance cycles. For large scale deployments of Jenkins within diverse enterprise infrastructure, proper maintenance of Jenkins is crucial to planning predictable software cycles. Proper maintenance lowers the risk of failures: Storage over-flowing with artifacts: If you keep a build history that includes artifacts such as WAR files, large sets of JAR files, or other types of binaries, then your storage space is consumed at a surprising rate. Storage costs have decreased tremendously, but storage usage equates to longer backup times and more communication from slave to master. To minimize the risk of disk, overflowing you will need to consider your backup and also restore policy and the associated build retention policy expressed in the advanced options of jobs. Script spaghetti: As jobs are written by various development teams, the location and style of the included scripts vary. This makes it difficult for you to keep track. Consider using well-defined locations for your scripts and a scripts repository managed through a plugin. Resource depletion: As memory is consumed or the number of intense jobs increases, then Jenkins slows down. Proper monitoring and quick reaction reduce impact. A general lack of consistency between Jobs due to organic growth: Jenkins is easy to install and use. The ability to seamlessly turn on plugins is addictive. The pace of adoption of Jenkins within an organization can be breath taking. Without a consistent policy, your teams will introduce lots of plugins and also lots of ways of performing the same work. Conventions improve consistency and readability of Jobs and thus decrease maintenance. New plugins causing exceptions: There are a lot of good plugins being written with rapid version change. In this situation, it is easy for you to accidentally add new versions of plugins with new defects. There have been a number of times during upgrading plugins that suddenly the plugin does not work. To combat the risk of plugin exceptions, consider using a sacrificial Jenkins instance before releasing to a critical system. Jack of all trades Jenkins has many plugins that allow it to integrate easily into complex and diverse environments. If there is a need that is not directly supported you can always use a scripting language of choice and wire that into your jobs. In this section, we'll explore the R plugin and see how it can help you generate great graphics. R is a popular programming language for statistics (http://en.wikipedia.org/wiki/R_programming_language). It has many hundreds of extensions and has a powerful set of graphical capabilities. In this recipe, we will show you how to use the graphical capabilities of R within your Jenkins Jobs and then point you to some excellent starter resources. For a full list of plugins that improve the UI of Jenkins including Jenkins' graphical capabilities, visit https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-UIplugins. Getting ready Install the R plugin (https://wiki.jenkins-ci.org/display/JENKINS/R+Plugin). Review the R installation documentation (http://cran.r-project.org/doc/manuals/r-release/R-admin.html). How to do it... 
From the command line, install the R language: sudo apt-get install r-base Review the available R packages: apt-cache search r-cran | less Create a free-style job with the name ch4.powerfull.visualizations. In the Build section, under Add build step, select Execute R script. In the Script text area, add the following lines of code: paste('======================================='); paste('WORKSPACE: ', Sys.getenv('WORKSPACE')) paste('BUILD_URL: ', Sys.getenv('BUILD_URL')) print('ls /var/lib/jenkins/jobs/R-ME/builds/') paste('BUILD_NUMBER: ', Sys.getenv('BUILD_NUMBER')) paste('JOB_NAME: ', Sys.getenv('JOB_NAME')) paste('JENKINS_HOME: ', Sys.getenv('JENKINS_HOME')) paste( 'JOB LOCATION: ', Sys.getenv('JENKINS_HOME'),'/jobs/',Sys.getenv('JOB_NAME'),'/builds/', Sys.getenv('BUILD_NUMBER'),"/test.pdf",sep="") paste('=======================================');   filename<-paste('pie_',Sys.getenv('BUILD_NUMBER'),'.pdf',sep="") pdf(file=filename) slices<- c(1,2,3,3,6,2,2) labels <- c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday","Saturday","Sunday") pie(slices, labels = labels, main="Number of failed jobs for each day of the week")   filename<-paste('freq_',Sys.getenv('BUILD_NUMBER'),'.pdf',sep="") pdf(file=filename) Number_OF_LINES_OF_ACTIVE_CODE=rnorm(10000, mean=200, sd=50) hist(Number_OF_LINES_OF_ACTIVE_CODE,main="Frequency plot of Class Sizes")   filename<-paste('scatter_',Sys.getenv('BUILD_NUMBER'),'.pdf',sep="") pdf(file=filename) Y <- rnorm(3000) plot(Y,main='Random Data within a normal distribution') Click on the Save button. Click on the Build Now icon Beneath Build History, click on the Workspace button. Review the generated graphics by clicking on the freq_1.pdf, pie_1.pdf, and scatter_1.pdf links, as shown in the following screenshot: The following screenshot is a histogram of the values from the random data generated by the R script during the build process. The data simulates class sizes within a large project: Another view is a pie chart. The fake data representing the number of failed jobs for each day of the week. If you make this plot against your own values, you might see particularly bad days, such as the day before or after the weekend. This might have implications about how developers work or motivation is distributed through the week. Perform the following steps: Run the job and review the Workspace. Click on Console Output. You will see output similar to: Started by user anonymousBuilding in workspace /var/lib/jenkins/workspace/ch4.Powerfull.Visualizations[ch4.Powerfull.Visualizations] $ Rscript /tmp/hudson6203634518082768146.R [1] "=======================================" [1] "WORKSPACE: /var/lib/jenkins/workspace/ch4.Powerfull.Visualizations" [1] "BUILD_URL: " [1] "ls /var/lib/jenkins/jobs/R-ME/builds/" [1] "BUILD_NUMBER: 9" [1] "JOB_NAME: ch4.Powerfull.Visualizations" [1] "JENKINS_HOME: /var/lib/jenkins" [1] "JOB LOCATION: /var/lib/jenkins/jobs/ch4.Powerfull.Visualizations/builds/9/test.pdf" [1] "=======================================" Finished: SUCCESS Click on Back to Project Click on Workspace. How it works... With a few lines of R code, you have generated three different well-presented PDF graphs. The R plugin ran a script as part of the build. The script printed out the WORKSPACE and other Jenkins environment variables to the console: paste ('WORKSPACE: ', Sys.getenv('WORKSPACE')) Next, a filename is set with the build number appended to the pie_ string. 
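If you want to iterate on the R script without waiting for a Jenkins build, you can run it locally with Rscript and fake the environment variables the script reads. This is a hypothetical helper, not part of the recipe; it assumes you have saved the script body above as plots.R in the current directory:
#!/bin/bash
## run-plots-locally.sh -- execute the job's R script outside Jenkins
## by supplying the environment variables that the script expects
WORKSPACE=$(pwd) \
BUILD_NUMBER=local \
JOB_NAME=ch4.powerfull.visualizations \
JENKINS_HOME=/tmp \
Rscript plots.R
## the generated pie_, freq_, and scatter_ PDFs land in the current directory
ls -l *.pdf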
This allows the script to generate a different filename each time it is run, as shown: filename <-paste('pie_',Sys.getenv('BUILD_NUMBER'),'.pdf',sep="") The script now opens output to the location defined in the filename variable through the pdf(file=filename) command. By default, the output directory is the job's workspace. Next, we define fake data for the graph, representing the number of failed jobs on any given day of the week. Note that in the simulated world, Friday is a bad day: slices <- c(1,2,3,3,6,2,2)labels <- c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday","Saturday","Sunday") We can also plot a pie graph, as follows: pie(slices, labels = labels, main="Number of failed jobs for each day of the week") For the second graph, we generated 10,000 pieces of random data within a normal distribution.The fake data represents the number of lines of active code that ran for a give job: Number_OF_LINES_OF_ACTIVE_CODE=rnorm(10000, mean=200, sd=50) The hist command generates a frequency plot: hist(Number_OF_LINES_OF_ACTIVE_CODE,main="Frequency plot of Class Sizes") The third graph is a scatter plot with 3,000 data points generated at random within a normal distribution. This represents a typical sampling process, such as the number of potential defects found using Sonar or FindBugs: Y <- rnorm(3000)plot(Y,main='Random Data within a normal distribution') We will leave it to an exercise for the reader to link real data to the graphing capabilities of R. There's more... Here are a couple more points for you to think about. R studio or StatET A popular IDE for R is RStudio (http://www.rstudio.com/). The open source edition is free. The feature set includes a source code editor with code completion and syntax highlighting, integrated help, solid debugging features and a slew of other features. An alternative for the Eclipse environment is the StatET plugin (http://www.walware.de/goto/statet). Quickly getting help The first place to start to learn R is by typing help.start() from the R console. The command launches a browser with an overview of the main documentation If you want descriptions of R commands then typing ? before a command generates detailed help documentation. For example, in the recipe we looked at the use of the rnorm command. Typing ?rnorm produces documentation similar to: The Normal DistributionDescriptionDensity, distribution function, quantile function and random generationfor the normal distribution with mean equal to mean and standard deviation equal to sd.Usagednorm(x, mean = 0, sd = 1, log = FALSE)pnorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)qnorm(p, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)rnorm(n, mean = 0, sd = 1) The Community Jenkins is not just a CI server, it is also a vibrant and highly active community. Enlightened self-interest dictates participation. There are a number of ways to do this: Participate on the mailing lists and Twitter (https://wiki.jenkins-ci.org/display/JENKINS/Mailing+Lists). First, read the postings and as you get to understand what is needed then participate in the discussions. Consistently reading the lists will generate many opportunities to collaborate. Improve code, write plugins (https://wiki.jenkins-ci.org/display/JENKINS/Help+Wanted). Test Jenkins and especially the plugins and write bug reports, donating your test plans. Improve documentation by writing tutorials and case studies. Sponsor and support events. 
Final Comments An efficient approach to learning how to effectively use Jenkins is to download and install the server and then trying out recipes you find in books, the Internet or developed by your fellow developers. I wish you good fortune and an interesting learning experience. Resources for Article: Further resources on this subject: Creating a basic JavaScript plugin [article] What is continuous delivery and DevOps? [article] Continuous Integration [article]

Looking Back, Looking Forward

Packt
09 Feb 2015
24 min read
In this article by Simon Jackson, author of the book Unity 3D UI Essentials, we will see how the new Unity UI has long been sought by developers; it has been announced and re-announced over several years, and now it is finally here. The new UI system is truly awesome and (more importantly for a lot of developers on a shoestring budget) it's free. (For more resources related to this topic, see here.) As we start to look forward to the new UI system, it is very important to understand the legacy GUI system (which still exists for backwards compatibility) and all it has to offer, so you can fully understand just how powerful and useful the new system is. It's crucial to have this understanding, especially since most tutorials will still speak of the legacy GUI system (so you know what on earth they are talking about). With an understanding of the legacy system, you will then peer over the diving board and walk through a 10,000-foot view of the new system, so you get a feel of what to expect from the rest of this book. The following is the list of topics that will be covered in this article: A look back into what legacy Unity GUI is Tips, tricks, and an understanding of legacy GUI and what it has done for us A high level overview of the new UI features State of play You may not expect it, but the legacy Unity GUI has evolved over time, adding new features and improving performance. However, because it has kept evolving based on its original implementation, it has been hampered with many constraints and the ever-pressing need to remain backwards compatible (just look at Windows, which even today has to cater for programs written in BASIC (http://en.wikipedia.org/wiki/BASIC)). Not to say the old system is bad, it's just not as evolved as some of the newer features being added to the Unity 4.x and Unity 5.x series, which are based on newer and more enhanced designs, and more importantly, a new core. The main drawback of the legacy GUI system is that it is only drawn in screen space (drawn on the screen instead of within it) on top of any 3D elements or drawing in your scenes. This is fine if you want menus or overlays in your title, but if you want to integrate it further within your 3D scene, then it is a lot more difficult. For more information about world space and screen space, see this Unity Answers article (http://answers.unity3d.com/questions/256817/about-world-space-and-local-space.html). So before we can understand how good the new system is, we first need to get to grips with where we are coming from. (If you are already familiar with the legacy GUI system, feel free to skip over this section.) A point of reference Throughout this book, we will refer to the legacy GUI simply as GUI. When we talk about the new system, it will be referred to as UI or Unity UI, just so you don't get mixed up when reading. When looking around the Web (or even in the Unity support forums), you may hear about or see references to uGUI, which was the development codename for the new Unity UI system. GUI controls The legacy GUI controls provide basic and stylized controls for use in your titles. All legacy GUI controls are drawn during the GUI rendering phase from the built-in OnGUI method. In the sample that accompanies this title, there are examples of all the controls in the Assets\BasicGUI.cs script. For GUI controls to function, a camera in the scene must have the GUILayer component attached to it. It is there by default on any Camera in a scene, so for most of the time you won't notice it.
However, if you have removed it, then you will have to add it back for GUI to work. The component is just the hook for the OnGUI delegate handler, to ensure it has called each frame. Like "the Update method in scripts, the OnGUI method can be called several times per frame if rendering is slowing things down. Keep this in mind when building your own legacy GUI scripts. The controls that are available in the legacy GUI are: Label Texture Button Text fields (single/multiline and password variant) Box Toolbars Sliders ScrollView Window So let's go through them in more detail: All the following code is implemented in the sample project in the basic GUI script located in the AssetsScripts folder of the downloadable code. To experiment yourself, create a new project, scene, and script, placing the code for each control in the script and attach the script to the camera (by dragging it from the project view on to the Main Camera GameObject in the scene hierarchy). You can then either run the project or adorn the class in the script with the [ExecuteInEditMode] attribute to see it in the game view. The Label control Most "GUI systems start with a Label control; this simply "provides a stylized control to display read-only text on the screen, it is initiated by including the following OnGUI method in your script: void OnGUI() {GUI.Label(new Rect(25, 15, 100, 30), "Label");} This results in the following on-screen display: The Label control "supports altering its font settings through the" use of the guiText GameObject property (guiText.font) or GUIStyles. To set guiText.font in your script, you would simply apply the following in your script, either in the Awake/Start functions or before drawing the next section of text you want drawn in another font: public Font myFont = new Font("arial");guiText.font = myFont; You can also set the myFont property in Inspector using an "imported font. The Label control forms the basis for all controls to display text, and as such, "all other controls inherit from it and have the same behaviors for styling the displayed text. The Label control also supports using a Texture for its contents, but not both text "and a texture at the same time. However, you can layer Labels and other controls on top of each other to achieve the same effect (controls are drawn implicitly in the order they are called), for example: public Texture2D myTexture;void Start() {myTexture = new Texture2D(125, 15);}void OnGUI() {//Draw a textureGUI.Label(new Rect(125, 15, 100, 30), myTexture);//Draw some text on top of the texture using a labelGUI.Label(new Rect(125, 15, 100, 30), "Text overlay");} You can override the order in which controls are drawn by setting GUI.depth = /*<depth number>*/; in between calls; however, I would advise against this unless you have a desperate need. The texture" will then be drawn to fit the dimensions of the Label field, By" default it scales on the shortest dimension appropriately. This too can be altered using GUIStyle to alter the fixed width and height or even its stretch characteristics. Texture drawing Not "specifically a control in itself, the GUI framework" also gives you the ability to simply draw a Texture to the screen Granted there is little difference to using DrawTexture function instead of a Label with a texture or any other control. (Just another facet of the evolution of the legacy GUI). 
This is, in effect, the same as the previous Label control but instead of text it only draws a texture, for example: public Texture2D myTexture;void Start() {myTexture = new Texture2D(125, 15);}void OnGUI() {GUI.DrawTexture(new Rect(325, 15, 100, 15), myTexture);} Note that in all the examples providing a Texture, I have provided a basic template to initialize an empty texture. In reality, you would be assigning a proper texture to be drawn. You can also provide scaling and alpha blending values when drawing the texture to make it better fit in the scene, including the ability to control the aspect ratio that the texture is drawn in. A warning though, when you scale the image, it affects the rendering properties for that image under the legacy GUI system. Scaling the image can also affect its drawn position. You may have to offset the position it is drawn at sometimes. For example: public Texture2D myTexture;void Start() {myTexture = new Texture2D(125, 15);}void OnGUI() {GUI.DrawTexture(new Rect(325, 15, 100, 15), myTexture, ScaleMode.ScaleToFit, true, 0.5f);} This will do its best to draw the source texture within the drawn area with alpha blending (true) and an aspect ratio of 0.5. Feel free to play with these settings to get your desired effect. This is useful in certain scenarios in your game when you want a simple way to draw a full screen image on the screen on top of all the 3D/2D elements, such as a pause or splash screen. However, like the Label control, it does not receive any input events (see the Button control for that). There is also a variant of the DrawTexture function called DrawTextureWithTexCoords. This allows you to not only pick where you want the texture drawn on to the screen, but also from which part of the source texture you want to draw, for example: public Texture2D myTexture;void Start() {myTexture = new Texture2D(125, 15);}void OnGUI() {GUI.DrawTextureWithTexCoords(new Rect(325, 15, 100, 15), myTexture, new Rect(10, 10, 50, 5));} This will pick a region from the source texture (myTexture) 50 pixels wide by 5 pixels high starting from position 10, 10 on the texture. It will then draw this to the Rect region specified. However, the DrawTextureWithTexCoords function cannot perform scaling, it can only perform alpha blending! It will simply draw the selected texture region to fit the size specified in the initial Rect. DrawTextureWithTexCoords has also been used to draw individual sprites using the legacy GUI, which has no notion of what a sprite is. The Button control Unity also provides a Button control, which comes in two variants. The basic Button control only supports a single click, whereas RepeatButton supports holding down the button.
They are both instantiated the same way by using an if statement to capture when the button is clicked, as shown in the following script: void OnGUI() {if (GUI.Button(new Rect(25, 40, 120, 30), "Button")){
   //The button has been clicked, holding does nothing
}
if (GUI.RepeatButton(new Rect(170, 40, 170, 30), "RepeatButton")){
   //The button has been clicked or is held down
}} Each will result in a simple button on screen as follows: The controls also support using a Texture for the button content as well by providing a texture value for the second parameter as follows: public Texture2D myTexture;void Start() {myTexture = new Texture2D(125, 15);}void OnGUI() {if (GUI.Button(new Rect(25, 40, 120, 30), myTexture)){ }} Like the Label, the font of the text can be altered using GUIStyle or by setting the guiText property of the GameObject. It also supports using textures in the same way as the Label. The easiest way to look at this control is that it is a Label that can be clicked. The Text control Just as there is a need to display text, there is also a need for a user to be able to enter text, and the legacy GUI provides the following controls to do just that: TextField: This is a basic text box; it supports a single line of text, no new lines (although, if the text contains end of line characters, it will draw the extra lines down). TextArea: This is an extension of TextField that supports entering of multiple lines of text; new lines will be added when the user hits the enter key. PasswordField: This is a variant of TextField; however, it won't display the value entered, it will replace each character with a replacement character. Note, the password itself is still stored in text form and if you are storing users' passwords, you should encrypt/decrypt the actual password when using it. Never store passwords in plain text. Using the TextField control is simple, as it returns the final state of the value that has been entered and you have to pass that same variable as a parameter for the current text to be displayed. To use the controls, you apply them in script as follows: string textString1 = "Some text here";string textString2 = "Some more text here";string textString3 = "Even more text here";void OnGUI() {textString1 = GUI.TextField(new Rect(25, 100, 100, 30), textString1);textString2 = GUI.TextArea(new Rect(150, 100, 200, 75), textString2);textString3 = GUI.PasswordField(new Rect(375, 100, 90, 30), textString3, '*');} A note about strings in Unity scripts Strings are immutable. Every time you change their value, a new string is created in memory. By having the textString variables declared at the class level, it is a lot more memory efficient. If you declare the textString variables in the OnGUI method, they will generate garbage (wasted memory) in each frame. Worth keeping in mind. When displayed, the textbox by default looks like this: As with the Label and Button controls, the font of the text displayed can be altered using either a GUIStyle or guiText GameObject property. Note that text will overflow within the field if it is too large for the display area, but it will not be drawn outside the TextField dimensions. The same goes for multiple lines. The Box control In the midst of all the controls is a generic purpose control that seemingly can be used for a variety of purposes. Generally, it's used for drawing a background/shaded area behind all other controls.
The Box control

In the midst of all the controls is a general purpose control that seemingly can be used for a variety of purposes. Generally, it's used for drawing a background/shaded area behind all other controls.
The Box control implements most of the features mentioned in the controls above (Label, Texture, and Text) in a single control with the same styling and layout options. It also supports text and images as the other controls do. You can draw it with its own content as follows:

void OnGUI() {
    GUI.Box(new Rect(350, 350, 100, 130), "Settings");
}

This gives you the following result:

Alternatively, you can use it to decorate the background of other controls, for example:

private string textString = "Some text here";

void OnGUI() {
    GUI.Box(new Rect(350, 350, 100, 130), "Settings");
    GUI.Label(new Rect(360, 370, 80, 30), "Label");
    textString = GUI.TextField(new Rect(360, 400, 80, 30), textString);
    if (GUI.Button(new Rect(360, 440, 80, 30), "Button")) { }
}

Note that using the Box control does not affect any controls you draw upon it. It is drawn as a completely separate control. This statement will be made clearer when you look at the Layout controls later in this article. The example will draw the box background and the Label, Text, and Button controls on top of the box area, and looks like this:

The Box control can be useful for highlighting groups of controls or providing a simple background (alternatively, you can use an image instead of just text and color). As with the other controls, the Box control supports styling using GUIStyle.

The Toggle/checkbox control

If checking on / checking off is your thing, then the legacy GUI also has a checkbox control for you, useful for those times when you need to visualize the on/off state of an option.
Like the TextField control, you pass the variable that manages the Toggle state as a parameter and it returns the new value (if it changes), so it is applied in code as follows:

bool blnToggleState = false;

void OnGUI() {
    blnToggleState = GUI.Toggle(new Rect(25, 150, 250, 30),
        blnToggleState, "Toggle");
}

This results in the following on-screen result:

As with the Label and Button controls, the font of the text displayed can be altered using either a GUIStyle or the guiText GameObject property.

Toolbar panels

The basic GUI controls also come with some very basic automatic layout panels: the first handles an arrangement of buttons; however, these buttons are grouped and only one can be selected at a time. As with other controls, the style of the buttons can be altered using a GUIStyle definition, including fixing the width of the buttons and spacing.
There are two layout options available, these are:

The Toolbar control
The SelectionGrid control

The Toolbar control

The Toolbar control arranges buttons in a horizontal pattern (vertical is not supported). Note that it is possible to fake a vertical toolbar by using a selection grid with just one item per row. See the SelectionGrid section later in this article for more details.
As with other controls, the Toolbar returns the index of the currently selected button in the toolbar. The buttons are also the same as the base Button control, so it also offers options to support either text or images for the button content. The RepeatButton control, however, is not supported.
To implement the toolbar, you specify an array of the content you wish to display in each button and the integer value that controls the selected button, as follows:

private int toolbarInt;
private string[] toolbarStrings;

void Start() {
    toolbarInt = 0;
    toolbarStrings = new string[] { "Toolbar1", "Toolbar2", "Toolbar3" };
}

void OnGUI() {
    toolbarInt = GUI.Toolbar(new Rect(25, 200, 200, 30),
        toolbarInt, toolbarStrings);
}

The main difference from the preceding controls is that you have to pass the currently selected index value in to the control and it then returns the new value. The Toolbar, when drawn, looks as follows:

As stated, you can also pass an array of textures and they will be displayed instead of the text content.

The SelectionGrid control

The SelectionGrid control is a customization of the previous standard Toolbar control; it is able to arrange the buttons in a grid layout and resize the buttons appropriately to fill the target area.
In code, SelectionGrid is used in a very similar format to the Toolbar code shown previously, for example:

private int selectionGridInt;
private string[] selectionStrings;

void Start() {
    selectionGridInt = 0;
    selectionStrings = new string[] { "Grid 1", "Grid 2", "Grid 3", "Grid 4" };
}

void OnGUI() {
    selectionGridInt = GUI.SelectionGrid(new Rect(250, 200, 200, 60),
        selectionGridInt, selectionStrings, 2);
}

The main difference between SelectionGrid and Toolbar in code is that with SelectionGrid you can specify the number of items in a single row, and the control will automatically lay out the buttons accordingly (unless overridden using GUIStyle). The preceding code will result in the following layout:

On their own, they provide a little more flexibility than just using buttons alone.

The Slider/Scrollbar controls

When you need to control a range in your game's GUI, or add a handle to control moving properties between two values, like moving an object around in your scene, this is where the Slider and Scrollbar controls come in. They provide two similar out-of-the-box implementations that give a scrollable region and a handle to control the value behind the control. Here, they are presented side by side:

The slimmer Slider and chunkier Scrollbar controls can both work in either horizontal or vertical modes and have presets for the minimum and maximum values allowed.

Slider control code

In code, the Slider control is represented as follows:

private float fltSliderValue = 0.5f;

void OnGUI() {
    fltSliderValue = GUI.HorizontalSlider(new Rect(25, 250, 100, 30),
        fltSliderValue, 0.0f, 10.0f);
    fltSliderValue = GUI.VerticalSlider(new Rect(150, 250, 25, 50),
        fltSliderValue, 10.0f, 0.0f);
}

Scrollbar control code

In code, the Scrollbar control is represented as follows:

private float fltScrollerValue = 0.5f;

void OnGUI() {
    fltScrollerValue = GUI.HorizontalScrollbar(new Rect(25, 285, 100, 30),
        fltScrollerValue, 1.0f, 0.0f, 10.0f);
    fltScrollerValue = GUI.VerticalScrollbar(new Rect(200, 250, 25, 50),
        fltScrollerValue, 1.0f, 10.0f, 0.0f);
}

Like Toolbar and SelectionGrid, you are required to pass in the current value and it will return the updated value. However, unlike all the other controls, you actually have two style points, so you can style the bar and the handle separately, giving you a little more freedom and control over the look and feel of the sliders (see the sketch after this section). Normally, you would only use the Slider control; the main reason the Scrollbar is available is that it forms the basis for the next control, the ScrollView control.
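As a hedged sketch of the "two style points" mentioned above, the slider functions accept two optional GUIStyle arguments, one for the bar and one for the thumb. Copying the built-in skin entries (GUI.skin.horizontalSlider and GUI.skin.horizontalSliderThumb) and tweaking them, as done here, is an illustrative assumption rather than part of the original example:

private float fltStyledValue = 0.5f;

void OnGUI() {
    // Copy the default bar and thumb styles from the current skin.
    GUIStyle barStyle = new GUIStyle(GUI.skin.horizontalSlider);
    GUIStyle thumbStyle = new GUIStyle(GUI.skin.horizontalSliderThumb);
    thumbStyle.fixedWidth = 20;  // make the handle a little wider

    // The last two parameters style the bar and the handle separately.
    fltStyledValue = GUI.HorizontalSlider(new Rect(25, 320, 150, 30),
        fltStyledValue, 0.0f, 10.0f, barStyle, thumbStyle);
}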
The ScrollView control

The last of the displayable controls is the ScrollView control, which gives you masking abilities over GUI elements, with optional horizontal and vertical scrollbars. Simply put, it allows you to define an area for controls that is larger than the window it is viewed through on the screen, for example:

Example list implementation using a ScrollView control

Here we have a list with many items that go beyond the drawable area of the ScrollView control. We then have the two scrollbars to move the content of the scroll viewer up/down and left/right to change the view. The background content is hidden behind a viewable mask that is the width and height of the ScrollView control's main window.
Styling the control is a little different as there is no base style for it; it relies on the currently set default GUISkin. You can, however, set separate GUIStyles for each of the scrollbars, but only for the whole scrollbar, not its individual parts (bar and handle). If you don't specify styles for each scrollbar, it will also revert to the base GUIStyle.
Implementing a ScrollView is fairly easy, for example:

Define the visible area along with a virtual background layer where the controls will be drawn, using a BeginScrollView function.
Draw your controls in the virtual area. Any GUI drawing between the ScrollView calls is drawn within the scroll area. Note that 0,0 in the ScrollView is the top-left of the ScrollView area and not the top-left corner of the screen.
Complete it by closing the control with the EndScrollView function.

For example, the previous example view was created with the following code:

private Vector2 scrollPosition = Vector2.zero;
private bool blnToggleState = false;

void OnGUI() {
    scrollPosition = GUI.BeginScrollView(
        new Rect(25, 325, 300, 200),
        scrollPosition,
        new Rect(0, 0, 400, 400));

    for (int i = 0; i < 20; i++) {
        // Add new line items to the background
        addScrollViewListItem(i, "I'm listItem number " + i);
    }

    GUI.EndScrollView();
}

// Simple function to draw each list item, a label and checkbox
void addScrollViewListItem(int i, string strItemName) {
    GUI.Label(new Rect(25, 25 + (i * 25), 150, 25), strItemName);
    blnToggleState = GUI.Toggle(
        new Rect(175, 25 + (i * 25), 100, 25),
        blnToggleState, "");
}

In this code, we define a simple function (addScrollViewListItem) to draw a list item (consisting of a label and checkbox). We then begin the ScrollView control by passing the visible area (the first Rect parameter), the starting scroll position, and finally the virtual area behind the control (the second Rect parameter). We then use that to draw 20 list items inside the virtual area of the ScrollView control using our helper function, before finishing off and closing the control with the EndScrollView command. If you want to, you can also nest ScrollView controls within each other.
The ScrollView control also has some actions to control its use, like the ScrollTo command. This command will move the visible area to the coordinates of the virtual layer, bringing it into focus. (The coordinates for this are based on the top-left position of the virtual layer; 0,0 being top-left.)
To use the ScrollTo function on a ScrollView, you must use it within the Begin and End ScrollView commands.
For example, we can add a button in the ScrollView to jump to the top-left of the virtual area:

public Vector2 scrollPosition = Vector2.zero;

void OnGUI() {
    scrollPosition = GUI.BeginScrollView(
        new Rect(10, 10, 100, 50),
        scrollPosition,
        new Rect(0, 0, 220, 10));

    if (GUI.Button(new Rect(120, 0, 100, 20), "Go to Top Left"))
        GUI.ScrollTo(new Rect(0, 0, 100, 20));

    GUI.EndScrollView();
}

You can also additionally turn the scrollbars on either side of the control on or off by specifying the BeginScrollView statement using the alwaysShowHorizontal and alwaysShowVertical properties; these are highlighted here in an updated GUI.BeginScrollView call:

Vector2 scrollPosition = Vector2.zero;
bool ShowVertical = false;   // turn off vertical scrollbar
bool ShowHorizontal = false; // turn off horizontal scrollbar

void OnGUI() {
    scrollPosition = GUI.BeginScrollView(
        new Rect(25, 325, 300, 200),
        scrollPosition,
        new Rect(0, 0, 400, 400),
        ShowHorizontal,
        ShowVertical);
    GUI.EndScrollView();
}

Rich Text Formatting

Now, having just plain text everywhere would not look that great and would likely force developers to create images for all the text on their screens (granted, a fair few still do so for effect). However, Unity does provide a way to enable richer text display using a style akin to HTML wherever you specify text on a control (only for label and display purposes; getting it to work with input fields is not recommended).
In this HTML style of writing text, we have the following tags we can use to liven up the text displayed.

The <b> tag gives text a Bold format, for example:

The <b>quick</b> brown <b>Fox</b> jumped over the <b>lazy Frog</b>

This would result in: The quick brown Fox jumped over the lazy Frog

Using the <i> tag will give text an Italic format, for example:

The <b><i>quick</i></b> brown <b>Fox</b> <i>jumped</i> over the <b>lazy Frog</b>

This would result in: The quick brown Fox jumped over the lazy Frog

As you can probably guess, the <size> tag will alter the Size of the text it surrounds. For reference, the default size for the font is set by the font itself. For example:

The <b><i>quick</i></b> <size=50>brown <b>Fox</b></size> <i>jumped</i> over the <b>lazy Frog</b>

This would result in:

Lastly, you can specify different colors for text surrounded by the <color> tag. The color itself is denoted using its 8-digit RGBA hex value, for example:

The <b><i>quick</i></b> <size=50><color=#a52a2aff>brown</color> <b>Fox</b></size> <i>jumped</i> over the <b>lazy Frog</b>

Note that the color is defined using normal RGBA color space notation (http://en.wikipedia.org/wiki/RGBA_color_space) in hexadecimal form with two characters per color, for example, RRGGBBAA. The color property also supports the shorter RGB color space, which is the same notation but without the A (Alpha) component, for example, RRGGBB.
The preceding code would result in: (If you are reading this in print, the previous word brown is in the color brown.)
You can also use a color name to reference it, but the palette is quite limited; for more details, see the Rich Text manual reference page at http://docs.unity3d.com/Manual/StyledText.html. A short hedged sketch of using these tags on a Label follows this section.
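As a sketch of how these tags can be combined with the controls covered earlier, the snippet below draws a Label whose style has rich text parsing enabled; explicitly switching richText on, and the particular string used, are illustrative assumptions rather than part of the original examples:

void OnGUI() {
    // Copy the default label style and make sure rich text parsing is on.
    GUIStyle richLabel = new GUIStyle(GUI.skin.label);
    richLabel.richText = true;

    // The tag syntax is the same as shown in the preceding examples.
    GUI.Label(new Rect(25, 10, 400, 40),
        "The <b>quick</b> <color=#a52a2aff>brown</color> <i>Fox</i>",
        richLabel);
}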
For text meshes, there are two additional tags:

<material></material>
<quad></quad>

These only apply when associated with an existing mesh. The material is one of the materials assigned to the mesh, which is accessed using the mesh index number (the array of materials applied to the mesh). When applied to a quad, you can also specify a size, position (x, y), width, and height for the text. The text mesh isn't well documented and is only mentioned here for reference; as we delve deeper into the new UI system, we will find much better ways of achieving this.

Summary

So now we have a deep appreciation of the past and a glimpse into the future. One thing you might realize is that there is no one-stop shop when it comes to Unity; each feature has its pros and cons and each has its uses. It all comes down to what you want to achieve and the way you want to achieve it; never dismiss anything just because it's old.
In this article, we covered GUI controls. It promises to be a fun ride, and once we are through that, we can move on to actually building some UI and then placing it in your game scenes in weird and wonderful ways. Now stop reading this and turn the page already!!

Resources for Article:
Further resources on this subject:
What's Your Input? [article]
Parallax scrolling [article]
Unity Networking – The Pong Game [article]

Controls and player movement

Packt
09 Feb 2015
17 min read
In this article by Miguel DeQuadros, author of the book GameSalad Essentials, you will learn how to create your playable character. We are going to cover a couple of different control schemes, depending on the platform you want to release your awesome game on. We will deal with some keyboard controls, mouse controls, and even some touch controls for mobile devices.
(For more resources related to this topic, see here.)
Let's go into our project, and open up level 1. First we are going to add some gravity to the scene. In the scene editor, click on the Scene button in the Inspector window. In the Attributes window, expand the Gravity drop down, and change the Y value to 900.
Now, on to our actor! We have already created our player actor in the Library, so let's drag him into our level. Once he's placed exactly where you want him to be, double-click on him to open up the actor editor. Instead of clicking the lock button, click the Edit Prototype... button in the upper left-hand corner of the screen, just above the actor's image. After doing this, we will edit the main actor within the Library; so instead of having to program each actor in every scene, we just drag in the main actor from the library and boom! It's programmed.
For our player actor, we are going to keep things super organized, because honestly, he's going to have a lot of behaviors, and even more so if you decide to continue after we are done. Click the Create Group button, right beside the Create Rule button. This will create a group in which we will put all our keyboard control-based behaviors, so let's name it Keyboard Controls by double-clicking the name of the group.
Now let's start creating the rules for each button that will control our character. Each rule will be Actor receives event | key | (whichever key you want) | is down. To select the key, simply press the Keyboard button inside the rule, and it will open up a virtual keyboard. All you have to do is click the key you want to use.
For our first rule, let's do Actor receives event | key | left | is down. Because our sprite has been created with Kevin facing the right side, we have to flip him whenever the player presses the left arrow key. So, we have to drag in a Change Attribute behavior, and change it to Change Attribute: self.Graphics.Flip Horizontally to true. Then drag in an Animate behavior for the walking animation, and drag in the corresponding walking images for our character.
Now let's get Kevin moving! For each Move rule you create, drag in an Accelerate behavior, change it to the corresponding direction you want him to move in, and change the Acceleration value to 500. Again, you can play around with these values to whatever suits your style. When you create the Move Right rule, don't forget to change the Flip Horizontally attribute to false so that he flips back to normal when the right key is pressed.
Test out the level to see if he's moving. Oops! Did he fall through the platforms? We need to fix that! We are going to do this the super easy way. Click on the home button and then the Actors tab. On the bottom-left corner of the screen, you will find a + button; click on it. This will add a new group of actors, or tags; name it Platforms. Now all you have to do is simply drag each platform actor you want the character to collide against into this section. Now, when we create our Collision behavior, we only have to do one for the whole set of platforms, instead of creating one per actor. Time saved, money made!
Plus, it keeps things more organized. Let's go back to our Kevin actor, and outside of the controls group, drag in a Collide behavior, and all you have to do is change it to Bounce when colliding with | actor with tag: | Platforms. Now, whenever he collides with an actor that you dragged into the Platforms section, he will stop. Test it out to see how it works.
Bounce! Bounce! Bounce! We need to adjust a few things in the Physics section of the Kevin actor, and also the Platform actors. Open up one of the Platform actors (it doesn't matter which one, because we are going to change them all). In the Attributes window, open up the Physics drop down, and change the Bounciness value to 0. Or, if you want it to be a rubbery surface, you can leave it at 1, or even 100, whatever you like. If you want a slippery platform, change the Friction to a lower-than-one value, such as 0.1.
Do the same with our Kevin actor. Change his Bounciness to 0, his Density to 100, and Friction to 50, and check off (if not already) the Fixed Rotation and Movable options. Now test it out. Everything should work perfectly! If you change the Platform's Friction value and set it much higher, our actor won't move very well, which would be good for super sticky platforms.
We are now going to work on jumping. You want him to jump only when he is colliding with the ground, otherwise the player could keep pressing the jump button and he would just continue to fly; which would be cool, but it wouldn't make the levels very challenging now, would it?
Let's go back to editing our Kevin actor. We need to create a new attribute within the actor itself, so in the Attributes window, click on the + button, select Boolean, name it OnGround, and leave it unchecked.
Create a new rule, and change it to Actor receives event | overlaps or collides | with actor with tag: | Platforms. Then drag in another rule, change it to Attribute | self.Motion Linear Velocity.Y | < 1, then click on the + button within the rule to create another condition, and change it to Attribute | self.Motion Linear Velocity.Y | > -1.
Whoa there, cowboy! What does this mean? Let's break it down. This is detecting the actor's up and down motion, so when he is going slower than 1 and faster than -1, we will change the attribute. Why do this? It's a bit of a fail safe, so that when the actor jumps and the player keeps holding the jump key, he doesn't keep jumping. If we didn't add these conditions, the player could jump at times when he shouldn't.
Within that rule, drag in a Change Attribute behavior, and change it to Change Attribute: | self.OnGround To 1. (1 being true, 0 being false, or you can type in true or false.)
Now, inside the Keyboard Controls group, create a new rule and name it Jumping. Change the rule to Actor receives event | key | space | is down, and we need to check another condition, so click on the + button inside of the rule, and change the new condition to Attribute | self.OnGround is true. This will not only check if the space bar is down, but will also make sure that he is colliding with the ground.
Drag in a Change Attribute behavior inside of this Jumping rule, and change it to self.OnGround to 0, which, you guessed it, turns off the OnGround condition so the player can't keep jumping! Now drag in a Timer behavior, change it to For | 0.1 seconds, then drag in an Accelerate behavior inside of the Timer, and change the Direction to 90 and Acceleration to 5000. Again, play around with these values.
If you want him to jump a little higher or lower, just change the timing slower or faster. If you want to slow him down a bit, such as walking speed and falling speed (which is highly recommended), go to the Attributes section in the Actor window, and under Motion change the Max Speed (I ended up changing it to 500), then click on Apply Max Speed. If you put in too low a value, he will pause before he jumps, because the combined speed of him walking and jumping is over the max speed, so GameSalad will throttle him down, and that doesn't look right.
For the finishing touch, let's drag in a Control Camera behavior so the camera moves with Kevin. (Don't forget to change the camera's bounds in the scene!) Now Kevin should be shooting around the level like a maniac!
If you want to use mobile controls, let's go through some details. If not, skip ahead to the next heading.

Mobile controls

Mobile controls in GameSalad are relatively simple to do. It involves creating a non-moveable layer within the scene, and creating actors to act as buttons on screen. Let's see what we're talking about.
Go into your scene. Under the Inspector window, click on the Scene button, then the Layers tab. Click on the + button to add a new layer, rename it to Buttons, and uncheck scrollable. If you have a hard time renaming the new layer by double-clicking on it, simply click on the layer and press Enter or the return key, and you will now be able to rename it. Whether or not this is a bug, we will find out in further releases.
In each button, you'll want to uncheck the Moveable option in the Physics section, so the button doesn't fall off the screen when you click on play. Drag the buttons into your scene in the way you want them to be laid out.
We are now going to create three new game attributes; each will correspond to a button being pressed. Go to our scene, click on the Game button in the Inspector window, and under the Attributes tab add three new Boolean attributes: LeftButton, RightButton, and JumpButton.
Now let's start editing each button in the Library, not in the scene. We'll start with the left arrow button. Create a new rule, name it Button Pressed, and change it to Actor receives event | touch | is pressed. Inside that rule, drag in a Change Attribute behavior and change it to game.LeftButton | to true; then expand the Otherwise drop down in the rule, copy and paste the previous Change Attribute behavior into the Otherwise section, and change it from true to false. What this does is, any time the player isn't touching that button, the LeftButton attribute is false. Easy! Do the same for each of the other buttons, but change each attribute accordingly.
If you want, you can even change the color or image of the actor when it's pressed so you can actually see what button you are pressing. Simply adding three Change Attribute behaviors to the rule and changing the colors to 1 when being touched, and 0 when not, will highlight the button on touch. It's not required, but hey, it looks a lot better!
Let's go back into our Kevin actor, copy and paste the Keyboard Controls group, and rename it to Touch Controls. Now we have to change each rule from Actor receives event | key, to Attribute | game.LeftButton | is true (or whatever button is being pressed). Test it out to see if everything works. If it does, awesome! If not, go back and make sure everything was typed in correctly.
Don't forget, when it comes to changing attributes, you can't simply type in game.LeftButton; you have to click on the ... button beside the Attribute, then click on Game, then click LeftButton.
Now that Kevin is alive, let's make him armed and dangerous. Go back to the Scene button within the scene editor, and under the Layers tab click on the Background layer to start creating actors in that layer. If you forget to do this, GameSalad will automatically start creating actors in the last selected layer, which can be a fast way of creating objects, but annoying if you forget.

Attacking

Pretty much any game out there nowadays has some kind of shooting or attacking in it. Whether you're playing NHL, NFL, or Portal 2, there is some sort of attack or shot that you can make, and Kevin will locate a special weapon in the first level.

The Inter Dimensional Gravity Disruptor

I created both the weapon and the projectile that will be launched from the Inter Dimensional Gravity Disruptor. For the gun actor, don't forget to uncheck the Movable option in the Physics drop down to prevent the gun from moving around. Now let's place the gun somewhere near the end of the level.
For now, let's create a new game-wide attribute: within the scene, click on the Game button within the Inspector window, and open the Attributes tab. Now, click on the + button to add a new attribute. It's going to be a Boolean, and we will name it GunCollected. What we are going to do is, when Kevin collides with the collectable gun at the end of the level, the Boolean will switch to true to show it's been collected; then we will do some trickery that will allow us to use multiple weapons.
We are going to add two more new game-wide attributes, both Integers; name one KevX, and the other KevY. I'll explain this in a minute.
Open up the Kevin actor in the Inspector window, create a new Rule, and change it to: Actor receives event | overlaps or collides | with | actor of type | Inter-Dimensional-Gravity-Disruptor, or whatever you want to name it; then drag a Change Attribute behavior into the rule, and change it to game.GunCollected | to true. Finally, outside of this rule, drag in two Constrain Attribute behaviors and change them to the following:

game.KevX | to | self.position.x
game.KevY | to | self.position.y

The Constrain Attribute behavior does exactly what it sounds like: it constantly changes the attribute, constraining it, whereas Change Attribute does it once. That's all we need to do within our Kevin actor.
Now let's go back to our gun actor inside the Inspector and create a new Rule; change it to Attribute | game.GunCollected | is true. Inside that rule, drag in two Constrain Attribute behaviors, and change them to the following:

self.position.x | to | game.KevX
self.position.y | to | game.KevY + 2

Test out your game now and see if the gun follows Kevin when he collects it. If we were to use a Change Attribute instead of a Constrain Attribute, the gun would quickly pop to his position once, and that's it. You could fix this by putting the Change Attribute behaviors inside of a Timer behavior where every 0.0001 second it changes the attributes, but when you test it, the gun will follow quite choppily and that doesn't look good.
We now want the gun to flip at the same time Kevin does, so let's create two rules the same way we did with Kevin. When the Left arrow key is pressed, change the flipped attribute to true, and when the Right arrow key is pressed, change the flipped attribute to false.
This happens only when the GunCollected attribute is true; otherwise, the gun will be flipping before you even collect it, so create these rules within the GunCollected is true rule. Now the gun will flip with Kevin!
Keep in mind, if you haven't checked out the gun image I created (I drew the gun on top of the Kevin sprite) and saved it where it was positioned, no centering will happen. This way the gun is automatically positioned and you don't have to finagle positioning it in the behaviors.
Now, for the pew pew. Open up our GravRound actor within the Inspector window, simply add in a Move behavior and make it pretty fast; I did 500. Change the Density, Friction, and Bounciness all to zero.
Open up our Gun actor. We are going to create a new Rule; it will have three conditions (for Shoot Left) and they are as follows:

Attribute | game.GunCollected | is true
Actor receives event | key | (whatever shoot button you want) | is down
Attribute | self.graphic.Flip.Horizontally | is true

Now drag a Spawn Actor behavior into this rule, select the GravRound actor, Direction: 180, Position: -25. Copy and paste this rule and change it so flip is false, the Direction of the Spawn Actor is 0.0, and Position is 25.
Test it out to make sure he's shooting in the right directions. When developing for a while, it's easy to get tired and miss things. Don't forget to create a new button for shooting if you are creating a mobile version of the game. Shooting with touch controls is done the exact same way as walking and jumping with touch buttons.

Melee weapons

If you want to create a game with melee weapons, I'll quickly take you through the process of doing it. Melee weapons can be made in much the same way as shooting a bullet. You can either have the blade or object as a separate actor positioned relative to the actor, and when the player presses the attack button, have the player and the melee weapon animate at the same time. This way, when the melee weapon collides with the enemy, it will be more accurate.
If you want to have the player draw the weapon in hand, simply animate the actor and detect if there is a collision. This will be a little less accurate, as an enemy could run into the player when he is animating and get hurt, so what you can do is have the player spawn an invisible actor to detect the melee weapon collisions more accurately. I'll quickly go through this with you.
When the player presses the attack button and the actor isn't flipped (so he'll be facing the right end of the screen), animate accordingly, and then spawn the actor in front of the player. Then, in the detector actor you spawn when attacking, have it automatically destroyed when the player has finished animating. Also, throw in a Move behavior, especially when there is a down swipe such as the attack. For this example, you'll want a detector to follow the edge of the blade. This way, all you have to do is detect collisions inside the enemies with the detector actor.
It is way more accurate to do things this way; in fact, you can even do collisions with walls. Let's say the actor punches a wall and this detector collides with the block: spawn some particles for debris, and change the image of the block into a cracked image. There are a lot of cool things you can do with melee weapons and with ranged weapons. You can even have the wall pieces have a sort of health bar, which is really a set amount of times a bullet or sword will have to hit it before it is destroyed.
Kevin is now bouncing around our level, running like a silly little boy, and now he can shoot things up.

Summary

We had an unmovable, boring character—albeit a good looking one. We discussed how to get him moving using keyboard and mobile touch controls. We then discussed how to get our character to collect a weapon, have that weapon follow him around, and then start shooting it.
What are we going to talk about next, you ask? Well, we are going to give Kevin health, lives, and power ups. We'll discuss an inventory system, scoring, and some more gameplay mechanics like pausing and such.
Relax for a little bit, take a nice break. If the weather is as nice out there as it is at the time of my writing, go enjoy the lovely sun. I can't stress enough the importance of taking a break when it comes to game development. There have been many times when I've been programming for hours on end and end up making a mistake, and because I'm so tired, I can't find the problem. As soon as I take a rest, the problem is easy to fix and I'm more alert.

Resources for Article:
Further resources on this subject:
Django 1.2 E-commerce: Generating PDF [Article]
ArcGIS Spatial Analyst [Article]
Sparrow iOS Game Framework - The Basics of Our Game [Article]
Three.js - Materials and Texture [Article]

Managing local environments

Packt
09 Feb 2015
15 min read
In this article by Juampy Novillo Requena, author of Drush for Developers, Second Edition, we will learn that Drush site aliases offer a useful way to manage local environments without having to be within Drupal's root directory.
(For more resources related to this topic, see here.)
A site alias consists of an array of settings for Drush to access a Drupal project. They can be defined in different locations, using various file structures. You can find all of its variations at drush topic docs-aliases. In this article, we will use the following variations:

We will define local site aliases at $HOME/.drush/aliases.drushrc.php, which are accessible anywhere for our command-line user.
We will define a group of site aliases to manage the development and production environments of our sample Drupal project. These will be defined at sites/all/drush/example.aliases.drushrc.php.

In the following example, we will use the site-alias command to generate a site alias definition for our sample Drupal project:

$ cd /home/juampy/projects/example
$ drush --uri=example.local site-alias --alias-name=example.local @self
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
  '#name' => 'self',
);

The preceding command printed an array structure for the $aliases variable. You can see the root and uri options. There is also an internal property called #name that we can ignore. Now, we will place the preceding output at $HOME/.drush/aliases.drushrc.php so that we can invoke Drush commands for our local Drupal project from anywhere in the command-line interface:

<?php

/**
 * @file
 * User-wide site alias definitions.
 *
 * Site aliases defined here are available everywhere for the current user.
 */

// Sample Drupal project.
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
);

Here is how we use this site alias in a command. The following example is running the core-status command for our sample Drupal project:

$ cd /home/juampy
$ drush @example.local core-status
 Drupal version        : 7.29-dev
 Site URI              : example.local
 Database driver       : mysql
 Database username     : root
 Database name         : drupal7x
 Database              : Connected
 ...
 Drush alias files     : /home/juampy/.drush/aliases.drushrc.php
 Drupal root           : /home/juampy/projects/example
 Site path             : sites/default
 File directory path   : sites/default/files

Drush loaded our site alias file and used the root and uri options defined in it to find and bootstrap Drupal. The preceding command is equivalent to the following one:

$ drush --root=/home/juampy/projects/example --uri=example.local core-status

While $HOME/.drush/aliases.drushrc.php is a good place to define site aliases in your local environment, /etc/drush is a first-class directory to place site aliases in servers. Let's now discover how we can connect to remote environments via Drush.

Managing remote environments

Site aliases that reference remote websites can be accessed by Drush through a password-less SSH connection (http://en.wikipedia.org/wiki/Secure_Shell).
Before we start with these, let's make sure that we meet the requirements.

Verifying requirements

First, it is recommended to install the same version of Drush on all the servers that host your website. Drush will fail to run a command if it is not installed in the remote machine, except for core-rsync, which runs rsync, a non-Drush command that is available in Unix-like systems.
If you can already access the server that hosts your Drupal project through a public key, then skip to the next section. If not, you can either use the pushkey command from Drush extras (https://www.drupal.org/project/drush_extras), or continue reading to set it up manually.

Accessing a remote server through a public key

The first thing that we need to do is generate a public key for our command-line user on our local machine. Open the command-line interface and execute the following command. We will explain the output step by step:

$ cd $HOME
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/juampy/.ssh/id_rsa):

By default, SSH keys are created at $HOME/.ssh/. It is fine to go ahead with the suggested path in the preceding prompt, so let's hit Enter and continue:

Created directory '/home/juampy/.ssh'.
Enter passphrase (empty for no passphrase): *********
Enter same passphrase again: *********

If the .ssh directory does not exist for the current user, the ssh-keygen command will create it with the correct permissions. We are next prompted to enter a passphrase. It is highly recommended to set one, as it makes our private key safer. Here is the rest of the output once we have entered a passphrase:

Your identification has been saved in /home/juampy/.ssh/id_rsa.
Your public key has been saved in /home/juampy/.ssh/id_rsa.pub.
The key fingerprint is:
6g:bf:3j:a2:00:03:a6:00:e1:43:56:7a:a0:c7:e9:f3 juampy@juampy-box
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|..               |
|o..*             |
|o + . . S        |
| + * = . .       |
|  = O o . .      |
|   *.o * . .     |
|    .oE oo.      |
+-----------------+

The result is a new hidden directory under our $HOME path named .ssh. This directory contains a private key file (id_rsa) and a public key file (id_rsa.pub). The former is to be kept secret by us, while the latter is the one we will copy into remote servers where we want to gain access.
Now that we have a public key, we will announce it to the SSH agent so that it can be used without having to enter the passphrase every time:

$ ssh-add ~/.ssh/id_rsa
Identity added: /home/juampy/.ssh/id_rsa (/home/juampy/.ssh/id_rsa)

Our key is ready to be used. Assuming that we know an SSH username and password to access the server that hosts the development environment of our website, we will now copy our public key into it. In the following command, replace exampledev and dev.example.com with the username and URL of your server:

$ ssh-copy-id exampledev@dev.example.com
exampledev@dev.example.com's password:
Now try logging into the machine, with "ssh 'exampledev@dev.example.com'", and check in:
  ~/.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

Our public key has been copied to the server, and now we do not need to enter a password to identify ourselves anymore when we log in to it. We could have logged on to the server ourselves and manually copied the key, but the benefit of using the ssh-copy-id command is that it takes care of setting the right permissions on the ~/.ssh/authorized_keys file.
Let's test it by logging in to the server:

$ ssh exampledev@dev.example.com
Welcome!

We are ready to set up remote site aliases and run commands using the credentials that we have just configured. We will do this in the next section. If you have any trouble setting up SSH authentication, you can find plenty of debugging tips at https://help.github.com/articles/generating-ssh-keys and http://git-scm.com/book/en/Git-on-the-Server-Generating-Your-SSH-Public-Key.

Defining a group of remote site aliases for our project

Before diving into the specifics of how to define a Drush site alias, let's assume the following scenario: you are part of a development team working on a project that has two environments, each one located on its own server:

Development, which holds the bleeding edge version of the project's codebase. It can be reached at http://dev.example.com.
Production, which holds the latest stable release and real data. It can be reached at http://www.example.com.

Additionally, there might be a variable number of local environments for each developer on their working machines, although these do not need a site alias.
Given the preceding scenario, and assuming that we have SSH access to the development and production servers, we will create a group of site aliases that identify them. We will define this group at sites/all/drush/example.aliases.drushrc.php within our Drupal project:

<?php
/**
 * @file
 *
 * Site alias definitions for Example project.
 */

// Development environment.
$aliases['dev'] = array(
  'root' => '/var/www/exampledev/docroot',
  'uri' => 'dev.example.com',
  'remote-host' => 'dev.example.com',
  'remote-user' => 'exampledev',
);

// Production environment.
$aliases['prod'] = array(
  'root' => '/var/www/exampleprod/docroot',
  'uri' => 'www.example.com',
  'remote-host' => 'prod.example.com',
  'remote-user' => 'exampleprod',
);

The preceding file defines two arrays for the $aliases variable, keyed by the environment name. Drush will find this group of site aliases when being invoked from the root of our Drupal project. There are many more settings available, which you can find by reading the contents of the drush topic docs-aliases command.
These site aliases contain options known to us: root and uri refer to the remote root path and the hostname of the remote Drupal project. There are also two new settings: remote-host and remote-user. The former defines the URL of the server hosting the website, while the latter is the user to authenticate Drush when connecting via SSH.
Now that we have a group of Drush site aliases to work with, the following section will cover some examples using them.

Using site aliases in commands

Site aliases prepend a command name so that Drush can bootstrap the site and then run the command there. Our site aliases are @example.dev and @example.prod. The word example comes from the filename example.aliases.drushrc.php, while dev and prod are the two keys that we added to the $aliases array. Let's see them in action with a few command examples.

Check the status of the Development environment:

$ cd /home/juampy/projects/example
$ drush @example.dev status
 Drupal version        : 7.26
 Site URI              : http://dev.example.com
 Database driver       : mysql
 Database username     : exampledev
 Drush temp directory  : /tmp
 ...
 Drush alias files     : /home/juampy/projects/example/sites/all/drush/example.aliases.drushrc.php
 Drupal root           : /var/www/exampledev/docroot
 ...

The preceding output shows the current status of our development environment. Drush sent the command via SSH to our development environment and rendered back the resulting output.
Most Drush commands support site aliases. Let's see the next example.

Log in to the development environment and copy all the files from the files directory located at the production environment:

$ drush @example.dev site-ssh
Welcome to example.dev server!
$ cd `drush @example.dev drupal-directory`
$ drush core-rsync @example.prod:%files @self:%files
You will destroy data from /var/www/exampledev/docroot/sites/default/files and replace with data from exampleprod@prod.example.com:/var/www/exampleprod/docroot/sites/default/files/
Do you really want to continue? (y/n): y

Note the use of @self in the preceding command, which is a special Drush site alias that represents the current Drupal project where we are located. We are using @self instead of @example.dev because we are already logged in to the development environment. Now, we will move on to the next example.

Open a connection with the Development environment's database:

$ drush @example.dev sql-cli
Welcome to the MySQL monitor. Commands end with ; or \g.
mysql> select database();
+------------+
| database() |
+------------+
| exampledev |
+------------+
1 row in set (0.02 sec)

The preceding command is identical to the following set of commands:

drush @example.dev site-ssh
cd /var/www/exampledev
drush sql-cli

However, Drush is so clever that it opens the connection for us. Isn't this neat? This is one of the commands I use most frequently. Let's finish by looking at our last example.

Log in as the administrator user in production:

$ drush @example.prod user-login
http://www.example.com/user/reset/1/some-long-token/login
Created new window in existing browser session.

The preceding command creates a login URL and attempts to open your default browser with it. I love Drush!

Summary

In this article, we covered practical examples with site aliases. We started by defining a site alias for our local Drupal project, and then went on to write a group of site aliases to manage remote environments for a hypothetical Drupal project with a development and production site. Before using site aliases for our remote environments, we covered the basics of setting up SSH in order for Drush to connect to these servers and run commands there.

Resources for Article:
Further resources on this subject:
Installing and Configuring Drupal [article]
Installing and Configuring Drupal Commerce [article]
25 Useful Extensions for Drupal 7 Themers [article]

Fronting an external API with Ruby on Rails: Part 1

Mike Ball
09 Feb 2015
6 min read
Historically, a conventional Ruby on Rails application leverages server-side business logic, a relational database, and a RESTful architecture to serve dynamically-generated HTML. JavaScript-intensive applications and the widespread use of external web APIs, however, somewhat challenge this architecture. In many cases, Rails is tasked with performing as an orchestration layer, collecting data from various backend services and serving re-formatted JSON or XML to clients. In such instances, how is Rails' model-view-controller architecture still relevant?
In this two part post series, we'll create a simple Rails backend that makes requests to an external XML-based web service and serves JSON. We'll use RSpec for tests and Jbuilder for view rendering.

What are we building?

We'll create Noterizer, a simple Rails application that requests XML from externally hosted endpoints and re-renders the XML data as JSON at a single URL. To assist in this post, I've created NotesXmlService, a basic web application that serves two XML-based endpoints:

http://NotesXmlService.herokuapp.com/note-one
http://NotesXmlService.herokuapp.com/note-two

Why is this necessary in a real-world scenario? Fronting external endpoints with an application like Noterizer opens up a few opportunities:

Noterizer's endpoint could serve JavaScript clients who can't perform HTTP requests across domain names to the original, external API.
Noterizer's endpoint could reformat the externally hosted data to better serve its own clients' data formatting preferences.
Noterizer's endpoint is a single interface to the data; multiple requests are abstracted away by its backend.
Noterizer provides caching opportunities. While it's beyond the scope of this series, Rails can cache external request data, thus offloading traffic to the external API and avoiding any terms of service or rate limit violations imposed by the external service.

Setup

For this series, I'm using Mac OS 10.9.4, Ruby 2.1.2, and Rails 4.1.4. I'm assuming some basic familiarity with Git and the command line.

Clone and set up the repo

I've created a basic Rails 4 Noterizer app. Clone its repo, enter the project directory, and check out its tutorial branch:

$ git clone http://github.com/mdb/noterizer && cd noterizer && git checkout tutorial

Install its dependencies:

$ bundle install

Set up the test framework

Let's install RSpec for testing. Add the following to the project's Gemfile:

gem 'rspec-rails', '3.0.1'

Install rspec-rails:

$ bundle install

There's now an rspec generator available for the rails command. Let's generate a basic RSpec installation:

$ rails generate rspec:install

This creates a few new files in a spec directory:

├── spec
│   ├── rails_helper.rb
│   └── spec_helper.rb

We're going to make a few adjustments to our RSpec installation. First, because Noterizer does not use a relational database, delete the following ActiveRecord reference in spec/rails_helper.rb:

# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!

Next, configure RSpec to be less verbose in its warning output; such verbose warnings are beyond the scope of this series. Remove the following line from .rspec:

--warnings

The RSpec installation also provides a spec rake task. Test this by running the following:

$ rake spec

You should see the following output, as there aren't yet any RSpec tests:

No examples found.
Finished in 0.00021 seconds (files took 0.0422 seconds to load)
0 examples, 0 failures

Note that a default Rails installation assumes tests live in a test directory. RSpec uses a spec directory. For clarity's sake, you're free to delete the test directory from Noterizer.

Building a basic route and controller

Currently, Noterizer does not have any URLs; we'll create a single /notes URL route.

Creating the controller

First, generate a controller:

$ rails g controller notes

Note that this created quite a few files, including JavaScript files, stylesheet files, and a helpers module. These are not relevant to our NotesController, so let's undo our controller generation by removing all untracked files from the project. Note that you'll want to commit any changes you do want to preserve.

$ git clean -f

Now, open config/application.rb and add the following generator configuration:

config.generators do |g|
  g.helper false
  g.assets false
end

Re-running the generate command will now create only the desired files:

$ rails g controller notes

Testing the controller

Let's add a basic NotesController#index test to spec/controllers/notes_controller_spec.rb. The test looks like this:

require 'rails_helper'

describe NotesController, :type => :controller do
  describe '#index' do
    before :each do
      get :index
    end

    it 'successfully responds to requests' do
      expect(response).to be_success
    end
  end
end

This test currently fails when running rake spec, as we haven't yet created a corresponding route. Add the following route to config/routes.rb:

get 'notes' => 'notes#index'

The test still fails when running rake spec, because there isn't a proper #index controller action. Create an empty index method in app/controllers/notes_controller.rb:

class NotesController < ApplicationController
  def index
  end
end

rake spec still yields failing tests, this time because we haven't yet created a corresponding view. Let's create a view:

$ touch app/views/notes/index.json.jbuilder

To use this view, we'll need to tweak the NotesController a bit. Let's ensure that requests to the /notes route always return JSON via a before_filter run before each controller action:

class NotesController < ApplicationController
  before_filter :force_json

  def index
  end

  private

  def force_json
    request.format = :json
  end
end

Now, rake spec yields passing tests:

$ rake spec
.

Finished in 0.0107 seconds (files took 1.09 seconds to load)
1 example, 0 failures

Let's write one more test, asserting that the response returns the correct content type. Add the following to spec/controllers/notes_controller_spec.rb:

it 'returns JSON' do
  expect(response.content_type).to eq 'application/json'
end

Assuming rake spec confirms that the second test passes, you can also run the Rails server via the rails server command and visit the currently empty Noterizer http://localhost:3000/notes URL in your web browser.

Conclusion

In this first part of the series, we have created the basic route and controller for Noterizer, which is a basic example of a Rails application that fronts an external API. In the next blog post (Part 2), you will learn how to build out the backend, test the model, build up and test the controller, and also test the app with Jbuilder.

About this Author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.

How to Build a Koa Web Application - Part 2

Christoffer Hallas
08 Feb 2015
5 min read
In Part 1 of this series, we got everything in place for our Koa app using Jade and Mongel. In this post, we will cover Jade templates and how to use listing and viewing pages. Please note that this series requires that you use Node.js version 0.11+.

Jade templates

Rendering HTML is always an important part of any web application. Luckily, when using Node.js there are many great choices, and for this article we've chosen Jade. Keep in mind though that we will only touch on a tiny fraction of the Jade functionality.
Let's create our first Jade template. Create a file called create.jade and put in the following:

create.jade

doctype html
html(lang='en')
  head
    title Create Page
  body
    h1 Create Page
    form(method='POST', action='/create')
      input(type='text', name='title', placeholder='Title')
      input(type='text', name='contents', placeholder='Contents')
      input(type='submit')

For all the Jade questions you have that we won't answer in this series, I refer you to the excellent official Jade website at http://jade-lang.com.
If you add the statement app.listen(3000); to the end of index.js, then you should be able to run the program from your terminal using the following command and by visiting http://localhost:3000 in your browser:

$ node --harmony index.js

The --harmony flag just tells the node program that we need support for generators in our program.

Listing and viewing pages

Now that we can create a page in our MongoDB database, it is time to actually list and view these pages. For this purpose we need to add another middleware to our index.js file after the first middleware:

app.use(function* () {
  if (this.method != 'GET') {
    this.status = 405;
    this.body = 'Method Not Allowed';
    return
  }
  ...
});

As you can probably already tell, this new middleware is very similar to the first one we added that handled the creation of pages. At first we make sure that the method of the request is GET, and if not, we respond appropriately and return:

var params = this.path.split('/').slice(1);
var id = params[0];

if (id.length == 0) {
  var pages = yield Page.find();
  var html = jade.renderFile('list.jade', { pages: pages });
  this.body = html;
  return
}

Then, we proceed to inspect the path attribute of the Koa context, looking for an ID that represents the page in the database. Remember how we redirected using the ID in the previous middleware. We inspect the path by splitting it into an array of strings separated by the forward slashes of a URL; this way the path /1234 becomes an array of '' and '1234'. Because the path starts with a forward slash, the first item in the array will always be the empty string, so we just discard that by default.
Then we check the length of the id parameter, and if it's zero, we know that there is in fact no ID in the path, and we should just look for the pages in the database and render our list.jade template with those pages made available to the template as the variable pages. Making data available in templates is also known as providing locals to the template.
list.jade

doctype html
html(lang="en")
  head
    title Your Web Application
  body
    h1 Your Web Application
    ul
      - each page in pages
        li
          a(href='/#{page._id}')= page.title

But if the length of id was not zero, we assume that it's an ID, and we try to load that specific page from the database instead of all the pages. We then render our view.jade template with that page supplied as its locals:

var page = yield Page.findById(id);
var html = jade.renderFile('view.jade', page);
this.body = html;

view.jade

doctype html
html(lang="en")
  head
    title= title
  body
    h1= title
    p= contents

That's it

You should now be able to run the app as previously described and create a page, list all of your pages, and view them. If you want to, you can continue and build a simple CMS system. Koa is very simple to use and doesn't enforce a lot of functionality on you, allowing you to pick and choose between the libraries that you need and want to use. There are many possibilities and that is one of Koa's biggest strengths.

Find even more Node.js content on our Node.js page. Featuring our latest titles and most popular tutorials, it's the perfect place to learn more about Node.js.

About the author

Christoffer Hallas is a software developer and entrepreneur from Copenhagen, Denmark. He is a computer polyglot and contributes to and maintains a number of open source projects. When not contemplating his next grand idea (which remains an idea), he enjoys music, sports, and design of all kinds. Christoffer can be found on GitHub as hallas and on Twitter as @hamderhallas.

The Five Kinds of Python Functions Python 3.4 Edition

Packt
06 Feb 2015
33 min read
This article is written by Steven Lott, author of the book Functional Python Programming. You can find more about him at http://slott-softwarearchitect.blogspot.com. (For more resources related to this topic, see here.) What's This About? We're going to look at various ways that Python 3 lets us define things which behave like functions. The proper term here is Callable – we're looking at objects that can be called like a function. We'll look at the following Python constructs: Function definitions Higher-order functions Function wrappers (around methods) Lambdas Callable objects Generator functions and the yield parameter And yes, we're aware that the list above has six items on it. That's because higher-order functions in Python aren't really all that complex or different. In some languages, functions that take functions are arguments involving special syntax. In Python, it's simple and common and barely worth mentioning as a separate topic. We'll look at when it's appropriate and inappropriate to use one or the other of these various functional forms. Some background Let's take a quick peek at a basic bit of mathematical formalism. We'll look at a function as an abstract formalism. We often annotate it like this: This shows us that f() is a function. It has one argument, x, and will map this to a single value, y. Some mathematical functions are written in front, for example, y=sin x. Some are written in other places around the argument, for example, y=|x|. In Python, the syntax is more consistent, for example, we use a function like this: >>> abs(-5)5 We've applied the abs() function to an argument value of -5. The argument value was mapped to a value of 5. Terminology Consider the following function: In this definition, the argument is a pair of values, (a,b). This is called the domain. We can summarize it as the domain of values for which the function is defined. Outside this domain, the function is not defined. In Python, we get a TypeError exception if we provide one value or three values as the argument. The function maps the domain pair to a pair of values, (q,r). This is the range of the function. We can call this the range of values that could be returned by the function. Mathematical function features As we look at the abstract mathematical definition of functions, we note that functions are generally assumed to have no hysteresis; they have no history or memory of prior use. This is sometimes called the property of being idempotent: the results are always the same for a given argument value. We see this in Python as a common feature. But it's not universally true. We'll look at a number of exceptions to the rule of idempotence. Here's an example of the usual situation: >>> int("10f1", 16)4337 The value returned from the evaluation of int("10f1", 16) never changes. There are, however, some common examples of non-idempotent functions in Python. Examples of hysteresis Here are three common situations where a function has hysteresis. In some cases, results vary based on history. In other cases, results vary based on events in some external environment, such as follows: Random number generators. We don't want them to produce the same value over and over again. The Python random.randrange() function, is not obviously idempotent. OS functions depend on the state of the machine as a whole. 
The os.listdir() function returns values that depend on the use of functions such as os.unlink(), os.rename(), and open() (among several others).While the rules are generally simple, it requires a stateful object outside the narrow world of the code itself. These are examples of Python functions that don't completely fit the formal mathematical definition; they lack idempotence, and their values depend on history, other functions, or both. Function Definitions Python has two statements that are essential features of function definition. The def statement specifies the domain and the return statement(s) specify the range. A simplified gloss of the syntax is as follows: def name(params):   body   return expression In effect, the function's domain is defined by the parameters provided in the def statement. This list of parameter names is not all the information on the domain, however. Even if we use one of the Python extensions to add type annotations, that's still not all the information. There may be if statements in the body of the function that impose additional explicit restrictions. There may be other functions that impose their own kind of implicit restrictions. If, for example, the body included math.sqrt() then there would be an implicit restriction on some values being non-negative. The return statements provide the function's range. An empty return statement means a range of simply None values. When there are multiple return statements, the range is the union of the ranges on all the return statements. This mapping between Python syntax and mathematical concepts isn't very complete. We need more information about a function. Example definition Here's an example of function definition: def odd(n):   """odd(n) -> boolean, true if n is odd."""   return n % 2 == 1 What do does this definition tell us? Several things such as: Domain: We know that this function accepts n, a single object. Range: Boolean value, True if n is an odd number. This is the most likely interpretation. It's also remotely possible that the class of n has repurposed __mod__() or __rmod__() methods, in which case the semantics can be pretty obscure. Because of the inherent ambiguity in Python, this function has provided a triple-quoted """Docstring""" parameter with a summary of the function. This is a best practice, and should be followed universally except in articles like this where it gets too long-winded to include a docstring parameter everywhere. In this case, the doctoring parameter doesn't state unambiguously that n is intended to be a number. There are two ways to handle this gap, they are as follows: Actually include words like n is a number in the docstring parameter Include the docstring parameter test cases that show the required behavior Either is acceptable. Both are preferable. Using a function To complete this example, here's how we'd use this odd little function named odd(): >>> odd(3)True>>> odd(4)False This kind of example text can be included into the docstring parameter to create two test cases that offer insight into what the function really means. The lack of declarations More verbose type declarations—as used in many popular programming languages—aren't actually enough information to fully specify a function's domain and range. To be rigorously complete, we need type definitions that include optional predicates. Take a look at the following command: isinstance(n,int) and n >= 0 The assert statement is a good place for this kind of additional argument domain checking. 
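For example, we could fold the predicate shown above into the odd() function with an assert statement (a quick sketch for illustration; the original definition doesn't include this check):

def odd(n):
    """odd(n) -> boolean, true if n is odd."""
    # Domain check using the predicate above; added here only for illustration
    assert isinstance(n, int) and n >= 0, "n must be a non-negative integer"
    return n % 2 == 1

>>> odd(7)
True
>>> odd(-3)
Traceback (most recent call last):
  ...
AssertionError: n must be a non-negative integer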
This isn't the perfect solution because assert statements can be disabled very easily. It can help during design and testing and it can help people to read your code. The fussy formal declarations of data type used in other languages are not really needed in Python. Python replaces an up-front claim about required types with a runtime search for appropriate class methods. This works because each Python object has all the type information bound into it. Static compile-time type information is redundant, since the runtime type information is complete. A Python function definition is pretty spare. In includes the minimal amount of information about the function. There are no formal declaration of parameter types or return type. This odd little function will work with any object that implements the % operator: Generally, this means any object that implements __mod__() or __rmod__(). This means most subclasses of numbers.Number. It also means instances of any class that happen to provide these methods. That could become very weird, but still possible. We hesitate to think about non-numeric objects that work with the number-like % operator. Some Python features In Python, functions we declare are proper first-class objects. This means that they have attributes that can be assigned to variables and placed into collections. Quite a few clever things can be done with function objects. One of the most elegant things is to use a function as an argument or a return value from another function. The ability to do this means that we can easily create and use higher-order functions in Python. For folks who know languages such as C (and C++), functions aren't proper first-class objects. A pointer to a function, however, is a first class object in C. But the function itself is a block of code that can't easily be manipulated. We'll look at a number of simple ways in which we can write—and use—higher-order functions in Python. Functions are objects Consider the following command example: >>> not_even = odd>>> not_even(3)True We've assigned the odd little function object to a new variable, not_even. This creates an alias for a function. While this isn't always the best idea, there are times when we might want to provide an alternate name for a function as part of maintaining reverse compatibility with a previous release of a library. Using functions Consider the following function definition: def some_test(function, value):   print(function, value)   return function(value) This function's domain includes arguments named function and value. We can see that it prints the arguments, then applies the function argument to the given value. When we use the preceding function, it looks like this: >>> some_test(odd, 3)<function odd at 0x613978> 3True The some_test() function accepted a function as an argument. When we printed the function, we got a summary, <function odd at 0x613978>, that shows us some information about the object. We also show a summary of the argument value, 3. When we applied the function to a value, we got the expected result. We can—of course—extend this concept. In particular, we can apply a single function to many values. Higher-order Functions Higher-order functions become particularly useful when we apply them to collections of objects. The built-in map() function applies a simple function to each value in an argument sequence. Here's an example: >>> list(map(odd, [1,2,3,4]))[True, False, True, False] We've used the map() function to apply the odd() function to each value in the sequence. 
This is a lot like evaluating: >>> [odd(x) for x in [1,2,3,4]] We've created a list comprehension instead of applying a higher-order map() function. This is equivalent to the following command snippet: [odd(1), odd(2), odd(3), odd(4)] Here, we've manually applied the odd() function to each value in a sequence. Yes, that's a diesel engine alternator and some hoses: We'll use this alternator as a subject for some concrete examples of higher-order functions. Diesel engine background Some basic diesel engine mechanics. The following some basic information: The engine turns the alternator. The alternator generates pulses that drive the tachometer. Amongst other things, like charging the batteries. The alternator provides an indirect measurement of engine RPMs. Direct measurement would involve connecting to a small geared shaft. It's difficult and expensive. We already have a tachometer; it's just incorrect. The new alternator has new wheels. The ratios between engine and alternator have changed. We're not interested in installing a new tachometer. Instead, we'll create a conversion from a number on the tachometer, which is calibrated to the old alternator, to a proper number of engine RPMs. This has to allow the change in ratio between the original tachometer and the new tach. Let's collect some data and see what we can figure out about engine RPMs. New alternator First approximation: all we did was get new wheels. We can presume that the old tachometer was correct. Since the new wheel is smaller, we'll have higher alternator RPMs. That means higher readings on the old tachometer. Here's the key question: How far wrong are the RPMs? The old wheel was approximately 3.5 RPM and the new wheel is approximately 2.5 RPM. We can compute the potential ratio between what the tach says and what the engine is really doing: >>> 3.5/2.51.4>>> 1/_0.7142857142857143 That's nice. Is it right? Can we really just multiply and display RPMs by .7 to get actual engine RPMs? Let's create the conversion card first, then collect some more data. Use case Given RPM on the tachometer, what's the real RPM of the engine? Use the following command to find the RPM: def eng(r):   return r/1.4 Use it like the following: >>> eng(2100)1500.0 This seems useful. Tach says 2100, engine (theoretically) spinning at 1500, more or less. Let's confirm our hypothesis with some real data. Data collection Over a period of time, we recorded tachometer readings and actual RPMs using a visual RPM measuring device. The visual device requires a strip of reflective tape on one of the engine wheels. It uses a laser and counts returns per minute. Simple. Elegant. Accurate. It's really inconvenient. But it got some data we could digest. Skipping some boring statistics, we wind up with the following function that maps displayed RPMs to actual RPMs, such as this: def eng2(r):   return 0.7724*r**1.0134 Here's a sample result: >>> eng2(2100)1797.1291903589386 When tach says 2100, the engine is measured as spinning at about 1800 RPM. That's not quite the same as the theoretical model. But it's so close that it gives us a lot of confidence in this version. Of course, the number displayed is hideous. All that floating-point cruft is crazy. What can we do? Rounding is only part of the solution. We need to think through the use case. After all, we use this standing at the helm of the boat; how much detail is appropriate? Limits and ranges The engine has governors and only runs between 800 and 2500 RPM. There's a very tight limit here. 
Realistically, we're talking about this small range of values: >>> list(range(800, 2500, 200))[800, 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400] There's no sensible reason for proving any more detailed engine RPMs. It's a sailboat; top speed is 7.5 knots (Nautical miles per hour). Wind and current have far more impact on the boat speed than the difference between 1600 and 1700 RPMs. The tach can't be read to closer than 100-200 RPM. It's not digital, it's a red pointer near little tick lines. There's no reason to preserve more than a few bits of precision. Example of Tach translation Given the engine RPMs and the conversion function, we can deduce that the tachometer display will be between 1000 to 3200. This will map to engine RPMs in the range of about 800 to 2500. We can confirm this with a mapping like this: >>> list(map(eng2, range(1000,3200,200)))[847.3098694826986, 1019.258964596305, 1191.5942982618956, 1364.2609728487703, 1537.2178605443924, 1710.4329833319157, 1883.8807562755746, 2057.5402392829747, 2231.3939741669838, 2405.4271806626366, 2579.627182659544] We've applied the eng2() mapping from tach to engine RPM. For tach readings between 1000 and 3200 in steps of 200, we've computed the actual engine RPMs. For those who use spreadsheets a lot, the range() function is like filling a column with values. The map(eng2, …) function is like filling an adjacent column with a calculation. We've created the result of applying a function to each value of a given range. As shown, this is little difficult to use. We need to do a little more cleanup. What other function do we need to apply to the results? Round to 100 Here's a function that will round up to the nearest 100: def next100(n):   return int(round(n, -2)) We could call this a kind of composite function built from a partial application of round() and int() functions. If we map this function to the previous results, we get something a little easier to work with. How does this look? >>> tach= range(1000,3200,200)>>> list(map(next100, map(eng2, tach)))[800, 1000, 1200, 1400, 1500, 1700, 1900, 2100, 2200, 2400, 2600] This expression is a bit complex; let's break it down into three discrete steps: First, map the eng2() function to tach numbers between 1000 and 3200. The result is effectively a sequence of values (it's not actually a list, it's a generator, a potential list) Second, map the next100() function to results of previous mapping Finally, collect a single list object from the results We've applied two functions, eng2() and next100(), to a list of values. In principle, we've created a kind of composite function, next100○eng20(rpm). Python doesn't support function composition directly, hence the complex-looking map of map syntax. Interleave sequences of values The final step is to create a table that shows both the tachometer reading and the computed engine RPMs. We need to interleave the input and output values into a single list of pairs. Here are the tach readings we're working with, as a list: >>> tach= range(1000,3200,200) Here are the engine RPMs: >>> engine= list(map(next100,map(eng2,tach))) Here's how we can interleave the two to create something that shows our tachometer reading and engine RPMs: >>> list(zip(tach, engine))[(1000, 800), (1200, 1000), (1400, 1200), (1600, 1400), (1800, 1500), (2000, 1700),(2200, 1900), (2400, 2100), (2600, 2200), (2800, 2400), (3000, 2600)] The rest is pretty-printing. 
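If we wanted to finish the job, one possible way to pretty-print the pairs is a simple loop like this (a sketch only, not code from the original article):

>>> for t, e in zip(tach, engine):
...     print("{0:>6d} {1:>6d}".format(t, e))
  1000    800
  1200   1000
  1400   1200
...

Each row pairs a tachometer reading with the rounded engine RPM, ready to be copied onto a card kept at the helm.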
What's important is that we could take functions like eng() or eng2() and apply it to columns of numbers, creating columns of results. The map() function means that we don't have to write explicit for loops to simply apply a function to a sequence of values. Map is lazy We have a few other observations about the Python higher-order functions. First, these functions are lazy, they don't compute any results until required by other statements or expressions. Because they don't actually create intermediate list objects, they may be quite fast. The laziness feature is true for the built-in higher-order functions map() and filter(). It's also true for many of the functions in the itertools library. Many of these functions don't simply create a list object, they yield values as requested. For debugging purposes, we use list() to see what's being produced. If we don't apply list() to the result of a lazy function, we simply see that it's a lazy function. Here's an example: >>> map(lambda x:x*1.4, range(1000,3200,200))<map object at 0x102130610> We don't see a proper result here, because the lazy map() function didn't do anything. The list(), tuple(), or set() functions will force a lazy map() function to actually get up off the couch and compute something. Function Wrappers There are a number of Python functions which are syntactic sugar for method functions. One example is the len() function. This function behaves as if it had the following definition: def len(obj):   return obj.__len__() The function acts like it's simply invoking the object's built-in __len__() method. There are several Python functions that exist only to make the syntax a little more readable. Post-fix syntax purists would prefer to see syntax such as some_list.len(). Those who like their code to look a little more mathematical prefer len(some_list). Some people will go so far as to claim that the presence of prefix functions means that Python isn't strictly object-oriented. This is false; Python is very strictly object-oriented. It doesn't—however—use only postfix method notation. We can write function wrappers to make some method functions a little more palatable. Another good example is the divmod() function. This relies on two method functions, such as the following: a.__divmod__(b) b.__rdivmod__(a) The usual operator rules apply here. If the class for object a implements __divmod__(), then that's used to compute the result. If not, then the same test is made for the class of object b; if there's an implementation, that will be used to compute the results. Otherwise, it's undefined and we'll get an exception. Why wrap a method? Function wrappers for methods are syntactic sugar. They exist to make object methods look like simple functions. In some cases, the functional view is more succinct and expressive. Sometimes the object involved is obvious. For example, the os module functions provide access to OS-level libraries. The OS object is concealed inside the module. Sometimes the object is implied. For example, the random module makes a Random instance for us. We can simply call random.randint() without worrying about the object that was required for this to work properly. Lambdas A lambda is an anonymous function with a degenerate body. It's like a function in some respects and it's unlike a function because of the following two things: A lambda has no name A lambda has no statements A lambda's body is a single expression, nothing more. 
This expression can have parameters, however, which is why a lambda is a handy form of a callable function. The syntax is essentially as follows: lambda params : expression Here's a concrete example: lambda r: 0.7724*r**1.0134 You may recognize this as the eng2() function defined previously. We don't always need a complete, formal function. Sometimes, we just need an expression that has parameters. Speaking theoretically, a lambda is a one-argument function. When we have multi-argument functions, we can transform it to a series of one-argument lambda forms. This transformation can be helpful for optimization. None of that applies to Python. We'll move on. Using a Lambda with map Here are two equivalent results: map(eng2, tach) map(lambda r: 0.7724*r**1.0134, tach) Here's a previous example, using the lambda instead of the function: >>> tach= range(1000,3200,200)>>> list( map(lambda r: 0.7724*r**1.0134, tach))[847.3098694826986, 1019.258964596305, 1191.5942982618956, 1364.2609728487703, 1537.2178605443924, 1710.4329833319157, 1883.8807562755746, 2057.5402392829747, 2231.3939741669838, 2405.4271806626366, 2579.627182659544] You could scroll back to see that the results are the same. If we're doing a small thing once only, a lambda object might be more clear than a complete function definition. Emphasis here is on small once only. If we start trying to reuse a lambda object, or feel the need to assign a lambda object to a variable, we should really consider a function definition and the associated docstring and doctest features. Another use of Lambdas A common use of lambdas is with three other higher-order functions: sort(), min(), and max(). We might use one of these with a list object: list.sort(key= lambda x: expr) list.min(key= lambda x: expr) list.max(key= lambda x: expr) In each case, we're using a lambda object to embed an expression into the argument values for a function. In some cases, the expression might be very sophisticated; in other cases, it might be something as trivial as lambda x: x[1]. When the expression is trivial, a lambda object is a good idea. If the expression is going to get reused, however, a lambda object might be a bad idea. You can do this… But… The following kind of statement makes sense: some_name = lambda x: 3*x+1 We've created a callable object that takes a single argument value and returns a numeric value such as the following command snippet: def some_name(x): return 3*x+1. There are some differences. Most notably the following: A lambda object is all on one line of code. A possible advantage. There's no docstring. A disadvantage for lambdas of any complexity. Nor is there any doctest in the missing docstring. A significant problem for a lambda object that requires testing. There are ways to test lambdas with doctest outside a docstring, but it seems simpler to switch to a full function definition. We can't easily apply decorators to it. To do it, we lose the @decorator syntax. We can't use any Python statements in it. In particular, no try-except block is possible. For these reasons, we suggest limiting the use of lambdas to truly trivial situations. Callable Objects A callable object fits the model of a function. The unifying feature of all of the things we've looked at is that they're callable. Functions are the primary example of being callable but objects can also be callable. Callable objects can be subclasses of collections.abc.Callable. Because of Python's flexibility, this isn't a requirement, it's merely a good idea. 
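To illustrate that the subclass really is optional, here is a tiny callable that skips the abstract superclass entirely (a throwaway example, not from the original text):

class Doubler:
    def __call__(self, x):
        return 2 * x

>>> twice = Doubler()
>>> twice(21)
42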
To be callable, a class only needs to provide a __call__() method. Here's a complete callable class definition: from collections.abc import Callableclass Engine(Callable):   def __call__(self, tach):       return 0.7724*tach**1.0134 We've imported the collections.abc.Callable class. This will provide some assurance that any class that extends this abstract superclass will provide a definition for the __call__() method. This is a handy error-checking feature. Our class extends Callable by providing the needed __call__() method. In this case, the __call__() method performs a calculation on the single parameter value, returning a single result. Here's a callable object built from this class: eng= Engine() This creates a function that we can then use. We can evaluate eng(1000) to get the engine RPMs when the tach reads 1000. Callable objects step-by-step There are two parts to making a function a callable object. We'll emphasize these for folks who are new to object-oriented programming: Define a class. Generally, we make this a subclass of collections.abc.Callable. Technically, we only need to implement a __call__() method. It helps to use the proper superclass because it might help catch a few common mistakes. Create an instance of the class. This instance will be a callable object. The object that's created will be very similar to a defined function. And very similar to a lambda object that's been assigned to a variable. While it will be similar to a def statement, it will have one important additional feature: hysteresis. This can be the source of endless bugs. It can also be a way to improve performance. Callables can have hysteresis Here's an example of a callable object that uses hysteresis as a kind of optimization: class Factorial(Callable):   def __init__(self):       self.previous = {}   def __call__(self, n):       if n not in self.previous:           self.previous[n]= self.compute(n)       return self.previous[n]   def compute(self, n):       if n == 0 : return 1       return n*self.__call__(n-1)Here's how we can use this:>>> fact= Factorial()>>> fact(5)120 We create an instance of the class, and then call the instance to compute a value for us. The initializer The initialization method looks like this:    def __init__(self):       self.previous = {} This function creates a cache of previously computed values. This is a technique called memoization. If we've already computed a result once, it's in the self.previous cache; we don't need to compute it again, we already know the answer. The Callable interface The required __call__() method looks like this:    def __call__(self, n):       if n not in self.previous:           self.previous[n]= self.compute(n)       return self.previous[n] We've checked the memoization cache first. If the value is not there, we're forced to compute the answer, and insert it into the cache. The final answer is always a value in the cache. A common what if question is what if we have a function of multiple arguments? There are two minuscule changes to support more complex arguments. Use def __call__(self, *n): and self.compute(*n). Since we're only computing factorial, there's no need to over-generalize. The Compute method The essential computation has been allocated to a method called compute. It looks like this:    def compute(self, n):       if n == 0: return 1           return n*self.__call__(n-1) This does the real work of the callable object: it computes n!. In this case, we've used a pretty standard recursive factorial definition. 
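To watch the memoization happen, we can peek at the cache after a call; this is just an exploratory snippet (items shown sorted, since ordinary dict ordering isn't guaranteed):

>>> fact = Factorial()
>>> fact(5)
120
>>> sorted(fact.previous.items())
[(0, 1), (1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]

A second call to fact(5), or to any smaller argument, is now answered straight from self.previous without any recursion.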
This recursion relies on the __call__() method to check the cache for previous values. If we don't expect to compute values larger than 1000! (a 2,568 digit number, by the way) the recursion works nicely. If we think we need to compute really large factorials, we'll need to use a different approach. Execute the following code to compute very large factorials: functools.reduce(operator.mul, range(1,n+1)) Either way, we can depend on the internal memoization to leverage previous results. Note the potential issue Hysteresis—memory of what came before—is available to the callable objects. We call functions and lambdas stateless, where callable objects can be stateful. This may be desirable to optimize performance. We can memoize the previous results or we can design an object that's simply confusing. Consider a function like divmod() that returns two values. We could try to define a callable object that first returns the quotient and on the second call with the same arguments returns the remainder: >>> crazy_divmod(355,113)3>>> crazy_divmod(255,113)16 This is technically possible. But it's crazy. Warning: Stay away. We generally expect idempotence: functions do the same thing each time. Implementing memoization didn't alter the basic idempotence of our factorial function. Generator Functions Here's a fun generator, the Collatz function. The function creates a sequence using a simple pair of rules. We'll could call this rule, Half-Or-Three-Plus-One (HOTPO). We'll call it collatz(): def collatz(n):   if n % 2 == 0:        return n//2   else:       return 3*n+1 Each integer argument yields a next integer. These can form a chain. For example, if we start with collatz(13), we get 40. The value of collatz(40) is 20. Here's the sequence of values: 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1At 1, it loops: 1 → 4 → 2 → 1 … Interestingly, all chains seem to lead—eventually—to 1. To explore this, we need a simple function that will build a chain from a given starting value. Successive values Here's a generator function that will build a list object. This iterates through values in the sequence until it reaches 1, when it terminates: def col_list(n):   seq= [n]   while n != 1:       n= collatz(n)       seq.append(n)   return seq This is not wrong. But it's not really the most useful implementation. This always creates a sequence object. In many cases, we don't really want an object, we only want information about the sequence. We might only want the length, or the largest numbers, or the sum. This is where a generator function might be more useful. A generator function yields elements from the sequence instead of building the sequence as a single object. Generator functions To create a generator function, we write a function that has a loop; inside the loop, there's a yield statement. A function with a yield statement is effectively an Iterable object, it can be used in a for statement to produce values. It doesn't create a big list object, it creates the items that can be accumulated into a list or tuple object. A generator function is lazy: it doesn't compute anything unless forced to by another function needing results. We can iterate through as many (or as few) of the results as we need. For example, list(some_generator()) forces all values to be returned. For another example of a lazy generator, look at how range() objects work. If we evaluate range(10), we only get a generator. If we evaluate list(range(10)), we get a list object. 
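We can see the difference directly at the interactive prompt:

>>> range(10)
range(0, 10)
>>> list(range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

Nothing is materialized until list() (or a for statement) actually demands the values.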
The Collatz generator Here's a generator function that will produce sequences of values using the collatz() method rule shown previously: def col_iter(n):   yield n   while n != 1:       n= collatz(n)        yield n When we use this in a for loop or with the list() function, this will yield the argument number. While the number is not 1, it will apply the collatz() function and yield successive values in the chain. When it has yielded 1, it will will terminate. One common pattern for generator functions is to replace all list-accumulation statements with yield statements. Instead of building a list one time at a time, we yield each item. The collatz() function it lazy. We don't get an answer unless we use list() or tuple() or some variation of a for statement context. Using a generator function Here's how this function looks in practice: >>> for i in col_iter(3):…   print(i)3105168421 We've used the generator function in a for loop so that it will yield all of the values until it terminates. Collatz function sequences Now we can do some exploration of this Collatz sequence. Here are a few evaluations of the col_iter() function: >>> list(col_iter(3))[3, 10, 5, 16, 8, 4, 2, 1]>>> list(col_iter(5))[5, 16, 8, 4, 2, 1]>>> list(col_iter(6))[6, 3, 10, 5, 16, 8, 4, 2, 1]>>> list(syracuse_iter(13))[13, 40, 20, 10, 5, 16, 8, 4, 2, 1] There's an interesting pattern here. It seems that from 16, we know the rest. Generalizing this: from any number we've already seen, we know the rest. Wait. This means that memoization might be a big help in exploring the values created by this sequence. When we start combining function design patterns like this, we're doing functional programming. We're stepping outside the box of purely object-oriented Python. Alternate styles Here is an alternative version of the collatz() function: def collatz2(n):   return n//2 if n%2 == 0 else 3*n+1 This simply collapses the if statements into a single if expression and may not help much. We also have this: collatz3= lambda n: n//2 if n%2 == 0 else 3*n+1 We've collapsed the expression into a lambda object. Helpful? Perhaps not. On the other hand, the function doesn't really need all of the overhead of a full function definition and multiple statements. The lambda object seems to capture everything relevant. Functions as object There's a higher-level function that will produce values until some ending condition is met. We can plug in one of the versions of the collatz() function and a termination test into this general-purpose function: def recurse_until(ending, the_function, n):   yield n   while not ending(n):       n= the_function(n)       yield n This requires two plug-in functions, they are as follows: ending() is a function to test to see whether we're done, for example, lambda n: n==1 the_function() is a form of the Collatz function We've completely uncoupled the general idea of recursively applying a function from a specific function and a specific terminating condition. Using the recurs_until() function We can apply this higher-order recurse_until() function like this: >>> recurse_until(lambda n: n==1, syracuse2, 9)<generator object recurse_until at 0x1021278c0> What's that? That's how a lazy generator looks: it didn't return any values because we didn't demand any values. We need to use it in a loop or some kind of expression that iterates through all available values. The list() function, for example, will collect all of the values. 
Getting the list of values Here's how we make the lazy generator do some work: >>> list(_)[9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1] The _ variable is the previously computed value. It relieves us from the burden of having to write an assignment statement. We can write an expression, see the results, and know the results were automatically saved in the _ variable. Project Euler #14 Which starting number, under one million, produces the longest chain? Try it without memoization. It's really simple: >>> collatz_len= [len(list(recurse_until(lambda n: n==1, collatz2, i))) ... for i in range(1,11)]>>> results = zip(collatz_len, range(1,11))>>> max(results)(20, 9)>>> list(col_iter(9))[9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1] We defined collatz_len as a list. We're writing a list comprehension that shows the values built from a generator expression. The generator expression evaluates len(something) for i in range(1,11). This means we'll be collecting ten values into the list, each of which is the length of something. The something is a list object built from the recurse_until(lambda n: n==1, collatz2, i) function that we discussed. This will compute the sequence of Collatz values starting from i and proceeding until the value in the sequence is 1. We've zipped the lengths with the original values of i. This will create pairs of lengths and starting numbers. The maximum length will now have a starting value associated with it so that we can confirm that the results match our expectations. And yes, this Project Euler problem could—in principle—be solved in a single line of code. Will this scale to 100? 1,000? 1,000,000? How much will memoization help? Summary In this article, we've looked at five (or six) kinds of Python callables. They all fit the y = f(x) model of a function to varying degrees. When is it appropriate to use each of these different ways to express the same essential concept? Functions are created with def and return. It shouldn't come as a surprise that this should cover most cases. This allows a docstring comment and doctest test cases. We could call these def functions, since they're built with the def statement. Higher-order functions—map(), filter(), and the itertools library—are generally written as plain-old def functions. They're higher-order because they accept functions as arguments or return functions as results. Otherwise, they're just functions. Function wrappers—len(), divmod(), pow(), str(), and repr()—are function syntax wrapped around object methods. These are def'd functions with very tiny bodies. We use them because a.pow(2) doesn't seem as clear as pow(2,a). Lambdas are appropriate for one-time use of something so simple that it doesn't deserve being wrapped in a def statement body. In some cases, we have a small nugget of code that seems more clear when written as a lambda expression rather than a more complete function definition. Simple filter rules, and simple computations are often more clearly shown as a lambda object. The Callable objects have a special property that other functions lack: hysteresis. They can retain the results of previous calculations. We've used this hysteresis property to implement memoizing. This can be a huge performance boost. Callable objects can be used badly, however, to create objects that have simply bizarre behavior. Most functions should strive for idempotence—the same arguments should yield the same results. 
Generator functions are created with a def statement and at least one yield statement. These functions are iterable. They can be used in a for statement to examine each resulting value. They can also be used with functions like list(), tuple(), and set() to create an actual object from the iterable sequence of values. We might combine them with higher-order functions to do complex processing, one item at a time.

It's important to work with each of these kinds of callables. If you only have one tool—a hammer—then every problem has to be reshaped into a nail before you can solve it. Once you have multiple tools available, you can pick the tool that provides the most succinct and expressive solution to the problem.

Resources for Article:

Further resources on this subject:
Expert Python Programming [article]
Python Network Programming Cookbook [article]
Learning Python Design Patterns [article]

Android Virtual Device Manager

Packt
06 Feb 2015
8 min read
This article written by Belén Cruz Zapata, the author of the book Android Studio Essentials, teaches us the uses of the AVD Manager tool. It introduces us to the Google Play services. (For more resources related to this topic, see here.) The Android Virtual Device Manager (AVD Manager) is an Android tool accessible from Android Studio to manage the Android virtual devices that will be executed in the Android emulator. To open the AVD Manager from Android Studio, navigate to the Tools | Android | AVD Manager menu option. You can also click on the shortcut from the toolbar. The AVD Manager displays the list of the existing virtual devices. Since we have not created any virtual device, initially the list will be empty. To create our first virtual device, click on the Create Virtual Device button to open the configuration dialog. The first step is to select the hardware configuration of the virtual device. The hardware definitions are listed on the left side of the window. Select one of them, like the Nexus 5, to examine its details on the right side as shown in the following screenshot. Hardware definitions can be classified into one of these categories: Phone, Tablet, Wear or TV. We can also configure our own hardware device definitions from the AVD Manager. We can create a new definition using the New Hardware Profile button. The Clone Device button creates a duplicate of an existing device. Click on the New Hardware Profile button to examine the existing configuration parameters. The most important parameters that define a device are: Device Name: Name of the device. Screensize: Screen size in inches. This value determines the size category of the device. Type a value of 4.0 and notice how the Size value (on the right side) is normal. Now type a value of 7.0 and the Size field changes its value to large. This parameter along with the screen resolution also determines the density category. Resolution: Screen resolution in pixels. This value determines the density category of the device. Having a screen size of 4.0 inches, type a value of 768 x 1280 and notice how the density value is 400 dpi. Change the screen size to 6.0 inches and the density value changes to hdpi. Now change the resolution to 480 x 800 and the density value is mdpi. RAM: RAM memory size of the device. Input: Indicate if the home, back, or menu buttons of the device are available via software or hardware. Supported device states: Check the allowed states. Cameras: Select if the device has a front camera or a back camera. Sensors: Sensors available in the device: accelerometer, gyroscope, GPS, and proximity sensor. Default Skin: Select additional hardware controls. Create a new device with a screen size of 4.7 inches, a resolution of 800 x 1280, a RAM value of 500 MiB, software buttons, and both portrait and landscape states enabled. Name it as My Device. Click on the Finish button. The hardware definition has been added to the list of configurations. Click on the Next button to continue the creation of a new virtual device. The next step is to select the virtual device system image and the target Android platform. Each platform has its architecture, so the system images that are installed on your system will be listed along with the rest of the images that can be downloaded (Show downloadable system images box checked). Download and select one of the images of the Lollipop release and click on the Next button. Finally, the last step is to verify the configuration of the virtual device. 
Enter the name of the Android Virtual Device in the AVD Name field. Give the virtual device a meaningful name to recognize it easily, such as AVD_nexus5_api21. Click on the Show Advanced Settings button. The settings that we can configure for the virtual device are the following: Emulation Options: The Store a snapshot for faster startup option saves the state of the emulator in order to load faster the next time. The Use Host GPU tries to accelerate the GPU hardware to run the emulator faster. Custom skin definition: Select if additional hardware controls are displayed in the emulator. Memory and Storage: Select the memory parameters of the virtual device. Let the default values, unless a warning message is shown; in this case, follow the instructions of the message. For example, select 1536M for the RAM memory and 64 for the VM Heap. The Internal Storage can also be configured. Select for example: 200 MiB. Select the size of the SD Card or select a file to behave as the SD card. Device: Select one of the available device configurations. These configurations are the ones we tested in the layout editor preview. Select the Nexus 5 device to load its parameters in the dialog. Target: Select the device Android platform. We have to create one virtual device with the minimum platform supported by our application and another virtual device with the target platform of our application. For this first virtual device, select the target platform, Android 4.4.2 - API Level 19. CPU/ABI: Select the device architecture. The value of this field is set when we select the target platform. Each platform has its architecture, so if we do not have it installed, the following message will be shown; No system images installed for this target. To solve this, open the SDK Manager and search for one of the architectures of the target platform, ARM EABI v7a System Image or Intel x86 Atom System Image. Keyboard: Select if a hardware keyboard is displayed in the emulator. Check it. Skin: Select if additional hardware controls are displayed in the emulator. You can select the Skin with dynamic hardware controls option. Front Camera: Select if the emulator has a front camera or a back camera. The camera can be emulated or can be real by the use of a webcam from the computer. Select None for both cameras. Keyboard: Select if a hardware keyboard is displayed in the emulator. Check it. Network: Select the speed of the simulated network and select the delay in processing data across the network. The new virtual device is now listed in the AVD Manager. Select the recently created virtual device to enable the remaining actions: Start: Run the virtual device. Edit: Edit the virtual device configuration. Duplicate: Creates a new device configuration displaying the last step of the creation process. You can change its configuration parameters and then verify the new device. Wipe Data: Removes the user files from the virtual device. Show on Disk: Opens the virtual device directory on your system. View Details: Open a dialog detailing the virtual device characteristics. Delete: Delete the virtual device. Click on the Start button. The emulator will be opened as shown in the following screenshot. Wait until it is completely loaded, and then you will be able to try it. In Android Studio, open the main layout with the graphical editor and click on the list of the devices. 
As the following screenshot shows, our custom device definition appears and we can select it to preview the layout: Navigation Editor The Navigation Editor is a tool to create and structure the layouts of the application using a graphical viewer. To open this tool navigate to the Tools | Android | Navigation Editor menu. The tool opens a file in XML format named main.nvg.xml. This file is stored in your project at /.navigation/app/raw/. Since there is only one layout and one activity in our project, the navigation editor only shows this main layout. If you select the layout, detailed information about it is displayed on the right panel of the editor. If you double-click on the layout, the XML layout file will be opened in a new tab. We can create a new activity by right-mouse clicking on the editor and selecting the New Activity option. We can also add transitions from the controls of a layout by shift clicking on a control and then dragging to the target activity. Open the main layout and create a new button with the label Open Activity: <Button        android_id="@+id/button_open"        android_layout_width="wrap_content"        android_layout_height="wrap_content"        android_layout_below="@+id/button_accept"        android_layout_centerHorizontal="true"        android_text="Open Activity" /> Open the Navigation Editor and add a second activity. Now the navigation editor displays both activities as the next screenshot shows. Now we can add the navigation between them. Shift-drag from the new button of the main activity to the second activity. A blue line and a pink circle have been added to represent the new navigation. Select the navigation relationship to see its details on the right panel as shown in the following screenshot. The right panel shows the source the activity, the destination activity and the gesture that triggers the navigation. Now open our main activity class and notice the new code that has been added to implement the recently created navigation. The onCreate method now contains the following code: findViewById(R.id.button_open).setOnClickListener( new View.OnClickListener() { @Override public void onClick(View v) { MainActivity.this.startActivity( new Intent(MainActivity.this, Activity2.class)); } }); This code sets the onClick method of the new button, from where the second activity is launched. Summary This article thought us about the Navigation Editor tool. It also showed how to integrate the Google Play services with a project in Android Studio. In this article, we got acquainted to the AVD Manager tool. Resources for Article: Further resources on this subject: Android Native Application API [article] Creating User Interfaces [article] Android 3.0 Application Development: Multimedia Management [article]

Structural Equation Modeling and Confirmatory Factor Analysis

Packt
06 Feb 2015
30 min read
In this article by Paul Gerrard and Radia M. Johnson, the authors of Mastering Scientific Computation with R, we'll discuss the fundamental ideas underlying structural equation modeling, which are often overlooked in other books discussing structural equation modeling (SEM) in R, and then delve into how SEM is done in R. We will then discuss two R packages, OpenMx and lavaan. We can directly apply our discussion of the linear algebra underlying SEM using OpenMx. Because of this, we will go over OpenMx first. We will then discuss lavaan, which is probably more user friendly because it sweeps the matrices and linear algebra representations under the rug so that they are invisible unless the user really goes looking for them. Both packages continue to be developed and there will always be some features better supported in one of these packages than in the other. (For more resources related to this topic, see here.) SEM model fitting and estimation methods To ultimately find a good solution, software has to use trial and error to come up with an implied covariance matrix that matches the observed covariance matrix as well as possible. The question is what does "as well as possible" mean? The answer to this is that the software must try to minimize some particular criterion, usually some sort of discrepancy function. Just what that criterion is depends on the estimation method used. The most commonly used estimation methods in SEM include: Ordinary least squares (OLS) also called unweighted least squares Generalized least squares (GLS) Maximum likelihood (ML) There are a number of other estimation methods as well, some of which can be done in R, but here we will stick with describing the most common ones. In general, OLS is the simplest and computationally cheapest estimation method. GLS is computationally more demanding, and ML is computationally more intensive. We will see why this is, as we discuss the details of these estimation methods. Any SEM estimation method seeks to estimate model parameters that recreate the observed covariance matrix as well as possible. To evaluate how closely an implied covariance matrix matches an observed covariance matrix, we need a discrepancy function. If we assume multivariate normality of the observed variables, the following function can be used to assess discrepancy: In the preceding figure, R is the observed covariance matrix, C is the implied covariance matrix, and V is a weight matrix. The tr function refers to the trace function, which sums the elements of the main diagonal. The choice of V varies based on the SEM estimation method: For OLS, V = I For GLS, V = R-1 In the case of an ML estimation, we seek to minimize one of a number of similar criteria to describe ML, as follows: In the preceding figure, n is the number of variables. There are a couple of points worth noting here. GLS estimation inverts the observed correlation matrix, something computationally demanding with large matrices, but something that must only be done once. Alternatively, ML requires inversion of the implied covariance matrix, which changes with each iteration. Thus, each iteration requires the computationally demanding step of matrix inversion. With modern fast computers, this difference may not be noticeable, but with large SEM models, this might start to be quite time-consuming. Assessing SEM model fit The final question in an SEM model is how well the model explains the data. This is answered with the use of SEM measures of fit. 
Most of these measures are based on a chi-squared distribution. The fit criteria for GLS and ML (as well as a number of other estimation procedures such as asymptotic distribution-free methods) multiplied by N-1 is approximately chi-square distributed. Here, the capital N represents the number of observations in the dataset, as opposed to lower case n, which gives the number of variables. We compute degrees of freedom as the difference between the number of estimated parameters and the number of known covariances (that is, the total number of values in one triangle of an observed covariance matrix). This gives way to the first test statistic for SEM models, a chi-squared significance level comparing our chi-square value to some minimum chi-square threshold to achieve statistical significance. As with conventional chi-square testing, a chi-square value that is higher than some minimal threshold will reject the null hypothesis. Most experimental science features such as rejection supports the hypothesis of the experiment. This is not the case in SEM, where the null hypothesis is that the model fits the data. Thus, a non-significant chi-square is an indicator of model fit, whereas a significant chi-square rejects model fit. A notable limitation of this is that a greater sample size, greater N, will increase the chi-square value and will therefore increase the power to reject model fit. Thus, using conventional chi-squared testing will tend to support models developed in small samples and reject models developed in large samples. The choice an interpretation of fit measures is a contentious one in SEM literature. However, as can be seen, chi-square has limitations. As such, other model fit criteria were developed that do not penalize models that fit in large samples (some may penalize models fit to small samples though). There are over a dozen indices, but the most common fit indices and interpretation information are as follows: Comparative fit index: In this index, a higher value is better. Conventionally, a value of greater than 0.9 was considered an indicator of good model fit, but some might argue that a value of at least 0.95 is needed. This is relatively sample size insensitive. Root mean square error of approximation: A value of under 0.08 (smaller is better) is often considered necessary to achieve model fit. However, this fit measure is quite sample size sensitive, penalizing small sample studies. Tucker-Lewis index (Non-normed fit index): This is interpreted in a similar manner as the comparative fit index. Also, this is not very sample size sensitive. Standardized root mean square residual: In this index, a lower value is better. A value of 0.06 or less is considered needed for model fit. Also, this may penalize small samples. In the next section, we will show you how to actually fit SEM models in R and how to evaluate fit using fit measures. Using OpenMx and matrix specification of an SEM We went through the basic principles of SEM and discussed the basic computational approach by which this can be achieved. SEM remains an active area of research (with an entire journal devoted to it, Structural Equation Modeling), so there are many additional peculiarities, but rather than delving into all of them, we will start by delving into actually fitting an SEM model in R. 
Using OpenMx and matrix specification of an SEM

We went through the basic principles of SEM and discussed the basic computational approach by which this can be achieved. SEM remains an active area of research (with an entire journal, Structural Equation Modeling, devoted to it), so there are many additional peculiarities, but rather than delving into all of them, we will start by actually fitting an SEM model in R. OpenMx is not in the CRAN repository, but it is easily obtainable from the OpenMx website by typing the following in R:

source('http://openmx.psyc.virginia.edu/getOpenMx.R')

Summarizing the OpenMx approach

In this example, we will use OpenMx by specifying matrices as mentioned earlier. To fit an OpenMx model, we need to first specify the model and then tell the software to attempt to fit it. Model specification involves four components:

Specifying the model matrices, which has two parts: declaring starting values for the estimation, and declaring which values can be estimated and which are fixed
Telling OpenMx the algebraic relationship of the matrices that should produce an implied covariance matrix
Giving an instruction for the model fitting criterion
Providing a source of data

The R commands that correspond to each of these steps are:

mxMatrix
mxAlgebra
mxMLObjective
mxData

We will then pass the objects created with each of these commands to mxModel to create an SEM model.

Explaining an entire example

First, to make things simple, we will store the FALSE and TRUE logical values in single-letter variables, which will be convenient when we have matrices full of TRUE and FALSE values, as follows:

F <- FALSE
T <- TRUE

Specifying the model matrices

Specifying matrices is done with the mxMatrix function, which returns an MxMatrix object. (Note that the object starts with a capital "M" while the function starts with a lowercase "m.") Specifying an MxMatrix is much like specifying a regular R matrix, but MxMatrix objects have some additional components. The most notable difference is that there are actually two different matrices used to create an MxMatrix. The first is a matrix of starting values, and the second is a matrix that tells which starting values are free to be estimated and which are not. If a starting value is not freely estimable, then it is a fixed constant. Since the actual starting values that we choose do not really matter too much in this case, we will just pick one as a starting value for all parameters that we would like to be estimated.
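Before working through the full 14 x 14 matrices, it may help to see how the values and free arguments interact on a small scale. The following sketch builds a 2 x 2 MxMatrix in which only one cell is freely estimated; the numbers and the name "Tiny" are invented purely for illustration.

library(OpenMx)

# Starting values and the free/fixed pattern are supplied as two
# parallel matrices of the same shape.
tiny <- mxMatrix(
  type   = "Full",
  nrow   = 2,
  ncol   = 2,
  values = c(1,   0,
             0.5, 1),          # starting values
  free   = c(FALSE, FALSE,
             TRUE,  FALSE),    # only the 0.5 entry will be estimated
  byrow  = TRUE,
  name   = "Tiny"
)

The full A and S matrices that follow apply exactly the same idea, just scaled up to 14 rows and columns.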
Let's take a look at the following example: mx.A <- mxMatrix( type = "Full", nrow=14, ncol=14, #Provide the Starting Values values = c(    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0 ), #Tell R which values are free to be estimated    free = c(    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, F, T, F,    F, F, F, F, F, F, F, F, F, F, F, F, T, F,    F, F, F, F, F, F, F, F, F, F, F, F, T, F,    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, F, F,    F, F, F, F, F, F, F, F, F, F, F, F, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, F, F,    F, F, F, F, F, F, F, F, F, F, F, T, T, F ), byrow=TRUE,   #Provide a matrix name that will be used in model fitting name="A", ) We will now apply this same technique to the S matrix. Here, we will create two S matrices, S1 and S2. They differ simply in the starting values that they supply. We will later try to fit an SEM model using one matrix, and then the other to address problems with the first one. The difference is that S1 uses starting variances of 1 in the diagonal, and S2 uses starting variances of 5. Here, we will use the "symm" matrix type, which is a symmetric matrix. We could use the "full" matrix type, but by using "symm", we are saved from typing all of the symmetric values in the upper half of the matrix. 
Let's take a look at the following matrix: mx.S1 <- mxMatrix("Symm", nrow=14, ncol=14, values = c(    1,    0, 1,    0, 0, 1,    0, 1, 0, 1,    1, 0, 0, 0, 1,    0, 1, 0, 0, 0, 1,    0, 0, 1, 0, 0, 0, 1,    0, 0, 0, 1, 0, 1, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ),      free = c(    T,    F, T,    F, F, T,    F, T, F, T,    T, F, F, F, T,    F, T, F, F, F, T,    F, F, T, F, F, F, T,    F, F, F, T, F, T, F, T,    F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T ), byrow=TRUE, name="S" )   #The alternative, S2 matrix: mx.S2 <- mxMatrix("Symm", nrow=14, ncol=14, values = c(    5,    0, 5,    0, 0, 5,    0, 1, 0, 5,    1, 0, 0, 0, 5,    0, 1, 0, 0, 0, 5,    0, 0, 1, 0, 0, 0, 5,    0, 0, 0, 1, 0, 1, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5,    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5 ),         free = c(    T,    F, T,    F, F, T,    F, T, F, T,    T, F, F, F, T,    F, T, F, F, F, T,    F, F, T, F, F, F, T,    F, F, F, T, F, T, F, T,    F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, T,    F, F, F, F, F, F, F, F, F, F, F, F, F, T ), byrow=TRUE, name="S" ) mx.Filter <- mxMatrix("Full", nrow=11, ncol=14, values= c(        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,      0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0    ),    free=FALSE,    name="Filter",    byrow = TRUE ) And finally, we will create our identity and filter matrices the same way, as follows: mx.I <- mxMatrix("Full", nrow=14, ncol=14,    values= c(        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1    ),    free=FALSE,    byrow = TRUE,    name="I" ) Fitting the model Now, it is time to declare the model that we would like to fit using the mxModel command. This part includes steps 2 through step 4 mentioned earlier. Here, we will tell mxModel which matrices to use. 
We will then use the mxAlgebra command to tell R how the matrices should be combined to reproduce the implied covariance matrix. We will tell R to use ML estimation with the mxMLObjective command, and we will tell it to apply the estimation to a particular matrix algebra, which we named "C". This is simply the right-hand side of the McArdle-McDonald equation. Finally, we will tell R where to get the data to use in model fitting using the following code:

factorModel.1 <- mxModel("Political Democracy Model",
#Model Matrices
mx.A,
mx.S1,
mx.Filter,
mx.I,
#Model Fitting Instructions
mxAlgebra(Filter %*% solve(I-A) %*% S %*% t(solve(I - A)) %*% t(Filter), name="C"),
mxMLObjective("C", dimnames = names(PoliticalDemocracy)),
#Data to fit
mxData(cov(PoliticalDemocracy), type="cov", numObs=75)
)

Now, let's tell R to fit the model and summarize the results using mxRun, as follows:

summary(mxRun(factorModel.1))
Running Political Democracy Model
Error in summary(mxRun(factorModel.1)) : error in evaluating the argument 'object' in selecting a method for function 'summary': Error: The job for model 'Political Democracy Model' exited abnormally with the error message: Expected covariance matrix is non-positive-definite.

Uh oh! We got an error message telling us that the expected covariance matrix is not positive definite. Our observed covariance matrix is positive definite, but the implied covariance matrix (at least at first) is not. This happens because when we multiply our starting value matrices together as specified by the McArdle-McDonald equation, we get a starting implied covariance matrix. If we perform an eigenvalue decomposition of this starting implied covariance matrix, we will find that the last eigenvalue is negative. A negative variance does not make much sense, and this is what "not positive definite" refers to. The good news is that the problem lies only in our starting values, so we can fix it by modifying them. In this case, we can choose values of five along the diagonal of the S matrix and get a positive definite starting implied covariance matrix. We can rerun this using the mx.S2 matrix specified earlier, and the software will proceed as follows:

#Rerun with a positive definite matrix

factorModel.2 <- mxModel("Political Democracy Model",
#Model Matrices
mx.A,
mx.S2,
mx.Filter,
mx.I,
#Model Fitting Instructions
mxAlgebra(Filter %*% solve(I-A) %*% S %*% t(solve(I - A)) %*% t(Filter), name="C"),
mxMLObjective("C", dimnames = names(PoliticalDemocracy)),
#Data to fit
mxData(cov(PoliticalDemocracy), type="cov", numObs=75)
)

summary(mxRun(factorModel.2))

This should provide a solution. As can be seen from the previous code, the parameters solved in the model are returned as matrix components. Just like we had to figure out how to go from paths to matrices, we now have to figure out how to go from matrices to paths (the reverse problem). In the following screenshot, we show just the first few free parameters: The preceding screenshot tells us that the parameter estimated in the position of the tenth row and twelfth column of the A matrix is 2.18. This corresponds to a path from the twelfth variable in the A matrix, ind60, to the tenth variable, x2. Thus, the path coefficient from ind60 to x2 is 2.18.
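As an aside, the positive-definiteness problem we ran into can be checked before ever calling mxRun. The sketch below plugs plain numeric matrices holding the starting values into the right-hand side of the McArdle-McDonald equation and inspects the eigenvalues; A.start, S.start, and Filter.start are assumed here to be ordinary R matrices containing the same values passed to mxMatrix above, so this is an illustrative check rather than part of the OpenMx workflow.

# A.start, S.start, and Filter.start are assumed to be plain numeric
# matrices holding the starting values used above.
implied.start <- function(A, S, Filter) {
  I <- diag(nrow(A))
  Filter %*% solve(I - A) %*% S %*% t(solve(I - A)) %*% t(Filter)
}

C.start <- implied.start(A.start, S.start, Filter.start)

# Any negative eigenvalue means the starting implied covariance matrix
# is not positive definite, which is exactly what mxRun complained about.
eigen(C.start, only.values = TRUE)$values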
There are a few other pieces of information here. The first one tells us that the model has not converged but is "Mx status Green." This means that the model was still converging when it stopped running (that is, it did not fully converge), but an optimal solution was still found, and therefore the results are likely reliable. Model fit information is also provided, suggesting a pretty good model fit with a CFI of 0.99 and an RMSEA of 0.032.

This was a fair amount of work, and creating model matrices by hand from path diagrams can be quite tedious. For this reason, SEM fitting programs have generally adopted the ability to fit SEM by declaring paths rather than model matrices. OpenMx does allow declaration by paths, but applying model matrices has a few advantages. Principally, we get under the hood of SEM fitting. If we step back, we can see that OpenMx actually did very little for us that is specific to SEM. We told OpenMx how we wanted matrices multiplied together and which parameters of the matrices were free to be estimated. Instead of using the RAM specification, we could have passed the matrices of the LISREL or Bentler-Weeks models with the corresponding algebra to recreate an implied covariance matrix. This means that if we are trying to come up with our own matrix specification, reproduce prior research, or apply a new SEM matrix specification method published in the literature, OpenMx gives us the power to do it. Also, for educators wishing to teach the underlying mathematical ideas of SEM, OpenMx is a very powerful tool.

Fitting SEM models using lavaan

If we were to describe OpenMx as the SEM equivalent of having a well-stocked pantry and a full kitchen to create whatever you want, provided you have the time and know-how, we might regard lavaan as a large freezer full of prepackaged microwavable dinners. It does not allow quite as much flexibility as OpenMx because it sweeps much of the work that we did by hand in OpenMx under the rug. Lavaan does use an internal matrix representation, but the user never has to see it. It is this sweeping under the rug that makes lavaan generally much easier to use. It is worth adding that the list of prepackaged features built into lavaan with minimal additional programming challenges many commercial SEM packages.

The lavaan syntax

The key to describing lavaan models is the model syntax, as follows:

X =~ Y: Y is a manifestation of the latent variable X
Y ~ X: Y is regressed on X
Y ~~ X: The covariance between Y and X can be estimated
Y ~ 1: This estimates the intercept for Y (implicitly requires mean structure)
Y | a*t1 + b*t2: Y has two thresholds, namely a and b
Y ~ a * X: Y is regressed on X with coefficient a
Y ~ start(a) * X: Y is regressed on X; the starting value used for estimation is a

It may not be evident at first, but this model description language actually makes lavaan quite powerful. Wherever you have seen a or b in the previous examples, a variable or constant can be used in their place. The beauty of this is that multiple parameters can be constrained to be equal simply by assigning a single parameter name to them.
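For instance, the following short snippet (with invented variable names y1 to y3 and an invented label lambda) constrains two factor loadings to be equal simply by giving them the same label:

model.equal.loadings <- '
  # y2 and y3 share the label lambda, so their loadings are
  # constrained to be equal during estimation
  f1 =~ y1 + lambda*y2 + lambda*y3
'

Fitting this string with cfa() or sem() would estimate a single loading that is shared by y2 and y3.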
Using lavaan, we can fit a factor analysis model to our physical functioning dataset with only a few lines of code:

phys.func.data <- read.csv('phys_func.csv')[-1]
names(phys.func.data) <- LETTERS[1:20]

R has a built-in vector named LETTERS, which contains all of the capital letters of the English alphabet. The lowercase vector letters contains the lowercase alphabet. We will then describe our model using the lavaan syntax. Here, we have a model with three latent variables, our factors, each of which has its own manifest variables. Let's take a look at the following example:

model.definition.1 <- '
   #Factors
   Cognitive =~ A + Q + R + S
   Legs =~ B + C + D + H + I + J + M + N
   Arms =~ E + F + G + K + L + O + P + T

   #Correlations Between Factors
   Cognitive ~~ Legs
   Cognitive ~~ Arms
   Legs ~~ Arms
'

We then tell lavaan to fit the model as follows:

fit.phys.func <- cfa(model.definition.1, data=phys.func.data,
   ordered = c('A','B','C','D','E','F','G','H','I','J',
               'K','L','M','N','O','P','Q','R','S','T'))

In the previous code, we add an ordered = argument, which tells lavaan that some variables are ordinal in nature. In response, lavaan estimates polychoric correlations for these variables. Polychoric correlations assume that we binned a continuous variable into discrete categories, and attempt to explicitly model the correlations assuming that there is some continuous underlying variable. Part of this requires finding thresholds (placed on an arbitrary scale) between each categorical response (for example, threshold 1 falls between the responses of 1 and 2, and so on). By telling lavaan to treat some variables as categorical, lavaan will also know to use a special estimation method. Lavaan will use diagonally weighted least squares, which does not assume normality and uses the diagonals of the polychoric correlation matrix for weights in the discrepancy function. With five response options, it is questionable whether polychoric correlations are truly needed. Some analysts might argue that with many response options, the data can be treated as continuous, but here we use this method to show off lavaan's capabilities. All SEM models in lavaan use the lavaan command. Here, we use the cfa command, which is one of a number of wrapper functions for the lavaan command. Others include sem and growth. These commands differ in the default options passed to the lavaan command. (For full details, see the package documentation.) Summarizing the fitted model, we can see the loadings of each item on its factor as well as the factor intercorrelations. We can also see the thresholds between each category from the polychoric correlations:

summary(fit.phys.func)

We can also assess things such as model fit using the fitMeasures command, which covers most of the popularly used fit measures and even a few obscure ones. Here, we tell lavaan to extract just three measures of model fit:

fitMeasures(fit.phys.func, c('rmsea', 'cfi', 'srmr'))

Collectively, these measures suggest adequate model fit. It is worth noting here that the interpretation of fit measures largely comes from studies using maximum likelihood estimation, and there is some debate as to how well these interpretations generalize to other fitting methods. The lavaan package also has the capability to use other estimators that treat the data as truly continuous in nature. This particular dataset is far from multivariate normally distributed, so an estimator such as ML is probably not the best choice here. However, if we wanted to use it, the syntax would be as follows:

fit.phys.func.ML <- cfa(model.definition.1, data=phys.func.data, estimator = 'ML')

Comparing OpenMx to lavaan

It can be seen that lavaan has a much simpler syntax that allows us to rapidly specify basic SEM models. However, we were a bit unfair to OpenMx because we used a path model specification for lavaan and a matrix specification for OpenMx. The truth is that OpenMx is still probably a bit wordier than lavaan, but let's apply a path model specification in each to do a fair head-to-head comparison.
We will use the famous Holzinger-Swineford 1939 dataset here from the lavaan package to do our modeling, as follows: hs.dat <- HolzingerSwineford1939 We will create a new dataset with a shorter name so that we don't have to keep typing HozlingerSwineford1939. Explaining an example in lavaan We will learn to fit the Holzinger-Swineford model in this section. We will start by specifying the SEM model using the lavaan model syntax: hs.model.lavaan <- ' visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed   =~ x7 + x8 + x9   visual ~~ textual visual ~~ speed textual ~~ speed '   fit.hs.lavaan <- cfa(hs.model.lavaan, data=hs.dat, std.lv = TRUE) summary(fit.hs.lavaan) Here, we add the std.lv argument to the fit function, which fixes the variance of the latent variables to 1. We do this instead of constraining the first factor loading on each variable to 1. Only the model coefficients are included for ease of viewing in this book. The result is shown in the following model: > summary(fit.hs.lavaan) …                      Estimate Std.err Z-value P(>|z|) Latent variables: visual =~    x1               0.900   0.081   11.127   0.000    x2               0.498   0.077   6.429   0.000    x3              0.656   0.074   8.817   0.000 textual =~    x4               0.990   0.057   17.474   0.000    x5               1.102   0.063   17.576   0.000    x6               0.917   0.054   17.082   0.000 speed =~    x7               0.619   0.070   8.903   0.000    x8               0.731   0.066   11.090   0.000    x9               0.670   0.065   10.305   0.000   Covariances: visual ~~    textual           0.459   0.064   7.189   0.000    speed             0.471   0.073   6.461   0.000 textual ~~    speed             0.283   0.069   4.117   0.000 Let's compare these results with a model fit in OpenMx using the same dataset and SEM model. Explaining an example in OpenMx The OpenMx syntax for path specification is substantially longer and more explicit. Let's take a look at the following model: hs.model.open.mx <- mxModel("Holzinger Swineford", type="RAM",      manifestVars = names(hs.dat)[7:15], latentVars = c('visual', 'textual', 'speed'),    # Create paths from latent to observed variables mxPath(        from = 'visual',        to = c('x1', 'x2', 'x3'),    free = c(TRUE, TRUE, TRUE),    values = 1          ), mxPath(        from = 'textual',        to = c('x4', 'x5', 'x6'),        free = c(TRUE, TRUE, TRUE),        values = 1      ), mxPath(    from = 'speed',    to = c('x7', 'x8', 'x9'),    free = c(TRUE, TRUE, TRUE),    values = 1      ), # Create covariances among latent variables mxPath(    from = 'visual',    to = 'textual',    arrows=2,    free=TRUE      ), mxPath(        from = 'visual',        to = 'speed',        arrows=2,        free=TRUE      ), mxPath(        from = 'textual',        to = 'speed',        arrows=2,        free=TRUE      ), #Create residual variance terms for the latent variables mxPath(    from= c('visual', 'textual', 'speed'),    arrows=2, #Here we are fixing the latent variances to 1 #These two lines are like st.lv = TRUE in lavaan    free=c(FALSE,FALSE,FALSE),    values=1 ), #Create residual variance terms mxPath( from= c('x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9'),    arrows=2, ),    mxData(        observed=cov(hs.dat[,c(7:15)]),        type="cov",        numObs=301    ) )     fit.hs.open.mx <- mxRun(hs.model.open.mx) summary(fit.hs.open.mx) Here are the results of the OpenMx model fit, which look very similar to lavaan's. This gives a long output. 
For ease of viewing, only the most relevant parts of the output are included in the following model (the last column that R prints giving the standard error of estimates is also not shown here): > summary(fit.hs.open.mx) …   free parameters:                            name matrix     row     col Estimate Std.Error 1   Holzinger Swineford.A[1,10]     A     x1 visual 0.9011177 2   Holzinger Swineford.A[2,10]     A     x2 visual 0.4987688 3   Holzinger Swineford.A[3,10]     A     x3 visual 0.6572487 4   Holzinger Swineford.A[4,11]     A     x4 textual 0.9913408 5   Holzinger Swineford.A[5,11]     A     x5 textual 1.1034381 6   Holzinger Swineford.A[6,11]     A     x6 textual 0.9181265 7   Holzinger Swineford.A[7,12]     A     x7   speed 0.6205055 8   Holzinger Swineford.A[8,12]     A     x8 speed 0.7321655 9   Holzinger Swineford.A[9,12]     A     x9   speed 0.6710954 10   Holzinger Swineford.S[1,1]     S     x1     x1 0.5508846 11   Holzinger Swineford.S[2,2]     S     x2     x2 1.1376195 12   Holzinger Swineford.S[3,3]     S    x3     x3 0.8471385 13   Holzinger Swineford.S[4,4]     S     x4     x4 0.3724102 14   Holzinger Swineford.S[5,5]     S     x5     x5 0.4477426 15   Holzinger Swineford.S[6,6]     S     x6     x6 0.3573899 16   Holzinger Swineford.S[7,7]      S     x7     x7 0.8020562 17   Holzinger Swineford.S[8,8]     S     x8     x8 0.4893230 18   Holzinger Swineford.S[9,9]     S     x9     x9 0.5680182 19 Holzinger Swineford.S[10,11]     S visual textual 0.4585093 20 Holzinger Swineford.S[10,12]     S visual   speed 0.4705348 21 Holzinger Swineford.S[11,12]     S textual   speed 0.2829848 In summary, the results agree quite closely. For example, looking at the coefficient for the path going from the latent variable visual to the observed variable x1, lavaan gives an estimate of 0.900 while OpenMx computes a value of 0.901. Summary The lavaan package is user friendly, pretty powerful, and constantly adding new features. Alternatively, OpenMx has a steeper learning curve but tremendous flexibility in what it can do. Thus, lavaan is a bit like a large freezer full of prepackaged microwavable dinners, whereas OpenMx is like a well-stocked pantry with no prepared foods but a full kitchen that will let you prepare it if you have the time and the know-how. To run a quick analysis, it is tough to beat the simplicity of lavaan, especially given its wide range of capabilities. For large complex models, OpenMx may be a better choice. The methods covered here are useful to analyze statistical relationships when one has all of the data from events that have already occurred. Resources for Article: Further resources on this subject: Creating your first heat map in R [article] Going Viral [article] Introduction to S4 Classes [article]

Contexts and Dependency Injection in NetBeans

Packt
06 Feb 2015
18 min read
In this article by David R. Heffelfinger, the author of Java EE 7 Development with NetBeans 8, we will introduce Contexts and Dependency Injection (CDI) and other aspects of it. CDI can be used to simplify integrating the different layers of a Java EE application. For example, CDI allows us to use a session bean as a managed bean, so that we can take advantage of the EJB features, such as transactions, directly in our managed beans. In this article, we will cover the following topics: Introduction to CDI Qualifiers Stereotypes Interceptor binding types Custom scopes (For more resources related to this topic, see here.) Introduction to CDI JavaServer Faces (JSF) web applications employing CDI are very similar to JSF applications without CDI; the main difference is that instead of using JSF managed beans for our model and controllers, we use CDI named beans. What makes CDI applications easier to develop and maintain are the excellent dependency injection capabilities of the CDI API. Just as with other JSF applications, CDI applications use facelets as their view technology. The following example illustrates a typical markup for a JSF page using CDI: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html      >    <h:head>        <title>Create New Customer</title>    </h:head>    <h:body>        <h:form>            <h3>Create New Customer</h3>            <h:panelGrid columns="3">                <h:outputLabel for="firstName" value="First Name"/>                <h:inputText id="firstName" value="#{customer.firstName}"/>                <h:message for="firstName"/>                  <h:outputLabel for="middleName" value="Middle Name"/>                <h:inputText id="middleName"                  value="#{customer.middleName}"/>                <h:message for="middleName"/>                  <h:outputLabel for="lastName" value="Last Name"/>                <h:inputText id="lastName" value="#{customer.lastName}"/>                <h:message for="lastName"/>                  <h:outputLabel for="email" value="Email Address"/>                <h:inputText id="email" value="#{customer.email}"/>                <h:message for="email"/>                <h:panelGroup/>                <h:commandButton value="Submit"                  action="#{customerController.navigateToConfirmation}"/>            </h:panelGrid>        </h:form>    </h:body> </html> As we can see, the preceding markup doesn't look any different from the markup used for a JSF application that does not use CDI. The page renders as follows (shown after entering some data): In our page markup, we have JSF components that use Unified Expression Language expressions to bind themselves to CDI named bean properties and methods. 
Let's take a look at the customer bean first: package com.ensode.cdiintro.model;   import java.io.Serializable; import javax.enterprise.context.RequestScoped; import javax.inject.Named;   @Named @RequestScoped public class Customer implements Serializable {      private String firstName;    private String middleName;    private String lastName;    private String email;      public Customer() {    }      public String getFirstName() {        return firstName;    }      public void setFirstName(String firstName) {        this.firstName = firstName;    }      public String getMiddleName() {        return middleName;    }      public void setMiddleName(String middleName) {        this.middleName = middleName;    }      public String getLastName() {        return lastName;    }      public void setLastName(String lastName) {        this.lastName = lastName;    }      public String getEmail() {        return email;    }      public void setEmail(String email) {        this.email = email;    } } The @Named annotation marks this class as a CDI named bean. By default, the bean's name will be the class name with its first character switched to lowercase (in our example, the name of the bean is "customer", since the class name is Customer). We can override this behavior if we wish by passing the desired name to the value attribute of the @Named annotation, as follows: @Named(value="customerBean") A CDI named bean's methods and properties are accessible via facelets, just like regular JSF managed beans. Just like JSF managed beans, CDI named beans can have one of several scopes as listed in the following table. The preceding named bean has a scope of request, as denoted by the @RequestScoped annotation. Scope Annotation Description Request @RequestScoped Request scoped beans are shared through the duration of a single request. A single request could refer to an HTTP request, an invocation to a method in an EJB, a web service invocation, or sending a JMS message to a message-driven bean. Session @SessionScoped Session scoped beans are shared across all requests in an HTTP session. Each user of an application gets their own instance of a session scoped bean. Application @ApplicationScoped Application scoped beans live through the whole application lifetime. Beans in this scope are shared across user sessions. Conversation @ConversationScoped The conversation scope can span multiple requests, and is typically shorter than the session scope. Dependent @Dependent Dependent scoped beans are not shared. Any time a dependent scoped bean is injected, a new instance is created. As we can see, CDI has equivalent scopes to all JSF scopes. Additionally, CDI adds two additional scopes. The first CDI-specific scope is the conversation scope, which allows us to have a scope that spans across multiple requests, but is shorter than the session scope. The second CDI-specific scope is the dependent scope, which is a pseudo scope. A CDI bean in the dependent scope is a dependent object of another object; beans in this scope are instantiated when the object they belong to is instantiated and they are destroyed when the object they belong to is destroyed. Our application has two CDI named beans. We already discussed the customer bean. 
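Before moving on to the second bean, here is a brief illustration of the conversation scope mentioned in the preceding table, since it is the one CDI-specific scope application developers are most likely to use directly. The class below is hypothetical (it is not part of our example application); it shows how a bean can inject the javax.enterprise.context.Conversation object to promote itself from a transient to a long-running conversation and end it later:

package com.ensode.cdiintro.model;

import java.io.Serializable;
import javax.enterprise.context.Conversation;
import javax.enterprise.context.ConversationScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@ConversationScoped
public class CheckoutWizard implements Serializable {

    @Inject
    private Conversation conversation;

    public void start() {
        if (conversation.isTransient()) {
            conversation.begin(); //promote to a long-running conversation
        }
    }

    public void finish() {
        if (!conversation.isTransient()) {
            conversation.end(); //the bean is destroyed when the conversation ends
        }
    }
}

With that aside, let's get back to our example application.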
The other CDI named bean in our application is the controller bean: package com.ensode.cdiintro.controller;   import com.ensode.cdiintro.model.Customer; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.inject.Named;   @Named @RequestScoped public class CustomerController {      @Inject    private Customer customer;      public Customer getCustomer() {        return customer;    }      public void setCustomer(Customer customer) {        this.customer = customer;    }      public String navigateToConfirmation() {        //In a real application we would        //Save customer data to the database here.          return "confirmation";    } } In the preceding class, an instance of the Customer class is injected at runtime; this is accomplished via the @Inject annotation. This annotation allows us to easily use dependency injection in CDI applications. Since the Customer class is annotated with the @RequestScoped annotation, a new instance of Customer will be injected for every request. The navigateToConfirmation() method in the preceding class is invoked when the user clicks on the Submit button on the page. The navigateToConfirmation() method works just like an equivalent method in a JSF managed bean would, that is, it returns a string and the application navigates to an appropriate page based on the value of that string. Like with JSF, by default, the target page's name with an .xhtml extension is the return value of this method. For example, if no exceptions are thrown in the navigateToConfirmation() method, the user is directed to a page named confirmation.xhtml: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html      >    <h:head>        <title>Success</title>    </h:head>    <h:body>        New Customer created successfully.        <h:panelGrid columns="2" border="1" cellspacing="0">            <h:outputLabel for="firstName" value="First Name"/>            <h:outputText id="firstName" value="#{customer.firstName}"/>              <h:outputLabel for="middleName" value="Middle Name"/>            <h:outputText id="middleName"              value="#{customer.middleName}"/>              <h:outputLabel for="lastName" value="Last Name"/>            <h:outputText id="lastName" value="#{customer.lastName}"/>              <h:outputLabel for="email" value="Email Address"/>            <h:outputText id="email" value="#{customer.email}"/>          </h:panelGrid>    </h:body> </html> Again, there is nothing special we need to do to access the named beans properties from the preceding markup. It works just as if the bean was a JSF managed bean. The preceding page renders as follows: As we can see, CDI applications work just like JSF applications. However, CDI applications have several advantages over JSF, for example (as we mentioned previously) CDI beans have additional scopes not found in JSF. Additionally, using CDI allows us to decouple our Java code from the JSF API. Also, as we mentioned previously, CDI allows us to use session beans as named beans. Qualifiers In some instances, the type of bean we wish to inject into our code may be an interface or a Java superclass, but we may be interested in injecting a subclass or a class implementing the interface. For cases like this, CDI provides qualifiers we can use to indicate the specific type we wish to inject into our code. 
A CDI qualifier is an annotation that must be decorated with the @Qualifier annotation. This annotation can then be used to decorate the specific subclass or interface. In this section, we will develop a Premium qualifier for our customer bean; premium customers could get perks that are not available to regular customers, for example, discounts. Creating a CDI qualifier with NetBeans is very easy; all we need to do is go to File | New File, select the Contexts and Dependency Injection category, and select the Qualifier Type file type. In the next step in the wizard, we need to enter a name and a package for our qualifier. After these two simple steps, NetBeans generates the code for our qualifier: package com.ensode.cdiintro.qualifier;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.PARAMETER; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.inject.Qualifier;   @Qualifier @Retention(RUNTIME) @Target({METHOD, FIELD, PARAMETER, TYPE}) public @interface Premium { } Qualifiers are standard Java annotations. Typically, they have retention of runtime and can target methods, fields, parameters, or types. The only difference between a qualifier and a standard annotation is that qualifiers are decorated with the @Qualifier annotation. Once we have our qualifier in place, we need to use it to decorate the specific subclass or interface implementation, as shown in the following code: package com.ensode.cdiintro.model;   import com.ensode.cdiintro.qualifier.Premium; import javax.enterprise.context.RequestScoped; import javax.inject.Named;   @Named @RequestScoped @Premium public class PremiumCustomer extends Customer {      private Integer discountCode;      public Integer getDiscountCode() {        return discountCode;    }      public void setDiscountCode(Integer discountCode) {        this.discountCode = discountCode;    } } Once we have decorated the specific instance we need to qualify, we can use our qualifiers in the client code to specify the exact type of dependency we need: package com.ensode.cdiintro.controller;   import com.ensode.cdiintro.model.Customer; import com.ensode.cdiintro.model.PremiumCustomer; import com.ensode.cdiintro.qualifier.Premium;   import java.util.logging.Level; import java.util.logging.Logger; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.inject.Named;   @Named @RequestScoped public class PremiumCustomerController {      private static final Logger logger = Logger.getLogger(            PremiumCustomerController.class.getName());    @Inject    @Premium    private Customer customer;      public String saveCustomer() {          PremiumCustomer premiumCustomer =          (PremiumCustomer) customer;          logger.log(Level.INFO, "Saving the following information n"                + "{0} {1}, discount code = {2}",                new Object[]{premiumCustomer.getFirstName(),                    premiumCustomer.getLastName(),                    premiumCustomer.getDiscountCode()});          //If this was a real application, we would have code to save        //customer data to the database here.          
return "premium_customer_confirmation";    } } Since we used our @Premium qualifier to decorate the customer field, an instance of the PremiumCustomer class is injected into that field. This is because this class is also decorated with the @Premium qualifier. As far as our JSF pages go, we simply access our named bean as usual using its name, as shown in the following code; <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html      >    <h:head>        <title>Create New Premium Customer</title>    </h:head>    <h:body>        <h:form>            <h3>Create New Premium Customer</h3>            <h:panelGrid columns="3">                <h:outputLabel for="firstName" value="First Name"/>                 <h:inputText id="firstName"                    value="#{premiumCustomer.firstName}"/>                <h:message for="firstName"/>                  <h:outputLabel for="middleName" value="Middle Name"/>                <h:inputText id="middleName"                     value="#{premiumCustomer.middleName}"/>                <h:message for="middleName"/>                  <h:outputLabel for="lastName" value="Last Name"/>                <h:inputText id="lastName"                    value="#{premiumCustomer.lastName}"/>                <h:message for="lastName"/>                  <h:outputLabel for="email" value="Email Address"/>                <h:inputText id="email"                    value="#{premiumCustomer.email}"/>                <h:message for="email"/>                  <h:outputLabel for="discountCode" value="Discount Code"/>                <h:inputText id="discountCode"                    value="#{premiumCustomer.discountCode}"/>                <h:message for="discountCode"/>                   <h:panelGroup/>                <h:commandButton value="Submit"                      action="#{premiumCustomerController.saveCustomer}"/>            </h:panelGrid>        </h:form>    </h:body> </html> In this example, we are using the default name for our bean, which is the class name with the first letter switched to lowercase. Now, we are ready to test our application: After submitting the page, we can see the confirmation page. Stereotypes A CDI stereotype allows us to create new annotations that bundle up several CDI annotations. For example, if we need to create several CDI named beans with a scope of session, we would have to use two annotations in each of these beans, namely @Named and @SessionScoped. Instead of having to add two annotations to each of our beans, we could create a stereotype and annotate our beans with it. To create a CDI stereotype in NetBeans, we simply need to create a new file by selecting the Contexts and Dependency Injection category and the Stereotype file type. Then, we need to enter a name and package for our new stereotype. 
At this point, NetBeans generates the following code: package com.ensode.cdiintro.stereotype;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.enterprise.inject.Stereotype;   @Stereotype @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface NamedSessionScoped { } Now, we simply need to add the CDI annotations that we want the classes annotated with our stereotype to use. In our case, we want them to be named beans and have a scope of session; therefore, we add the @Named and @SessionScoped annotations as shown in the following code: package com.ensode.cdiintro.stereotype;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.enterprise.context.SessionScoped; import javax.enterprise.inject.Stereotype; import javax.inject.Named;   @Named @SessionScoped @Stereotype @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface NamedSessionScoped { } Now we can use our stereotype in our own code: package com.ensode.cdiintro.beans;   import com.ensode.cdiintro.stereotype.NamedSessionScoped; import java.io.Serializable;   @NamedSessionScoped public class StereotypeClient implements Serializable {      private String property1;    private String property2;      public String getProperty1() {        return property1;    }      public void setProperty1(String property1) {        this.property1 = property1;    }      public String getProperty2() {        return property2;    }      public void setProperty2(String property2) {        this.property2 = property2;    } } We annotated the StereotypeClient class with our NamedSessionScoped stereotype, which is equivalent to using the @Named and @SessionScoped annotations. Interceptor binding types One of the advantages of EJBs is that they allow us to easily perform aspect-oriented programming (AOP) via interceptors. CDI allows us to write interceptor binding types; this lets us bind interceptors to beans and the beans do not have to depend on the interceptor directly. Interceptor binding types are annotations that are themselves annotated with @InterceptorBinding. Creating an interceptor binding type in NetBeans involves creating a new file, selecting the Contexts and Dependency Injection category, and selecting the Interceptor Binding Type file type. Then, we need to enter a class name and select or enter a package for our new interceptor binding type. At this point, NetBeans generates the code for our interceptor binding type: package com.ensode.cdiintro.interceptorbinding;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Inherited; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.interceptor.InterceptorBinding;   @Inherited @InterceptorBinding @Retention(RUNTIME) @Target({METHOD, TYPE}) public @interface LoggingInterceptorBinding { } The generated code is fully functional; we don't need to add anything to it. 
In order to use our interceptor binding type, we need to write an interceptor and annotate it with our interceptor binding type, as shown in the following code: package com.ensode.cdiintro.interceptor;   import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding; import java.io.Serializable; import java.util.logging.Level; import java.util.logging.Logger; import javax.interceptor.AroundInvoke; import javax.interceptor.Interceptor; import javax.interceptor.InvocationContext;   @LoggingInterceptorBinding @Interceptor public class LoggingInterceptor implements Serializable{      private static final Logger logger = Logger.getLogger(            LoggingInterceptor.class.getName());      @AroundInvoke    public Object logMethodCall(InvocationContext invocationContext)            throws Exception {          logger.log(Level.INFO, new StringBuilder("entering ").append(                invocationContext.getMethod().getName()).append(                " method").toString());          Object retVal = invocationContext.proceed();          logger.log(Level.INFO, new StringBuilder("leaving ").append(                invocationContext.getMethod().getName()).append(                " method").toString());          return retVal;    } } As we can see, other than being annotated with our interceptor binding type, the preceding class is a standard interceptor similar to the ones we use with EJB session beans. In order for our interceptor binding type to work properly, we need to add a CDI configuration file (beans.xml) to our project. Then, we need to register our interceptor in beans.xml as follows: <?xml version="1.0" encoding="UTF-8"?> <beans               xsi_schemaLocation="http://>    <interceptors>          <class>        com.ensode.cdiintro.interceptor.LoggingInterceptor      </class>    </interceptors> </beans> To register our interceptor, we need to set bean-discovery-mode to all in the generated beans.xml and add the <interceptor> tag in beans.xml, with one or more nested <class> tags containing the fully qualified names of our interceptors. The final step before we can use our interceptor binding type is to annotate the class to be intercepted with our interceptor binding type: package com.ensode.cdiintro.controller;   import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding; import com.ensode.cdiintro.model.Customer; import com.ensode.cdiintro.model.PremiumCustomer; import com.ensode.cdiintro.qualifier.Premium; import java.util.logging.Level; import java.util.logging.Logger; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.inject.Named;   @LoggingInterceptorBinding @Named @RequestScoped public class PremiumCustomerController {      private static final Logger logger = Logger.getLogger(            PremiumCustomerController.class.getName());    @Inject    @Premium    private Customer customer;      public String saveCustomer() {          PremiumCustomer premiumCustomer = (PremiumCustomer) customer;          logger.log(Level.INFO, "Saving the following information n"                + "{0} {1}, discount code = {2}",                new Object[]{premiumCustomer.getFirstName(),                    premiumCustomer.getLastName(),                    premiumCustomer.getDiscountCode()});          //If this was a real application, we would have code to save        //customer data to the database here.          return "premium_customer_confirmation";    } } Now, we are ready to use our interceptor. 
After executing the preceding code and examining the GlassFish log, we can see our interceptor binding type in action. The lines "entering saveCustomer method" and "leaving saveCustomer method" were added to the log by our interceptor, which was indirectly invoked by our interceptor binding type.

Custom scopes

In addition to providing several prebuilt scopes, CDI allows us to define our own custom scopes. This functionality is primarily meant for developers building frameworks on top of CDI, not for application developers. Nevertheless, NetBeans provides a wizard for us to create our own CDI custom scopes. To create a new CDI custom scope, we need to go to File | New File, select the Contexts and Dependency Injection category, and select the Scope Type file type. Then, we need to enter a package and a name for our custom scope. After clicking on Finish, our new custom scope is created, as shown in the following code:

package com.ensode.cdiintro.scopes;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.inject.Scope;

@Inherited
@Scope // or @javax.enterprise.context.NormalScope
@Retention(RUNTIME)
@Target({METHOD, FIELD, TYPE})
public @interface CustomScope {
}

To actually use our scope in our CDI applications, we would need to create a custom context, which, as mentioned previously, is primarily a concern for framework developers and not for Java EE application developers. Therefore, it is beyond the scope of this article. Interested readers can refer to JBoss Weld CDI for Java Platform, Ken Finnigan, Packt Publishing. (JBoss Weld is a popular CDI implementation and it is included with GlassFish.)

Summary

In this article, we covered NetBeans support for CDI, a new Java EE API introduced in Java EE 6. We provided an introduction to CDI and explained the additional functionality that the CDI API provides over standard JSF. We also covered how to disambiguate CDI injected beans via CDI qualifiers. Additionally, we covered how to group together CDI annotations via CDI stereotypes. We also saw how CDI can help us with AOP via interceptor binding types. Finally, we covered how NetBeans can help us create custom CDI scopes.

Resources for Article:

Further resources on this subject:

Java EE 7 Performance Tuning and Optimization [article]
Java EE 7 Developer Handbook [article]
Java EE 7 with GlassFish 4 Application Server [article]

Hyper-V Basics

Packt
06 Feb 2015
10 min read
This article by Vinith Menon, the author of Microsoft Hyper-V PowerShell Automation, delves into the basics of Hyper-V, right from installing Hyper-V to resizing virtual hard disks. The Hyper-V PowerShell module includes several significant features that extend its use, improve its usability, and allow you to manage your Hyper-V environment with more granular control. Various organizations have moved on from Hyper-V (V2) to Hyper-V (V3). In Hyper-V (V2), the Hyper-V management shell was not built in and the PowerShell module had to be installed manually. In Hyper-V (V3), Microsoft provides an exhaustive set of cmdlets that can be used to manage and automate all configuration activities of the Hyper-V environment. The cmdlets are executed across the network using Windows Remote Management.

In this article, we will cover:

The basics of setting up a Hyper-V environment using PowerShell
The fundamental concepts of Hyper-V management with the Hyper-V management shell
The updated features in Hyper-V

(For more resources related to this topic, see here.)

Windows Server 2012 R2 introduced a long list of new Hyper-V features. We will be going in depth through the important changes that have come into the Hyper-V PowerShell module with the following features and functions:

Shared virtual hard disks
Resizing the live virtual hard disk

Installing and configuring your Hyper-V environment

Installing and configuring Hyper-V using PowerShell

Before you proceed with the installation and configuration of Hyper-V, there are some prerequisites that need to be taken care of:

The user account that is used to install the Hyper-V role should have administrative privileges on the computer
There should be enough RAM on the server to run newly created virtual machines

Once the prerequisites have been taken care of, let's start with installing the Hyper-V role:

Open a PowerShell prompt in the Run as Administrator mode.
Type the following into the PowerShell prompt to install the Hyper-V role along with the management tools; once the installation is complete, the Hyper-V server will reboot and the Hyper-V role will be successfully installed:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Once the server boots up, verify the installation of Hyper-V using the Get-WindowsFeature cmdlet:

Get-WindowsFeature -Name hyper*

You will be able to see that the Hyper-V role, the Hyper-V PowerShell management shell, and the GUI management tools are successfully installed:
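Install-WindowsFeature is only available on server operating systems. As a side note, the hedged sketch below shows the usual alternative on a client OS such as Windows 8.1, plus a quick way to confirm that the Hyper-V module is responding once the role is in place; the paths reported by Get-VMHost will of course differ on your system.

# On a client OS, the optional-feature cmdlet replaces Install-WindowsFeature
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# A quick sanity check that the Hyper-V module is loaded and responding:
# review the default paths used for new virtual machines and virtual disks
Get-VMHost | Format-List VirtualMachinePath, VirtualHardDiskPath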
Fundamental concepts of Hyper-V management with the Hyper-V management shell

In this section, we will look at some of the fundamental concepts of Hyper-V management with the Hyper-V management shell. Once you get the Hyper-V role installed as per the steps illustrated in the previous section, a PowerShell module to manage your Hyper-V environment will also get installed. Now, perform the following steps:

Open a PowerShell prompt in the Run as Administrator mode. PowerShell uses cmdlets that are built using a verb-noun naming system (for more details, refer to Learning Windows PowerShell Names at http://technet.microsoft.com/en-us/library/dd315315.aspx).
Type the following command into the PowerShell prompt to get a list of all the cmdlets in the Hyper-V PowerShell module:

Get-Command -Module Hyper-V

Hyper-V in Windows Server 2012 R2 ships with about 178 cmdlets. These cmdlets allow a Hyper-V administrator to handle everything from very simple, basic tasks to advanced ones such as setting up a Hyper-V replica for virtual machine disaster recovery. To get the count of all the available Hyper-V cmdlets, you can type the following command in PowerShell:

Get-Command -Module Hyper-V | Measure-Object

The Hyper-V PowerShell cmdlets follow a very simple approach and are very user friendly: the cmdlet name itself tells the Hyper-V administrator about its functionality. The following screenshot shows the output of the Get-Command call. For example, in the following screenshot, the Remove-VMSwitch cmdlet name itself says that it is used to delete a previously created virtual machine switch:

If the administrator is still not sure about the task that can be performed by a cmdlet, he or she can get help with detailed examples using the Get-Help cmdlet. To get help on a cmdlet, type the cmdlet name in the prescribed format. To make sure that the latest version of the help files is installed on the server, run the Update-Help cmdlet before executing the following cmdlet:

Get-Help <Hyper-V cmdlet> -Full

The following screenshot is an example of the Get-Help cmdlet:

Shared virtual hard disks

This new and improved feature in Windows Server 2012 R2 allows an administrator to share a virtual hard disk file (the .vhdx file format) between multiple virtual machines. These .vhdx files can be used as shared storage for a failover cluster created between virtual machines (also known as guest clustering). A shared virtual hard disk allows you to create data disks and witness disks using .vhdx files, with some advantages:

Shared disks are ideal for SQL database files and file servers
Shared disks can be run on generation 1 and generation 2 virtual machines

This new feature allows you to save on storage costs and use .vhdx files for guest clustering, enabling easier deployment than virtual Fibre Channel or Internet Small Computer System Interface (iSCSI), which are complicated and require storage configuration changes such as zoning and Logical Unit Number (LUN) masking. In Windows Server 2012 R2, virtual iSCSI disks (both shared and unshared virtual hard disk files) show up as virtual SAS disks when you add an iSCSI hard disk to a virtual machine. Shared virtual hard disk (.vhdx) files can be placed on Cluster Shared Volumes (CSV) or on a Scale-Out File Server cluster.

Let's look at the ways you can automate and manage your shared .vhdx guest clustering configuration using PowerShell. In the following example, we will demonstrate how you can create a two-node file server cluster using the shared VHDX feature and set up a testing environment within which we can start learning these new features. The steps are as follows:

We will start by creating two virtual machines, each with a 50 GB OS drive that contains a sysprepped image of Windows Server 2012 R2. Each virtual machine will have 4 GB RAM and four virtual CPUs. D:\vhdbase_1.vhdx and D:\vhdbase_2.vhdx are already existing VHDX files with a sysprepped image of Windows Server 2012 R2. The following code is used to create the two virtual machines:

New-VM -Name "Fileserver_VM1" -MemoryStartupBytes 4GB -NewVHDPath d:\vhdbase_1.vhdx -NewVHDSizeBytes 50GB
New-VM -Name "Fileserver_VM2" -MemoryStartupBytes 4GB -NewVHDPath d:\vhdbase_2.vhdx -NewVHDSizeBytes 50GB
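The New-VM calls above only set the startup memory and the boot disk. The short sketch below shows one way the remaining settings mentioned earlier (four virtual CPUs and network connectivity) could be applied before starting the machines; the switch name "External" is an assumption made for this illustration and is not defined anywhere in this article.

# Give each guest the four virtual CPUs mentioned above, connect it to a
# virtual switch, and start it. "External" is an assumed switch name.
foreach ($vm in "Fileserver_VM1", "Fileserver_VM2") {
    Set-VMProcessor -VMName $vm -Count 4
    Connect-VMNetworkAdapter -VMName $vm -SwitchName "External"
    Start-VM -Name $vm
}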
You need to enable PowerShell remoting on both the file servers and also have them joined to a domain. The following is the code:

Install-WindowsFeature -ComputerName Fileserver_VM1 File-Services, FS-FileServer, Failover-Clustering

Install-WindowsFeature -ComputerName Fileserver_VM1 RSAT-Clustering -IncludeAllSubFeature

Install-WindowsFeature -ComputerName Fileserver_VM2 File-Services, FS-FileServer, Failover-Clustering

Install-WindowsFeature -ComputerName Fileserver_VM2 RSAT-Clustering -IncludeAllSubFeature

Once we have the virtual machines created and the file server and failover clustering features installed, we will create the failover cluster as per Microsoft's best practices using the following cmdlet:

New-Cluster -Name Cluster1 -Node Fileserver_VM1, Fileserver_VM2 -StaticAddress 10.0.0.59 -NoStorage -Verbose

You will need to choose a name and IP address that fits your organization.

Next, we will create two VHDX files named sharedvhdx_data.vhdx (which will be used as a data disk) and sharedvhdx_quorum.vhdx (which will be used as the quorum or witness disk). To do this, the following commands need to be run on the Hyper-V cluster:

New-VHD -Path c:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -Fixed -SizeBytes 10GB

New-VHD -Path c:\ClusterStorage\Volume1\sharedvhdx_quorum.VHDX -Fixed -SizeBytes 1GB

Once we have created these virtual hard disk files, we will add them as shared .vhdx files. We will attach the newly created VHDX files to the Fileserver_VM1 and Fileserver_VM2 virtual machines and specify the parameter for shared VHDX files used in guest clustering:

Add-VMHardDiskDrive -VMName Fileserver_VM1 -Path c:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk

Add-VMHardDiskDrive -VMName Fileserver_VM2 -Path c:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk

Add-VMHardDiskDrive -VMName Fileserver_VM1 -Path c:\ClusterStorage\Volume1\sharedvhdx_quorum.VHDX -ShareVirtualDisk

Add-VMHardDiskDrive -VMName Fileserver_VM2 -Path c:\ClusterStorage\Volume1\sharedvhdx_quorum.VHDX -ShareVirtualDisk

Finally, we will make the disks available online and add them to the failover cluster using the following command:

Get-ClusterAvailableDisk | Add-ClusterDisk

Once we have executed the preceding set of steps, we will have a highly available file server infrastructure using shared VHD files.

Live virtual hard disk resizing

With Windows Server 2012 R2, a newly added feature in Hyper-V allows administrators to expand or shrink the size of a virtual hard disk attached to the SCSI controller while the virtual machines are still running. Hyper-V administrators can now perform maintenance operations on a live VHD and avoid the downtime that previously came from temporarily shutting down the virtual machine for these activities. Prior to Windows Server 2012 R2, to resize a VHD attached to a virtual machine, the virtual machine had to be turned off, leading to costly downtime. Using the GUI controls, the VHD resize can be done only through the Edit Virtual Hard Disk wizard. Also, note that VHDs that were previously expanded can be shrunk. The Windows PowerShell way of doing a VHD resize is by using the Resize-VHD cmdlet.

Let's look at the ways you can automate a VHD resize using PowerShell. In the next example, we will demonstrate how you can expand and shrink a virtual hard disk connected to a VM's SCSI controller. We will continue using the virtual machine that we created for our previous example. We have a pre-created VHD of 50 GB that is connected to the virtual machine's SCSI controller.
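If you are following along and do not already have that data disk, the following sketch (an addition to the original text, with a placeholder path and file name) shows one way it could be created and attached to the first file server's SCSI controller:

# Create a 50 GB data disk; the path and file name are placeholders
New-VHD -Path d:\scsidisk.vhdx -Dynamic -SizeBytes 50GB

# Attach the new disk to the virtual machine's SCSI controller
Add-VMHardDiskDrive -VMName "Fileserver_VM1" -ControllerType SCSI -Path d:\scsidisk.vhdx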
Expanding the virtual hard disk

Let's resize the aforementioned virtual hard disk to 57 GB using the Resize-VHD cmdlet (adjust the path to wherever your VHDX file is stored):

Resize-VHD -Path d:\scsidisk.vhdx -SizeBytes 57GB

Next, if we open the VM settings and perform an inspect disk operation, we'll be able to see that the VHDX file size has become 57 GB:

One can also verify this by logging into the VM, opening disk management, and extending the partition into the unused space. You can see that the disk size has increased to 57 GB:

Shrinking the virtual hard disk

Let's now shrink the same virtual hard disk back to 50 GB using the Resize-VHD cmdlet.

For this exercise, the primary requirement is to shrink the disk partition by logging in to the VM and using disk management, as you can see in the following screenshot; we're shrinking the partition by 7 GB:

Next, click on Shrink. Once you complete this step, you will see that the unallocated space is 7 GB. You can also execute this step using the Resize-Partition PowerShell cmdlet:

Get-Partition -DiskNumber 1 | Resize-Partition -Size 50GB

The following screenshot shows the partition:

Next, we will shrink the VHD to 50 GB:

Resize-VHD -Path d:\scsidisk.vhdx -SizeBytes 50GB

Once the previous steps have been executed successfully, run a rescan of the disks using disk management and you will see that the disk size is 50 GB:

Summary

In this article, we went through the basics of setting up a Hyper-V environment using PowerShell. We also explored the fundamental concepts of Hyper-V management with the Hyper-V management shell.

Resources for Article:

Further resources on this subject:
Hyper-V building blocks for creating your Microsoft virtualization platform [article]
The importance of Hyper-V Security [article]
Network Access Control Lists [article]
Remote Access

Packt
06 Feb 2015
32 min read
In this article by Jordan Krause, author of the book Windows Server 2012 R2 Administrator Cookbook, we will see how Windows Server 2012 R2 by Microsoft brings a whole new way of looking at remote access. Companies have historically relied on third-party tools to connect remote users into the network, such as traditional and SSL VPN provided by appliances from large networking vendors. I'm here to tell you those days are gone. Those of us running Microsoft-centric shops can now rely on Microsoft technologies to connect our remote workforce. Better yet is that these technologies are included with the Server 2012 R2 operating system, and have functionality that is much improved over anything that a traditional VPN can provide. Regular VPN does still have a place in the remote access space, and the great news is that you can also provide it with Server 2012 R2. Our primary focus for this article will be DirectAccess (DA). DA is kind of like automatic VPN. There is nothing the user needs to do in order to be connected to work. Whenever they are on the Internet, they are also connected automatically to the corporate network. DirectAccess is an amazing way to have your Windows 7 and Windows 8 domain joined systems connected back to the network for data access and for management of those traveling machines. DirectAccess has actually been around since 2008, but the first version came with some steep infrastructure requirements and was not widely used. Server 2012 R2 brings a whole new set of advantages and makes implementation much easier than in the past. I still find many server and networking admins who have never heard of DirectAccess, so let's spend some time together exploring some of the common tasks associated with it. In this article, we will cover the following recipes: Configuring DirectAccess, VPN, or a combination of the two Pre-staging Group Policy Objects (GPOs) to be used by DirectAccess Enhancing the security of DirectAccess by requiring certificate authentication Building your Network Location Server (NLS) on its own system  (For more resources related to this topic, see here.) There are two "flavors" of remote access available in Windows Server 2012 R2. The most common way to implement the Remote Access role is to provide DirectAccess for your Windows 7 and Windows 8 domain joined client computers, and VPN for the rest. The DirectAccess machines are typically your company-owned corporate assets. One of the primary reasons that DirectAccess is usually only for company assets is that the client machines must be joined to your domain, because the DirectAccess configuration settings are brought down to the client through a GPO. I doubt you want home and personal computers joining your domain. VPN is therefore used for down level clients such as Windows XP, and for home and personal devices that want to access the network. Since this is a traditional VPN listener with all regular protocols available such as PPTP, L2TP, SSTP, it can even work to connect devices such as smartphones. There is a third function available within the Server 2012 R2 Remote Access role, called the Web Application Proxy ( WAP ). This function is not used for connecting remote computers fully into the network as DirectAccess and VPN are; rather, WAP is used for publishing internal web resources out to the internet. 
For example, if you are running Exchange and Lync Server inside your network and want to publish access to these web-based resources to the internet for external users to connect to, WAP would be a mechanism that could publish access to these resources. The term for publishing out to the internet like this is Reverse Proxy, and WAP can act as such. It can also behave as an ADFS Proxy. For further information on the WAP role, please visit: http://technet.microsoft.com/en-us/library/dn584107.aspx One of the most confusing parts about setting up DirectAccess is that there are many different ways to do it. Some are good ideas, while others are not. Before we get rolling with recipes, we are going to cover a series of questions and answers to help guide you toward a successful DA deployment. The first question that always presents itself when setting up DA is "How do I assign IP addresses to my DirectAccess server?". This is quite a loaded question, because the answer depends on how you plan to implement DA, which features you plan to utilize, and even upon how secure you believe your DirectAccess server to be. Let me ask you some questions, pose potential answers to those questions, and discuss the effects of making each decision. DirectAccess Planning Q&A Which client operating systems can connect using DirectAccess? Answer: Windows 7 Ultimate, Windows 7 Enterprise, and Windows 8.x Enterprise. You'll notice that the Professional SKU is missing from this list. That is correct, Windows 7 and Windows 8 Pro do not contain the DirectAccess connectivity components. Yes, this does mean that Surface Pro tablets cannot utilize DirectAccess out of the box. However, I have seen many companies now install Windows 8 Enterprise onto their Surface tablets, effectively turning them into "Surface Enterprises." This works fine and does indeed enable them to be DirectAccess clients. In fact, I am currently typing this text on a DirectAccess-connected Surface "Pro turned Enterprise" tablet. Do I need one or two NICs on my DirectAccess server? Answer: Technically, you could set it up either way. In practice however, it really is designed for dual-NIC implementation. Single NIC DirectAccess works okay sometimes to establish a proof-of-concept to test out the technology. But I have seen too many problems with single NIC implementation in the field to ever recommend it for production use. Stick with two network cards, one facing the internal network and one facing the Internet. Do my DirectAccess servers have to be joined to the domain? Answer: Yes. Does DirectAccess have site-to-site failover capabilities? Answer: Yes, though only Windows 8.x client computers can take advantage of it. This functionality is called Multi-Site DirectAccess. Multiple DA servers that are spread out geographically can be joined together in a multi-site array. Windows 8 client computers keep track of each individual entry point and are able to swing between them as needed or at user preference. Windows 7 clients do not have this capability and will always connect through their primary site. What are these things called 6to4, Teredo, and IP-HTTPS I have seen in the Microsoft documentation? Answer: 6to4, Teredo, and IP-HTTPS are all IPv6 transition tunneling protocols. All DirectAccess packets that are moving across the internet between DA client and DA server are IPv6 packets. 
If your internal network is IPv4, then when those packets reach the DirectAccess server they get turned down into IPv4 packets, by some special components called DNS64 and NAT64. While these functions handle the translation of packets from IPv6 into IPv4 when necessary inside the corporate network, the key point here is that all DirectAccess packets that are traveling over the Internet part of the connection are always IPv6. Since the majority of the Internet is still IPv4, this means that we must tunnel those IPv6 packets inside something to get them across the Internet. That is the job of 6to4, Teredo, and IP-HTTPS. 6to4 encapsulates IPv6 packets into IPv4 headers and shuttles them around the internet using protocol 41. Teredo similarly encapsulates IPv6 packets inside IPv4 headers, but then uses UDP port 3544 to transport them. IP-HTTPS encapsulates IPv6 inside IPv4 and then inside HTTP encrypted with TLS, essentially creating an HTTPS stream across the Internet. This, like any HTTPS traffic, utilizes TCP port 443. The DirectAccess traffic traveling inside either kind of tunnel is always encrypted, since DirectAccess itself is protected by IPsec. Do I want to enable my clients to connect using Teredo? Answer: Most of the time, the answer here is yes. Probably the biggest factor that weighs on this decision is whether or not you are still running Windows 7 clients. When Teredo is enabled in an environment, this gives the client computers an opportunity to connect using Teredo, rather than all clients connecting in over the IP-HTTPS protocol. IP-HTTPS is sort of the "catchall" for connections, but Teredo will be preferred by clients if it is available. For Windows 7 clients, Teredo is quite a bit faster than IP-HTTPS. So enabling Teredo on the server side means your Windows 7 clients (the ones connecting via Teredo) will have quicker response times, and the load on your DirectAccess server will be lessened. This is because Windows 7 clients who are connecting over IP-HTTPS are encrypting all of the traffic twice. This also means that the DA server is encrypting/decrypting everything that comes and goes twice. In Windows 8, there is an enhancement that brings IP-HTTPS performance almost on par with Teredo, and so environments that are fully cut over to Windows 8 will receive less benefit from the extra work that goes into making sure Teredo works. Can I place my DirectAccess server behind a NAT? Answer: Yes, though there is a downside. Teredo cannot work if the DirectAccess server is sitting behind a NAT. For Teredo to be available, the DA server must have an External NIC that has two consecutive public IP addresses. True public addresses. If you place your DA server behind any kind of NAT, Teredo will not be available and all clients will connect using the IP-HTTPS protocol. Again, if you are using Windows 7 clients, this will decrease their speed and increase the load on your DirectAccess server. How many IP addresses do I need on a standalone DirectAccess server? Answer: I am going to leave single NIC implementation out of this answer since I don't recommend it anyway. For scenarios where you are sitting the External NIC behind a NAT or, for any other reason, are limiting your DA to IP-HTTPS only, then we need one external address and one internal address. The external address can be a true public address or a private NATed DMZ address. Same with the internal; it could be a true internal IP or a DMZ IP. Make sure both NICs are not plugged into the same DMZ, however. 
For a better installation scenario that allows Teredo connections to be possible, you would need two consecutive public IP addresses on the External NIC and a single internal IP on the Internal NIC. This internal IP could be either true internal or DMZ. But the public IPs would really have to be public for Teredo to work. Do I need an internal PKI? Answer: Maybe. If you want to connect Windows 7 clients, then the answer is yes. If you are completely Windows 8, then technically you do not need internal PKI. But you really should use it anyway. Using an internal PKI, which can be a single, simple Windows CA server, increases the security of your DirectAccess infrastructure. You'll find out during this article just how easy it is to require certificates as part of the tunnel building authentication process. Configuring DirectAccess, VPN, or a combination of the two Now that we have some general ideas about how we want to implement our remote access technologies, where do we begin? Most services that you want to run on a Windows Server begin with a role installation, but the implementation of remote access begins before that. Let's walk through the process of taking a new server and turning it into a Microsoft Remote Access server. Getting ready All of our work will be accomplished on a new Windows Server 2012 R2. We are taking the two-NIC approach to networking, and so we have two NICs installed on this server. The Internal NIC is plugged into the corporate network and the External NIC is plugged into the Internet for the sake of simplicity. The External NIC could just as well be plugged into a DMZ. How to do it... Follow these steps to turn your new server into a Remote Access server: Assign IP addresses to your server. Remember, the most important part is making sure that the Default Gateway goes on the External NIC only. Join the new server to your domain. Install an SSL certificate onto your DirectAccess server that you plan to use for the IP-HTTPS listener. This is typically a certificate purchased from a public CA. If you're planning to use client certificates for authentication, make sure to pull down a copy of the certificate to your DirectAccess server. You want to make sure certificates are in place before you start with the configuration of DirectAccess. This way the wizards will be able to automatically pull in information about those certificates in the first run. If you don't, DA will set itself up to use self-signed certificates, which are a security no-no. Use Server Manager to install the Remote Access role. You should only do this after completing the steps listed earlier. If you plan to load balance multiple DirectAccess servers together at a later time, make sure to also install the feature called Network Load Balancing . After selecting your role and feature, you will be asked which Remote Access role services you want to install. For our purposes in getting the remote workforce connected back into the corporate network, we want to choose DirectAccess and VPN (RAS) .  Now that the role has been successfully installed, you will see a yellow exclamation mark notification near the top of Server Manager indicating that you have some Post-deployment Configuration that needs to be done. Do not click on Open the Getting Started Wizard ! Unfortunately, Server Manager leads you to believe that launching the Getting Started Wizard (GSW) is the logical next step. 
However, using the GSW as the mechanism for configuring your DirectAccess settings is kind of like roasting a marshmallow with a pair of tweezers. In order to ensure you have the full range of options available to you as you configure your remote access settings, and that you don't get burned later, make sure to launch the configuration this way: Click on the Tools menu from inside Server Manager and launch the Remote Access Management Console . In the left window pane, click on Configuration | DirectAccess and VPN . Click on the second link, the one that says Run the Remote Access Setup Wizard . Please note that once again the top option is to run that pesky Getting Started Wizard. Don't do it! I'll explain why in the How it works… section of this recipe. Now you have a choice that you will have to answer for yourself. Are you configuring only DirectAccess, only VPN, or a combination of the two? Simply click on the option that you want to deploy. Following your choice, you will see a series of steps (steps 1 through 4) that need to be accomplished. This series of mini-wizards will guide you through the remainder of the DirectAccess and VPN particulars. This recipe isn't large enough to cover every specific option included in those wizards, but at least you now know the correct way to bring a DirectAccess/VPN server into operation. How it works... The remote access technologies included in Server 2012 R2 have great functionality, but their initial configuration can be confusing. Following the procedure listed in this recipe will set you on the right path to be successful in your deployment, and prevent you from running into issues down the road. The reason that I absolutely recommend you stay away from using the "shortcut" deployment method provided by the Getting Started Wizard is twofold: GSW skips a lot of options as it sets up DirectAccess, so you don't really have any understanding of how it works after finishing. You may have DA up and running, but have no idea how it's authenticating or working under the hood. This holds so much potential for problems later, should anything suddenly stop working. GSW employs a number of bad security practices in order to save time and effort in the setup process. For example, using the GSW usually means that your DirectAccess server will be authenticating users without client certificates, which is not a best practice. Also, it will co-host something called the NLS website on itself, which is also not a best practice. Those who utilize the GSW to configure DirectAccess will find that their GPO, which contains the client connectivity settings, will be security-filtered to the Domain Computers group. Even though it also contains a WMI filter that is supposed to limit that policy application to mobile hardware such as laptops, this is a terribly scary thing to see inside GPO filtering settings. You probably don't want all of your laptops to immediately start getting DA connectivity settings, but that is exactly what the GSW does for you. Perhaps worst, the GSW will create and make use of self-signed SSL certificates to validate its web traffic, even the traffic coming in from the Internet! This is a terrible practice and is the number one reason that should convince you that clicking on the Getting Started Wizard is not in your best interests. 
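As a side note, if you would rather script the role installation described in this recipe instead of clicking through Server Manager, something like the following should be roughly equivalent. Treat it as a sketch and confirm the exact feature names on your own server with Get-WindowsFeature before relying on them:

# Check the names and install state of the relevant role services and features
Get-WindowsFeature -Name RemoteAccess, DirectAccess-VPN, NLB

# Install DirectAccess and VPN (RAS) with the management tools,
# plus Network Load Balancing for a future load-balanced array
Install-WindowsFeature -Name DirectAccess-VPN, NLB -IncludeManagementTools

Just as with the GUI installation, resist the urge to run the Getting Started Wizard afterwards; launch the Remote Access Setup Wizard from the Remote Access Management Console instead.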
Pre-staging Group Policy Objects (GPOs) to be used by DirectAccess

One of the great things about DirectAccess is that all of the connectivity settings the client computers need in order to connect are contained within a Group Policy Object (GPO). This means that you can turn new client computers into DirectAccess-connected clients without ever touching those systems. Once configured properly, all you need to do is add the new computer account to an Active Directory security group, and during the next automatic Group Policy refresh cycle (usually within 90 minutes), that new laptop will be connecting via DirectAccess whenever it is outside the corporate network.

You can certainly choose not to pre-stage anything with the GPOs and DirectAccess will still work. When you get to the end of the DA configuration wizards, it will inform you that two new GPOs are about to be created inside Active Directory. One GPO is used to contain the DirectAccess server settings and the other GPO is used to contain the DirectAccess client settings. If you allow the wizard to handle the generation of these GPOs, it will create them, link them, filter them, and populate them with settings automatically. About half of the time I see folks do it this way and they are forever happy with letting the wizard manage those GPOs now and in the future. The other half of the time, it is desired that we maintain a little more personal control over the GPOs.

If you are setting up a new DA environment but your credentials don't have permission to create GPOs, the wizard is not going to be able to create them either. In this case, you will need to work with someone on your Active Directory team to get them created. Another reason to manage the GPOs manually is to have better control over the placement of these policies. When you let the DirectAccess wizard create the GPOs, it will link them to the top level of your domain. It also sets Security Filtering on those GPOs so they are not going to be applied to everything in your domain, but when you open up the Group Policy Management Console you will always see those DirectAccess policies listed right up there at the top level of the domain. Sometimes this is simply not desirable. So for this reason also, you may want to choose to create and manage the GPOs by hand, so that you can secure placement and links where you specifically want them to be located.

The key factors here are to make sure your DirectAccess Server Settings GPO applies only to the DirectAccess server or servers in your environment, and that the DirectAccess Client Settings GPO applies only to the DA client computers that you plan to enable in your network. The best practice here is to specify this GPO to only apply to a specific Active Directory security group so that you have full control over which computer accounts are in that group. I have seen some folks do it based only on the OU links and include whole OUs in the filtering for the clients GPO (foregoing the use of an AD group at all), but doing it this way makes it quite a bit more difficult to add or remove machines from the access list in the future.
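If you would rather prepare that client security group from PowerShell than from the GUI, a sketch along the following lines can be run from a Domain Controller or any machine with the Active Directory module installed. The group name, OU path, and computer name below are placeholders, not values defined by this recipe:

# Create a security group that will hold the DirectAccess client computer accounts
New-ADGroup -Name "DirectAccess Client Computers" -GroupScope Global -GroupCategory Security -Path "OU=Workstations,DC=mydomain,DC=local"

# Add a laptop's computer account to the group
Add-ADGroupMember -Identity "DirectAccess Client Computers" -Members (Get-ADComputer -Identity "LAPTOP-01")

From then on, enabling DirectAccess on a new laptop is simply a matter of dropping its computer account into this group.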
Getting ready

While the DirectAccess wizards themselves are run from the DirectAccess server, our work with this recipe is not. The Group Policy settings that we will be configuring are all accomplished within Active Directory, and we will be doing the work from a Domain Controller in our environment.

How to do it...

To pre-stage Group Policy Objects (GPOs) for use with DirectAccess:

On your Domain Controller, launch the Group Policy Management Console. Expand Forest | Domains | Your Domain Name. There should be a container here called Group Policy Objects. Right-click on it and choose New. Name your new GPO something like DirectAccess Server Settings.

Click on the new DirectAccess Server Settings GPO and it should open up automatically to the Scope tab. We need to adjust the Security Filtering section so that this GPO only applies to our DirectAccess server. This is a critical step for each GPO to ensure the settings that are going to be placed here do not get applied to the wrong computers.

Remove Authenticated Users, which is prepopulated in that list. The list should now be empty.

Click the Add… button and search for the computer account of your DirectAccess server. Mine is called RA-01. By default this window will only search user accounts, so you will need to adjust Object Types to include Computers before it will allow you to add your server into this filtering list. Your Security Filtering list should now look like this:

Now click on the Details tab of your GPO. Change the GPO Status to User configuration settings disabled. We do this because our GPO is only going to contain computer-level settings, nothing at the user level.

The last thing to do is link your GPO to an appropriate container. Since we have Security Filtering enabled, our GPO is only ever going to apply its settings to the RA-01 server; however, without creating a link, the GPO will not even attempt to apply itself to anything. My RA-01 server is sitting inside the OU called Remote Access Servers. So I will right-click on my Remote Access Servers OU and choose Link an Existing GPO….

Choose the new DirectAccess Server Settings GPO from the list of available GPOs and click on the OK button. This creates the link and puts the GPO into action. Since there are not yet any settings inside the GPO, it won't actually make any changes on the server. The DirectAccess configuration wizards take care of populating the GPO with the settings that are needed.

Now we simply need to rinse and repeat all of these steps to create another GPO, something like DirectAccess Client Settings. You want to set up the client settings GPO in the same way. Make sure that it is filtering to only the Active Directory security group that you created to contain your DirectAccess client computers, and make sure to link it to an appropriate container that will include those computer accounts. So maybe your clients GPO will look something like this:

How it works...

Creating GPOs in Active Directory is a simple enough task, but it is critical that you configure the Links and Security Filtering correctly.
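For admins who prefer scripting, the GroupPolicy PowerShell module can mirror the steps above. The sketch below is an addition to the original recipe; the GPO name, the RA-01 server, and the distinguished names are the example values used here and will need adjusting for your own domain:

# Create the server settings GPO and disable its user configuration section
New-GPO -Name "DirectAccess Server Settings"
(Get-GPO -Name "DirectAccess Server Settings").GpoStatus = "UserSettingsDisabled"

# Swap the default Authenticated Users apply permission for read-only,
# then security-filter the GPO to the DirectAccess server's computer account
Set-GPPermission -Name "DirectAccess Server Settings" -TargetName "Authenticated Users" -TargetType Group -PermissionLevel GpoRead -Replace
Set-GPPermission -Name "DirectAccess Server Settings" -TargetName "RA-01" -TargetType Computer -PermissionLevel GpoApply

# Link the GPO to the OU that holds the DirectAccess server
New-GPLink -Name "DirectAccess Server Settings" -Target "OU=Remote Access Servers,DC=mydomain,DC=local"

The DirectAccess Client Settings GPO can be prepared the same way, with the apply permission granted to your client computers security group and the link placed on the container that holds those accounts.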
If you do not take care to ensure that these DirectAccess connection settings are only going to apply to the machines that actually need the settings, you could create a world of trouble by internal servers getting remote access connection settings and cause them issues with connection while inside the network. Enhancing the security of DirectAccess by requiring certificate authentication When a DirectAccess client computer builds its IPsec tunnels back to the corporate network, it has the ability to require a certificate as part of that authentication process. In earlier versions of DirectAccess, the one in Server 2008 R2 and the one provided by Unified Access Gateway ( UAG ), these certificates were required in order to make DirectAccess work. Setting up the certificates really isn't a big deal at all; as long as there is a CA server in your network you are already prepared to issue the certs needed at no cost. Unfortunately, though, there must have been enough complaints back to Microsoft in order for them to make these certificates "recommended" instead of "required" and they created a new mechanism in Windows 8 and Server 2012 called KerberosProxy that can be used to authenticate the tunnels instead. This allows the DirectAccess tunnels to build without the computer certificate, making that authentication process less secure. I'm here to strongly recommend that you still utilize certificates in your installs! They are not difficult to set up, and using them makes your tunnel authentication stronger. Further, many of you may not have a choice and will still be required to install these certificates. Only simple DirectAccess scenarios that are all Windows 8 on the client side can get away with the shortcut method of foregoing certs. Anybody who still wants to connect Windows 7 via DirectAccess will need to use certificates on all of their client computers, both Windows 7 and Windows 8. In addition to Windows 7 access, anyone who intends to use the advanced features of DirectAccess such as load balancing, multi-site, or two-factor authentication will also need to utilize these certificates. With any of these scenarios, certificates become a requirement again, not a recommendation. In my experience, almost everyone still has Windows 7 clients that would benefit from being DirectAccess connected, and it's always a good idea to make your DA environment redundant by having load balanced servers. This further emphasizes the point that you should just set up certificate authentication right out of the gate, whether or not you need it initially. You might decide to make a change later that would require certificates and it would be easier to have them installed from the get-go rather than trying to incorporate them later into a running DA environment. Getting ready In order to distribute certificates, you will need a CA server running in your network. Once certificates are distributed to the appropriate places, the rest of our work will be accomplished from our Server 2012 R2 DirectAccess server. How to do it... Follow these steps to make use of certificates as part of the DirectAccess tunnel authentication process: The first thing that you need to do is distribute certificates to your DirectAccess servers and all DirectAccess client computers. The easiest way to do this is by using the built-in Computer template provided by default in a Windows CA server. If you desire to build a custom certificate template for this purpose, you can certainly do so. 
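Before moving into the steps, it can be worth confirming that a suitable certificate has actually arrived in the local computer store of the DirectAccess server and of a test client. The following one-liner is an added sketch of such a check; the EnhancedKeyUsageList property is exposed by the certificate drive in PowerShell 3.0 and later, so verify the output on your own systems:

# List machine certificates that carry the Client Authentication usage
Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.EnhancedKeyUsageList.FriendlyName -contains 'Client Authentication' } |
    Select-Object -Property Subject, NotAfter, Thumbprint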
I recommend that you duplicate the Computer template and build it from there. Whenever I create a custom template for use with DirectAccess, I try to make sure that it meets the following criterias: The Subject Name of the certificate should match the Common Name of the computer (which is also the FQDN of the computer). The Subject Alternative Name ( SAN ) of the certificate should match the DNS Name of the computer (which is also the FQDN of the computer). The certificate should serve the Intended Purposes of both Client Authentication and Server Authentication . You can issue the certificates manually using Microsoft Management Console (MMC). Otherwise, you can lessen your hands-on administrative duties by enabling Autoenrollment. Now that we have certificates distributed to our DirectAccess clients and servers, log in to your primary DirectAccess server and open up the Remote Access Management Console . Click on Configuration in the top-left corner. You should now see steps 1 through 4 listed. Click Edit… listed under Step 2 . Now you can either click Next twice or click on the word Authentication to jump directly to the authentication screen. Check the box that says Use computer certificates . Now we have to specify the Certification Authority server that issued our client certificates. If you used an intermediary CA to issue your certs, make sure to check the appropriate checkbox. Otherwise, most of the time, certificates are issued from a root CA and in this case you would simply click on the Browse… button and look for your CA in the list. This screen is sometimes confusing because people expect to have to choose the certificate itself from the list. This is not the case. What you are actually choosing from this list is the Certificate Authority server that issued the certificates. Make any other appropriate selections on the Authentication screen. For example, many times when we require client certificates for authentication, it is because we have Windows 7 computers that we want to connect via DirectAccess. If that is the case for you, select the checkbox for Enable Windows 7 client computers to connect via DirectAccess .  How it works... Requiring certificates as part of your DirectAccess tunnel authentication process is a good idea in any environment. It makes the solution more secure, and enables advanced functionality. The primary driver for most companies to require these certificates is the enablement of Windows 7 clients to connect via DirectAccess, but I suggest that anyone using DirectAccess in any capacity make use of these certs. They are simple to deploy, easy to configure, and give you some extra peace of mind that only computers who have a certificate issued directly to them from your own internal CA server are going to be able to connect through your DirectAccess entry point. Building your Network Location Server (NLS) on its own system If you zipped through the default settings when configuring DirectAccess, or worse used the Getting Started Wizard, chances are that your Network Location Server ( NLS ) is running right on the DirectAccess server itself. This is not the recommended method for using NLS, it really should be running on a separate web server. In fact, if you later want to do something more advanced such as setting up load balanced DirectAccess servers, you're going to have to move NLS off onto a different server anyway. So you might as well do it right the first time. NLS is a very simple requirement, yet a critical one. 
It is just a website, it doesn't matter what content the site has, and it only has to run inside your network. Nothing has to be externally available. In fact, nothing should be externally available, because you only want this site being accessed internally. This NLS website is a large part of the mechanism by which DirectAccess client computers figure out when they are inside the office and when they are outside. If they can see the NLS website, they know they are inside the network and will disable DirectAccess name resolution, effectively turning off DA. If they do not see the NLS website, they will assume they are outside the corporate network and enable DirectAccess name resolution. There are two gotchas with setting up an NLS website: The first is that it must be HTTPS, so it does need a valid SSL certificate. Since this website is only running inside the network and being accessed from domain-joined computers, this SSL certificate can easily be one that has been issued from your internal CA server. So no cost associated there. The second catch that I have encountered a number of times is that for some reason the default IIS splash screen page doesn't make for a very good NLS website. If you set up a standard IIS web server and use the default site as NLS, sometimes it works to validate the connections and sometimes it doesn't. Given that, I always set up a specific site that I create myself, just to be on the safe side. So let's work together to follow the exact process I always take when setting up NLS websites in a new DirectAccess environment. Getting ready Our NLS website will be hosted on an IIS server we have that runs Server 2012 R2. Most of the work will be accomplished from this web server, but we will also be creating a DNS record and will utilize a Domain Controller for that task. How to do it... Let's work together to set up our new Network Location Server website: First decide on an internal DNS name to use for this website and set it up in DNS of your domain. I am going to use nls.mydomain.local and am creating a regular Host (A) record that points nls.mydomain.local at the IP address of my web server. Now log in to that web server and let's create some simple content for this new website. Create a new folder called C:NLS. Inside your new folder, create a new Default.htm file. Edit this file and throw some simple text in there. I usually say something like This is the NLS website used by DirectAccess. Please do not delete or modify me!.  Remember, this needs to be an HTTPS website, so before we try setting up the actual website, we should acquire the SSL certificate that we need to use with this site. Since this certificate is coming from my internal CA server, I'm going to open up MMC on my web server to accomplish this task. Once MMC is opened, snap-in the Certificates module. Make sure to choose Computer account and then Local computer when it prompts you for which certificate store you want to open. Expand Certificates (Local Computer) | Personal | Certificates . Right-click on this Certificates folder and choose All Tasks | Request New Certificate… . Click Next twice and you should see your list of certificate templates that are available on your internal CA server. If you do not see one that looks appropriate for requesting a website certificate, you may need to check over the settings on your CA server to make sure the correct templates are configured for issuing. My template is called Custom Web Server . 
Since this is a web server certificate, there is some additional information that I need to provide in my request in order to successfully issue a certificate. So I go ahead and click on that link that says More information is required to enroll for this certificate. Click here to configure settings. .  Drop-down the Subject name | Type menu and choose the option Common name . Enter a common name for our website into the Value field, which in my case is nls.mydomain.local. Click the Add button and your CN should move over to the right side of the screen like this:  Click on OK then click on the Enroll button. You should now have an SSL certificate sitting in your certificates store that can be used to authenticate traffic moving to our nls.mydomain.local name. Open up Internet Information Services (IIS) Manager , and browse to the Sites folder. Go ahead and remove the default website that IIS automatically set up, so that we can create our own NLS website without any fear of conflict. Click on the Add Website… action. Populate the information as shown in the following screenshot. Make sure to choose your own IP address and SSL certificate from the lists, of course:  Click the OK button and you now have an NLS website running successfully in your network. You should be able to open up a browser on a client computer sitting inside the network and successfully browse to https://nls.mydomain.local. How it works... In this recipe, we configured a basic Network Location Server website for use with our DirectAccess environment. This site will do exactly what we need it to when our DA client computers try to validate whether they are inside or outside the corporate network. While this recipe meets our requirements for NLS, and in fact puts us into a good practice of installing DirectAccess with NLS being hosted on its own web server, there is yet another step you could take to make it even better. Currently this web server is a single point of failure for NLS. If this web server goes down or has a problem, we would have DirectAccess client computers inside the office who would think they are outside, and they would have some major name resolution problems until we sorted out the NLS problem. Given that, it is a great idea to make NLS redundant. You could cluster servers together, use Microsoft Network Load Balancing ( NLB ), or even use some kind of hardware load balancer if you have one available in your network. This way you could run the same NLS website on multiple web servers and know that your clients will still work properly in the event of a web server failure. Summary This article encourages you to use Windows Server 2012 R2 as the connectivity platform that brings your remote computers into the corporate network. We discussed DirectAccess and VPN in this article. We also saw how to configure DirectAccess and VPN, and how to secure DirectAccess using certificate authentication. Resources for Article: Further resources on this subject: Cross-premise Connectivity [article] Setting Up and Managing E-mails and Batch Processing [article] Upgrading from Previous Versions [article]