
How-To Tutorials

OpenVZ Container Administration

Packt
11 Nov 2014
11 min read
In this article by Mark Furman, the author of OpenVZ Essentials, we will go over the various aspects of OpenVZ administration. Some of the things we are going to go over in this article are as follows: listing the containers that are running on the server; starting, stopping, suspending, and resuming containers; destroying, mounting, and unmounting containers; setting quota on and off; and creating snapshots of the containers in order to back up and restore the container to another server. (For more resources related to this topic, see here.) Using vzlist The vzlist command is used to list the containers on a node. When you run vzlist on its own without any options, it will only list the containers that are currently running on the system: vzlist In the previous example, we used the vzlist command to list the containers that are currently running on the server. Listing all the containers on the server If you want to list all the containers on the server instead of just the containers that are currently running on the server, you will need to add -a after vzlist. This will tell vzlist to include all of the containers that are created on the node inside its output: vzlist -a In the previous example, we used the vzlist command with the -a flag to tell vzlist that we want to list all of the containers that have been created on the server. The vzctl command The next command that we are going to cover is the vzctl command. This is the primary command that you are going to use when you want to perform tasks with the containers on the node. The initial functions of the vzctl command that we will go over are how to start, stop, and restart a container. Starting a container We use vzctl to start a container on the node. To start a container, run the following command: vzctl start 101 Starting Container ... Setup slm memory limit Setup slm subgroup (default) Setting devperms 20002 dev 0x7d00 Adding IP address(es) to pool: Adding IP address(es): 192.168.2.101 Hostname for Container set: gotham.example.com Container start in progress... In the previous example, we used the vzctl command with the start option to start the container 101. Stopping a container To stop a container, run the following command: vzctl stop 101 Stopping container ... Container was stopped Container is unmounted In the previous example, we used the vzctl command with the stop option to stop the container 101. Restarting a container To restart a container, run the following command: vzctl restart 101 Stopping Container ... Container was stopped Container is unmounted Starting Container... In the previous example, we used the vzctl command with the restart option to restart the container 101. Using vzctl to suspend and resume a container The following set of commands will use vzctl to suspend and resume a container. When you use vzctl to suspend a container, it creates a save point of the container in a dump file. You can then use vzctl to resume the container to the saved point it was in before the container was suspended. Suspending a container To suspend a container, run the following command: vzctl suspend 101 In the previous example, we used the vzctl command with the suspend option to suspend the container 101. Resuming a container To resume a container, run the following command: vzctl resume 101 In the previous example, we used the vzctl command with the resume option to resume operations on the container 101. 
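These start, stop, suspend, and resume commands are easy to drive from a script. The following is a minimal Python sketch, not part of the original article, that checks whether a container is currently running before starting it. It assumes you run it as root on the hardware node, that vzlist and vzctl are on the PATH, and the container ID 101 is only an example:

import subprocess

def running_ctids():
    # Plain vzlist (no options) prints only running containers; skip the header row.
    # vzlist may exit non-zero when nothing is running, so check=True is not used here.
    out = subprocess.run(["vzlist"], capture_output=True, text=True).stdout
    return {line.split()[0] for line in out.splitlines()[1:] if line.strip()}

def ensure_started(ctid):
    if str(ctid) in running_ctids():
        print("Container %s is already running" % ctid)
    else:
        # Same effect as typing "vzctl start 101" by hand
        subprocess.run(["vzctl", "start", str(ctid)], check=True)

if __name__ == "__main__":
    ensure_started(101)

The same pattern extends naturally to stop, restart, suspend, and resume, since each is just another vzctl subcommand.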
In order to get resume or suspend to work, you may need to enable several kernel modules by running the following: modprobe vzcpt and modprobe vzrst. Destroying a container You can destroy a container that you created by using the destroy argument with vzctl. This will remove all the files, including the configuration file and the directories created by the container. In order to destroy a container, you must first stop the container from running. To destroy a container, run the following command: vzctl destroy 101 Destroying container private area: /vz/private/101 Container private area was destroyed. In the previous example, we used the vzctl command with the destroy option to destroy the container 101. Using vzctl to mount and unmount a container You are able to mount and unmount a container's private area located at /vz/root/ctid, which provides the container with the root filesystem that exists on the server. Mounting and unmounting containers come in handy when you have trouble accessing the filesystem for your container. Mounting a container To mount a container, run the following command: vzctl mount 101 In the previous example, we used the vzctl command with the mount option to mount the private area for the container 101. Unmounting a container To unmount a container, run the following command: vzctl umount 101 In the previous example, we used the vzctl command with the umount option to unmount the private area for the container 101. Disk quotas Disk quotas allow you to define special limits for your container, including the size of the filesystem or the number of inodes that are available for use. Setting quotaon and quotaoff for a container You can manually start and stop the container's disk quota by using the quotaon and quotaoff arguments with vzctl. Turning on disk quota for a container To turn on disk quota for a container, run the following command: vzctl quotaon 101 In the previous example, we used the vzctl command with the quotaon option to turn disk quota on for the container 101. Turning off disk quota for a container To turn off disk quota for a container, run the following command: vzctl quotaoff 101 In the previous example, we used the vzctl command with the quotaoff option to turn off disk quota for the container 101. Setting disk quotas with vzctl set You are able to set the disk quotas for your containers on your server using the vzctl set command. With this command, you can set the disk space, disk inodes, and the quota time. To set the disk space for container 101 to 2 GB, use the following command: vzctl set 101 --diskspace 2000000:2200000 --save In the previous example, we used the vzctl set command to set the disk space quota to a 2 GB soft limit with a 2.2 GB hard limit. The two values, separated by a : symbol, are the soft limit and the hard limit. The soft limit in the example is 2000000 and the hard limit is 2200000. The soft limit can be exceeded up to the value of the hard limit; the hard limit can never be exceeded. OpenVZ defines soft limits as barriers and hard limits as limits. To set the disk inodes for container 101 to 1 million inodes, use the following command: vzctl set 101 --diskinodes 1000000:1100000 --save In the previous example, we used the vzctl set command to set the disk inode limits to a soft limit, or barrier, of 1 million inodes and a hard limit, or limit, of 1.1 million inodes. 
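The barrier:limit arithmetic is easy to get wrong by an order of magnitude, so it can be worth generating the argument instead of typing it. Here is a short Python sketch, not from the original article; it assumes, as in the example above, that --diskspace values are interpreted as 1 KB blocks and that you want the hard limit 10 percent above the soft limit:

def diskspace_arg(soft_gb, headroom=0.10):
    # 1 GB expressed in 1 KB blocks, matching the 2000000 == 2 GB example above
    soft_blocks = int(soft_gb * 1000 * 1000)
    hard_blocks = int(soft_blocks * (1 + headroom))
    return "%d:%d" % (soft_blocks, hard_blocks)

# Reproduces the value used in the command above
print(diskspace_arg(2))         # -> 2000000:2200000
print(diskspace_arg(10, 0.2))   # -> 10000000:12000000

The returned string can be dropped straight into vzctl set <ctid> --diskspace <value> --save.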
To set the quota time, or the period of time in seconds that the container is allowed to exceed the soft limit values of the disk and inode quotas, use the following command: vzctl set 101 --quotatime 900 --save In the previous example, we used the vzctl command to set the quota time to 900 seconds, or 15 minutes. This means that once the container's soft limit is exceeded, you will be able to exceed the quota up to the value of the hard limit for 15 minutes before the container reports that the value is over quota. Further use of vzctl set The vzctl set command allows you to make modifications to the container's config file without the need to manually edit the file. We are going to go over a few of the options that are essential to administer the node. --onboot The --onboot flag allows you to set whether or not the container will be booted when the node is booted. To set the onboot option, use the following command: vzctl set 101 --onboot yes --save In the previous example, we used the vzctl command with the set option and the --onboot flag to enable the container to boot automatically when the server is rebooted, and then saved the change to the container's configuration file. --bootorder The --bootorder flag allows you to change the boot order priority of the container. The higher the value given, the sooner the container will start when the node is booted. To set the bootorder option, use the following command: vzctl set 101 --bootorder 9 --save In the previous example, we used the vzctl command with the set option and the --bootorder flag to change the priority of the order that the container is booted in, and then we saved the option to the container's configuration file. --userpasswd The --userpasswd flag allows you to change the password of a user that belongs to the container. If the user does not exist, then the user will be created. To set the userpasswd option, use the following command: vzctl set 101 --userpasswd admin:changeme In the previous example, we used the vzctl command with the set option and the --userpasswd flag to change the password for the admin user to changeme. --name The --name flag allows you to give the container a name that, when assigned, can be used in place of the CTID value when using vzctl. This allows for an easier way to keep track of your containers: instead of remembering the container ID, you just need to remember the container name to access the container. To set the name option, use the following command: vzctl set 101 --name gotham --save In the previous example, we used the vzctl command with the set option to set our container 101 to use the name gotham and then saved the change to the container's configuration file. --description The --description flag allows you to add a description for the container to give an idea of what the container is for. To use the description option, use the following command: vzctl set 101 --description "Web Development Test Server" --save In the previous example, we used the vzctl command with the set option and the --description flag to add the description "Web Development Test Server" to the container. --ipadd The --ipadd flag allows you to add an IP address to the specified container. To set the ipadd option, use the following command: vzctl set 101 --ipadd 192.168.2.103 --save In the previous example, we used the vzctl command with the set option and the --ipadd flag to add the IP address 192.168.2.103 to container 101 and then save the changes to the container's configuration file. 
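Because every one of these options is just another flag to vzctl set, a handful of settings can be applied in one go from a script. Here is a small Python sketch along those lines, not from the original article; the container ID and values are purely illustrative, and it assumes vzctl is on the PATH and that you run it as root on the node:

import subprocess

# Keys correspond one-to-one to the vzctl set flags discussed above
SETTINGS = {
    "hostname": "gotham.example.com",
    "name": "gotham",
    "ipadd": "192.168.2.103",
    "ram": "2G",
    "swap": "1G",
    "onboot": "yes",
}

def apply_settings(ctid, settings):
    for flag, value in settings.items():
        cmd = ["vzctl", "set", str(ctid), "--" + flag, value, "--save"]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)   # stop at the first failing setting

apply_settings(101, SETTINGS)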
--ipdel The --ipdel flag allows you to remove an IP address from the specified container. To use the ipdel option, use the following command: vzctl set 101 --ipdel 192.168.2.103 --save In the previous example, we used the vzctl command with the set option and the --ipdel flag to remove the IP address 192.168.2.103 from the container 101 and then save the changes to the container's configuration file. --hostname The --hostname flag allows you to set or change the hostname for your container. To use the hostname option, use the following command: vzctl set 101 --hostname gotham.example.com --save In the previous example, we used the vzctl command with the set option and the --hostname flag to change the hostname of the container to gotham.example.com. --disable The --disable flag allows you to disable a container's startup. When this option is in place, you will not be able to start the container until this option is removed. To use the disable option, use the following command: vzctl set 101 --disable yes --save In the preceding example, we used the vzctl command with the set option and the --disable flag to prevent the container 101 from starting, and then saved the changes to the container's configuration file. --ram The --ram flag allows you to set the value for the physical page limit of the container and helps to regulate the amount of memory that is available to the container. To use the ram option, use the following command: vzctl set 101 --ram 2G --save In the previous example, we set the physical page limit to 2 GB using the --ram flag. --swap The --swap flag allows you to set the amount of swap memory that is available to the container. To use the swap option, use the following command: vzctl set 101 --swap 1G --save In the preceding example, we set the swap memory limit for the container to 1 GB using the --swap flag. Summary In this article, we learned to administer the containers that are created on the node by using the vzctl command, and the vzlist command to list containers on the server. The vzctl command has a broad range of flags that can be given to it to allow you to perform many actions on a container. It allows you to start, stop, restart, create, and destroy a container. You can also suspend and resume the current state of the container, mount and unmount a container, and issue changes to the container's config file by using vzctl set. Resources for Article: Further resources on this subject: Basic Concepts of Proxmox Virtual Environment [article] A Virtual Machine for a Virtual World [article] Backups in the VMware View Infrastructure [article]

Making Large-Scale LED Art with FadeCandy

Michael Ang
10 Nov 2014
6 min read
Building projects with programmable LEDs can be very satisfying. With a few lines of code, you can create an awesome animated pattern of light. If you've programmed with an Arduino, you certainly remember the first time you got an LED to blink! From there, you probably wanted to go larger. Arduino is an excellent platform for projects with a small number of LEDs, but at a certain point the small micro-controller reaches its limit. Ardent Mobile Cloud Platform, the project that sparked FadeCandy. Photo by Aaron Muszalski, used under CC-BY. FadeCandy is an alternative way to drive potentially thousands of LEDs, and is an excellent way to go when you have a project larger than Arduino can handle. FadeCandy also provides sophisticated techniques to make your animations buttery smooth—very important when you aren't going for a "chunky" look with your lighting. A typical problem when using LEDs is getting a smooth fade to off or a low brightness. Usually there's a pronounced "stair step" or chunkiness as the LED approaches minimum brightness. The FadeCandy board uses dithering and interpolation to smooth between color values, giving you more nuanced color and smoother animation. With FadeCandy, your light palette can now include "subtle and smooth" as well as "blinky and bright." Two FadeCandy boards The FadeCandy board connects to your computer over USB and can drive up to 8 strips of 64 LEDs. FadeCandy uses the popular WS281x RGB LEDs, which are available online, for example, NeoPixels from Adafruit. Multiple boards can be connected to the host computer, and with 512 LEDs per board, you can create quite large light projects! The host computer runs a piece of software called the FadeCandy server (fcserver). The program (called the "client") that creates the light pattern is separate from the server and can be written in a variety of different programming languages. For example, you can write your animation program in Processing and your Processing sketch will send the colors for the pixels to the FadeCandy server, which sends the data over USB to the FadeCandy hardware boards. It's also possible to make a web page that connects to the FadeCandy server, or to use Python or Node.js. This flexibility means that you can use a powerful desktop programming language that supports, for example, video playback or camera processing. The downside is that you need a host computer with a USB to drive the FadeCandy hardware boards, but the host computer could be a small one, such as the Raspberry Pi.   FadeCandy connected to computer via USB and an LED strip via a breadboard. The LED strip is connected to a separate 5V power supply (not shown). When using a small number of LEDs with an Arduino, you can get away with powering the LEDs from the same power supply as the Arduino. Since FadeCandy is designed to use a large number of LEDs at once, you'll need a separate power supply to power the LEDs (you can't just power them from the USB connection). This is actually a good thing, since it makes you think about providing enough juice to run all the LEDs. For a full guide on setting up a FadeCandy board and software, I recommend the in-depth tutorial LED Art with FadeCandy. Once you have the hardware set up, there are two pieces of software you need to run. The first is the FadeCandy server (fcserver). The server connects to the FadeCandy boards over USB and listens for clients to connect over the network. 
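The article notes that clients can be written in Processing, Python, or Node.js. To make the server/client split concrete, here is a rough Python sketch of the client side, not from the original article; it assumes fcserver is running locally on its usual port 7890, speaking the Open Pixel Control protocol, with a single 64-LED strip mapped to channel 0. It fades the whole strip from black to white:

import socket
import struct
import time

HOST, PORT = "127.0.0.1", 7890   # assumed default fcserver address
NUM_LEDS = 64                    # one strip on channel 0

def send_frame(sock, pixels):
    # Open Pixel Control message: channel, command 0 (set pixel colors),
    # 16-bit big-endian data length, then one RGB byte triple per LED
    data = bytes(value for rgb in pixels for value in rgb)
    sock.sendall(struct.pack(">BBH", 0, 0, len(data)) + data)

with socket.create_connection((HOST, PORT)) as sock:
    for level in range(256):
        send_frame(sock, [(level, level, level)] * NUM_LEDS)
        time.sleep(0.01)         # roughly 100 frames per second

Anything that can open a TCP socket and pack a few bytes can therefore act as a FadeCandy client, which is exactly the flexibility described next.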
The client software is responsible for generating the pixels that you want to show and then passing this data to the server. The client software is where you create your fancy animation, handle user interaction, analyze audio, or do whatever processing is needed to generate the colors for your LEDs. Client program in Processing Let's look at one of the examples included with the FadeCandy source code. The example is written in Processing and plays back an animation of fire on a strip of 64 LEDs. The Processing code loads a picture of flames and scrolls the image vertically over the strip of LEDs. The white dots in the screenshot represent the LEDs in the strip—at each of the dots, the color from the Processing sketch is sampled and sent to the corresponding LED. The nice thing about using Processing to make the animation is that it's easy to load an image or perform other more complicated operations and we get to see what's happening onscreen. With the large amount of memory and CPU on a laptop computer, we could also load a video of a fire burning and have that show on the LEDs. The light from the LEDs in this example is quite pleasing. Projected on a wall or other surface, the light ripples smoothly. It's possible to turn off the dithering and interpolation that FadeCandy provides (for example, with this Python config utility) and you can see that these techniques do lead to smoother animation, especially at lower brightness levels. With the dithering and interpolation turned on, the motion is more fluid, giving more of an illusion of continuous movement rather than individual LEDs changing. The choice of using FadeCandy or Arduino to control LEDs comes down largely to a question of scale. For projects using a small number of LEDs, using an Arduino makes it easy to make the project standalone and run on battery power. For example, in my Chrysalis light sculpture, I use an Arduino to drive 32 LEDs interpolating between colors from an image. I was able to fit the image into the onboard memory of the Arduino by making it quite small (31x16 RGB pixels, for a grand total of 1,488 bytes). Getting smooth fading with 32 LEDs on an Arduino is certainly possible, but using hundreds of LEDs would be out of the question. FadeCandy-driven LEDs in a Polycon light sculpture. FadeCandy was designed for projects that are too big to fit on a single Arduino. Where the total memory on an Arduino is measured in kilobytes, the RAM on a Raspberry Pi is hundreds of megabytes, and on a laptop you're talking gigabytes. You can use the processing power of your laptop (or single board computer) to analyze audio, play back video, or do heavy computation that would be hard on a microcontroller. By providing easy interfacing and smooth fading, FadeCandy really opens up what is possible for artistic expression with programmable lighting. I for one welcome the new age of buttery smooth LED light art! FadeCandy is a project by Micah Elizabeth Scott produced in collaboration with Adafruit. About the author: Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit that bridges the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is tech and artists whose medium is technology. Into Arduino? Check out our Arduino page for our newest and most popular releases. 

Creating a simple plugin for MelonJS games

Ellison Leao
10 Nov 2014
4 min read
If you are not familiar with the great MelonJS game framework, please go to their official page and read about the great things you can do with this awesome tool. In this post I will teach you how to create a simple plugin to use in your MelonJS game. First, you need to understand the plugin structure: (function($) { myPlugin = me.plugin.Base.extend({ // minimum melonJS version expected version : "1.0.0", init : function() { // call the parent constructor this.parent(); this.myVar = null; }, }); })(window); As you can see, there are no real difficulties in creating new plugins. You just have to create a new class inheriting from the me.plugin.Base class, passing the minimum melonJS version this plugin will use. If you need to persist some variables, you can override the init method, just like in the code above. For this plugin I will create a Clay.io leaderboard integration for a game. The code for the plugin is as follows: /* * MelonJS Game Engine * Copyright (C) 2011 - 2013, Olivier Biot, Jason Oster * http://www.melonjs.org * * Clay.io API plugin * */ (function($) { /** * @class * @public * @extends me.plugin.Base * @memberOf me * @constructor */ Clayio = me.plugin.Base.extend({ // minimum melonJS version expected version : "1.0.0", gameKey: null, _leaderboard: null, init : function(gameKey, options) { // call the parent constructor this.parent(); this.gameKey = gameKey; Clay = {}; Clay.gameKey = this.gameKey; Clay.readyFunctions = []; Clay.ready = function( fn ) { Clay.readyFunctions.push( fn ); }; if (options === undefined) { options = { debug: false, hideUI: false } } Clay.options = { debug: options.debug === undefined ? false: options.debug, hideUI: options.hideUI === undefined ? false: options.hideUI, fail: options.fail } window.onload = function() { var clay = document.createElement("script"); clay.async = true; clay.src = ( "https:" == document.location.protocol ? "https://" : "http://" ) + "cdn.clay.io/api.js"; var tag = document.getElementsByTagName("script")[0]; tag.parentNode.insertBefore(clay, tag); } }, leaderboard: function(id, score, callback) { if (!id) { throw "You must pass a leaderboard id"; } // we can get the score directly from game.data.score if (!score){ score = game.data.score; } var leaderboard = new Clay.Leaderboard({id: id}); this._leaderboard = leaderboard; if (callback) { this._leaderboard.post({score: score}, callback); } else { this._leaderboard.post({score: score}); } }, showLeaderBoard: function(id, options, callback) { if (!options){ options = {}; } if (options.limit === undefined){ options.limit = 10; } if (!this._leaderboard) { if (id === undefined) { throw "The leaderboard was not defined before. You must pass a leaderboard id"; } var leaderboard = new Clay.Leaderboard({id: id}); this._leaderboard = leaderboard; } this._leaderboard.show(options, callback); } }); })(window); Let me explain how all of this works: The init method will receive your Clay.io gamekey and initialize the Clay.io API file asynchronously. The leaderboard method receives a Clay.io leaderboard ID and a score value. Then, the method creates a leaderboard instance and adds the passed score to the Clay.io leaderboard, forwarding the callback if one was given. If no ID is passed, the method throws an error. The showLeaderBoard method shows the Clay.io leaderboard modal on the screen. If you previously called the leaderboard method, there is no need to pass the leaderboard ID again. To use this plugin in your game, first register the plugin in your game.js file. 
In the game.onload method, add the following code: me.plugin.register.defer(this, Clayio, "clay"); Due to a Clay.io bug you need to add the socket.io.js script into the index.html file manually. Place the following code into the file's <head>: <script src='http://api.clay.io/socket.io/socket.io.js'></script> Now, if you want to call the leaderboard method, just add the following code into your scene: me.plugin.clay.leaderboard(leaderboardId); And that's it! I hope I've shown you how easy it is to create plugins for MelonJS. About The Author Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.

How to Deploy a Blog with Ghost and Docker

Felix Rabe
07 Nov 2014
6 min read
2013 gave birth to two wonderful Open Source projects: Ghost and Docker. This post will show you what the buzz is all about, and how you can use them together. So what are Ghost and Docker, exactly? Ghost is an exciting new blogging platform, written in JavaScript running on Node.js. It features a simple and modern user experience, as well as very transparent and accessible developer communications. This blog post covers Ghost 0.4.2. Docker is a very useful new development tool to package applications together with their dependencies for automated and portable deployment. It is based on Linux Containers (lxc) for lightweight virtualization, and AUFS for filesystem layering. This blog post covers Docker 1.1.2. Install Docker If you are on Windows or Mac OS X, the easiest way to get started using Docker is Boot2Docker. For Linux and more in-depth instructions, consult one of the Docker installation guides. Go ahead and install Docker via one of the above links, then come back and run: docker version You run this in your terminal to verify your installation. If you get about eight lines of detailed version information, the installation was successful. Just running docker will provide you with a list of commands, and docker help <command> will show a command's usage. If you use Boot2Docker, remember to export DOCKER_HOST=tcp://192.168.59.103:2375. Now, to get the Ubuntu 14.04 base image downloaded (which we'll use in the next sections), run the following command: docker run --rm ubuntu:14.04 /bin/true This will take a while, but only for the first time. There are many more Docker images available at the Docker Hub Registry. Hello Docker To give you a quick glimpse into what Docker can do for you, run the following command: docker run --rm ubuntu:14.04 /bin/echo Hello Docker This runs /bin/echo Hello Docker in its own virtual Ubuntu 14.04 environment, but since it uses Linux Containers instead of booting a complete operating system in a virtual machine, this only takes less than a second to complete. Pretty sweet, huh? To run Bash, provide the -ti flags for interactivity: docker run --rm -ti ubuntu:14.04 /bin/bash The --rm flag makes sure that the container gets removed after use, so any files you create in that Bash session get removed after logging out. For more details, see the Docker Run Reference. Build the Ghost image In the previous section, you've run the ubuntu:14.04 image. In this section, we'll build an image for Ghost that we can then use to quickly launch a new Ghost container. While you could get a pre-made Ghost Docker image, for the sake of learning, we'll build our own. About the terminology: A Docker image is analogous to a program stored on disk, while a Docker container is analogous to a process running in memory. Now create a new directory, such as docker-ghost, with the following files — you can also find them in this Gist on GitHub: package.json: {} This is the bare minimum actually required, and will be expanded with the current Ghost dependency by the Dockerfile command npm install --save ghost when building the Docker image. server.js: #!/usr/bin/env node var ghost = require('ghost'); ghost({ config: __dirname + '/config.js' }); This is all that is required to use Ghost as an NPM module. config.js: config = require('./node_modules/ghost/config.example.js'); config.development.server.host = '0.0.0.0'; config.production.server.host = '0.0.0.0'; module.exports = config; This will make the Ghost server accessible from outside of the Docker container. 
Dockerfile: # DOCKER-VERSION 1.1.2 FROM ubuntu:14.04 # Speed up apt-get according to https://gist.github.com/jpetazzo/6127116 RUN echo "force-unsafe-io" > /etc/dpkg/dpkg.cfg.d/02apt-speedup RUN echo "Acquire::http {No-Cache=True;};" > /etc/apt/apt.conf.d/no-cache # Update the distribution ENV DEBIAN_FRONTEND noninteractive RUN apt-get update RUN apt-get upgrade -y # https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager RUN apt-get install -y software-properties-common RUN add-apt-repository -y ppa:chris-lea/node.js RUN apt-get update RUN apt-get install -y python-software-properties python g++ make nodejs git # git needed by 'npm install' ADD . /src RUN cd /src; npm install --save ghost ENTRYPOINT ["node", "/src/server.js"] # Override ubuntu:14.04 CMD directive: CMD [] EXPOSE 2368 This Dockerfile will create a Docker image with Node.js and the dependencies needed to build the Ghost NPM module, and prepare Ghost to be run via Docker. See Documentation for details on the syntax. Now build the Ghost image using: cd docker-ghost docker build -t ghost-image . This will take a while, but you might have to Ctrl-C and re-run the command if, for more than a couple of minutes, you are stuck at the following step: > node-pre-gyp install --fallback-to-build Run Ghost Now start the Ghost container: docker run --name ghost-container -d -p 2368:2368 ghost-image If you run Boot2Docker, you'll have to figure out its IP address: boot2docker ip Usually, that's 192.168.59.103, so by going to http://192.168.59.103:2368, you will see your fresh new Ghost blog. Yay! For the admin interface, go to http://192.168.59.103:2368/ghost. Manage the Ghost container The following commands will come in handy to manage the Ghost container: # Show all running containers: docker ps -a # Show the container logs: docker logs [-f] ghost-container # Stop Ghost via a simulated Ctrl-C: docker kill -s INT ghost-container # After killing Ghost, this will restart it: docker start ghost-container # Remove the container AND THE DATA (!): docker rm ghost-container What you'll want to do next Some steps that are outside the scope of this post, but some steps that you might want to pursue next, are: Copy and change the Ghost configuration that currently resides in node_modules/ghost/config.js. Move the Ghost content directory into a separate Docker volume to allow for upgrades and data backups. Deploy the Ghost image to production on your public server at your hosting provider. Also, you might want to change the Ghost configuration to match your domain and change the port to 80. How I use Ghost with Docker I run Ghost in Docker successfully over at Named Data Education, a new blog about Named Data Networking. I like the fact that I can replicate an isolated setup identically on that server as well as on my own laptop. Ghost resources Official docs: The Ghost Guide, and the FAQ- / How-To-like User Guide. How To Install Ghost, Ghost for Beginners and All About Ghost are a collection of sites that provide more in-depth material on operating a Ghost blog. By the same guys: All Ghost Themes. Ghost themes on ThemeForest is also a great collection of themes. Docker resources The official documentation provides many guides and references. Docker volumes are explained here and in this post by Michael Crosby. About the Author Felix Rabe has been programming and working with different technologies and companies at different levels since 1993. 
Currently he is researching and promoting Named Data Networking (http://named-data.net/), an evolution of the Internet architecture that currently relies on the host-bound Internet Protocol. You can find our very best Docker content on our dedicated Docker page. Whatever you do with software, Docker will help you do it better.
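As a small, hedged companion to the Run Ghost section above: once the container is started, you can script a quick check that the blog is actually answering before you point DNS or a reverse proxy at it. The Python sketch below is not part of the original post; it uses only the standard library and assumes the container publishes port 2368 on the Docker host (for Boot2Docker that host was 192.168.59.103 in the example above; adjust the URL for your setup):

import sys
import time
import urllib.request

URL = "http://192.168.59.103:2368/"   # replace with your Docker host IP or domain

def wait_for_ghost(url, attempts=30, delay=2):
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    print("Ghost is answering at", url)
                    return True
        except OSError:
            pass                      # still booting, or port not reachable yet
        time.sleep(delay)
    return False

if not wait_for_ghost(URL):
    sys.exit("Ghost did not come up; check 'docker logs ghost-container'")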

Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
07 Nov 2014
11 min read
Part 1: Getting up and running with Middleman Many of today's most prominent web frameworks, such as Ruby on Rails, Django, WordPress, Drupal, Express, and Spring MVC, rely on a server-side language to process HTTP requests, query data at runtime, and serve back dynamically constructed HTML. These platforms are great, yet developers of dynamic web applications often face complex performance challenges under heavy user traffic, independent of the underlying technology. High traffic, and frequent requests, may exploit processing-intensive code or network latency, in effect yielding a poor user experience or production outage. Static site generators such as Middleman, Jekyll, and Wintersmith offer developers an elegant, highly scalable alternative to complex, dynamic web applications. Such tools perform dynamic processing and HTML construction during build time rather than runtime. These tools produce a directory of static HTML, CSS, and JavaScript files that can be deployed directly to a web server such as Nginx or Apache. This architecture reduces complexity and encourages a sensible separation of concerns; if necessary, user-specific customization can be handled via client-side interaction with third-party satellite services. In this three-part series, we'll walk through how to get started in developing a Middleman site, some basics of Middleman blogging, how to migrate content from an existing WordPress blog, and how to deploy a Middleman blog to production. We will also learn how to create automated tests, continuous integration, and automated deployments. In this part, we'll cover the following: Creating a basic Middleman project Middleman configuration basics A quick overview of the Middleman template system Creating a basic Middleman blog Why should you use Middleman? Middleman is a mature, full-featured static site generator. It supports a strong templating system, numerous Ruby-based HTML templating tools such as ERb and HAML, as well as a Sprockets-based asset pipeline used to manage CSS, JavaScript, and third-party client-side code. Middleman also integrates well with CoffeeScript, SASS, and Compass. Environment For this tutorial, I'm using an RVM-installed Ruby 2.1.2. I'm on Mac OSX 10.9.4. Installing Middleman Install the middleman gem: $ gem install middleman Create a basic Middleman project called middleman-demo: $ middleman init middleman-demo This results in a middleman-demo directory with the following layout:
├── Gemfile
├── Gemfile.lock
├── config.rb
└── source
    ├── images
    │   ├── background.png
    │   └── middleman.png
    ├── index.html.erb
    ├── javascripts
    │   └── all.js
    ├── layouts
    │   └── layout.erb
    └── stylesheets
        ├── all.css
        └── normalize.css
There are 5 directories and 10 files. A quick tour Here are a few notes on the middleman-demo layout: The Ruby Gemfile cites Ruby gem dependencies; Gemfile.lock cites the full dependency chain, including middleman-demo's dependencies' dependencies. The config.rb houses middleman-demo's configuration. The source directory houses middleman-demo's source code: the templates, style sheets, images, JavaScript, and other source files required by the middleman-demo site. While a Middleman production build is simply a directory of static HTML, CSS, JavaScript, and image files, Middleman sites can be run via a simple web server in development. Run the middleman-demo development server: $ middleman Now, the middleman-demo site can be viewed in your web browser at http://localhost:4567. 
Set up live-reloading Middleman comes with the middleman-livereload gem. The gem detects source code changes and automatically reloads the Middleman app. Activate middleman-livereload by uncommenting the following code in config.rb: # Reload the browser automatically whenever files change configure :development do activate :livereload end Restart the Middleman server to allow the configuration change to take effect. Now, middleman-demo should automatically reload on changes to config.rb, and your web browser should automatically refresh when you edit the source/* code. Customize the site's appearance Middleman offers a mature HTML templating system. The source/layouts directory contains layouts, the common HTML surrounding individual pages and shared across your site. middleman-demo uses ERb as its template language, though Middleman supports other options such as HAML and Slim. Also note that Middleman supports the ability to embed metadata within templates via frontmatter. Frontmatter allows page-specific variables to be embedded via YAML or JSON. These variables are available in a current_page.data namespace. For example, source/index.html.erb contains the following frontmatter specifying a title; it's available to ERb templates as current_page.data.title: --- title: Welcome to Middleman --- Currently, middleman-demo is a default Middleman installation. Let's customize things a bit. First, remove all the contents of source/stylesheets/all.css to remove the default Middleman styles. Next, edit source/index.html.erb to be the following: --- title: Welcome to Middleman Demo --- <h1>Middleman Demo</h1> When viewing middleman-demo at http://localhost:4567, you'll now see a largely unstyled HTML document with a single Middleman Demo heading. Install the middleman-blog plugin The middleman-blog plugin offers blog functionality to middleman applications. We'll use middleman-blog in middleman-demo. Add the middleman-blog version 3.5.3 gem dependency to middleman-demo by adding the following to the Gemfile: gem "middleman-blog", "3.5.3" Re-install the middleman-demo gem dependencies, which now include middleman-blog: $ bundle install Activate middleman-blog and specify a URL pattern at which to serve blog posts by adding the following to config.rb: activate :blog do |blog| blog.prefix = "blog" blog.permalink = "{year}/{month}/{day}/{title}.html" end Write a quick blog post Now that all has been configured, let's write a quick blog post to confirm that middleman-blog works. First, create a directory to house the blog posts: $ mkdir source/blog The source/blog directory will house markdown files containing blog post content and any necessary metadata. These markdown files highlight a key feature of middleman: rather than query a relational database within which content is stored, a middleman application typically reads data from flat files, simple text files (usually markdown) stored within the site's source code repository. Create a markdown file for middleman-demo's first post: $ touch source/blog/2014-08-20-new-blog.markdown Next, add the required frontmatter and content to source/blog/2014-08-20-new-blog.markdown: --- title: New Blog date: 2014/08/20 tags: middleman, blog --- Hello world from Middleman! Features Rich templating system Built-in helpers Easy configuration Asset pipeline Lots more Note that the content is authored in markdown, a plain text syntax, which is evaluated by Middleman as HTML. You can also embed HTML directly in the markdown post files. 
GitHub's documentation provides a good overview of markdown. Next, add the following ERb template code to source/index.html.erb to display a list of blog posts on middleman-demo's home page: <ul> <% blog.articles.each do |article| %> <li> <%= link_to article.title, article.path %> </li> <% end %> </ul> Now, when running middleman-demo and visiting http://localhost:4567, a link to the new blog post is listed on middleman-demo's home page. Clicking the link renders the permalink for the New Blog blog post at blog/2014/08/20/new-blog.html, as is specified in the blog configuration in config.rb. A few notes on the template code Note the use of a link_to method. This is a built-in middleman template helper. Middleman provides template helpers to simplify many common template tasks, such as rendering an anchor tag. In this case, we pass the link_to method two arguments, the intended anchor tag text and the intended href value. In turn, link_to generates the necessary HTML. Also note the use of a blog variable, within which the articles method houses an array of all blog posts. Where did this come from? middleman-demo is an instance of Middleman::Application; blog is a method on this instance. To explore other Middleman::Application methods, open middleman-demo via the built-in Middleman console by entering the following in your terminal: $ middleman console To view all the methods on the blog object, including the aforementioned articles method, enter the following within the console: 2.1.2 :001 > blog.methods To view all the additional methods, beyond the blog, available to the Middleman::Application instance, enter the following within the console: 2.1.2 :001 > self.methods More can be read about all these methods on Middleman::Application's rdoc.info class documentation. Cleaner URLs Note that the current new blog URL ends in .html. Let's customize middleman-demo to omit .html from URLs. Add the following to config.rb: activate :directory_indexes Now, rather than generating files such as /blog/2014/08/20/new-blog.html, middleman-demo generates files such as /blog/2014/08/20/new-blog/index.html, thus enabling the page to be served by most web servers at a /blog/2014/08/20/new-blog/ path. Adjusting the templates Let's adjust our middleman-demo ERb templates a bit. First, note that <h1>Middleman Demo</h1> only displays on the home page; let's make it render on all of the site's pages. Move <h1>Middleman Demo</h1> from source/index.html.erb to source/layouts/layout.erb. Put it just inside the <body> tag: <body class="<%= page_classes %>"> <h1>Middleman Demo</h1> <%= yield %> </body> Next, let's create a custom blog post template. Create the template file: $ touch source/layouts/post.erb Add the following to extend the site-wide functionality of source/layouts/layout.erb to source/layouts/post.erb: <% wrap_layout :layout do %> <h2><%= current_article.title %></h2> <p>Posted <%= current_article.date.strftime('%B %e, %Y') %></p> <%= yield %> <ul> <% current_article.tags.each do |tag| %> <li><a href="/blog/tags/<%= tag %>/"><%= tag %></a></li> <% end %> </ul> <% end %> Note the use of the wrap_layout ERb helper. It takes two arguments. The first is the name of the layout to wrap, in this case :layout. The second argument is a Ruby block; the contents of the block are evaluated within the <%= yield %> call of source/layouts/layout.erb. 
Next, instruct  middleman-demo  to use  source/layouts/post.erb  in serving blog posts by adding the necessary configuration to  config.rb : page "blog/*", :layout => :post Now, when restarting the Middleman server and visiting  http://localhost:4567/blog/2014/08/20/new-blog/,  middleman-demo renders a more comprehensive blog template that includes the post’s title, date published, and tags. Let’s add a simple template to render a tags page that lists relevant tagged content. First, create the template: $ touch source/tag.html.erb And add the necessary ERb to list the relevant posts assigned a given tag: <h2>Posts tagged <%= tagname %></h2> <ul> <% page_articles.each do |post| %> <li> <a href="<%= post.url %>"><%= post.title %></a> </li> <% end %> </ul> Specify the blog’s tag template by editing the blog configuration in config.rb: activate :blog do |blog| blog.prefix = 'blog' blog.permalink = "{year}/{month}/{day}/{title}.html" # tag template: blog.tag_template = "tag.html" end Edit config.rb to configure middleman-demo’s tag template to use source/layout.erb rather than source/post.erb: page "blog/tags/*", :layout => :layout Now, when visiting http://localhost:4567/2014/08/20/new-blog/, you should see a linked list of New Blog’s tags. Clicking a tag should correctly render the tags page. Part 1 recap Thus far, middleman-demo serves as a basic Middleman-based blog example. It demonstrates Middleman templating, how to set up the middleman-blog  plugin, and how to make author markdown-based blog posts in Middleman. In part 2, we’ll cover migrating content from an existing Wordpress blog. We’ll also step through establishing an Amazon S3 bucket, building middleman-demo, and deploying to production. In part 3, we’ll cover how to create automated tests, continuous integration, and automated deployments. About this author Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.

Configuring Distributed Rails Applications with Chef: Part 2

Rahmal Conda
07 Nov 2014
9 min read
In my Part 1 post, I gave you the low down about Chef. I covered what it’s for and what it’s capable of. Now let’s get into some real code and take a look at how you install and run Chef Solo and Chef Server. What we want to accomplish First let’s make a list of some goals. What are we trying to get out of deploying and provisioning with Chef? Once we have it set up, provisioning a new server should be simple; no more than a few simple commands. We want it to be platform-agnostic so we can deploy any VPS provider we choose with the same scripts. We want it to be easy to follow and understand. Any new developer coming later should have no problem figuring out what’s going on. We want the server to be nearly automated. It should take care of itself as much as possible, and alert us if anything goes wrong. Before we start, let’s decide on a stack. You should feel free to run any stack you choose. This is just what I’m using for this post setup: Ubuntu 12.04 LTS RVM Ruby 1.9.3+ Rails 3.2+ Postgres 9.3+ Redis 3.1+ Chef Git Now that we’ve got that out of the way, let’s get started! Step 1: Install the tools First, make sure that all of the packages we download to our VPS are up to date: ~$ sudo apt-get update Next, we'll install RVM (Ruby Version Manager). RVM is a great tool for installing Ruby. It allows you to use several versions of Ruby on one server. Don't get ahead of yourself though; at this point, we only care about one version. To install RVM, we’ll need curl: ~$ sudo apt-get install curl We also need to install Git. Git is an open source distributed version control system, primarily used to maintain software projects. (If you didn't know that much, you're probably reading the wrong post. But I digress!): ~$ sudo apt-get install git Now install RVM with this curl command: ~$ curl -sSL https://get.rvm.io | bash -s stable You’ll need to source RVM (you can add this to your bash profile): ~$ source ~/.rvm/scripts/rvm In order for it to work, RVM has some of its own dependencies that need to be installed. To automatically install them, use the following command: ~$ rvm requirements Once we have RVM set up, installing Ruby is simple: ~$ rvm install 1.9.3 Ruby 1.9.3 is now installed! Since we'll be accessing it through a tool that can potentially have a variety of Ruby versions loaded, we need to tell the system to use this version as the default: ~$ rvm use 1.9.3 --default Next we'll make sure that we can install any Ruby Gem we need into this new environment. We'll stick with RVM for installing gems as well. This'll ensure they get loaded into our Ruby version properly. Run this command: ~$ rvm rubygems current Don’t worry if it seems like you’re setting up a lot of things manually now. Once Chef is set up, all of this will be part of your cookbooks, so you’ll only have to do this once. Step 2: Install Chef and friends First, we'll start off by cloning the Opscode Chef repository: ~$ git clone git://github.com/opscode/chef-repo.git chef With Ruby and RubyGems set up, we can install some gems! We’ll start with a gem called Librarian-Chef. Librarian-Chef is sort of a Rails Bundler for Chef cookbooks. It'll download and manage cookbooks that you specify in Cheffile. Many useful cookbooks are published by different sources within the Chef community. You'll want to make use of them as you build out your own Chef environment. 
~$ gem install librarian-chef  Initialize Librarian in your Chef repository with this command: ~$ cd chef ~/chef$ librarian-chef init This command will create a Cheffile in your Chef repository. All of your dependencies should be specified in that file. To deploy the stack we just built, your Cheffile should look like this: 1 site 'http://community.opscode.com/api/v1' 2 cookbook 'sudo' 3 cookbook 'apt' 4 cookbook 'user' 5 cookbook 'git' 6 cookbook 'rvm' 7 cookbook 'postgresql' 8 cookbook 'rails' ~ Now use Librarian to pull these community cookbooks: ~/chef$ librarian-chef install Librarian will pull the cookbooks you specify, along with their dependencies, to the cookbooks folder and create a Cheffile.lock file. Commit both Cheffile and Cheffile.lock to your repo: ~/chef$ git add Cheffile Cheffile.lock ~/chef$ git commit -m “updated cookbooks list” There is no need to commit the cookbooks folder, because you can always use the install command and Librarian will pull the same group of cookbooks with the correct versions. You should not touch the cookbooks folder—let Librarian manage it for you. Librarian will overwrite any changes you make inside that folder. If you want to manually create and manage cookbooks, outside of Librarian, add a new folder, like local-cookbooks, for instance. Step 3: Cooking up somethin’ good! Now that you see how to get the cookbooks, you can create your roles. You use roles to determine what role a server instance would have in you server stack, and you specify what that role would need. For instance, your Database Server role would most likely need a Postgresql server (or you DB of choice), a DB client, user authorization and management, while your Web Server would need Apache (or Nginx), Unicorn, Passenger, and so on. You can also make base roles, to have a basic provision that all your servers would have. Given what we’ve installed so far, our basic configuration might look something like this: name "base" description "Basic configuration for all nodes" run_list( 'recipe[git]', 'recipe[sudo]', 'recipe[apt]', 'recipe[rvm::user]', 'recipe[postgresql::client]' ) override_attributes( authorization: { sudo: { users: ['ubuntu'], passwordless: true } }, rvm: { rubies: ['ruby-1.9.3-p125'], default_ruby: 'ruby-1.9.3-p125', global_gems: ['bundler', 'rake'] } ) ~ Deploying locally with Chef Solo: Chef Solo is a Ruby gem that runs a self-contained Chef instance. Solo is great for running your recipes locally to test them, or to provision development machines. If you don’t have a hosted Chef Server set up, you can use Chef Solo to set up remote servers too. If your architecture is still pretty small, this might be just what you need. We need to create a Chef configuration file, so we’ll call it deploy.rb: root = File.absolute_path(File.dirname(__FILE__)) roles = File.join(root, 'cookbooks') books = File.join(root, 'roles') file_cache_path root cookbook_path books role_path roles ~ We’ll also need a JSON-formatted configuration file. Let’s call this one deploy.json: { "run_list": ["recipe[base]"] } ~ Now run Chef with this command: ~/chef$ sudo chef-solo -j deploy.json -c deploy.rb Deploying to a new Amazon EC2 instance: You’ll need the Chef server for this step. First you need to create a new VPS instance for your Chef server and configure it with a static IP or a domain name, if possible. We won’t go through that here, but you can find instructions for setting up a server instance on EC2 with a public IP and configuring a domain name in the documentation for your VPS. 
Once you have your server instance set up, SSH onto the instance and install Chef server. Start by downloading the .deb package using the wget tool: ~$ wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.0.10-1.ubuntu.12.04_amd64.deb Once the .deb package has downloaded, install Chef server like so: ~$ sudo dpkg -i chef-server* When it completes, it will print to the screen an instruction that you need to run this next command to actually configure the service for your specific machine. This command will configure everything automatically: ~$ sudo chef-server-ctl reconfigure Once the configuration step is complete, the Chef server should be up and running. You can access the web interface immediately by browsing to your server's domain name or IP address. Now that you've got Chef up and running, install the knife EC2 plugin. This will also install the knife gem as a dependency: ~$ gem install knife-ec2 You now have everything you need! So create another VPS to provision with Chef. Once you do that, you'll need to copy your SSH keys over: ~$ ssh-copy-id root@yourserverip You can finally provision your server! Start by installing Chef on your new machine: ~$ knife solo prepare root@yourserverip This will generate a file, nodes/yourserverip.json. You need to change this file to add your own environment settings. For instance, you will need to add a username and password for monit. You will also need to add a password for postgresql to the file. Run the openssl command again to create a password for postgresql. Take the generated password, and add it to the file. Now, you can finally provision your server! Start the Chef command: ~$ knife solo cook root@yourserverip Now just sit back, relax and watch Chef cook up your tasty app server. This process may take a while. But once it completes, you'll have a server ready for Rails, Postgres, and Redis! I hope these posts helped you get an idea of how much Chef can simplify your life and your deployments. Here are a few links with more information and references about Chef: Chef community site: http://cookbooks.opscode.com/ Chef Wiki: https://wiki.opscode.com/display/chef/Home Chef Supermarket: https://community.opscode.com/cookbooks?utf8=%E2%9C%93&q=user Chef cookbooks for busy Ruby developers: http://teohm.com/blog/2013/04/17/chef-cookbooks-for-busy-ruby-developers/ Deploying Rails apps with Chef and Capistrano: http://www.slideshare.net/SmartLogic/guided-exploration-deploying-rails-apps-with-chef-and-capistrano About the author Rahmal Conda is a Software Development Professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company. After that he knew it was the life he had been looking for. So he moved his family out west. Since then he's made a name for himself in the social space at some high profile Silicon Valley startups. Right now he's one of the co-founders and Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.

Alfresco Web Scripts

Packt
06 Nov 2014
15 min read
In this article by Ramesh Chauhan, the author of Learning Alfresco Web Scripts, we will cover the following topics: Reasons to use web scripts Executing a web script from standalone Java program Invoking a web script from Alfresco Share DeclarativeWebScript versus AbstractWebScript (For more resources related to this topic, see here.) Reasons to use web scripts It's now time to discover the answer to the next question—why web scripts? There are various alternate approaches available to interact with the Alfresco repository, such as CMIS, SOAP-based web services, and web scripts. Generally, web scripts are always chosen as a preferred option among developers and architects when it comes to interacting with the Alfresco repository from an external application. Let's take a look at the various reasons behind choosing a web script as an option instead of CMIS and SOAP-based web services. In comparison with CMIS, web scripts are explained as follows: In general, CMIS is a generic implementation, and it basically provides a common set of services to interact with any content repository. It does not attempt to incorporate the services that expose all features of each and every content repository. It basically tries to cover a basic common set of functionalities for interacting with any content repository and provide the services to access such functionalities. Alfresco provides an implementation of CMIS for interacting with the Alfresco repository. Having a common set of repository functionalities exposed using CMIS implementation, it may be possible that sometimes CMIS will not do everything that you are aiming to do when working with the Alfresco repository. While with web scripts, it will be possible to do the things you are planning to implement and access the Alfresco repository as required. Hence, one of the best alternatives is to use Alfresco web scripts in this case and develop custom APIs as required, using the Alfresco web scripts. Another important thing to note is, with the transaction support of web scripts, it is possible to perform a set of operations together in a web script, whereas in CMIS, there is a limitation for the transaction usage. It is possible to execute each operation individually, but it is not possible to execute a set of operations together in a single transaction as possible in web scripts. 
SOAP-based web services are not preferable for the following reasons: They take a long time to develop They depend on SOAP Heavier client-side requirements They need to maintain the resource directory Scalability is a challenge They only support XML In comparison, web scripts have the following properties: There are no complex specifications There is no dependency on SOAP There is no need to maintain the resource directory They are more scalable as there is no need to maintain session state They are a lightweight implementation They are simple and easy to develop They support multiple formats From a developer's point of view: They can be easily developed using any text editor No compilation is required when using a scripting language No server restarts are needed when using a scripting language No complex installations are required In essence: Web scripts are a REST-based and powerful option to interact with the Alfresco repository in comparison to the traditional SOAP-based web services and CMIS alternatives They provide RESTful access to the content residing in the Alfresco repository and provide uniform access to a wide range of client applications They are easy to develop and provide some of the most useful features such as no server restart, no compilations, no complex installations, and no need of a specific tool to develop them All these points make web scripts the most preferred choice among developers and architects when it comes to interacting with the Alfresco repository Executing a web script from a standalone Java program There are different options to invoke a web script from a Java program. Here, we will take a detailed walkthrough of the Apache Commons HttpClient API with code snippets to understand how a web script can be executed from a Java program, and will briefly mention some other alternatives that can also be used to invoke web scripts from Java programs. HttpClient One way of executing a web script is to invoke it using the org.apache.commons.httpclient.HttpClient API. This class is available in commons-httpclient-3.1.jar. Executing a web script with the HttpClient API also requires commons-logging-*.jar and commons-codec-*.jar as supporting JARs. These JARs are available at the tomcat\webapps\alfresco\WEB-INF\lib location inside your Alfresco installation directory. You will need to include them in the build path for your project. We will try to execute the hello world web script using HttpClient from a standalone Java program. While using HttpClient, here are the general steps you need to follow: Create a new instance of HttpClient. The next step is to create an instance of a method (we will use GetMethod). The URL needs to be passed in the constructor of the method. Set any arguments if required. Provide the authentication details if required. Ask HttpClient to execute the method. Read the response status code and response. Finally, release the connection. Understanding how to invoke a web script using HttpClient Let's take a look at the following code snippet, keeping the previously mentioned steps in mind. In order to test this, you can create a standalone Java program with a main method, put the following code snippet in the Java program, and then modify the web script URLs/credentials as required. Comments are provided in the following code snippet for you to easily correlate the previous steps with the code: // Create a new instance of HttpClient HttpClient objHttpClient = new HttpClient(); // Create a new method instance as required. Here it is GetMethod.
GetMethod objGetMethod = new GetMethod("http://localhost:8080/alfresco/service/helloworld"); // Set querystring parameters if required. objGetMethod.setQueryString(new NameValuePair[] { new NameValuePair("name", "Ramesh")}); // set the credentials if authentication is required. Credentials defaultcreds = new UsernamePasswordCredentials("admin","admin"); objHttpClient.getState().setCredentials(new AuthScope("localhost",8080, AuthScope.ANY_REALM), defaultcreds); try { // Now, execute the method using HttpClient. int statusCode = objHttpClient.executeMethod(objGetMethod); if (statusCode != HttpStatus.SC_OK) { System.err.println("Method invocation failed: " + objGetMethod.getStatusLine()); } // Read the response body. byte[] responseBody = objGetMethod.getResponseBody(); // Print the response body. System.out.println(new String(responseBody)); } catch (HttpException e) { System.err.println("Http exception: " + e.getMessage()); e.printStackTrace(); } catch (IOException e) { System.err.println("IO exception transport error: " + e.getMessage()); e.printStackTrace(); } finally { // Release the method connection. objGetMethod.releaseConnection(); } Note that the Apache Commons HttpClient is a legacy project now and is not being developed anymore. It has been replaced by the Apache HttpComponents project, in its HttpClient and HttpCore modules. We have used HttpClient from Apache Commons here to get an overall understanding. Some of the other options that you can use to invoke web scripts from a Java program are mentioned in subsequent sections. URLConnection One option to execute a web script from a Java program is to use java.net.URLConnection. For more details, you can refer to http://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html. Apache HttpComponents Another option to execute a web script from a Java program is to use Apache HttpComponents, the latest available APIs for HTTP communication. These components offer better performance and more flexibility and are available in httpclient-*.jar and httpcore-*.jar. These JARs are available at the tomcat\webapps\alfresco\WEB-INF\lib location inside your Alfresco installation directory. For more details, refer to https://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html to get an understanding of how to execute HTTP calls from a Java program. RestTemplate Another alternative would be to use org.springframework.web.client.RestTemplate, available in org.springframework.web-*.jar located at tomcat\webapps\alfresco\WEB-INF\lib inside your Alfresco installation directory. If you are using Alfresco Community 5, the RestTemplate class is available in spring-web-*.jar. Generally, RestTemplate is used in Spring-based services to make HTTP calls. Calling web scripts from Spring-based services If you need to invoke an Alfresco web script from Spring-based services, then you need to use RestTemplate to make the HTTP calls. This is the most commonly used technique to execute HTTP calls from Spring-based classes. In order to do this, the following are the steps to be performed.
The code snippets are also provided: Define RestTemplate in your Spring context file: <bean id="restTemplate" class="org.springframework.web.client.RestTemplate" /> In the Spring context file, inject restTemplate into your Spring class as shown in the following example: <bean id="httpCommService" class="com.test.HTTPCallService"> <property name="restTemplate" ref="restTemplate" /> </bean> In the Java class, define the setter method for restTemplate as follows: private RestTemplate restTemplate; public void setRestTemplate(RestTemplate restTemplate) {    this.restTemplate = restTemplate; } In order to invoke a web script that has an authentication level set as user authentication, you can use RestTemplate in your Java class as shown in the following code snippet. The following code snippet is an example of invoking the hello world web script using RestTemplate from a Spring-based service: // setup authentication String plainCredentials = "admin:admin"; byte[] plainCredBytes = plainCredentials.getBytes(); byte[] base64CredBytes = Base64.encodeBase64(plainCredBytes); String base64Credentials = new String(base64CredBytes); // setup request headers HttpHeaders reqHeaders = new HttpHeaders(); reqHeaders.add("Authorization", "Basic " + base64Credentials); HttpEntity<String> requestEntity = new HttpEntity<String>(reqHeaders); // Execute method ResponseEntity<String> responseEntity = restTemplate.exchange("http://localhost:8080/alfresco/service/helloworld?name=Ramesh", HttpMethod.GET, requestEntity, String.class); System.out.println("Response:" + responseEntity.getBody()); Invoking a web script from Alfresco Share When working on customizing Alfresco Share, you will need to make calls to Alfresco repository web scripts. In Alfresco Share, you can invoke repository web scripts from two places. One is at the component level, from the presentation web script's JavaScript controller, and the other is from client-side JavaScript. Calling a web script from the presentation web script JavaScript controller Alfresco Share renders the user interface using presentation web scripts. These presentation web scripts make calls to the repository web scripts to render the repository data. The repository web script is called before the component rendering file (for example, get.html.ftl) loads. In an out-of-the-box Alfresco installation, you should be able to see the components' presentation web scripts under tomcat\webapps\share\WEB-INF\classes\alfresco\site-webscripts. When developing a custom component, you will be required to write a presentation web script. A presentation web script will make a call to the repository web script. You can make a call to the repository web script as follows: var response = remote.call("url of web script as defined in description document"); var obj = eval('(' + response + ')'); In the preceding code snippet, we have used the out-of-the-box remote object to make a repository web script call. The important thing to notice is that we have to provide the URL of the web script as defined in the description document. There is no need to provide the initial part such as the host or port name, application name, and service path the way we do while calling a web script from a web browser. Once the response is received, it can be parsed using the eval function. In the out-of-the-box code of Alfresco Share, you can find the presentation web scripts invoking the repository web scripts, as we have seen in the previous code snippet.
For example, take a look at the main() method in the site-members.get.js file, which is available at the tomcat\webapps\share\components\site-members location inside your Alfresco installation directory. You can take a look at the other JavaScript controller implementations for out-of-the-box presentation web scripts, available at tomcat\webapps\share\WEB-INF\classes\alfresco\site-webscripts, making repository web script calls using the previously mentioned technique. When specifying the path to provide references to the out-of-the-box web scripts, it is mentioned starting with tomcat\webapps. This location is available in your Alfresco installation directory. Invoking a web script from client-side JavaScript A client-side JavaScript control file can be associated with components in Alfresco Share. If you need to make a repository web script call, you can do this from the client-side JavaScript control files, generally located at tomcat\webapps\share\components. There are different ways you can make a repository web script call using a YUI-based client-side JavaScript file. The following are some of the ways to invoke web scripts from client-side JavaScript files. A reference to the out-of-the-box Alfresco implementation is provided for each, so you can see its usage in practice: Alfresco.util.Ajax.request: Take a look at tomcat\webapps\share\components\console\groups.js and refer to the _removeUser function. Alfresco.util.Ajax.jsonRequest: Take a look at tomcat\webapps\share\components\documentlibrary\documentlist.js and refer to the onOptionSelect function. Alfresco.util.Ajax.jsonGet: To directly make a call to a GET web script, take a look at tomcat\webapps\share\components\console\groups.js and refer to the getParentGroups function. YAHOO.util.Connect.asyncRequest: Take a look at tomcat\webapps\share\components\documentlibrary\tree.js and refer to the _sortNodeChildren function. In alfresco.js, located at tomcat\webapps\share\js, a wrapper implementation of YAHOO.util.Connect.asyncRequest is provided, and the various methods we saw in the preceding list, Alfresco.util.Ajax.request, Alfresco.util.Ajax.jsonRequest, and Alfresco.util.Ajax.jsonGet, can be found in alfresco.js. Hence, the first three options in the previous list internally make their calls through YAHOO.util.Connect.asyncRequest (the last option in the list). Calling a web script from the command line Sometimes while working on your project, you might need to invoke a web script from a Linux machine, or create a shell script that invokes a web script. It is possible to invoke a web script from the command line using cURL, which is a valuable tool to use while working on web scripts. You can install cURL on Linux, Mac, or Windows and execute a web script from the command line. Refer to http://curl.haxx.se/ for more details on cURL. You will be required to install cURL first. On Linux, you can install cURL using apt-get. On Mac, you should be able to install cURL through MacPorts, and on Windows you can install it using Cygwin. Once cURL is installed, you can invoke a web script from the command line as follows: curl -u admin:admin "http://localhost:8080/alfresco/service/helloworld?name=Ramesh" This will display the web script response. DeclarativeWebScript versus AbstractWebScript The web script framework in Alfresco provides two different helper classes from which a Java-backed controller can be derived. It's important to understand the difference between them.
The first helper class is the one we used while developing the web script in this article, org.springframework.extensions.webscripts.DeclarativeWebScript. The second one is org.springframework.extensions.webscripts.AbstractWebScript. DeclarativeWebScript in turn extends the AbstractWebScript class. If the Java-backed controller is derived from DeclarativeWebScript, then execution assistance is provided by the DeclarativeWebScript class. This helper class basically encapsulates the execution of the web script and checks whether any controller written in JavaScript is associated with the web script. If a JavaScript controller is found for the web script, then this helper class will execute it. This class will locate the associated response template of the web script for the requested format and will pass the populated model object to the response template. For a controller extending DeclarativeWebScript, the controller logic for the web script should be provided in the Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) method. Most of the time while developing a Java-backed web script, the controller will extend DeclarativeWebScript. AbstractWebScript does not provide execution assistance in the way DeclarativeWebScript does. It gives full control over the entire execution process to the derived class and allows the extending class to decide how the output is to be rendered. One good example of this is the DeclarativeWebScript class itself: it extends the AbstractWebScript class and provides a mechanism to render the response using FTL templates. In a scenario like streaming content, there won't be any need for a response template; instead, the content itself needs to be rendered directly. In this case, the Java-backed controller class can extend AbstractWebScript. If a web script has both a JavaScript-based controller and a Java-backed controller, then: If the Java-backed controller is derived from DeclarativeWebScript, the Java-backed controller is executed first and control is then passed to the JavaScript controller prior to returning the model object to the response template. If the Java-backed controller is derived from AbstractWebScript, then only the Java-backed controller will be executed; the JavaScript controller will not be executed. Summary In this article, we took a look at the reasons for using web scripts. Then we executed a web script from a standalone Java program and moved on to invoking a web script from Alfresco Share. Lastly, we saw the difference between DeclarativeWebScript and AbstractWebScript. Resources for Article: Further resources on this subject: Alfresco 3 Business Solutions: Types of E-mail Integration [article] Alfresco 3: Writing and Executing Scripts [article] Overview of REST Concepts and Developing your First Web Script using Alfresco [article]
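To make the DeclarativeWebScript discussion above more concrete, here is a minimal, hypothetical sketch of a Java-backed controller for the hello world web script. The package, class name, and the greeting model key are assumptions made purely for illustration; only the executeImpl signature comes from the web script framework itself.

package com.example.webscripts; // hypothetical package for illustration

import java.util.HashMap;
import java.util.Map;

import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptRequest;

public class HelloWorldWebScript extends DeclarativeWebScript
{
    // The controller logic goes in executeImpl; the returned map becomes
    // the model passed to the FTL response template (and to an associated
    // JavaScript controller, if one exists).
    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache)
    {
        // Read the "name" argument from the request URL, falling back to a default.
        String name = req.getParameter("name");
        if (name == null || name.length() == 0)
        {
            name = "World";
        }

        Map<String, Object> model = new HashMap<String, Object>();
        model.put("greeting", "Hello " + name); // "greeting" is an assumed model key
        return model;
    }
}

A matching response template for the requested format would then simply render the greeting value from this model.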


Postmodel Workflow

Packt
04 Nov 2014
23 min read
 This article written by Trent Hauck, the author of scikit-learn Cookbook, Packt Publishing, will cover the following recipes: K-fold cross validation Automatic cross validation Cross validation with ShuffleSplit Stratified k-fold Poor man's grid search Brute force grid search Using dummy estimators to compare results (For more resources related to this topic, see here.) Even though by design the articles are unordered, you could argue by virtue of the art of data science, we've saved the best for last. For the most part, each recipe within this article is applicable to the various models we've worked with. In some ways, you can think about this article as tuning the parameters and features. Ultimately, we need to choose some criteria to determine the "best" model. We'll use various measures to define best. Then in the Cross validation with ShuffleSplit recipe, we will randomize the evaluation across subsets of the data to help avoid overfitting. K-fold cross validation In this recipe, we'll create, quite possibly, the most important post-model validation exercise—cross validation. We'll talk about k-fold cross validation in this recipe. There are several varieties of cross validation, each with slightly different randomization schemes. K-fold is perhaps one of the most well-known randomization schemes. Getting ready We'll create some data and then fit a classifier on the different folds. It's probably worth mentioning that if you can keep a holdout set, then that would be best. For example, we have a dataset where N = 1000. If we hold out 200 data points, then use cross validation between the other 800 points to determine the best parameters. How to do it... First, we'll create some fake data, then we'll examine the parameters, and finally, we'll look at the size of the resulting dataset: >>> N = 1000>>> holdout = 200>>> from sklearn.datasets import make_regression>>> X, y = make_regression(1000, shuffle=True) Now that we have the data, let's hold out 200 points, and then go through the fold scheme like we normally would: >>> X_h, y_h = X[:holdout], y[:holdout]>>> X_t, y_t = X[holdout:], y[holdout:]>>> from sklearn.cross_validation import KFold K-fold gives us the option of choosing how many folds we want, if we want the values to be indices or Booleans, if want to shuffle the dataset, and finally, the random state (this is mainly for reproducibility). Indices will actually be removed in later versions. It's assumed to be True. Let's create the cross validation object: >>> kfold = KFold(len(y_t), n_folds=4) Now, we can iterate through the k-fold object: >>> output_string = "Fold: {}, N_train: {}, N_test: {}">>> for i, (train, test) in enumerate(kfold):       print output_string.format(i, len(y_t[train]),       len(y_t[test]))Fold: 0, N_train: 600, N_test: 200Fold: 1, N_train: 600, N_test: 200Fold: 2, N_train: 600, N_test: 200Fold: 3, N_train: 600, N_test: 200 Each iteration should return the same split size. How it works... It's probably clear, but k-fold works by iterating through the folds and holds out 1/n_folds * N, where N for us was len(y_t). From a Python perspective, the cross validation objects have an iterator that can be accessed by using the in operator. Often times, it's useful to write a wrapper around a cross validation object that will iterate a subset of the data. For example, we may have a dataset that has repeated measures for data points or we may have a dataset with patients and each patient having measures. 
We're going to mix it up and use pandas for this part: >>> import numpy as np>>> import pandas as pd>>> patients = np.repeat(np.arange(0, 100, dtype=np.int8), 8)>>> measurements = pd.DataFrame({'patient_id': patients,                   'ys': np.random.normal(0, 1, 800)}) Now that we have the data, we only want to hold out certain customers instead of data points: >>> custids = np.unique(measurements.patient_id)>>> customer_kfold = KFold(custids.size, n_folds=4)>>> output_string = "Fold: {}, N_train: {}, N_test: {}">>> for i, (train, test) in enumerate(customer_kfold):       train_cust_ids = custids[train]       training = measurements[measurements.patient_id.isin(                 train_cust_ids)]       testing = measurements[~measurements.patient_id.isin(                 train_cust_ids)]       print output_string.format(i, len(training), len(testing))Fold: 0, N_train: 600, N_test: 200Fold: 1, N_train: 600, N_test: 200Fold: 2, N_train: 600, N_test: 200Fold: 3, N_train: 600, N_test: 200 Automatic cross validation We've looked at the using cross validation iterators that scikit-learn comes with, but we can also use a helper function to perform cross validation for use automatically. This is similar to how other objects in scikit-learn are wrapped by helper functions, pipeline for instance. Getting ready First, we'll need to create a sample classifier; this can really be anything, a decision tree, a random forest, whatever. For us, it'll be a random forest. We'll then create a dataset and use the cross validation functions. How to do it... First import the ensemble module and we'll get started: >>> from sklearn import ensemble>>> rf = ensemble.RandomForestRegressor(max_features='auto') Okay, so now, let's create some regression data: >>> from sklearn import datasets>>> X, y = datasets.make_regression(10000, 10) Now that we have the data, we can import the cross_validation module and get access to the functions we'll use: >>> from sklearn import cross_validation>>> scores = cross_validation.cross_val_score(rf, X, y)>>> print scores[ 0.86823874 0.86763225 0.86986129] How it works... For the most part, this will delegate to the cross validation objects. One nice thing is that, the function will handle performing the cross validation in parallel. We can activate verbose mode play by play: >>> scores = cross_validation.cross_val_score(rf, X, y, verbose=3, cv=4)[CV] no parameters to be set[CV] no parameters to be set, score=0.872866 - 0.7s[CV] no parameters to be set[CV] no parameters to be set, score=0.873679 - 0.6s[CV] no parameters to be set[CV] no parameters to be set, score=0.878018 - 0.7s[CV] no parameters to be set[CV] no parameters to be set, score=0.871598 - 0.6s[Parallel(n_jobs=1)]: Done 1 jobs | elapsed: 0.7s[Parallel(n_jobs=1)]: Done 4 out of 4 | elapsed: 2.6s finished As we can see, during each iteration, we scored the function. We also get an idea of how long the model runs. It's also worth knowing that we can score our function predicated on which kind of model we're trying to fit. Cross validation with ShuffleSplit ShuffleSplit is one of the simplest cross validation techniques. This cross validation technique will simply take a sample of the data for the number of iterations specified. Getting ready ShuffleSplit is another cross validation technique that is very simple. We'll specify the total elements in the dataset, and it will take care of the rest. We'll walk through an example of estimating the mean of a univariate dataset. 
This is somewhat similar to resampling, but it'll illustrate one reason why we want to use cross validation while showing cross validation in action. How to do it... First, we need to create the dataset. We'll use NumPy to create a dataset, where we know the underlying mean. We'll sample half of the dataset to estimate the mean and see how close it is to the underlying mean: >>> import numpy as np>>> true_loc = 1000>>> true_scale = 10>>> N = 1000>>> dataset = np.random.normal(true_loc, true_scale, N)>>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.hist(dataset, color='k', alpha=.65, histtype='stepfilled');>>> ax.set_title("Histogram of dataset");>>> f.savefig("978-1-78398-948-5_06_06.png") NumPy will give the following output: Now, let's take the first half of the data and guess the mean: >>> from sklearn import cross_validation>>> holdout_set = dataset[:500]>>> fitting_set = dataset[500:]>>> estimate = fitting_set[:N/2].mean()>>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.set_title("True Mean vs Regular Estimate")>>> ax.vlines(true_loc, 0, 1, color='r', linestyles='-', lw=5,             alpha=.65, label='true mean')>>> ax.vlines(estimate, 0, 1, color='g', linestyles='-', lw=5,             alpha=.65, label='regular estimate')>>> ax.set_xlim(999, 1001)>>> ax.legend()>>> f.savefig("978-1-78398-948-5_06_07.png") We'll get the following output: Now, we can use ShuffleSplit to fit the estimator on several smaller datasets: >>> from sklearn.cross_validation import ShuffleSplit>>> shuffle_split = ShuffleSplit(len(fitting_set))>>> mean_p = []>>> for train, _ in shuffle_split:       mean_p.append(fitting_set[train].mean())       shuf_estimate = np.mean(mean_p)>>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.vlines(true_loc, 0, 1, color='r', linestyles='-', lw=5,             alpha=.65, label='true mean')>>> ax.vlines(estimate, 0, 1, color='g', linestyles='-', lw=5,             alpha=.65, label='regular estimate')>>> ax.vlines(shuf_estimate, 0, 1, color='b', linestyles='-', lw=5,             alpha=.65, label='shufflesplit estimate')>>> ax.set_title("All Estimates")>>> ax.set_xlim(999, 1001)>>> ax.legend(loc=3) The output will be as follows: As we can see, we got an estimate that was similar to what we expected, but we were able to take many samples to get that estimate. Stratified k-fold In this recipe, we'll quickly look at stratified k-fold validation. We've walked through different recipes where the class representation was unbalanced in some manner. Stratified k-fold is nice because its scheme is specifically designed to maintain the class proportions. Getting ready We're going to create a small dataset. In this dataset, we will then use stratified k-fold validation. We want it small so that we can see the variation. For larger samples, it probably won't be as big of a deal. We'll then plot the class proportions at each step to illustrate how the class proportions are maintained: >>> from sklearn import datasets>>> X, y = datasets.make_classification(n_samples=int(1e3), weights=[1./11]) Let's check the overall class weight distribution: >>> y.mean()0.90300000000000002 Roughly, 90 percent of the samples are 1, with the balance 0. How to do it... Let's create a stratified k-fold object and iterate through each fold. We'll measure the proportion of the target values that are 1. After that, we'll plot the proportion of classes by the split number to see how and if it changes.
This code will hopefully illustrate how this is beneficial. We'll also plot this code against a basic ShuffleSplit: >>> from sklearn import cross_validation>>> n_folds = 50>>> strat_kfold = cross_validation.StratifiedKFold(y,                 n_folds=n_folds)>>> shuff_split = cross_validation.ShuffleSplit(n=len(y),                 n_iter=n_folds)>>> kfold_y_props = []>>> shuff_y_props = []>>> for (k_train, k_test), (s_train, s_test) in zip(strat_kfold,         shuff_split):        kfold_y_props.append(y[k_train].mean())       shuff_y_props.append(y[s_train].mean()) Now, let's plot the proportions over each fold: >>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.plot(range(n_folds), shuff_y_props, label="ShuffleSplit",           color='k')>>> ax.plot(range(n_folds), kfold_y_props, label="Stratified",           color='k', ls='--')>>> ax.set_title("Comparing class proportions.")>>> ax.legend(loc='best') The output will be as follows: We can see that the proportion of each fold for stratified k-fold is stable across folds. How it works... Stratified k-fold works by taking the y values, first getting the overall proportion of the classes, and then intelligently splitting the training and test sets to preserve those proportions. This will generalize to multiple labels: >>> import numpy as np>>> three_classes = np.random.choice([1,2,3], p=[.1, .4, .5],                   size=1000)>>> import itertools as it>>> for train, test in cross_validation.StratifiedKFold(three_classes, 5):       print np.bincount(three_classes[train])[ 0 90 314 395][ 0 90 314 395][ 0 90 314 395][ 0 91 315 395][ 0 91 315 396] As we can see, we got roughly the same sample sizes of each class across our training splits. Poor man's grid search In this recipe, we're going to introduce grid search with basic Python, though we will use sklearn for the models and matplotlib for the visualization. Getting ready In this recipe, we will perform the following tasks: Design a basic search grid in the parameter space Iterate through the grid and check the loss/score function at each point in the parameter space for the dataset Choose the point in the parameter space that minimizes/maximizes the evaluation function Also, the model we'll fit is a basic decision tree classifier. Our parameter space will be 2 dimensional to help us with the visualization: the criterion will come from the set {gini, entropy}, and max_features will come from the set {auto, log2, None}. The parameter space will then be the Cartesian product of those two sets. We'll see in a bit how we can iterate through this space with itertools. Let's create the dataset and then get started: >>> from sklearn import datasets>>> X, y = datasets.make_classification(n_samples=2000, n_features=10) How to do it... Earlier we said that we'd use grid search to tune two parameters: criteria and max_features. We need to represent those as Python sets, and then use itertools product to iterate through them: >>> criteria = {'gini', 'entropy'}>>> max_features = {'auto', 'log2', None}>>> import itertools as it>>> parameter_space = it.product(criteria, max_features) Great! So now that we have the parameter space, let's iterate through it and check the accuracy of each model as specified by the parameters. Then, we'll store that accuracy so that we can compare different parameter spaces.
We'll also use a test and train split of 50, 50: import numpy as nptrain_set = np.random.choice([True, False], size=len(y))from sklearn.tree import DecisionTreeClassifieraccuracies = {}for criterion, max_feature in parameter_space:   dt = DecisionTreeClassifier(criterion=criterion,         max_features=max_feature)   dt.fit(X[train_set], y[train_set])   accuracies[(criterion, max_feature)] = (dt.predict(X[~train_set])                                         == y[~train_set]).mean()>>> accuracies{('entropy', None): 0.974609375, ('entropy', 'auto'): 0.9736328125,('entropy', 'log2'): 0.962890625, ('gini', None): 0.9677734375, ('gini','auto'): 0.9638671875, ('gini', 'log2'): 0.96875} So we now have the accuracies for each parameter combination. Let's visualize the performance: >>> from matplotlib import pyplot as plt>>> from matplotlib import cm>>> cmap = cm.RdBu_r>>> f, ax = plt.subplots(figsize=(7, 4))>>> ax.set_xticklabels([''] + list(criteria))>>> ax.set_yticklabels([''] + list(max_features))>>> plot_array = []>>> for max_feature in max_features:m = []>>> for criterion in criteria:       m.append(accuracies[(criterion, max_feature)])       plot_array.append(m)>>> colors = ax.matshow(plot_array, vmin=np.min(accuracies.values())             - 0.001, vmax=np.max(accuracies.values()) + 0.001,             cmap=cmap)>>> f.colorbar(colors) The following is the output: It's fairly easy to see which one performed best here. Hopefully, you can see how this process can be taken further with a brute force method. How it works... This works fairly simply; we just have to perform the following steps: Choose a set of parameters. Iterate through them and find the accuracy of each step. Find the best performer by visual inspection. Brute force grid search In this recipe, we'll do an exhaustive grid search through scikit-learn. This is basically the same thing we did in the previous recipe, but we'll utilize built-in methods. We'll also walk through an example of performing randomized optimization. This is an alternative to brute force search. Essentially, we're trading computer cycles to make sure that we search the entire space. We were fairly calm in the last recipe. However, you could imagine a model that has several steps: first imputation to fix missing data, then PCA to reduce the dimensionality, and then classification. Your parameter space could get very large, very fast; therefore, it can be advantageous to only search a part of that space. Getting ready To get started, we'll need to perform the following steps: Create some classification data. We'll then create a LogisticRegression object that will be the model we're fitting. After that, we'll create the search objects, GridSearchCV and RandomizedSearchCV. How to do it... Run the following code to create some classification data: >>> from sklearn.datasets import make_classification>>> X, y = make_classification(1000, n_features=5) Now, we'll create our logistic regression object: >>> from sklearn.linear_model import LogisticRegression>>> lr = LogisticRegression(class_weight='auto') We need to specify the parameters we want to search.
For GridSearch, we can just specify the ranges that we care about, but for RandomizedSearchCV, we'll need to actually specify the distribution over the same space from which to sample: >>> lr.fit(X, y)LogisticRegression(C=1.0, class_weight={0: 0.25, 1: 0.75},                   dual=False,fit_intercept=True,                  intercept_scaling=1, penalty='l2',                   random_state=None, tol=0.0001)>>> grid_search_params = {'penalty': ['l1', 'l2'],'C': [1, 2, 3, 4]} The only change we'll need to make is to describe the C parameter as a probability distribution. We'll keep it simple right now, though we will use scipy to describe the distribution: >>> import scipy.stats as st>>> import numpy as np>>> random_search_params = {'penalty': ['l1', 'l2'],'C': st.randint(1, 4)} How it works... Now, we'll fit the classifier. This works by passing lr to the parameter search objects: >>> from sklearn.grid_search import GridSearchCV, RandomizedSearchCV>>> gs = GridSearchCV(lr, grid_search_params) GridSearchCV implements the same API as the other models: >>> gs.fit(X, y)GridSearchCV(cv=None, estimator=LogisticRegression(C=1.0,             class_weight='auto', dual=False, fit_intercept=True,             intercept_scaling=1, penalty='l2', random_state=None,             tol=0.0001), fit_params={}, iid=True, loss_func=None,             n_jobs=1, param_grid={'penalty': ['l1', 'l2'], 'C':             [1, 2, 3, 4]}, pre_dispatch='2*n_jobs', refit=True,             score_func=None, scoring=None, verbose=0) As we can see with the param_grid parameter, our penalty and C are both arrays. To access the scores, we can use the grid_scores_ attribute of the grid search. We also want to find the optimal set of parameters. We can also look at the marginal performance of the grid search: >>> gs.grid_scores_[mean: 0.90300, std: 0.01192, params: {'penalty': 'l1', 'C': 1},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 1},mean: 0.90200, std: 0.01117, params: {'penalty': 'l1', 'C': 2},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 2},mean: 0.90200, std: 0.01117, params: {'penalty': 'l1', 'C': 3},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 3},mean: 0.90100, std: 0.01258, params: {'penalty': 'l1', 'C': 4},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 4}] We might want to get the max score: >>> gs.grid_scores_[1][1]0.90100000000000002>>> max(gs.grid_scores_, key=lambda x: x[1])mean: 0.90300, std: 0.01192, params: {'penalty': 'l1', 'C': 1} The parameters obtained are the best choices for our logistic regression. Using dummy estimators to compare results This recipe is about creating fake estimators; this isn't the pretty or exciting stuff, but it is worthwhile to have a reference point for the model you'll eventually build. Getting ready In this recipe, we'll perform the following tasks: Create some data random data. Fit the various dummy estimators. We'll perform these two steps for regression data and classification data. How to do it... 
First, we'll create the random data: >>> from sklearn.datasets import make_regression, make_classification# classification is for later>>> X, y = make_regression()>>> from sklearn import dummy>>> dumdum = dummy.DummyRegressor()>>> dumdum.fit(X, y)DummyRegressor(constant=None, strategy='mean') By default, the estimator will predict by simply taking the mean of the training values and predicting that mean: >>> dumdum.predict(X)[:5]array([ 2.23297907, 2.23297907, 2.23297907, 2.23297907, 2.23297907]) There are two other strategies we can try. We can predict a supplied constant (refer to constant=None from the preceding command). We can also predict the median value. Supplying a constant will only be considered if strategy is "constant". Let's have a look: >>> predictors = [("mean", None),                 ("median", None),                 ("constant", 10)]>>> for strategy, constant in predictors:       dumdum = dummy.DummyRegressor(strategy=strategy,                 constant=constant)>>> dumdum.fit(X, y)>>> print "strategy: {}".format(strategy), ",".join(map(str,         dumdum.predict(X)[:5]))strategy: mean 2.23297906733,2.23297906733,2.23297906733,2.23297906733,2.23297906733strategy: median 20.38535248,20.38535248,20.38535248,20.38535248,20.38535248strategy: constant 10.0,10.0,10.0,10.0,10.0 We actually have four options for classifiers. These strategies are similar to the continuous case; they're just slanted toward classification problems: >>> predictors = [("constant", 0),                 ("stratified", None),                 ("uniform", None),                 ("most_frequent", None)] We'll also need to create some classification data: >>> X, y = make_classification()>>> for strategy, constant in predictors:       dumdum = dummy.DummyClassifier(strategy=strategy,                 constant=constant)       dumdum.fit(X, y)       print "strategy: {}".format(strategy), ",".join(map(str,             dumdum.predict(X)[:5]))strategy: constant 0,0,0,0,0strategy: stratified 1,0,0,1,0strategy: uniform 0,0,0,1,1strategy: most_frequent 1,1,1,1,1 How it works... It's always good to test your models against the simplest models, and that's exactly what the dummy estimators give you. For example, imagine a fraud model. In this model, only 5 percent of the dataset is fraud. Therefore, we can probably fit a pretty good model just by never guessing any fraud. We can create this model by using the most_frequent strategy, using the following command. We can also get a good example of why class imbalance causes problems: >>> X, y = make_classification(20000, weights=[.95, .05])>>> dumdum = dummy.DummyClassifier(strategy='most_frequent')>>> dumdum.fit(X, y)DummyClassifier(constant=None, random_state=None, strategy='most_frequent')>>> from sklearn.metrics import accuracy_score>>> print accuracy_score(y, dumdum.predict(X))0.94575 We were actually correct very often, but that's not the point. The point is that this is our baseline. If we cannot create a model for fraud that is more accurate than this, then it isn't worth our time. Summary This article taught us how we can take a basic model produced from one of the recipes and tune it so that we can achieve better results than we could with the basic model. Resources for Article: Further resources on this subject: Specialized Machine Learning Topics [article] Machine Learning in IPython with scikit-learn [article] Our First Machine Learning Method – Linear Classification [article]
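Looping back to the Brute force grid search recipe above: the random_search_params distribution was set up but never actually fitted. The following is a minimal sketch of how that randomized search might be run with the same lr estimator. The n_iter value of 5 is an arbitrary choice for illustration, and the import assumes the same era of scikit-learn used in the recipe, where the class lives in sklearn.grid_search.

# A hedged sketch only: run the randomized search configured in the recipe above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.grid_search import RandomizedSearchCV  # moved to sklearn.model_selection in later releases
import scipy.stats as st

X, y = make_classification(1000, n_features=5)
lr = LogisticRegression(class_weight='auto')

# Same distributions as random_search_params in the recipe.
random_search_params = {'penalty': ['l1', 'l2'], 'C': st.randint(1, 4)}

# n_iter controls how many parameter settings are sampled; 5 is just an example value.
rs = RandomizedSearchCV(lr, param_distributions=random_search_params, n_iter=5)
rs.fit(X, y)

print rs.best_score_   # best cross-validated score found
print rs.best_params_  # the sampled parameter combination that produced it

The trade-off is exactly the one described in the recipe: fewer fits than the exhaustive grid, at the cost of possibly missing the single best point in the space.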


Loading data, creating an app, and adding dashboards and reports in Splunk

Packt
31 Oct 2014
13 min read
In this article by Josh Diakun, Paul R Johnson, and Derek Mock, authors of Splunk Operational Intelligence Cookbook, we will take a look at how to load sample data into Splunk, how to create an application, and how to add dashboards and reports in Splunk. (For more resources related to this topic, see here.) Loading the sample data While most of the data you will index with Splunk will be collected in real time, there might be instances where you have a set of data that you would like to put into Splunk, either to backfill some missing or incomplete data, or just to take advantage of its searching and reporting tools. This recipe will show you how to perform one-time bulk loads of data from files located on the Splunk server. We will also use this recipe to load the data samples that will be used as we build our Operational Intelligence app in Splunk. There are two files that make up our sample data. The first is access_log, which represents data from our web layer and is modeled on an Apache web server. The second file is app_log, which represents data from our application layer and is modeled on the log4j application log data. Getting ready To step through this recipe, you will need a running Splunk server and should have a copy of the sample data generation app (OpsDataGen.spl). (This file is part of the downloadable code bundle, which is available on the book's website.) How to do it... Follow the given steps to load the sample data generator on your system: Log in to your Splunk server using your credentials. From the home launcher, select the Apps menu in the top-left corner and click on Manage Apps. Select Install App from file. Select the location of the OpsDataGen.spl file on your computer, and then click on the Upload button to install the application. After installation, a message should appear in a blue bar at the top of the screen, letting you know that the app has installed successfully. You should also now see the OpsDataGen app in the list of apps. By default, the app installs with the data-generation scripts disabled. In order to generate data, you will need to enable either a Windows or Linux script, depending on your Splunk operating system. To enable the script, select the Settings menu from the top-right corner of the screen, and then select Data inputs. From the Data inputs screen that follows, select Scripts. On the Scripts screen, locate the OpsDataGen script for your operating system and click on Enable. For Linux, it will be $SPLUNK_HOME/etc/apps/OpsDataGen/bin/AppGen.path For Windows, it will be $SPLUNK_HOME\etc\apps\OpsDataGen\bin\AppGen-win.path The following screenshot displays both the Windows and Linux inputs that are available after installing the OpsDataGen app. It also displays where to click to enable the correct one based on the operating system Splunk is installed on. Select the Settings menu from the top-right corner of the screen, select Data inputs, and then select Files & directories. On the Files & directories screen, locate the two OpsDataGen inputs for your operating system and click on Enable for each. For Linux, it will be: $SPLUNK_HOME/etc/apps/OpsDataGen/data/access_log $SPLUNK_HOME/etc/apps/OpsDataGen/data/app_log For Windows, it will be: $SPLUNK_HOME\etc\apps\OpsDataGen\data\access_log $SPLUNK_HOME\etc\apps\OpsDataGen\data\app_log The following screenshot displays both the Windows and Linux inputs that are available after installing the OpsDataGen app.
It also displays where to click to enable the correct one based on the operating system Splunk is installed on. The data will now be generated in real time. You can test this by navigating to the Splunk search screen and running the following search over an All time (real-time) time range: index=main sourcetype=log4j OR sourcetype=access_combined After a short while, you should see data from both source types flowing into Splunk, and the data generation is now working as displayed in the following screenshot: How it works... In this case, you installed a Splunk application that leverages a scripted input. The script we wrote generates data for two source types. The access_combined source type contains sample web access logs, and the log4j source type contains application logs. Creating an Operational Intelligence application This recipe will show you how to create an empty Splunk app that we will use as the starting point in building our Operational Intelligence application. Getting ready To step through this recipe, you will need a running Splunk Enterprise server, with the sample data loaded from the previous recipe. You should be familiar with navigating the Splunk user interface. How to do it... Follow the given steps to create the Operational Intelligence application: Log in to your Splunk server. From the top menu, select Apps and then select Manage Apps. Click on the Create app button. Complete the fields in the box that follows. Name the app Operational Intelligence and give it a folder name of operational_intelligence. Add in a version number and provide an author name. Ensure that Visible is set to Yes, and the barebones template is selected. When the form is completed, click on Save. This should be followed by a blue bar with the message, Successfully saved operational_intelligence. Congratulations, you just created a Splunk application! How it works... When an app is created through the Splunk GUI, as in this recipe, Splunk essentially creates a new folder (or directory) named operational_intelligence within the $SPLUNK_HOME/etc/apps directory. Within the $SPLUNK_HOME/etc/apps/operational_intelligence directory, you will find four new subdirectories that contain all the configuration files needed for our barebones Operational Intelligence app that we just created. The eagle-eyed among you would have noticed that there were two templates, barebones and sample_app, out of which any one could have been selected when creating the app. The barebones template creates an application with nothing much inside of it, and the sample_app template creates an application populated with sample dashboards, searches, views, menus, and reports. If you wish to, you can also develop your own custom template if you create lots of apps, which might enforce certain color schemes for example. There's more... As Splunk apps are just a collection of directories and files, there are other methods to add apps to your Splunk Enterprise deployment. Creating an application from another application It is relatively simple to create a new app from an existing app without going through the Splunk GUI, should you wish to do so. This approach can be very useful when we are creating multiple apps with different inputs.conf files for deployment to Splunk Universal Forwarders. Taking the app we just created as an example, copy the entire directory structure of the operational_intelligence app and name it copied_app. 
cp -r $SPLUNK_HOME/etc/apps/operational_intelligence/* $SPLUNK_HOME/etc/apps/copied_app Within the directory structure of copied_app, we must now edit the app.conf file in the default directory. Open $SPLUNK_HOME/etc/apps/copied_app/default/app.conf and change the label field to My Copied App, provide a new description, and then save the conf file.
#
# Splunk app configuration file
#
[install]
is_configured = 0
[ui]
is_visible = 1
label = My Copied App
[launcher]
author = John Smith
description = My Copied application
version = 1.0
Restart Splunk, and the new My Copied App application should now be seen in the application menu. $SPLUNK_HOME/bin/splunk restart Downloading and installing a Splunk app Splunk has an entire application website with hundreds of applications, created by Splunk, other vendors, and even users of Splunk. These are great ways to get started with a base application, which you can then modify to meet your needs. If the Splunk server that you are logged in to has access to the Internet, you can click on the Apps menu as you did earlier and then select the Find More Apps button. From here, you can search for apps and install them directly. An alternative way to install a Splunk app is to visit http://apps.splunk.com and search for the app. You will then need to download the application locally. From your Splunk server, click on the Apps menu and then on the Manage Apps button. After that, click on the Install App from File button and upload the app you just downloaded, in order to install it. Once the app has been installed, go and look at the directory structure that the installed application just created. Familiarize yourself with some of the key files and where they are located. When downloading applications from the Splunk apps site, it is best practice to test and verify them in a nonproduction environment first. The Splunk apps site is community driven and, as a result, quality checks and/or technical support for some of the apps might be limited. Adding dashboards and reports Dashboards are a great way to present many different pieces of information. Rather than having lots of disparate dashboards across your Splunk environment, it makes a lot of sense to group related dashboards into a common Splunk application, for example, putting operational intelligence dashboards into a common Operational Intelligence application. In this recipe, you will learn how to move the dashboards and associated reports into our new Operational Intelligence application. Getting ready To step through this recipe, you will need a running Splunk Enterprise server, with the sample data loaded from the Loading the sample data recipe. You should be familiar with navigating the Splunk user interface. How to do it... Follow these steps to move your dashboards into the new application: Log in to your Splunk server. Select the newly created Operational Intelligence application. From the top menu, select Settings and then select the User interface menu item. Click on the Views section. In the App Context dropdown, select Searching & Reporting (search) or whatever application you were in when creating the dashboards: Locate the website_monitoring dashboard row in the list of views and click on the Move link to the right of the row. In the Move Object pop up, select the Operational Intelligence (operational_intelligence) application that was created earlier and then click on the Move button.
A message bar will then be displayed at the top of the screen to confirm that the dashboard was moved successfully. Repeat from step 5 to move the product_monitoring dashboard as well. After the Website Monitoring and Product Monitoring dashboards have been moved, we now want to move all the reports that were created, as these power the dashboards and provide operational intelligence insight. From the top menu, select Settings and this time select Searches, reports, and alerts. Select the Search & Reporting (search) context and filter by cp0* to view the searches (reports) that are created. Click on the Move link of the first cp0* search in the list. Select to move the object to the Operational Intelligence (operational_intelligence) application and click on the Move button. A message bar will then be displayed at the top of the screen to confirm that the dashboard was moved successfully. Select the Search & Reporting (search) context and repeat from step 11 to move all the other searches over to the new Operational Intelligence application—this seems like a lot but will not take you long! All of the dashboards and reports are now moved over to your new Operational Intelligence application. How it works... In the previous recipe, we revealed how Splunk apps are essentially just collections of directories and files. Dashboards are XML files found within the $SPLUNK_HOME/etc/apps directory structure. When moving a dashboard from one app to another, Splunk is essentially just moving the underlying file from a directory inside one app to a directory in the other app. In this recipe, you moved the dashboards from the Search & Reporting app to the Operational Intelligence app, as represented in the following screenshot: As visualizations on the dashboards leverage the underlying saved searches (or reports), you also moved these reports to the new app so that the dashboards maintain permissions to access them. Rather than moving the saved searches, you could have changed the permissions of each search to Global such that they could be seen from all the other apps in Splunk. However, the other reason you moved the reports was to keep everything contained within a single Operational Intelligence application, which you will continue to build on going forward. It is best practice to avoid setting permissions to Global for reports and dashboards, as this makes them available to all the other applications when they most likely do not need to be. Additionally, setting global permissions can make things a little messy from a housekeeping perspective and crowd the lists of reports and views that belong to specific applications. The exception to this rule might be for knowledge objects such as tags, event types, macros, and lookups, which often have advantages to being available across all applications. There's more… As you went through this recipe, you likely noticed that the dashboards had application-level permissions, but the reports had private-level permissions. The reports are private as this is the default setting in Splunk when they are created. This private-level permission restricts access to only your user account and admin users. In order to make the reports available to other users of your application, you will need to change the permissions of the reports to Shared in App as we did when adjusting the permissions of reports. 
Changing the permissions of saved reports Changing the sharing permission levels of your reports from the default Private to App is relatively straightforward: Ensure that you are in your newly created Operational Intelligence application. Select the Reports menu item to see the list of reports. Click on Edit next to the report you wish to change the permissions for. Then, click on Edit Permissions from the drop-down list. An Edit Permissions pop-up box will appear. In the Display for section, change from Owner to App, and then click on Save. The box will close, and you will see that the Sharing permissions in the table will now display App for the specific report. This report will now be available to all the users of your application. Summary In this article, we loaded the sample data into Splunk. We also saw how to organize dashboards and knowledge into a custom Splunk app. Resources for Article: Further resources on this subject: Working with Pentaho Mobile BI [Article] Visualization of Big Data [Article] Highlights of Greenplum [Article]
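Since the recipe above relies on the fact that dashboards are just Simple XML files sitting under the app's directory structure, here is a minimal, hypothetical sketch of what such a file might look like, for example a website_monitoring.xml stored under the app's default/data/ui/views folder. The label, the search query, and the file location are illustrative assumptions, and the exact Simple XML elements vary slightly between Splunk versions; the real dashboards built in the book will be more involved.

<dashboard>
  <label>Website Monitoring</label>
  <row>
    <chart>
      <title>Web requests over time</title>
      <search>
        <query>index=main sourcetype=access_combined | timechart count</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
      <option name="charting.chart">line</option>
    </chart>
  </row>
</dashboard>

When Splunk moves a dashboard between apps, it is effectively relocating a file like this from one app's views directory to another's, which is why the reports the dashboard depends on must remain visible to it.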


Getting Ready to Launch Your PhoneGap App in the Real World

Packt
31 Oct 2014
7 min read
In this article by Yuxian, Eugene Liang, author of PhoneGap and AngularJS for Cross-platform Development, we will run through some of the things that you should be doing before launching your app to the world, whether it's through the Apple App Store or the Google Play Store. (For more resources related to this topic, see here.) Using phonegap.com The services on https://build.phonegap.com/ are a straightforward way for you to get your app compiled for various devices. While this is a paid service, there is a free plan if you only have one app that you want to work on. This would be fine in our case. Choose a plan from PhoneGap You will need to have an Adobe ID in order to use PhoneGap services. If not, feel free to create one. Since the process for generating compiled apps from PhoneGap may change, it's best that you visit https://build.phonegap.com/, sign up for their services, and follow their instructions. Preparing your PhoneGap app for an Android release This section generally focuses on things that are specific to the Android platform. This is by no means a comprehensive checklist, but it covers some of the common tasks that you should go through before releasing your app to the Android world. Testing your app on real devices It is always good to run your app on an actual handset to see how the app is working. To run your PhoneGap app on a real device, issue the following command after you plug your handset into your computer: cordova run android You will see that your app now runs on your handset. Exporting your app to install on other devices In the previous section we talked about installing your app on your device. What if you want to export the APK so that you can test the app on other devices? Here's what you can do: As usual, build your app using cordova build android Alternatively, you can run cordova build release The previous step will create an unsigned release APK at /path_to_your_project/platforms/android/ant-build. This app is called YourAppName-release-unsigned.apk. Now, you can simply copy YourAppName-release-unsigned.apk and install it on any Android-based device you want. Preparing promotional artwork for release In general, you will need to include screenshots of your app for upload to Google Play. In case your device does not allow you to take screenshots, here's what you can do: The first technique that you can use is to simply run your app in the emulator and take screenshots of it. The size of the screenshot may be substantially larger than needed, so you can crop or resize it using GIMP or an online image resizer. Alternatively, use the web app version and open it in your Google Chrome browser. Resize your browser window so that it is narrow enough to resemble the width of mobile devices. Building your app for release To build your app for release, you will need the Eclipse IDE. Start Eclipse and navigate to File | New | Project. Next, navigate to Existing Code | Android | Android Project. Click on Browse and select the root directory of your app. The Project to Import window should show platforms/android. Now, select Copy projects into workspace if you want and then click on Finish. Signing the app We have previously exported the app (unsigned) so that we can test it on devices other than those plugged into our computer. However, to release your app to the Play Store, you need to sign it with a key. The steps here are the general steps that you need to follow in order to generate "signed" APK apps to upload your app to the Play Store.
Right-click on the project that you have imported in the previous section, and then navigate to Android Tools | Export Signed Application Package. You will see the Project Checks dialog, which shows whether your project has any errors or not.
Next, you should see the Keystore selection dialog. You will now create the key using the app name (without spaces) and the extension .keystore. Since this app is the first version, there is no prior keystore to use. Browse to the location where you want to save the keystore and, in the same box, give the name of the keystore.
In the Keystore selection dialog, add your desired password twice and click on Next.
You will now see the Key Creation dialog. In the Key Creation dialog, use app_name as your alias (without any spaces) and give the password of your keystore. Feel free to enter 50 for validity (which means the key is valid for 50 years). The remaining fields such as names, organization, and so on are pretty straightforward, so you can just go ahead and fill them in.
Finally, select the Destination APK file, which is the location to which you will export your .apk file.
Bear in mind that the preceding steps are not a comprehensive list of instructions. For the official documentation, feel free to visit http://developer.android.com/tools/publishing/app-signing.html.
Now that we are done with Android, it's time to prepare our app for iOS.
iOS
As you might already know, preparing your PhoneGap app for the Apple App Store requires a similar level of effort, if not more, compared to your usual Android deployment. In this section, I will not be covering things like making sure your app is in line with Apple's user interface guidelines, but rather how to improve your app before it reaches the App Store.
Before we get started, there are some basic requirements:
An Apple Developer membership (if you ultimately want to deploy to the App Store)
Xcode
Running your app on an iOS device
If you already have an iOS device, all you need to do is plug your iOS device into your computer and issue the following command:
cordova run ios
You should see that your PhoneGap app builds and launches on your device. Note that before running the preceding command, you will need to install the ios-deploy package. You can install it using the following command:
sudo npm install -g ios-deploy
Other techniques
There are other ways to test and deploy your apps. These methods can be useful if you want to deploy your app to your own devices or even for external device testing.
Using Xcode
Now let's get started with Xcode:
After starting your project using the command-line tool and adding in iOS platform support, you may actually start developing using Xcode. Start Xcode and click on Open Other, as shown in the following screenshot.
Once you have clicked on Open Other, you will need to browse to your ToDo app folder. Drill down until you see ToDo.xcodeproj (navigate to platforms | ios). Select and open this file. You will see Xcode importing the files. After it's all done, you should see something like the following screenshot:
Files imported into Xcode
Notice that all the files are now imported into Xcode, and you can start working from here. You can also deploy your app either to devices or to simulators:
Deploy on your device or on simulators
Summary
In this article, we went through the basics of packaging your app before submission to the respective app stores.
In general, you should have a good idea of how to develop AngularJS apps and apply mobile skins to them so that they can be used with PhoneGap. You should also notice that developing PhoneGap apps typically follows the pattern of creating a web app first, before converting it to a PhoneGap version. Of course, you may structure your project so that you can build a PhoneGap version from day one, but it may make testing more difficult. Anyway, I hope that you enjoyed this article, and feel free to follow me at http://www.liangeugene.com and http://growthsnippets.com.
Resources for Article:
Further resources on this subject:
Using Location Data with PhoneGap [Article]
Working with the sharing plugin [Article]
Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article]

Configuring Distributed Rails Applications with Chef: Part 1

Rahmal Conda
31 Oct 2014
4 min read
Since the advent of Rails (and Ruby by extension), in the period between 2005 and 2010, Rails went from a niche web application framework to being the center of a robust web application platform. To do this, it needed more than Ruby and a few complementary gems. Anyone who has ever tried to deploy a Rails application into a production environment knows that Rails doesn't run in a vacuum. Rails still needs a web server in front of it to help manage requests, like Apache or Nginx. Oops, you'll need Unicorn or Passenger too. Almost all Rails apps are backed by some sort of data persistence layer. Usually that is some sort of relational database; more and more, it's a NoSQL DB like MongoDB. Depending on the application, you're probably going to deploy a caching strategy at some point: Memcached, Redis, the list goes on. What about background jobs? You'll need another server instance for that too, and not just one either. High availability systems need to be redundant. If you're lucky enough to get a lot of traffic, you'll need a way to scale all of this.
Why Chef?
Chances are that you're managing all of this manually. Don't feel bad, everyone starts out that way. But as you grow, how do you manage all of this without going insane? Most Rails developers start off with Capistrano, which is a great choice. Capistrano is a remote server automation tool, used most often as a deployment tool for Rails. For the most part, it's a great solution for managing the multiple servers that make up your Rails stack. It's only when your architecture reaches a certain size that I'd recommend choosing Chef over Capistrano. But really, there's no reason to choose one over the other, since they actually work pretty well together and they are both similar regarding deployment. Where Chef excels, however, is when you need to provision multiple servers with different roles and changing software stacks. This is what I'm going to focus on in this post. But let's introduce Chef first.
What is Chef anyway?
Basically, Chef is a Ruby-based configuration management engine. It is a software configuration management tool, used for provisioning servers for certain roles within a platform stack and deploying applications to those servers. It is used to automate server configuration and integration into your infrastructure. You define your infrastructure in configuration files written in Chef's Ruby DSL, and Chef takes care of setting up individual machines and linking them together.
Chef server
You set up one of your server instances (virtual or otherwise) as the server, and all your other instances are clients that communicate with the Chef "server" via REST over HTTPS. The server is an application that stores cookbooks for your nodes.
Recipes and cookbooks
Recipes are files that contain sets of instructions written in Chef's Ruby DSL. These instructions perform some kind of procedure, usually installing software and configuring some service. Recipes are bound together, along with configuration file templates, resources, and helper scripts, as cookbooks. Cookbooks generally correspond to a specific server configuration. For instance, a Postgres cookbook might contain a recipe for Postgres Server, a recipe for Postgres Client, maybe PostGIS, and some configuration files for how the DB instance should be provisioned.
Chef Solo
For stacks that don't necessarily need a full Chef server setup, but use cookbooks to set up Rails and DB servers, there's Chef Solo.
Chef Solo is a local standalone Chef application that can be used to remotely deploy servers and applications.
Wait, where is the code?
In Part 2 of this post, I'm going to walk you through setting up a Rails application with Chef Solo, and then I'll expand on that to show a full Chef server configuration management setup. While Chef can be used for many different application stacks, I'm going to focus on Rails configuration and deployment, provisioning and deploying the entire stack. See you next time!
About the Author
Rahmal Conda is a Software Development Professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company. After that, he knew it was the life he had been looking for, so he moved his family out west. Since then, he's made a name for himself in the social space at some high-profile Silicon Valley startups. Right now he's one of the co-founders and Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.

Creating Our First Animation in AngularJS

Packt
31 Oct 2014
36 min read
In this article by Richard Keller, author of the book Learning AngularJS Animations, we will learn how to apply CSS animations within the context of AngularJS by creating animations using CSS transitions and CSS keyframe animations that are integrated with AngularJS native directives using the ngAnimate module. In this article, we will learn:
The ngAnimate module setup and usage
AngularJS directives with support for out-of-the-box animation
AngularJS animations with CSS transitions
AngularJS animations with CSS keyframe animations
The naming convention of the CSS animation classes
Animation of the ngMessage and ngMessages directives
(For more resources related to this topic, see here.)
The ngAnimate module setup and usage
AngularJS is a module-based framework; if we want our AngularJS application to have the animation feature, we need to add the animation module (ngAnimate). We have to include this module in the application by adding it as a dependency of our AngularJS application. Before that, we should include the JavaScript angular-animate.js file in the HTML. Both files are available on the Google content distribution network (CDN), Bower, Google Code, and https://angularjs.org/. The Google developers' CDN hosts many versions of AngularJS, as listed here: https://developers.google.com/speed/libraries/devguide#angularjs
Currently, AngularJS version 1.3 is the latest stable version, so we will use AngularJS version 1.3.0 in all sample files of this book; we can get them from https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js and https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js. You might want to use Bower. To do so, check out this great video article at https://thinkster.io/egghead/intro-to-bower/, explaining how to use Bower to get AngularJS.
We include the JavaScript files of AngularJS and the ngAnimate module, and then we include the ngAnimate module as a dependency of our app. This is shown in the following sample, using the Google CDN and the minified versions of both files:
<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS animation installation</title>
</head>
<body>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
  </script>
</body>
</html>
Here, we already have an AngularJS web app configured to use animations. Now, we will learn how to animate using AngularJS directives.
AngularJS directives with native support for animations
AngularJS has the purpose of changing the way web developers and designers manipulate the Document Object Model (DOM). We don't directly manipulate the DOM when developing controllers, services, and templates; AngularJS does all the DOM manipulation work for us. The only place where an application touches the DOM is within directives. For most DOM manipulation requirements, AngularJS already provides built-in directives that fit our needs. There are many important AngularJS directives that already have built-in support for animations, and they use the ngAnimate module. This is why this module is so useful; it allows us to use animations within AngularJS directives' DOM manipulation. This way, we don't have to replicate native directives by extending them just to add animation functionality.
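Although the rest of this article focuses on class-based CSS animations, the ngAnimate module also exposes the same hooks to JavaScript through module.animation(). The following is a minimal sketch of that idea; the .js-fade class name and the myApp module are illustrative only, and the animate() calls assume jQuery is loaded before AngularJS so that the wrapped element exposes jQuery's animation methods:
var app = angular.module('myApp', ['ngAnimate']);

// ngAnimate invokes these callbacks at the same moments at which it would
// otherwise add and remove the ng-enter and ng-leave CSS classes.
app.animation('.js-fade', [function () {
  return {
    enter: function (element, done) {
      // The element has just been added to the DOM; fade it in.
      element.css('opacity', 0);
      element.animate({ opacity: 1 }, 1000, done);
    },
    leave: function (element, done) {
      // The element is about to be removed from the DOM; fade it out.
      element.animate({ opacity: 0 }, 1000, done);
    }
  };
}]);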
The ngAnimate module provides us a way to hook animations in between AngularJS directives execution. It even allows us to hook on custom directives. As we are dealing with animations between DOM manipulations, we can have animations before and after an element is added to or removed from the DOM, after an element changes (by adding or removing classes), and before and after an element is moved in the DOM. These events are the moments when we might add animations. Fade animations using AngularJS Now that we already know how to install a web app with the ngAnimate module enabled, let's create fade-in and fade-out animations to get started with AngularJS animations. We will use the same HTML from the installation topic and add a simple controller, just to change an ngShow directive model value and add a CSS transition. The ngShow directive shows or hides the given element based on the expression provided to the ng-show attribute. For this sample, we have a Toggle fade button that changes the ngShow model value, so we can see what happens when the element fades in and fades out from the DOM. The ngShow directive shows and hides an element by adding and removing the ng-hide class from the element that contains the directive, shown as follows: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS animation installation</title> </head> <body> <style type="text/css">    .firstSampleAnimation.ng-hide-add,    .firstSampleAnimation.ng-hide-remove {     -webkit-transition: 1s ease-in-out opacity;     transition: 1s ease-in-out opacity;     opacity: 1;  } .firstSampleAnimation.ng-hide { opacity: 0; } </style> <div> <div ng-controller="animationsCtrl"> <h1>ngShow animation</h1> <button ng-click="fadeAnimation = !fadeAnimation">Toggle fade</button> fadeAnimation value: {{fadeAnimation}} <div class="firstSampleAnimation" ng-show="fadeAnimation"> This element appears when the fadeAnimation model is true </div> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs/ 1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs/ 1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); app.controller('animationsCtrl', function ($scope) { $scope.fadeAnimation = false; }); </script> </body> </html> In the CSS code, we declared an opacity transition to elements with the firstAnimationSample and ng-hide-add classes, or elements with the firstAnimationSample and ng-hide-remove classes. We also added the firstAnimationSample class to the same element that has the ng-show directive attribute. The fadeAnimation model is initially false, so the element with the ngShow directive is initially hidden, as the ngShow directive adds the ng-hide class to the element to set the display property as none. When we first click on the Toggle fade button, the fadeAnimation model will become true. Then, the ngShow directive will remove the ng-hide class to display the element. But before that, the ngAnimate module knows there is a transition declared for this element. Because of that, the ngAnimate module will append the ng-hide-remove class to trigger the hide animation start. Then, ngAnimate will add the ng-hide-remove-active class that can contain the final state of the animation to the element and remove the ng-hide class at the same time. Both classes will last until the animation (1 second in this sample) finishes, and then they are removed. 
This is the fade-in animation; ngAnimate triggers animations by adding and removing the classes that contain the animations, which is why we say that AngularJS animations are class based. This is where the magic happens. All that we did to create this fade-in animation was declare a CSS transition with the class name ng-hide-remove. This class name means that it's appended when the ng-hide class is removed.
The fade-out animation will happen when we click on the Toggle fade button again, and then the fadeAnimation model will become false. The ngShow directive will add the ng-hide class to remove the element, but before this, the ngAnimate module knows that there is a transition declared for that element too. The ngAnimate module will append the ng-hide-add class and then add the ng-hide and ng-hide-add-active classes to the element at the same time. Both classes will last until the animation (1 second in this sample) finishes; then they are removed, and only the ng-hide class is kept, to hide the element. The fade-out animation was created by just declaring the CSS transition with the class name ng-hide-add. It is easy to understand that this class is appended to the element when the ng-hide class is about to be added.
The AngularJS animations convention
As this article is intended to teach you how to create animations with AngularJS, you need to know which directives already have built-in support for AngularJS animations to make our life easier. Here, we have a table of directives with the directive names and the events of the directive life cycle when animation hooks are supported. The first row means that the ngRepeat directive supports animation on the enter, leave, and move events. All events are relative to DOM manipulations, for example, when an element enters or leaves the DOM, or when a class is added to or removed from an element.
Directive: Supported animations
ngRepeat: Enter, leave, and move
ngView: Enter and leave
ngInclude: Enter and leave
ngSwitch: Enter and leave
ngIf: Enter and leave
ngClass: Add and remove
ngShow and ngHide: Add and remove
form and ngModel: Add and remove
ngMessages: Add and remove
ngMessage: Enter and leave
Perhaps the more experienced AngularJS users have noticed that the most frequently used directives are covered in this list. This is great; it means that animating with AngularJS isn't hard for most use cases.
AngularJS animation with CSS transitions
We need to know how to bind CSS animations to the AngularJS directives listed in the previous table. The ngIf directive, for example, has support for the enter and leave animations. When the value of the ngIf model is changed to true, it triggers the animation by adding the ng-enter class to the element just after the ngIf DOM element is created and injected. This triggers the animation, and the classes are kept until the transition ends. Then, the ng-enter class is removed. When the value of ngIf is changed to false, the ng-leave class is added to the element just before the ngIf content is removed from the DOM, and so the animation is triggered while the element still exists.
To illustrate the AngularJS ngIf directive and ngAnimate module behavior, let's see what happens in a sample. First, we have to declare a button that toggles the value of the fadeAnimation model, and one div tag that uses ng-if="fadeAnimation", so we can see what happens when the element is removed and added back.
Here, we create the HTML code using the HTML template we used in the last topic to install the ngAnimate module: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngIf sample</title> </head> <body> <style> /* ngIf animation */ .animationIf.ng-enter, .animationIf.ng-leave { -webkit-transition: opacity ease-in-out 1s; transition: opacity ease-in-out 1s; } .animationIf.ng-enter, .animationIf.ng-leave.ng-leave-active { opacity: 0; } .animationIf.ng-leave, .animationIf.ng-enter.ng-enter-active { opacity: 1; } </style> <div ng-controller="animationsCtrl"> <h1>ngIf animation</h1> <div> fadeAnimation value: {{fadeAnimation}} </div> <button ng-click="fadeAnimation = !fadeAnimation"> Toggle fade</button> <div ng-if="fadeAnimation" class="animationIf"> This element appears when the fadeAnimation model is true </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); app.controller('animationsCtrl', function ($scope) { $scope.fadeAnimation = false; }); </script> </body> </html> So, let's see what happens in the DOM just after we click on the Toggle fade button. We will use Chrome Developer Tools (Chrome DevTools) to check the HTML in each animation step. It's a native tool that comes with the Chrome browser. To open Chrome DevTools, you just need to right-click on any part of the page and click on Inspect Element. The ng-enter class Our CSS declaration added an animation to the element with the animationIf and ng-enter classes. So, the transition is applied when the element has the ng-enter class too. This class is appended to the element when the element has just entered the DOM. It's important to add the specific class of the element you want to animate in the selector, which in this case is the animationIf class, because many other elements might trigger animation and add the ng-enter class too. We should be careful to use the specific target element class. Until the animation is completed, the resulting HTML fragment will be as follows: Consider the following snippet: <div ng-if="fadeAnimation" class="animationIf ng-scope ng-animate ng-enter ng-enter-active"> fadeAnimation value: true </div> We can see that the ng-animate, ng-enter, and ng-enter-active classes were added to the element. After the animation is completed, the DOM will have the animation classes removed as the next screenshot shows: As you can see, the animation classes are removed: <div ng-if="fadeAnimation" class="animationIf ng-scope"> This element appears when the fadeAnimation model is true </div> The ng-leave class We added the same transition of the ng-enter class to the element with the animationIf and ng-leave classes. The ng-leave class is added to the element before the element leaves the DOM. So, before the element vanishes, it will display the fade effect too. If we click again on the Toggle fade button, the leave animation will be displayed and the following HTML fragment and screen will be rendered: The fragment rendered is as follows: <div ng-if="fadeAnimation" class="animationIf ng-scope g-animate ng-leave ng-leave-active"> This element appears when the fadeAnimation model is true </div> We can notice that the ng-animate, ng-leave, and ng-leave-active classes were added to the element. 
Finally, after the element is removed from the DOM, the rendered result will be as follows: The code after removing the element is as follows: <div ng-controller="animationsCtrl" class="ng-scope"> <div class="ng-binding"> fadeAnimation value: false </div> <button ng-click="fadeAnimation = !fadeAnimation"> Toggle fade</button> <!-- ngIf: fadeAnimation --> </div> Furthermore, there are the ng-enter-active and ng-leave-active classes. They are appended to the element classes too. Both are used to define the target value of the transition, and the -active classes define the destination CSS so that we can create a transition between the start and the end of an event. For example, ng-enter is the initial class of the enter event and ng-enter-active is the final class of the enter event. They are used to determine the style applied at the start of the animation beginning and the final transition style, and they are displayed when the transition completes the cycle. A use case of the -active class is when we want to set an initial color and a final color using the CSS transition. In the last sample case, the ng-leave class has opacity set to 1 and the ng-leave-active class has the opacity set to 0; so, the element will fade away at the end of the animation. Great, we just created our first animation using AngularJS and CSS transitions. AngularJS animation with CSS keyframe animations We created an animation using the ngIf directive and CSS transitions. Now we are going to create an animation using ngRepeat and CSS animations (keyframes). As we saw in the earlier table on directives and the supported animation events, the ngRepeat directive supports animation on the enter, leave, and move events. We already used the enter and leave events in the last sample. The move event is triggered when an item is moved around on the list of items. For this sample, we will create three functions on the controller scope: one to add elements to the list in order to execute the enter event, one to remove an item from list in order to execute the leave event, and one to sort the elements so that we can see the move event. Here is the JavaScript with the functions; $scope.items is the array that we will use on the ngRepeat directive: var app = angular.module('myApp', ['ngAnimate']); app.controller('animationsCtrl', function ($scope) { $scope.items = [{ name: 'Richard' }, { name: 'Bruno' } , { name: 'Jobson' }]; $scope.counter = 0; $scope.addItem = function () { var name = 'Item' + $scope.counter++; $scope.items.push({ name: name }); }; $scope.removeItem = function () { var length = $scope.items.length; var indexRemoved = Math.floor(Math.random() * length); $scope.items.splice(indexRemoved, 1); }; $scope.sortItems = function () { $scope.items.sort(function (a, b) { return a[name] < b[name] ? 
-1 : 1 }); }; }); The HTML is as follows; it is without the CSS styles because we will see them later separating each animation block: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngRepeat sample</title> </head> <body> <div ng-controller="animationsCtrl"> <h1>ngRepeat Animation</h1> <div> <div ng-repeat="item in items" class="repeatItem"> {{item.name}} </div> <button ng-click="addItem()">Add item</button> <button ng-click="removeItem()">Remove item</button><button ng-click="sortItems()"> Sort items</button> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> </body> </html> We will add an animation to the element with the repeatItem and ng-enter classes, and we will declare the from and to keyframes. So, when an element appears, it starts with opacity set to 0 and color set as red and will animate for 1 second until opacity is 1 and color is black. This will be seen when an item is added to the ngRepeat array. The enter animation definition is declared as follows: /* ngRepeat ng-enter animation */ .repeatItem.ng-enter { -webkit-animation: 1s ng-enter-repeat-animation; animation: 1s ng-enter-repeat-animation; } @-webkit-keyframes ng-enter-repeat-animation { from { opacity: 0; color: red; } to { opacity: 1; color: black; } } @keyframes ng-enter-repeat-animation { from { opacity: 0; color: red; } to { opacity: 1; color: black; } } The move animation is declared next is to be triggered when we move an item of ngRepeat. We will add a keyframe animation to the element with the repeatItem and ng-move classes. We will declare the from and to keyframes. So, when an element moves, it starts with opacity set to 0 and color set as black and will animate for 1 second until opacity is 0.5 and color is blue, shown as follows: /* ngRepeat ng-move animation */ .repeatItem.ng-move { -webkit-animation: 1s ng-move-repeat-animation; animation: 1s ng-move-repeat-animation; } @-webkit-keyframes ng-move-repeat-animation { from { opacity: 1; color: black; } to { opacity: 0.5; color: blue; } } @keyframes ng-move-repeat-animation { from { opacity: 1; color: black; } to { opacity: 0.5; color: blue; } } The leave animation is declared next and is to be triggered when we remove an item of ngRepeat. We will add a keyframe animation to the element with the repeatItem and ng-leave classes; we will declare the from and to keyframes; so, when an element leaves the DOM, it starts with opacity set to 1 and color set as black and animates for 1 second until opacity is 0 and color is red, shown as follows: /* ngRepeat ng-leave animation */ .repeatItem.ng-leave { -webkit-animation: 1s ng-leave-repeat-animation; animation: 1s ng-leave-repeat-animation; } @-webkit-keyframes ng-leave-repeat-animation { from { opacity: 1; color: black; } to { opacity: 0; color: red; } } @keyframes ng-leave-repeat-animation { from { opacity: 1; color: black; } to { opacity: 0; color: red; } } We can see that the ng-enter-active and ng-leave-active classes aren't used on this sample, as the keyframe animation already determines the initial and final properties' states. In this case, as we used CSS keyframes, the classes with the -active suffix are useless, although for CSS transitions, it's useful to set an animation destination. The CSS naming convention In the last few sections, we saw how to create animations using AngularJS, CSS transitions, and CSS keyframe animations. 
Creating animations using both CSS transitions and CSS animations is very similar because all animations in AngularJS are class based, and AngularJS animations have a well-defined class name pattern. We must follow the CSS naming convention by adding a specific class to the directive element so that we can determine the element animation. Otherwise, the ngAnimate module will not be able to recognize which element the animation applies to. We already know that both ngIf and ngRepeat use the ng-enter, ng-enter-active, ng-leave, and ng-leave-active classes that are added to the element in the enter and leave events. It's the same naming convention used by the ngInclude, ngSwitch, ngMessage, and ngView directives. The ngHide and ngShow directives follow a different convention. They add the ng-hide-add and ng-hide-add-active classes when the element is going to be hidden. When the element is going to be shown, they add the ng-hide-remove and ng-hide-remove-active classes. These class names are more intuitive for the purpose of hiding and showing elements. There is also the ngClass directive convention that uses the class name added to create the animation classes with the -add, -add-active, -remove, and -remove-active suffixes, similar to the ngHide directive. The ngRepeat directive uses the ng-move and ng-move-active classes when elements move their position in the DOM, as we already saw in the last sample. The ngClass directive animation sample The ngClass directive allows us to dynamically set CSS classes. So, we can programmatically add and remove CSS from DOM elements. Classes are already used to change element styles, so it's very good to see how useful animating the ngClass directive is. Let's see a sample of ngClass so that it's easier to understand. 
We will create the HTML code with a Toggle ngClassbutton that will add and remove the animationClass class from the element with the initialClass class through the ngClass directive: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngClass sample</title> </head> <body> <link href="ngClassSample.css" rel="stylesheet" /> <div> <h1>ngClass Animation</h1> <div> <button ng-click="toggleNgClass = !toggleNgClass">Toggle ngClass</button> <div class="initialClass" ng-class=" {'animationClass' : toggleNgClass}"> This element has class 'initialClass' and the ngClass directive is declared as ng-class="{'animationClass' : toggleNgClass}" </div> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); </script> </body> </html> For this sample, we will use two basic classes: an initial class and the class that the ngClass directive will add to and remove from the element: /* ngclass animation */ /*This is the initialClass, that keeps in the element*/ .initialClass { background-color: white; color: black; border: 1px solid black; } /* This is the animationClass, that is added or removed by the ngClass expression*/ .animationClass { background-color: black; color: white; border: 1px solid white; } To create the animation, we will define a CSS animation using keyframes; so, we only will need to use the animationClass-add and animationClass-remove classes to add animations: @-webkit-keyframes ng-class-animation { from { background-color: white; color:black; border: 1px solid black; } to { background-color: black; color: white; border: 1px solid white; } } @keyframes ng-class-animation { from { background-color: white; color:black; border: 1px solid black; } to { background-color: black; color: white; border: 1px solid white; } } The initial state is shown as follows: So, we want to display an animation when animationClass is added to the element with the initialClass class by the ngClass directive. This way, our animation selector will be: .initialClass.animationClass-add{ -webkit-animation: 1s ng-class-animation; animation: 1s ng-class-animation; } After 500 ms, the result should be a complete gray div tag because the text, border, and background colors are halfway through the transition between black and white, as we can see in this screenshot: After a second of animation, this is the result: The remove animation, which occurs when animationClass is removed, is similar to the enter animation. However, this animation should be the reverse of the enter animation, and so, the CSS selector of the animation will be: initialClass.animationClass-remove { -webkit-animation: 1s ng-class-animation reverse; animation: 1s ng-class-animation reverse; } The animation result will be the same as we saw in previous screenshots, but in the reverse order. The ngHide and ngShow animation sample Let's see one sample of the ngHide animation, which is the directive that shows and hides the given HTML code based on an expression, such as the ngShow directive. We will use this directive to create a success notification message that fades in and out. To have a lean CSS file in this sample, we will use the Bootstrap CSS library, which is a great library to use with AngularJS. There is an AngularJS version of this library created by the Angular UI team, available at http://angular-ui.github.io/bootstrap/. 
The Twitter Bootstrap library is available at http://getbootstrap.com/. For this sample, we will use the Microsoft CDN; you can check out the Microsoft CDN libraries at http://www.asp.net/ajax/cdn. Consider the following HTML: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngHide sample</title> </head> <body> <link href="http://ajax.aspnetcdn.com/ajax/bootstrap/3.2.0/css/ bootstrap.css" rel="stylesheet" /> <style> /* ngHide animation */ .ngHideSample { padding: 10px; } .ngHideSample.ng-hide-add { -webkit-transition: all linear 0.3s; -moz-transition: all linear 0.3s; -ms-transition: all linear 0.3s; -o-transition: all linear 0.3s; opacity: 1; } .ngHideSample.ng-hide-add-active { opacity: 0; } .ngHideSample.ng-hide-remove { -webkit-transition: all linear 0.3s; -moz-transition: all linear 0.3s; -ms-transition: all linear 0.3s; -o-transition: all linear 0.3s; opacity: 0; } .ngHideSample.ng-hide-remove-active { opacity: 1; } </style> <div> <h1>ngHide animation</h1> <div> <button ng-click="disabled = !disabled">Toggle ngHide animation</button> <div ng-hide="disabled" class="ngHideSample bg-success"> This element has the ng-hide directive. </div> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); </script> </body> </html> In this sample, we created an animation in which when the element is going to hide, its opacity is transitioned until it's set to 0. Also, when the element appears again, its opacity transitions back to 1 as we can see in the sequence of the following sequence of screenshots. In the initial state, the output is as follows: After we click on the button, the notification message starts to fade: After the add (ng-hide-add) animation has completed, the output is as follows: Then, if we toggle again, we will see the success message fading in: After the animation has completed, it returns to the initial state: The ngShow directive uses the same convention; the only difference is that each directive has the opposite behavior for the model value. When the model is true, ngShow removes the ng-hide class and ngHide adds the ng-hide class, as we saw in the first sample of this article. The ngModel directive and form animations We can easily animate form controls such as input, select, and textarea on ngModel changes. Form controls already work with validation CSS classes such as ng-valid, ng-invalid, ng-dirty, and ng-pristine. These classes are appended to form controls by AngularJS, based on validations and the current form control status. We are able to animate on the add and remove features of those classes. So, let's see an example of how to change the input color to red when a field becomes invalid. This helps users to check for errors while filling in the form before it is submitted. The animation eases the validation error experience. For this sample, a valid input will contain only digits and will become invalid once a character is entered. Consider the following HTML: <h1>ngModel and form animation</h1> <div> <form> <input ng-model="ngModelSample" ng-pattern="/^d+$/" class="inputSample" /> </form> </div> This ng-pattern directive validates using the regular expression if the model ngModelSample is a number. So, if we want to warn the user when the input is invalid, we will set the input text color to red using a CSS transition. 
Consider the following CSS: /* ngModel animation */ .inputSample.ng-invalid-add { -webkit-transition: 1s linear all; transition: 1s linear all; color: black; } .inputSample.ng-invalid { color: red; } .inputSample.ng-invalid-add-active { color: red; } We followed the same pattern as ngClass. So, when the ng-invalid class is added, it will append the ng-invalid-add class and the transition will change the text color to red in a second; it will then continue to be red, as we have defined the ng-invalid color as red too. The test is easy; we just need to type in one non-numeric character on the input and it will display the animation. The ngMessage and ngMessages directive animations Both the ngMessage and ngMessages directives are complimentary, but you can choose which one you want to animate, or even animate both of them. They became separated from the core module, so we have to add the ngMessages module as a dependency of our AngularJS application. These directives were added to AngularJS in Version 1.3, and they are useful to display messages based on the state of the model of a form control. So, we can easily display a custom message if an input has a specific validation error, for example, when the input is required but is not filled in yet. Without these directives, we would rely on JavaScript code and/or complex ngIf statements to accomplish the same result. For this sample, we will create three different error messages for three different validations of a password field, as described in the following HTML: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>ngMessages animation</title> </head> <body> <link href="ngMessageAnimation.css" rel="stylesheet" /> <h1>ngMessage and ngMessages animation</h1> <div> <form name="messageAnimationForm"> <label for="modelSample">Password validation input</label> <div> <input ng-model="ngModelSample" id="modelSample" name="modelSample" type="password" ng-pattern= "/^d+$/" ng-minlength="5" ng-maxlength="10" required class="ngMessageSample" /> <div ng-messages="messageAnimationForm. modelSample.$error" class="ngMessagesClass" ng-messages-multiple> <div ng-message="pattern" class="ngMessageClass">* This field is invalid, only numbers are allowed</div> <div ng-message="minlength" class="ngMessageClass">* It's mandatory at least 5 characters</div> <div ng-message="maxlength" class="ngMessageClass">* It's mandatory at most 10 characters</div> </div> </div> </form> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-messages.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate', 'ngMessages']); </script> </body> </html> We included the ngMessage file too, as it's required for this sample. For the ngMessages directive, that is, the container of the ngMessage directives, we included an animation on ng-active-addthat changes the container background color from white to red and ng-inactive-add that does the opposite, changing the background color from red to white. This works because the ngMessages directive appends the ng-active class when there is any message to be displayed. When there is no message, it appends the ng-inactive class to the element. 
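Before looking at the animation CSS, note that the ng-messages attribute in the preceding markup is bound to the control's $error object, and each message is shown while its corresponding key is true. As a rough, illustrative sketch (the exact keys depend on which validators fail), that object might look like this while a short, non-numeric password is typed:
// Hypothetical snapshot of messageAnimationForm.modelSample.$error;
// AngularJS's built-in validators set a key to true when they fail:
var errorState = {
  pattern: true,    // the value contains non-digit characters
  minlength: true   // the value is shorter than 5 characters
};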
Let's see the ngMessages animation's declaration: .ngMessagesClass { height: 50px; width: 350px; } .ngMessagesClass.ng-active-add { transition: 0.3s linear all; background-color: red; } .ngMessagesClass.ng-active { background-color: red; } .ngMessagesClass.ng-inactive-add { transition: 0.3s linear all; background-color: white; } .ngMessagesClass.ng-inactive { background-color: white; } For the ngMessage directive, which contains a message, we created an animation that changes the color of the error message from transparent to white when the message enters the DOM, and changes the color from white to transparent when the message leaves DOM, shown as follows: .ngMessageClass { color: white; } .ngMessageClass.ng-enter { transition: 0.3s linear all; color: transparent; } .ngMessageClass.ng-enter-active { color: white; } .ngMessageClass.ng-leave { transition: 0.3s linear all; color: white; } .ngMessageClass.ng-leave-active { color: transparent; } This sample illustrates two animations for two directives that are related to each other. The initial result, before we add a password, is as follows: We can see both animations being triggered when we type in the a character, for example, in the password input. Between 0 and 300 ms of the animation, we will see both the background and text appearing for two validation messages: After 300 ms, the animation has completed, and the output is as follows: The ngView directive animation The ngView directive is used to add a template to the main layout. It has support for animation, for both enter and leave events. It's nice to have an animation for ngView, so the user has a better notion that we are switching views. For this directive sample, we need to add the ngRoute JavaScript file to the HTML and the ngRoute module as a dependency of our app. We will create a sample that slides the content of the current view to the left, and the new view appears sliding from the right to the left too so that we can see the current view leaving and the next view appearing. Consider the following HTML: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngView sample</title> </head> <body> <style> .ngViewRelative { position: relative; height: 300px; } .ngViewContainer { position: absolute; width: 500px; display: block; } .ngViewContainer.ng-enter, .ngViewContainer.ng-leave { -webkit-transition: 600ms linear all; transition: 600ms linear all; } .ngViewContainer.ng-enter { transform: translateX(500px); } .ngViewContainer.ng-enter-active { transform: translateX(0px); } .ngViewContainer.ng-leave { transform: translateX(0px); } .ngViewContainer.ng-leave-active { transform: translateX(-1000px); } </style> <h1>ngView sample</h1> <div class="ngViewRelative"> <a href="#/First">First page</a> <a href="#/Second">Second page</a> <div ng-view class="ngViewContainer"> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-route.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate', 'ngRoute']); app.config(['$routeProvider', function ($routeProvider) { $routeProvider .when('/First', { templateUrl: 'first.html' }) .when('/Second', { templateUrl: 'second.html' }) .otherwise({ redirectTo: '/First' }); }]); </script> </body> </html> We need to configure the routes on config, as the JavaScript shows us. 
We then create the two HTML templates on the same directory. The content of the templates are just plain lorem ipsum. The first.html file content is shown as follows: <div> <h2>First page</h2> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras consectetur dui nunc, vel feugiat lectus imperdiet et. In hac habitasse platea dictumst. In rutrum malesuada justo, sed porttitor dolor rutrum eu. Sed condimentum tempus est at euismod. Donec in faucibus urna. Fusce fermentum in mauris at pretium. Aenean ut orci nunc. Nulla id velit interdum nibh feugiat ultricies eu fermentum dolor. Pellentesque lobortis rhoncus nisi, imperdiet viverra leo ullamcorper sed. Donec condimentum tincidunt mollis. Curabitur lorem nibh, mattis non euismod quis, pharetra eu nibh. </p> </div> The second.html file content is shown as follows: <div> <h2>Second page</h2> <p> Ut eu metus vel ipsum tristique fringilla. Proin hendrerit augue quis nisl pellentesque posuere. Aliquam sollicitudin ligula elit, sit amet placerat augue pulvinar eget. Aliquam bibendum pulvinar nisi, quis commodo lorem volutpat in. Donec et felis sit amet mauris venenatis feugiat non id metus. Fusce leo elit, egestas non turpis sed, tincidunt consequat tellus. Fusce quis auctor neque, a ultricies urna. Cras varius purus id sagittis luctus. Sed id lectus tristique, euismod ipsum ut, congue augue. </p> </div> Great, we now have our app set up to enable ngView and routes. The animation was defined by adding animation to the enter and leave events, using translateX(). This animation is defined to the new view coming from 500 px from the right and animating until the position on the x-axis is 0, leaving the view in the left corner. The leaving view goes from the initial position until it is at -1000 px on the x-axis. Then, it leaves the DOM. This animation creates a sliding effect; the leaving view leaves faster as it has to move the double of the distance of the entering view in the same animation duration. We can change the translation using the y-axis to change the animation direction, creating the same sliding effect but with different aesthetics. The ngSwitch directive animation The ngSwitch directive is a directive that is used to conditionally swap the DOM structure based on an expression. It supports animation on the enter and leave events, for example, the ngView directive animation events. For this sample, we will create the same sliding effect of the ngView sample, but in this case, we will create a sliding effect from top to bottom instead of right to left. This animation helps the user to understand that one item is being replaced by the other. 
The ngSwitch sample HTML is shown as follows: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngSwitch sample</title> </head> <body> <div ng-controller="animationsCtrl"> <h1>ngSwitch sample</h1> <p>Choose an item:</p> <select ng-model="ngSwitchSelected" ng-options="item for item in ngSwitchItems"></select> <p>Selected item:</p> <div class="switchItemRelative" ng-switch on="ngSwitchSelected"> <div class="switchItem" ng-switch-when="item1">Item 1</div> <div class="switchItem" ng-switch-when="item2">Item 2</div> <div class="switchItem" ng-switch-when="item3">Item 3</div> <div class="switchItem" ng-switch-default>Default Item</div> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); app.controller('animationsCtrl', function ($scope) { $scope.ngSwitchItems = ['item1', 'item2', 'item3']; }); </script> </body> </html> In the JavaScript controller, we added the ngSwitchItems array to the scope, and the animation CSS is defined as follows: /* ngSwitch animation */ .switchItemRelative { position: relative; height: 25px; overflow: hidden; } .switchItem { position: absolute; width: 500px; display: block; } /*The transition is added when the switch item is about to enter or about to leave DOM*/ .switchItem.ng-enter, .switchItem.ng-leave { -webkit-transition: 300ms linear all; -moz-transition: 300ms linear all; -ms-transition: 300ms linear all; -o-transition: 300ms linear all; transition: 300ms linear all; } /* When the element is about to enter DOM*/ .switchItem.ng-enter { bottom: 100%; } /* When the element completes the enter transition */ .switchItem.ng-enter-active { bottom: 0; } /* When the element is about to leave DOM*/ .switchItem.ng-leave { bottom: 0; } /*When the element end the leave transition*/ .switchItem.ng-leave-active { bottom: -100%; } This is almost the same CSS as the ngView sample; we just used the bottom property, added a different height to the switchItemRelative class, and included overflow:hidden. The ngInclude directive sample The ngInclude directive is used to fetch, compile, and include an HTML fragment; it supports animations for the enter and leave events, such as the ngView and ngSwitch directives. For this sample, we will use both templates created in the last ngView sample, first.html and second.html. 
The ngInclude animation sample HTML with JavaScript and CSS included is shown as follows: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS ngInclude sample</title> </head> <body> <style> .ngIncludeRelative { position: relative; height: 500px; overflow: hidden; } .ngIncludeItem { position: absolute; width: 500px; display: block; } .ngIncludeItem.ng-enter, .ngIncludeItem.ng-leave { -webkit-transition: 300ms linear all; transition: 300ms linear all; } .ngIncludeItem.ng-enter { top: 100%; } .ngIncludeItem.ng-enter-active { top: 0; } .ngIncludeItem.ng-leave { top: 0; } .ngIncludeItem.ng-leave-active { top: -100%; } </style> <div ng-controller="animationsCtrl"> <h1>ngInclude sample</h1> <p>Choose one template</p> <select ng-model="ngIncludeSelected" ng-options="item.name for item in ngIncludeTemplates"></select> <p>ngInclude:</p> <div class="ngIncludeRelative"> <div class="ngIncludeItem" nginclude=" ngIncludeSelected.url"></div> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs /1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); app.controller('animationsCtrl', function ($scope) { $scope.ngIncludeTemplates = [{ name: 'first', url: 'first.html' }, { name: 'second', url: 'second.html' }]; }) </script> </body> </html> In the JavaScript controller, we included the templates array. Finally, we can animate ngInclude using CSS. In this sample, we will animate by sliding the templates using the top property, using the enter and leave events animation. To test this sample, just change the template value selected. Do it yourself exercises The following are some exercises that you can refer to as an exercise that will help you understand the concepts of this article better: Create a spinning loading animation, using the ngShow or ngHide directives that appears when the scope controller variable, $scope.isLoading, is equal to true. Using exercise 1, create a gray background layer with opacity 0.5 that smoothly fills the entire page behind the loading spin, and after page content is loaded, covers all the content until isProcessing becomes false. The effect should be that of a drop of ink that is dropped on a piece of paper and spreads until it's completely stained. Create a success notification animation, similar to the ngShow example, but instead of using the fade animation, use a slide-down animation. So, the success message starts with height:0px. Check http://api.jquery.com/slidedown/ for the expected animation effect. Copy any animation from the http://capptivate.co/ website, using AngularJS and CSS animations. Summary In this article, we learned how to animate AngularJS native directives using the CSS transitions and CSS keyframe concepts. This article taught you how to create animations on AngularJS web apps. Resources for Article: Further resources on this subject: Important Aspect of AngularJS UI Development [article] Setting Up The Rig [article] AngularJS Project [article]

Importance of Windows RDS in Horizon View

Packt
30 Oct 2014
15 min read
In this article by Jason Ventresco, the author of VMware Horizon View 6 Desktop Virtualization Cookbook, we will look at Windows Remote Desktop Services (RDS) and how they are implemented in Horizon View. He will discuss configuring the Windows RDS server and creating an RDS farm in Horizon View.
(For more resources related to this topic, see here.)
Configuring the Windows RDS server for use with Horizon View
This recipe will provide an introduction to the minimum steps required to configure Windows RDS and integrate it with our Horizon View pod. For a more in-depth discussion on Windows RDS optimization and management, consult the Microsoft TechNet page for Windows Server 2012 R2 (http://technet.microsoft.com/en-us/library/hh801901.aspx).
Getting ready
VMware Horizon View supports the following versions of Windows Server for use with RDS:
Windows Server 2008 R2: Standard, Enterprise, or Datacenter, with SP1 or later installed
Windows Server 2012: Standard or Datacenter
Windows Server 2012 R2: Standard or Datacenter
The examples shown in this article were performed on Windows Server 2012 R2. Additionally, all of the applications required have already been installed on the server, which in this case included Microsoft Office 2010.
Microsoft Office has specific licensing requirements when used with a Windows Server RDS. Consult Microsoft's Licensing of Microsoft Desktop Application Software for Use with Windows Server Remote Desktop Services document (http://www.microsoft.com/licensing/about-licensing/briefs/remote-desktop-services.aspx) for additional information.
The Windows RDS feature requires a licensing server component called the Remote Desktop Licensing role service. For reasons of availability, it is not recommended that you install it on the RDS host itself, but rather on an existing server that serves some other function, or even on a dedicated server if possible. Ideally, the RDS licensing role should be installed on multiple servers for redundancy reasons. The Remote Desktop Licensing role service is different from the Microsoft Windows Key Management System (KMS), as it is used solely for Windows RDS hosts.
Consult the Microsoft TechNet article, RD Licensing Configuration on Windows Server 2012 (http://blogs.technet.com/b/askperf/archive/2013/09/20/rd-licensing-configuration-on-windows-server-2012.aspx), for the steps required to install the Remote Desktop Licensing role service. Additionally, consult the Microsoft document Licensing Windows Server 2012 R2 Remote Desktop Services (http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServerRDS_VLBrief.pdf) for information about the licensing options for Windows RDS, which include both per-user and per-device options.
Windows RDS host – hardware recommendations
The following resources represent a starting point for assigning CPU and RAM resources to Windows RDS hosts. The actual resources required will vary based on the applications being used and the number of concurrent users, so it is important to monitor server utilization and adjust the CPU and RAM specifications if required.
The following are the requirements: One vCPU for each of the 15 concurrent RDS sessions 2 GB RAM, base RAM amount equal to 2 GB per vCPU, plus 64 MB of additional RAM for each concurrent RDS session An additional RAM equal to the application requirements, multiplied by the estimated number of concurrent users of the application Sufficient hard drive space to store RDS user profiles, which will vary based on the configuration of the Windows RDS host: Windows RDS supports multiple options to control user profiles' configuration and growth, including a RD user home directory, RD roaming user profiles, and mandatory profiles. For information about these and other options, consult the Microsoft TechNet article, Manage User Profiles for Remote Desktop Services, at http://technet.microsoft.com/en-us/library/cc742820.aspx. This space is only required if you intend to store user profiles locally on the RDS hosts. Horizon View Persona Management is not supported and will not work with Windows RDS hosts. Consider native Microsoft features such as those described previously in this recipe, or third-party tools such as AppSense Environment Manager (http://www.appsense.com/products/desktop/desktopnow/environment-manager). Based on these values, a Windows Server 2012 R2 RDS host running Microsoft Office 2010 that will support 100 concurrent users will require the following resources: Seven vCPU to support upto 105 concurrent RDS sessions 45.25 GB of RAM, based on the following calculations: 20.25 GB of base RAM (2 GB for each vCPU, plus 64 MB for each of the 100 users) A total of 25 GB additional RAM to support Microsoft Office 2010 (Office 2010 recommends 256 MB of RAM for each user) While the vCPU and RAM requirements might seem excessive at first, remember that to deploy a virtual desktop for each of these 100 users, we would need at least 100 vCPUs and 100 GB of RAM, which is much more than what our Windows RDS host requires. By default, Horizon View allows only 150 unique RDS user sessions for each available Windows RDS host; so, we need to deploy multiple RDS hosts if users need to stream two applications at once or if we anticipate having more than 150 connections. It is possible to change the number of supported sessions, but it is not recommended due to potential performance issues. Importing the Horizon View RDS AD group policy templates Some of the settings configured throughout this article are applied using AD group policy templates. Prior to using the RDS feature, these templates should be distributed to either the RDS hosts in order to be used with the Windows local group policy editor, or to an AD domain controller where they can be applied using the domain. Complete the following steps to install the View RDS group policy templates: When referring to VMware Horizon View installation packages, y.y.y refers to the version number and xxxxxx refers to the build number. When you download packages, the actual version and build numbers will be in a numeric format. For example, the filename of the current Horizon View 6 GPO bundle is VMware-Horizon-View-Extras-Bundle-3.1.0-2085634.zip. Obtain the VMware-Horizon-View-GPO-Bundle-x.x.x-yyyyyyy.zip file, unzip it, and copy the en-US folder, the vmware_rdsh.admx file, and the vmware_rdsh_server.admx file to the C:WindowsPolicyDefinitions folder on either an AD domain controller or your target RDS host, based on how you wish to manage the policies. 
Make note of the following points while doing so:

If you want to set the policies locally on each RDS host, you will need to copy the files to each server.
If you wish to set the policies using domain-based AD group policies, you will need to copy the files to the domain controllers, to the group policy Central Store (http://support.microsoft.com/kb/929841), or to the workstation from which you manage these domain-based group policies.

How to do it…
The following steps outline the procedure to enable RDS on a Windows Server 2012 R2 host. The host used in this recipe has already been joined to the domain and is logged in with an AD account that has administrative permissions on the server. Perform the following steps:

Open the Windows Server Manager utility and go to Manage | Add Roles and Features to open the Add Roles and Features Wizard.
On the Before you Begin page, click on Next.
On the Installation Type page, select Remote Desktop Services installation and click on Next. This is shown in the following screenshot:
On the Deployment Type page, select Quick Start and click on Next. You can also implement the required roles using the standard deployment method outlined in the Deploy the Session Virtualization Standard deployment section of the Microsoft TechNet article, Test Lab Guide: Remote Desktop Services Session Virtualization Standard Deployment (http://technet.microsoft.com/en-us/library/hh831610.aspx). If you use this method, you will complete the component installation and proceed to step 9 in this recipe.
On the Deployment Scenario page, select Session-based desktop deployment and click on Next.
On the Server Selection page, select a server from the list under Server Pool, click the red highlighted button to add the server to the list of selected servers, and click on Next. This is shown in the following screenshot:
On the Confirmation page, check the box marked Restart the destination server automatically if required and click on Deploy.
On the Completion page, monitor the installation process and click on Close when finished in order to complete the installation. If a reboot is required, the server will reboot without the need to click on Close. Once the reboot completes, proceed with the remaining steps.
Set the RDS licensing server using the Set-RDLicenseConfiguration Windows PowerShell command. In this example, we are configuring the local RDS host to point to redundant license servers (RDS-LIC1 and RDS-LIC2) and setting the license mode to PerUser. This command must be executed on the target RDS host. After entering the command, confirm the values for the license mode and license server name by answering Y when prompted. Refer to the following code:

Set-RDLicenseConfiguration -LicenseServer @("RDS-LIC1.vjason.local","RDS-LIC2.vjason.local") -Mode PerUser

This setting might also be set using group policies applied either to the local computer or using Active Directory (AD). The policies are shown in the following screenshot, and you can locate them by going to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Licensing when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path:
Use local computer or AD group policies to limit users to one session per RDS host using the Restrict Remote Desktop Services users to a single Remote Desktop Services session policy.
The policy is shown in the following screenshot, and you can locate it by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Connections:
Use local computer or AD group policies to enable time zone redirection. You can locate the policy by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Horizon View RDSH Services | Remote Desktop Session Host | Device and Resource Redirection when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To enable the setting, set Allow time zone redirection to Enabled.
Use local computer or AD group policies to enable the Windows Basic aero-styled theme. You can locate the policy by going to User Configuration | Policies | Administrative Templates | Control Panel | Personalization when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the theme, set Force a specific visual style file or force Windows Classic to Enabled and set Path to Visual Style to %windir%\resources\Themes\Aero\aero.msstyles.
Use local computer or AD group policies to start runonce.exe when the RDS session starts. You can locate the policy by going to User Configuration | Policies | Windows Settings | Scripts (Logon/Logoff) when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the logon settings, double-click on Logon, click on Add, enter runonce.exe in the Script Name box, and enter /AlternateShellStartup in the Script Parameters box.
On the Windows RDS host, double-click on the 64-bit Horizon View Agent installer to begin the installation process. The installer should have a name similar to VMware-viewagent-x86_64-y.y.y-xxxxxx.exe.
On the Welcome to the Installation Wizard for VMware Horizon View Agent page, click on Next.
On the License Agreement page, select the I accept the terms in the license agreement radio button and click on Next.
On the Custom Setup page, either leave all the options set to their defaults, or, if you are not using vCenter Operations Manager, deselect this optional component of the agent, and click on Next.
On the Register with Horizon View Connection Server page, shown in the following screenshot, enter the hostname or IP address of one of the Connection Servers in the pod where the RDS host will be used. If the user performing the installation of the agent software is an administrator in the Horizon View environment, leave the Authentication setting set to the default; otherwise, select the Specify administrator credentials radio button and provide the username and password of an account that has administrative rights in Horizon View. Click on Next to continue:
On the Ready to Install the Program page, click on Install to begin the installation.
When the installation completes, reboot the server if prompted.

The Windows RDS service is now enabled, configured with the optimal settings for use with VMware Horizon View, and has the necessary agent software installed. This process should be repeated on additional RDS hosts, as needed, to support the target number of concurrent RDS sessions.
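If you manage more than a handful of RDS hosts, you may prefer to script the preceding per-host checks rather than visit each host's local group policy editor. The following PowerShell sketch is not part of the original recipe: the host names are placeholders, it assumes PowerShell remoting is enabled, that the RemoteDesktop module is present (it is installed with the RDS role), and that each host holds the RD Connection Broker role (which the Quick Start deployment used in this recipe installs). It only reads back the licensing settings applied earlier and mirrors the effect of the single-session policy by writing the equivalent registry value:

# Hypothetical host list; replace with your own RDS host names.
$rdsHosts = "WINRDS01.vjason.local", "WINRDS02.vjason.local"

Invoke-Command -ComputerName $rdsHosts -ScriptBlock {
    # Read back the licensing settings applied earlier with Set-RDLicenseConfiguration.
    Get-RDLicenseConfiguration

    # Mirror the "Restrict Remote Desktop Services users to a single Remote
    # Desktop Services session" setting (1 = limit each user to one session).
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
        -Name fSingleSessionPerUser -Value 1 -Type DWord
}

Note that a domain-based group policy, where configured, takes precedence over the value written here, so treat this purely as a convenience for lab or standalone hosts.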
How it works…
The following resources provide detailed information about the configuration options used in this recipe:

Microsoft TechNet's Set-RDLicenseConfiguration article at http://technet.microsoft.com/en-us/library/jj215465.aspx provides the complete syntax of the PowerShell command used to configure the RDS licensing settings.
Microsoft TechNet's Remote Desktop Services Client Access Licenses (RDS CALs) article at http://technet.microsoft.com/en-us/library/cc753650.aspx explains the different RDS license types, which reveals that an RDS per-user Client Access License (CAL) allows our Horizon View clients to access the RDS servers from an unlimited number of endpoints while still consuming only one RDS license.
The Microsoft TechNet article, Remote Desktop Session Host, Licensing (http://technet.microsoft.com/en-us/library/ee791926(v=ws.10).aspx), provides additional information on the group policies used to configure the RDS licensing options.
The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/index.jsp?topic=%2Fcom.vmware.horizon-view.desktops.doc%2FGUID-931FF6F3-44C1-4102-94FE-3C9BFFF8E38D.html) explains that the Windows Basic aero-styled theme is the only theme supported by Horizon View, and demonstrates how to implement it.
The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-443F9F6D-C9CB-4CD9-A783-7CC5243FBD51.html) explains why time zone redirection is required, as it ensures that the Horizon View RDS client session will use the same time zone as the client device.
The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-85E4EE7A-9371-483E-A0C8-515CF11EE51D.html) explains why we need to add the runonce.exe /AlternateShellStartup command to the RDS logon script. This ensures that applications which require Windows Explorer will work properly when streamed using Horizon View.

Creating an RDS farm in Horizon View
This recipe will discuss the steps that are required to create an RDS farm in our Horizon View pod. An RDS farm is a collection of Windows RDS hosts and serves as the point of integration between the View Connection Server and the individual applications installed on each RDS server. Additionally, key settings concerning client session handling and client connection protocols are set at the RDS farm level within Horizon View.

Getting ready
To create an RDS farm in Horizon View, we need to have at least one RDS host registered with our View pod. Assuming that the Horizon View Agent installation completed successfully in the previous recipe, we should see the RDS hosts registered in the Registered Machines menu under View Configuration of our View Manager Admin console. The tasks required to create the RDS farm are performed using the Horizon View Manager Admin console.

How to do it…
The following steps outline the procedure used to create an RDS farm. In this example, we have already created and registered two Windows RDS hosts named WINRDS01 and WINRDS02. Perform the following steps:

Navigate to Resources | Farms and click on Add, as shown in the following screenshot:
On the Identification and Settings page, shown in the following screenshot, provide a farm ID, a description if desired, make any desired changes to the default settings, and then click on Next.
The settings can be changed to On if needed:
On the Select RDS Hosts page, shown in the following screenshot, click on the RDS hosts to be added to the farm and then click on Next:
On the Ready to Complete page, review the configuration and click on Finish.

The RDS farm has been created, which allows us to go on and create application pools.

How it works…
The following RDS farm settings can be changed at any time and are described in the following points:

Default display protocol: PCoIP (default) and RDP are available.
Allow users to choose protocol: By default, Horizon View Clients can select their preferred protocol; we can change this setting to No in order to enforce the farm defaults.
Empty session timeout (applications only): This denotes the amount of time that must pass after a client closes all RDS applications before the RDS farm takes the action specified in the When timeout occurs setting. The default setting is 1 minute.
When timeout occurs: This determines which action is taken by the RDS farm when the session's timeout deadline passes; the options are Log off or Disconnect (default).
Log off disconnected sessions: This determines what happens when a View RDS session is disconnected; the options are Never (default), Immediate, or After. If After is selected, a time in minutes must be provided.

Summary
We have learned about configuring the Windows RDS server for use in Horizon View and about creating an RDS farm in Horizon View.

Resources for Article:
Further resources on this subject:
Backups in the VMware View Infrastructure [Article]
An Introduction to VMware Horizon Mirage [Article]
Designing and Building a Horizon View 6.0 Infrastructure [Article]

Theming with Highcharts

Packt
30 Oct 2014
10 min read
Besides the charting capabilities offered by Highcharts, theming is yet another strong feature of Highcharts. With its extensive theming API, charts can be customized completely to match the branding of a website or an app. Almost all of the chart elements are customizable through this API. In this article by Bilal Shahid, author of Highcharts Essentials, we will do the following things: (For more resources related to this topic, see here.) Use different fill types and fonts Create a global theme for our charts Use jQuery easing for animations Using Google Fonts with Highcharts Google provides an easy way to include hundreds of high quality web fonts to web pages. These fonts work in all major browsers and are served by Google CDN for lightning fast delivery. These fonts can also be used with Highcharts to further polish the appearance of our charts. This section assumes that you know the basics of using Google Web Fonts. If you are not familiar with them, visit https://developers.google.com/fonts/docs/getting_started. We will style the following example with Google Fonts. We will use the Merriweather family from Google Fonts and link to its style sheet from our web page inside the <head> tag: <link href='http://fonts.googleapis.com/css?family=Merriweather:400italic,700italic' rel='stylesheet' type='text/css'> Having included the style sheet, we can actually use the font family in our code for the labels in yAxis: yAxis: [{ ... labels: {    style: {      fontFamily: 'Merriweather, sans-serif',      fontWeight: 400,      fontStyle: 'italic',      fontSize: '14px',      color: '#ffffff'    } } }, { ... labels: {    style: {      fontFamily: 'Merriweather, sans-serif',      fontWeight: 700,      fontStyle: 'italic',      fontSize: '21px',      color: '#ffffff'    },    ... } }] For the outer axis, we used a font size of 21px with font weight of 700. For the inner axis, we lowered the font size to 14px and used font weight of 400 to compensate for the smaller font size. The following is the modified speedometer: In the next section, we will continue with the same example to include jQuery UI easing in chart animations. Using jQuery UI easing for series animation Animations occurring at the point of initialization of charts can be disabled or customized. The customization requires modifying two properties: animation.duration and animation.easing. The duration property accepts the number of milliseconds for the duration of the animation. The easing property can have various values depending on the framework currently being used. For a standalone jQuery framework, the values can be either linear or swing. Using the jQuery UI framework adds a couple of more options for the easing property to choose from. In order to follow this example, you must include the jQuery UI framework to the page. You can also grab the standalone easing plugin from http://gsgd.co.uk/sandbox/jquery/easing/ and include it inside your <head> tag. We can now modify the series to have a modified animation: plotOptions: { ... series: {    animation: {      duration: 1000,      easing: 'easeOutBounce'    } } } The preceding code will modify the animation property for all the series in the chart to have duration set to 1000 milliseconds and easing to easeOutBounce. Each series can have its own different animation by defining the animation property separately for each series as follows: series: [{ ... animation: {    duration: 500,    easing: 'easeOutBounce' } }, { ... 
animation: {    duration: 1500,    easing: 'easeOutBounce' } }, { ... animation: {      duration: 2500,    easing: 'easeOutBounce' } }] Different animation properties for different series can pair nicely with column and bar charts to produce visually appealing effects. Creating a global theme for our charts A Highcharts theme is a collection of predefined styles that are applied before a chart is instantiated. A theme will be applied to all the charts on the page after the point of its inclusion, given that the styling options have not been modified within the chart instantiation. This provides us with an easy way to apply custom branding to charts without the need to define styles over and over again. In the following example, we will create a basic global theme for our charts. This way, we will get familiar with the fundamentals of Highcharts theming and some API methods. We will define our theme inside a separate JavaScript file to make the code reusable and keep things clean. Our theme will be contained in an options object that will, in turn, contain styling for different Highcharts components. Consider the following code placed in a file named custom-theme.js. This is a basic implementation of a Highcharts custom theme that includes colors and basic font styles along with some other modifications for axes: Highcharts.customTheme = {      colors: ['#1BA6A6', '#12734F', '#F2E85C', '#F27329', '#D95D30', '#2C3949', '#3E7C9B', '#9578BE'],      chart: {        backgroundColor: {            radialGradient: {cx: 0, cy: 1, r: 1},            stops: [                [0, '#ffffff'],                [1, '#f2f2ff']            ]        },        style: {            fontFamily: 'arial, sans-serif',            color: '#333'        }    },    title: {        style: {            color: '#222',            fontSize: '21px',            fontWeight: 'bold'        }    },    subtitle: {        style: {            fontSize: '16px',            fontWeight: 'bold'        }    },    xAxis: {        lineWidth: 1,        lineColor: '#cccccc',        tickWidth: 1,        tickColor: '#cccccc',        labels: {            style: {                fontSize: '12px'            }        }    },    yAxis: {        gridLineWidth: 1,        gridLineColor: '#d9d9d9',        labels: {           style: {                fontSize: '12px'            }        }    },    legend: {        itemStyle: {            color: '#666',            fontSize: '9px'        },        itemHoverStyle:{            color: '#222'        }      } }; Highcharts.setOptions( Highcharts.customTheme ); We start off by modifying the Highcharts object to include an object literal named customTheme that contains styles for our charts. Inside customTheme, the first option we defined is for series colors. We passed an array containing eight colors to be applied to series. In the next part, we defined a radial gradient as a background for our charts and also defined the default font family and text color. The next two object literals contain basic font styles for the title and subtitle components. Then comes the styles for the x and y axes. For the xAxis, we define lineColor and tickColor to be #cccccc with the lineWidth value of 1. The xAxis component also contains the font style for its labels. The y axis gridlines appear parallel to the x axis that we have modified to have the width and color at 1 and #d9d9d9 respectively. Inside the legend component, we defined styles for the normal and mouse hover states. 
These two states are specified by itemStyle and itemHoverStyle respectively. In the normal state, the legend will have a color of #666 and a font size of 9px. When hovered over, the color will change to #222. In the final part, we set our theme as the default Highcharts theme by using the API method Highcharts.setOptions(), which takes a settings object to be applied to Highcharts; in our case, it is customTheme. The styles that have not been defined in our custom theme will remain the same as in the default theme. This allows us to partially customize a predefined theme by introducing another theme containing different styles. In order to make this theme work, include the file custom-theme.js after the highcharts.js file:

<script src="js/highcharts.js"></script>
<script src="js/custom-theme.js"></script>

The output of our custom theme is as follows:

We can also tell our theme to include a web font from Google without the need to include the style sheet manually in the header, as we did in a previous section. For that purpose, Highcharts provides a utility method named Highcharts.createElement(). We can use it as follows by placing the code inside the custom-theme.js file:

Highcharts.createElement( 'link', {
    href: 'http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,700italic,400,300,700',
    rel: 'stylesheet',
    type: 'text/css'
}, null, document.getElementsByTagName( 'head' )[0], null );

The first argument is the name of the tag to be created. The second argument takes an object of tag attributes. The third argument is for CSS styles to be applied to this element. Since there is no need for CSS styles on a link element, we passed null as its value. The final two arguments are for the parent node and padding, respectively. We can now change the default font family for our charts to 'Open Sans':

chart: {
    ...
    style: {
        fontFamily: "'Open Sans', sans-serif",
        ...
    }
}

The specified Google web font will now be loaded every time a chart with our custom theme is initialized, hence eliminating the need to manually insert the required font style sheet inside the <head> tag. This screenshot shows a chart with the 'Open Sans' Google web font.

Summary
In this article, you learned about incorporating Google Fonts, jQuery UI easing, and a global theme into our charts for enhanced styling.

Resources for Article:
Further resources on this subject:
Integrating with other Frameworks [Article]
Highcharts [Article]
More Line Charts, Area Charts, and Scatter Plots [Article]

concrete5 – Creating Blocks

Packt
30 Oct 2014
7 min read
In this article by Sufyan bin Uzayr, author of the book concrete5 for Developers, you will be introduced to concrete5. Basically, we will be talking about the creation of concrete5 blocks. (For more resources related to this topic, see here.)

Creating a new block
Creating a new block in concrete5 can be a daunting task for beginners, but once you get the hang of it, the process is pretty simple. For the sake of clarity, we will focus on the creation of a new block from scratch. If you already have some experience with block building in concrete5, you can skip the initial steps of this section. The steps to create a new block are as follows:

First, create a new folder within your project's blocks folder. Ideally, the name of the folder should bear relevance to the actual purpose of the block; thus, a slideshow block's folder can be named slide. Assuming that we are building a contact form block, let's name our block's folder contact.
Next, you need to add a controller class to your block. Again, if you have some level of expertise with concrete5 development, you will already be aware of the meaning and purpose of the controller class. Basically, a controller is used to control the flow of an application: it can accept requests from the user, process them, prepare the data to be presented in the result, and so on. For now, we need to create a file named controller.php in our block's folder. For the contact form block, this is how it is going to look (don't forget the PHP tags):

class ContactBlockController extends BlockController {
    protected $btTable = 'btContact';

    /**
     * Used for internationalization (i18n).
     */
    public function getBlockTypeDescription() {
        return t('Display a contact form.');
    }

    public function getBlockTypeName() {
        return t('Contact');
    }

    public function view() {
        // If the block is rendered
    }

    public function add() {
        // If the block is added to a page
    }

    public function edit() {
        // If the block instance is edited
    }
}

The preceding code is pretty simple and seems to have become the industry norm when it comes to block creation in concrete5. Basically, our class extends BlockController, which is responsible for installing the block, saving the data, and rendering templates. The name of the class should be the camel case version of the block handle, followed by BlockController. We also need to specify the name of the database table in which the block's data will be saved. More importantly, as you must have noticed, we have three separate functions: view(), add(), and edit(). The roles of these functions have been described earlier.
Next, create three files within the block's folder: view.php, add.php, and edit.php (yes, the same names as the functions in our code). The names are self-explanatory: add.php will be used when a new block is added to a given page, edit.php will be used when an existing block is edited, and view.php jumps into action when users view blocks live on the page.

Often, it becomes necessary to have more than one template file within a block. If so, you need to dynamically render templates in order to decide which one to use in a given situation. As discussed in the previous table, the BlockController class has a render($view) method that accepts a single parameter in the form of the template's filename. To do this from controller.php, we can use the code as follows:

public function view() {
    if ($this->isPost()) {
        $this->render('block_pb_view');
    }
}

In the preceding example, the file named block_pb_view.php will be rendered instead of view.php. To reiterate, we should note that the render($view) method does not require the .php extension with its parameters.
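The view.php example shown next posts its form to $this->action('contact_submit'); the article describes handling that submission only in prose, so here is a rough, hedged sketch of what such a handler could look like in the same controller class. The field names, the recipient address, and the success and error handling are assumptions to adapt to your own block, not code from the article:

public function action_contact_submit() {
    // Ignore anything that is not a POST request.
    if (!$this->isPost()) {
        return false;
    }

    $title   = trim($this->post('txtContactTitle'));
    $message = trim($this->post('taContactMessage'));

    // Very basic validation: both fields must contain something.
    if ($title === '' || $message === '') {
        $this->set('contactError', t('Please fill in both fields.'));
        return;
    }

    // Use the concrete5 mail helper to notify the site owner (assumed address).
    $mh = Loader::helper('mail');
    $mh->to('owner@example.com');
    $mh->setSubject(t('New contact form message: %s', $title));
    $mh->setBody($message);
    $mh->sendMail();

    $this->set('contactSuccess', t('Thanks, your message has been sent.'));
}

Values passed to $this->set() become available as variables inside the block's templates, so view.php could check for $contactError or $contactSuccess and display a message accordingly.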
Now, it is time to display the contact form. The file in question is view.php, where we can put virtually any HTML or PHP code that suits our needs. For example, in order to display our contact form, we can hardcode the HTML markup or make use of the Form Helper to generate it. Thus, a hardcoded version of our contact form might look as follows:

<?php defined('C5_EXECUTE') or die("Access Denied.");
global $c; ?>
<form method="post" action="<?php echo $this->action('contact_submit'); ?>">
    <label for="txtContactTitle">SampleLabel</label>
    <input type="text" name="txtContactTitle" />
    <br /><br />
    <label for="taContactMessage"></label>
    <textarea name="taContactMessage"></textarea>
    <br /><br />
    <input type="submit" name="btnContactSubmit" />
</form>

Each time the block is displayed, the view() function from controller.php will be called. The action() method in the previous code generates the URL and verifies the submitted values each time a user inputs content in our contact form. Much like any other contact form, we now need to handle contact requests; a rough sketch of such a handler was shown earlier. The procedure is pretty simple and almost the same as what we would use in any other development environment: we need to verify that the request in question is a POST request and, if so, read the posted values; if not, we need to discard the entire request. We can also use the mail helper to send an e-mail to the website owner or administrator.

Before our block can be fully functional, we need to add a database table, because concrete5, much like most other CMSs in its league, works with a database system. In order to add a database table, create a file named db.xml within the concerned block's folder. Thereafter, concrete5 will automatically parse this file and create a relevant table in the database for your block. For our previous contact form block, and for other basic block-building purposes, this is how the db.xml file should look:

<?xml version="1.0"?>
<schema version="0.3">
    <table name="btContact">
        <field name="bID" type="I">
            <key />
            <unsigned />
        </field>
    </table>
</schema>

You can make relevant changes in the preceding schema definitions to suit your needs. For instance, this is how the default YouTube block's db.xml file looks:

<?xml version="1.0"?>
<schema version="0.3">
    <table name="btYouTube">
        <field name="bID" type="I">
            <key />
            <unsigned />
        </field>
        <field name="title" type="C" size="255"></field>
        <field name="videoURL" type="C" size="255"></field>
    </table>
</schema>

The preceding steps enumerate the process of creating your first block in concrete5. However, while you are now aware of the steps involved in the creation of blocks and can easily work with concrete5 blocks for the most part, there are certain additional details that you should be aware of if you are to utilize the block functionality in concrete5 to its fullest. The first and probably the most useful of these details is validation of user input within blocks and forms.

Summary
In this article, we learned how to create our very first block in concrete5.

Resources for Article:
Further resources on this subject:
Alfresco 3: Writing and Executing Scripts [Article]
Integrating Moodle 2.0 with Alfresco to Manage Content for Business [Article]
Alfresco 3 Business Solutions: Types of E-mail Integration [Article]