Cross-site Request Forgery

Packt
17 Nov 2014
9 min read
In this article by Y.E Liang, the author of JavaScript Security, we will cover cross-site request forgery. This topic is not exactly new; here, we will go deeper into cross-site request forgery and learn the various techniques of defending against it.

Introducing cross-site request forgery

Cross-site request forgery (CSRF) exploits the trust that a site has in a user's browser. It is also defined as an attack that forces an end user to execute unwanted actions on a web application in which the user is currently authenticated.

Examples of CSRF

We will now take a look at a basic CSRF example:

1. Go to the source code and change the directory. Run the following command (and remember to start your MongoDB process as well):

```
python xss_version.py
```

2. Next, open external.html, found in templates, on another host, say http://localhost:8888. You can do this by starting a second server with `python xss_version.py --port=8888` and then visiting http://localhost:8888/todo_external. You will see the following screen:

[Screenshot: Adding a new to-do item]

3. Click on Add To Do and fill in a new to-do item, as shown in the following screenshot:

[Screenshot: Adding a new to-do item and posting it]

4. Next, click on Submit. Go back to your to-do list app at http://localhost:8000/todo and refresh it; you will see the new to-do item added to the database, as shown in the following screenshot:

[Screenshot: To-do item is added from an external app; this is dangerous!]

To attack the to-do list app, all we need to do is add a new item that contains a line of JavaScript, as shown in the following screenshot:

[Screenshot: Adding a new to-do item for the Python version]

Now, click on Submit.
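To make the risk concrete, here is a small, self-contained sketch of the situation the steps above demonstrate: an endpoint that performs no CSRF check will happily accept a POST made from anywhere. This uses only the standard library; the in-memory list, port, and payload are illustrative stand-ins for the book's actual Tornado/MongoDB server.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory "database" standing in for the MongoDB collection.
TODOS = []

class TodoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # No CSRF token check: any page on any origin can trigger this
        # request, and the browser attaches the victim's cookies for free.
        length = int(self.headers.get("Content-Length", 0))
        TODOS.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        # Silence per-request logging for this demo.
        pass

server = HTTPServer(("127.0.0.1", 0), TodoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# This request plays the role of external.html on another origin.
forged = json.dumps({"text": "injected", "details": "from elsewhere"}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:%d/api/todos" % server.server_port,
    data=forged, headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)

print(TODOS)  # the forged item was accepted without question
server.shutdown()
```

The point is not the transport details but the absence of any check tying the request to a page the server itself produced.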
Then, go back to your to-do app at http://localhost:8000/todo, and you will see two subsequent alerts, as shown in the following screenshots:

[Screenshot: Successfully injected JavaScript part 1]

So here's the first instance where CSRF happens:

[Screenshot: Successfully injected JavaScript part 2]

Take note that this can happen to backends written in other languages as well. Now go to your terminal, turn off the Python server backend, and change the directory to node/. Start the Node server by issuing this command:

```
node server.js
```

This time around, the server is running at http://localhost:8080, so remember to change the $.post() endpoint from http://localhost:8000 to http://localhost:8080 in external.html, as shown in the following code:

```javascript
function addTodo() {
  var data = {
    text: $('#todo_title').val(),
    details: $('#todo_text').val()
  };
  // $.post('http://localhost:8000/api/todos', data, function(result) {
  $.post('http://localhost:8080/api/todos', data, function(result) {
    var item = todoTemplate(result.text, result.details);
    $('#todos').prepend(item);
    $("#todo-form").slideUp();
  });
}
```

The changed line is in addTodo(); it points at the correct endpoint for this section.

Now, going back to external.html, add a new to-do item containing JavaScript, as shown in the following screenshot:

[Screenshot: Trying to inject JavaScript into a to-do app based on Node.js]

As usual, submit the item. Go to http://localhost:8080/api/ and refresh; you should see two alerts (or four alerts if you didn't delete the previous ones). The first alert is as follows:

[Screenshot: Successfully injected JavaScript part 1]

The second alert is as follows:

[Screenshot: Successfully injected JavaScript part 2]

Now that we have seen what can happen to our app if we suffer a CSRF attack, let's think about how such attacks can happen. Basically, they can happen when our API endpoints (or the URLs accepting requests) are not protected at all.
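A side note on the alerts above: the injected `alert()` firing in the victim's browser is stored script injection riding on the unprotected endpoint, and the standard remedy is to escape user-supplied text when rendering it. A minimal sketch with Python's standard library (the book's apps use template engines for this; `html.escape` here just illustrates the principle):

```python
from html import escape

# A to-do item whose text is a script, like the one injected above.
malicious = "<script>alert('sample');</script>"

# Rendered verbatim, this executes in the victim's browser.
# Escaped on output, it displays as harmless literal text instead.
safe = escape(malicious)
print(safe)  # &lt;script&gt;alert(&#x27;sample&#x27;);&lt;/script&gt;
```

Escaping on output complements, but does not replace, the CSRF protections discussed next.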
Attackers can exploit such vulnerabilities by simply observing which endpoints are used and attempting to exploit them with a basic HTTP POST operation.

Basic defense against CSRF attacks

If you are using a modern framework or package, the good news is that you can easily protect against such attacks by turning on or making use of its CSRF protection. For example, for server.py, you can turn on xsrf_cookies by setting it to True, as shown in the following code:

```python
class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/api/todos", Todos),
            (r"/todo", TodoApp)
        ]
        conn = pymongo.Connection("localhost")
        self.db = conn["todos"]
        settings = dict(
            xsrf_cookies=True,
            debug=True,
            template_path=os.path.join(os.path.dirname(__file__), "templates"),
            static_path=os.path.join(os.path.dirname(__file__), "static")
        )
        tornado.web.Application.__init__(self, handlers, **settings)
```

Note the line where we set xsrf_cookies=True. For the Node.js backend, have a look at the following code snippet:

```javascript
var express    = require('express');
var bodyParser = require('body-parser');
var app        = express();
var session    = require('cookie-session');
var csrf       = require('csrf');

app.use(csrf());
app.use(bodyParser());
```

The csrf lines are the ones added (compared to server.js) to bring in CSRF protection. Now that both backends are equipped with CSRF protection, you can try to make the same post from external.html. You will not be able to. For example, you can open Chrome's developer tools and go to Network; you will see the following:

[Screenshot: POST forbidden]

On the terminal, you will see a 403 error from our Python server, which is shown in the following screenshot:

[Screenshot: POST forbidden from the server side]

Other examples of CSRF

CSRF can also happen in many other ways.
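Under the hood, both frameworks' protections reduce to the same idea, sketched here in plain Python (an illustration of the mechanism, not either framework's actual code): hand the browser a random token, then reject any state-changing request that cannot echo it back.

```python
import hmac
import secrets

def issue_token():
    # The framework sets a random token in a cookie and embeds the
    # same value in forms it renders itself.
    return secrets.token_hex(16)

def request_allowed(cookie_token, submitted_token):
    # A page on another origin (like external.html) can make the browser
    # SEND our cookie, but it cannot READ it, so it cannot submit a
    # matching copy in the request body or header.
    if not cookie_token or not submitted_token:
        return False
    return hmac.compare_digest(cookie_token, submitted_token)

token = issue_token()
print(request_allowed(token, token))      # True  -- same-site form
print(request_allowed(token, "guessed"))  # False -- forged request, 403
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking the token byte-by-byte through timing differences.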
In this section, we'll cover other basic examples of how CSRF can happen.

CSRF using <img> tags

This is a classic example. Consider the following instance:

```html
<img src="http://yoursite.com/delete?id=2" />
```

Should you load a site that contains this <img> tag, chances are that a piece of data may get deleted unknowingly.

Now that we have covered the basics of preventing CSRF attacks through the use of CSRF tokens, the next question you may have is: what if you need to expose an API to an external app? For example, Facebook's Graph API and Twitter's API allow external apps not only to read, but also to write data to their systems. How do we prevent malicious attacks in this situation? We'll cover this and more in the next section.

Other forms of protection

Using CSRF tokens may be a convenient way to protect your app from CSRF attacks, but it can be a hassle at times. As mentioned in the previous section, what about the times when you need to expose an API to allow mobile access? Or your app is growing so quickly that you want to accelerate that growth by creating a Graph API of your own. How do you manage it then? In this section, we will go quickly over the techniques for protection.

Creating your own app ID and app secret – OAuth-styled

Creating your own app ID and app secret is similar to what the major Internet companies are doing right now: we require developers to sign up for developer accounts and attach an application ID and secret key to each of their apps. Using this information, the developers will need to exchange OAuth credentials in order to make any API calls, as shown in the following screenshot:

[Screenshot: Google requires developers to sign up, and it assigns the client ID]

On the server end, all you need to do is look for the application ID and secret key; if either is not present, simply reject the request.
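One way the server-side check just described can work is request signing: the developer signs each request with the app secret, and the server recomputes the signature from its own copy. The registry, names, and payload below are hypothetical, a sketch of the idea rather than any provider's actual scheme:

```python
import hashlib
import hmac

# Hypothetical registry: app_id -> app_secret handed out at sign-up.
REGISTERED_APPS = {"app-123": "s3cret"}

def sign(app_secret, payload):
    # The external developer signs each request body with the shared secret.
    return hmac.new(app_secret.encode(), payload, hashlib.sha256).hexdigest()

def verify_request(app_id, payload, signature):
    # Server side: an unknown app ID, or a signature that does not match,
    # means the request is rejected.
    secret = REGISTERED_APPS.get(app_id)
    if secret is None:
        return False
    return hmac.compare_digest(sign(secret, payload), signature)

body = b'{"action": "post"}'
good = sign("s3cret", body)
print(verify_request("app-123", body, good))  # True
print(verify_request("app-999", body, good))  # False
```

Because the secret never travels with the request, an attacker who can observe the endpoint still cannot produce a valid signature.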
Have a look at the following screenshot:

[Screenshot: Facebook likewise requires you to sign up, and it assigns an app ID and app secret]

Checking the Origin header

Simply put, you want to check where the request is coming from, and this is a technique where you check the Origin header, which, in layman's terms, states where the request originates. There are at least two use cases for the Origin header, which are as follows:

- If your endpoint is used internally (by your own web application), you can check whether the requests are indeed made from the same website, that is, your website.
- If you are creating an endpoint for external use, similar to Facebook's Graph API, you can make developers register the website URL where they are going to use the API. If the website URL does not match the one that was registered, you can reject the request.

Note that the Origin header can also be modified; for example, an attacker can supply a forged header.

Limiting the lifetime of the token

Assuming that you are generating your own tokens, you may also want to limit the lifetime of each token, for instance, making the token valid only for a certain period while the user is logged in to your site. Similarly, your site can make the token a requirement for requests to be made; if the token does not exist, the HTTP request is rejected.

Summary

In this article, we covered the basic forms of CSRF attacks and how to defend against them. Note that these security loopholes can exist on both the frontend and the server side.
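As a closing example, the token-lifetime idea described above can be sketched with standard-library tools: embed an issue timestamp in the token, sign it so it cannot be tampered with, and reject tokens older than a chosen TTL. The key, TTL, and format here are all illustrative assumptions.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"server-side-secret"  # hypothetical server signing key
TOKEN_TTL = 15 * 60                 # tokens valid for 15 minutes

def make_token(user_id, now=None):
    # Token = "<user>:<issued-at>:<signature over the first two parts>".
    now = int(now if now is not None else time.time())
    msg = "%s:%d" % (user_id, now)
    sig = hmac.new(SERVER_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (msg, sig)

def token_valid(token, now=None):
    now = int(now if now is not None else time.time())
    user_id, issued, sig = token.rsplit(":", 2)
    msg = "%s:%s" % (user_id, issued)
    expected = hmac.new(SERVER_KEY, msg.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # tampered token
    return now - int(issued) <= TOKEN_TTL  # expired?

t = make_token("alice", now=1000)
print(token_valid(t, now=1000 + 60))    # True  -- one minute old
print(token_valid(t, now=1000 + 3600))  # False -- past the TTL
```

A stolen or leaked token is then only useful for a short window, which limits the damage of any single compromise.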

Deployment and Post Deployment

Packt
17 Nov 2014
30 min read
In this article by Shalabh Aggarwal, the author of Flask Framework Cookbook, we will talk about various application-deployment techniques, followed by some monitoring tools that are used post-deployment.

Deploying an application, and managing it after deployment, is as important as developing it. There are various ways of deploying an application, and choosing the best one depends on your requirements. Deploying an application correctly is very important from the point of view of both security and performance. There are multiple ways of monitoring an application after deployment; some are paid and others are free to use. Again, which to use depends on your requirements and on the features they offer. Each tool and technique has its own set of trade-offs. For example, adding too much monitoring can become an extra overhead for the application and for the developers as well; similarly, missing out on monitoring can lead to undetected user errors and overall user dissatisfaction. Hence, we should choose our tools wisely, and they will ease our lives to the maximum. Among post-deployment monitoring tools, we will discuss Pingdom and New Relic. Sentry is another tool that will prove to be the most beneficial of all from a developer's perspective.

Deploying with Apache

First, we will learn how to deploy a Flask application with Apache, which is, unarguably, the most popular HTTP server. For Python web applications, we will use mod_wsgi, an Apache module that can host any Python application supporting the WSGI interface. Remember that mod_wsgi is not part of Apache and needs to be installed separately.

Getting ready

We will start with our catalog application and make the changes needed to make it deployable with the Apache HTTP server.
First, we should make our application installable so that the application and all its libraries end up on the Python load path. This can be done using a setup.py script, with a few changes specific to this application. The major changes are mentioned here:

```python
packages=[
    'my_app',
    'my_app.catalog',
],
include_package_data=True,
zip_safe=False,
```

First, we list all the packages that need to be installed as part of our application; each of these needs an __init__.py file. The zip_safe flag tells the installer not to install the application as a ZIP file. The include_package_data statement reads from a MANIFEST.in file in the same folder and includes any package data mentioned there. Our MANIFEST.in file looks like:

```
recursive-include my_app/templates *
recursive-include my_app/static *
recursive-include my_app/translations *
```

Now, just install the application using the following command:

```
$ python setup.py install
```

Installing mod_wsgi is usually OS-specific. Installing it on a Debian-based distribution should be as easy as using the packaging tool, that is, apt or aptitude. For details, refer to https://code.google.com/p/modwsgi/wiki/InstallationInstructions and https://github.com/GrahamDumpleton/mod_wsgi.

How to do it…

We need to create some more files, the first one being app.wsgi. This loads our application as a WSGI application:

```python
activate_this = '<Path to virtualenv>/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

from my_app import app as application
import sys, logging
logging.basicConfig(stream=sys.stderr)
```

As we perform all our installations inside virtualenv, we need to activate the environment before our application is loaded. In the case of a system-wide installation, the first two statements are not needed. Then, we import our app object as application, which is used as the application being served.
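For context, the setup.py fragment above fits into a complete script along these lines. This is a minimal sketch consistent with the fragments shown; the name, version, and install_requires values are illustrative assumptions, not the book's exact file.

```python
# setup.py -- minimal sketch around the fragment shown above.
from setuptools import setup

setup(
    name='my_app',                 # illustrative package name
    version='1.0',                 # illustrative version
    packages=[
        'my_app',
        'my_app.catalog',
    ],
    include_package_data=True,     # pulls in files listed in MANIFEST.in
    zip_safe=False,                # install unpacked, not as a zipped egg
    install_requires=['Flask'],    # assumed minimal dependency
)
```

Running `python setup.py install` against a file like this is what places the application on the Python load path for mod_wsgi to import.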
The last two lines are optional, as they just stream the output to the standard logger, which is disabled by mod_wsgi by default. The app object needs to be imported as application because mod_wsgi expects the application keyword.

Next comes a config file that will be used by the Apache HTTP server to serve our application correctly from specific locations. The file is named apache_wsgi.conf:

```apache
<VirtualHost *>
    WSGIScriptAlias / <Path to application>/flask_catalog_deployment/app.wsgi
    <Directory <Path to application>/flask_catalog_deployment>
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

The preceding code is the Apache configuration, which tells the HTTP server about the directories the application has to be loaded from.

The final step is to add the apache_wsgi.conf file to apache2/httpd.conf so that our application is loaded when the server runs:

```
Include <Path to application>/flask_catalog_deployment/apache_wsgi.conf
```

How it works…

Let's restart the Apache server service using the following command:

```
$ sudo apachectl restart
```

Open http://127.0.0.1/ in the browser to see the application's home page. Any errors that come up can be seen at /var/log/apache2/error_log (this path can differ depending on the OS).

There's more…

After all this, it is possible that the product images uploaded as part of product creation do not work. For this, we should make a small modification to our application's configuration:

```python
app.config['UPLOAD_FOLDER'] = '<Some static absolute path>/flask_test_uploads'
```

We opted for a static path because we do not want it to change every time the application is modified or installed.
Now, we will include the path chosen in the preceding code in apache_wsgi.conf:

```apache
Alias /static/uploads/ "<Some static absolute path>/flask_test_uploads/"
<Directory "<Some static absolute path>/flask_test_uploads">
    Order allow,deny
    Options Indexes
    Allow from all
    IndexOptions FancyIndexing
</Directory>
```

After this, install the application and restart apachectl.

See also

- http://httpd.apache.org/
- https://code.google.com/p/modwsgi/
- http://wsgi.readthedocs.org/en/latest/
- https://pythonhosted.org/setuptools/setuptools.html#setting-the-zip-safe-flag

Deploying with uWSGI and Nginx

For those who are already aware of the usefulness of uWSGI and Nginx, there is not much that needs explaining. uWSGI is both a protocol and an application server, and it provides a complete stack for building hosting services. Nginx is a reverse proxy and HTTP server that is very lightweight and capable of handling virtually unlimited requests. Nginx works seamlessly with uWSGI and provides many under-the-hood optimizations for better performance.

Getting ready

We will use our application from the last recipe, Deploying with Apache, along with the same app.wsgi, setup.py, and MANIFEST.in files. The other changes made to the application's configuration in the last recipe apply to this recipe as well. Disable any other HTTP server that might be running, such as Apache.

How to do it…

First, we need to install uWSGI and Nginx. On Debian-based distributions such as Ubuntu, they can be easily installed using the following commands:

```
# sudo apt-get install nginx
# sudo apt-get install uwsgi
```

You can also install uWSGI inside a virtualenv using the pip install uwsgi command. Again, these steps are OS-specific, so refer to the respective documentation for the OS used.

Make sure that you have an apps-enabled folder for uWSGI, where we will keep our application-specific uWSGI configuration files, and a sites-enabled folder for Nginx, where we will keep our site-specific configuration files.
Usually, these are already present in the /etc/ folder in most installations. If not, refer to the OS-specific documentation to figure out the same. Next, we will create a file named uwsgi.ini in our application:

```ini
[uwsgi]
http-socket = :9090
plugin      = python
wsgi-file   = <Path to application>/flask_catalog_deployment/app.wsgi
processes   = 3
```

To test whether uWSGI works as expected, run the following command:

```
$ uwsgi --ini uwsgi.ini
```

The preceding file and command are equivalent to running the following command:

```
$ uwsgi --http-socket :9090 --plugin python --wsgi-file app.wsgi
```

Now, point your browser to http://127.0.0.1:9090/; this should open up the home page of the application.

Create a soft link of this file to the apps-enabled folder mentioned earlier using the following command:

```
$ ln -s <path/to/uwsgi.ini> <path/to/apps-enabled>
```

Before moving ahead, edit the preceding file to replace http-socket with socket. This changes the protocol from HTTP to uWSGI (read more about it at http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html).

Now, create a new file called nginx-wsgi.conf. This contains the Nginx configuration needed to serve our application and its static content:

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:9090;
}
location /static/uploads/ {
    alias <Some static absolute path>/flask_test_uploads/;
}
```

In the preceding code block, uwsgi_pass specifies the uWSGI server that the given location should be mapped to.
Create a soft link of this file to the sites-enabled folder mentioned earlier using the following command:

```
$ ln -s <path/to/nginx-wsgi.conf> <path/to/sites-enabled>
```

Edit the nginx.conf file (usually found at /etc/nginx/nginx.conf) to add the following line inside the first server block, before the last }:

```
include <path/to/sites-enabled>/*;
```

After all of this, reload the Nginx server using the following command:

```
$ sudo nginx -s reload
```

Point your browser to http://127.0.0.1/ to see the application being served via Nginx and uWSGI.

The preceding instructions can vary depending on the OS being used, and different versions of the same OS can also affect the paths and commands involved. Different versions of these packages can have some variation in usage as well. Refer to the documentation links provided in the next section.

See also

- Refer to http://uwsgi-docs.readthedocs.org/en/latest/ for more information on uWSGI.
- Refer to http://nginx.com/ for more information on Nginx.
- There is a good DigitalOcean article on this topic; I advise you to go through it for a better understanding. It is available at https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-applications-using-uwsgi-web-server-with-nginx.
- For an insight into the differences between Apache and Nginx, the article by Anturis at https://anturis.com/blog/nginx-vs-apache/ is pretty good.

Deploying with Gunicorn and Supervisor

Gunicorn is a WSGI HTTP server for Unix. It is very simple to implement, ultra light, and fairly speedy. Its simplicity lies in its broad compatibility with various web frameworks. Supervisor is a monitoring tool that controls child processes and handles starting or restarting them when they exit abruptly for any reason. It can be extended to control processes via the XML-RPC API from remote locations without logging in to the server (we won't discuss this here, as it is out of the scope of this book).
One thing to remember is that these tools can be used along with the tools from the previous recipes, such as Nginx acting as a proxy server. This is left for you to try on your own.

Getting ready

We will start with the installation of both packages, that is, gunicorn and supervisor. Both can be installed directly using pip:

```
$ pip install gunicorn
$ pip install supervisor
```

How to do it…

To check whether the gunicorn package works as expected, just run the following command from inside our application folder:

```
$ gunicorn -w 4 -b 127.0.0.1:8000 my_app:app
```

After this, point your browser to http://127.0.0.1:8000/ to see the application's home page.

Now, we need to do the same using Supervisor so that the application runs as a daemon controlled by Supervisor itself rather than by human intervention. First of all, we need a Supervisor configuration file. This can be achieved by running the following command from virtualenv. Supervisor, by default, looks for an etc folder containing a file named supervisord.conf. In system-wide installations, this folder is /etc/; in virtualenv, it will look for an etc folder inside virtualenv first and then fall back to /etc/:

```
$ echo_supervisord_conf > etc/supervisord.conf
```

The echo_supervisord_conf program is provided by Supervisor; it prints a sample config file to the location specified. The preceding command creates a file named supervisord.conf in the etc folder. Add the following block to this file:

```ini
[program:flask_catalog]
command=<path/to/virtualenv>/bin/gunicorn -w 4 -b 127.0.0.1:8000 my_app:app
directory=<path/to/virtualenv>/flask_catalog_deployment
user=someuser ; relevant user
autostart=true
autorestart=true
stdout_logfile=/tmp/app.log
stderr_logfile=/tmp/error.log
```

Note that one should never run applications as the root user. This is a huge security flaw in itself, as a crashing application can harm the OS.
How it works…

Now, run the following commands:

```
$ supervisord
$ supervisorctl status
flask_catalog   RUNNING   pid 40466, uptime 0:00:03
```

The first command invokes the supervisord server; the next one returns the status of all the child processes.

The tools discussed in this recipe can be coupled with Nginx serving as a reverse proxy server. I suggest that you try it by yourself.

Every time you make a change to your application and wish to restart Gunicorn so that it reflects the changes, run the following command:

```
$ supervisorctl restart all
```

You can also restart specific processes instead of restarting everything:

```
$ supervisorctl restart flask_catalog
```

See also

- http://gunicorn-docs.readthedocs.org/en/latest/index.html
- http://supervisord.org/index.html

Deploying with Tornado

Tornado is a complete web framework and a standalone web server in itself. Here, we will use Flask to create our application, which is basically a combination of URL routing and templating, and leave the server part to Tornado. Tornado is built to hold thousands of simultaneous standing connections and makes applications very scalable.

Tornado has limitations while working with WSGI applications, so choose wisely! Read more at http://www.tornadoweb.org/en/stable/wsgi.html#running-wsgi-apps-on-tornado-servers.

Getting ready

Installing Tornado can simply be done using pip:

```
$ pip install tornado
```

How to do it…

Next, create a file named tornado_server.py and put the following code in it:

```python
from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from my_app import app

http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)
IOLoop.instance().start()
```

Here, we create a WSGI container for our application; this container is then used to create an HTTP server, and the application is hosted on port 5000.
How it works…

Run the Python file created in the previous section using the following command:

```
$ python tornado_server.py
```

Point your browser to http://127.0.0.1:5000/ to see the home page being served.

We can couple Tornado with Nginx (as a reverse proxy to serve static content) and Supervisor (as a process manager) for the best results. This is left for you to try on your own.

Using Fabric for deployment

Fabric is a command-line tool in Python; it streamlines the use of SSH for application deployment and system-administration tasks. As it allows the execution of shell commands on remote servers, the overall process of deployment is simplified: the whole process can be condensed into a Python file that is run whenever needed. It therefore saves the pain of logging in to the server and manually running commands every time an update has to be made.

Getting ready

Installing Fabric can simply be done using pip:

```
$ pip install fabric
```

We will use the application from the Deploying with Gunicorn and Supervisor recipe and create a Fabric file that performs the same process against a remote server. For simplicity, let's assume that the remote server has already been set up, that all the required packages have been installed, and that a virtualenv environment has been created.

How to do it…

First, we need to create a file called fabfile.py in our application, preferably at the application's root directory, that is, alongside the setup.py and run.py files. Fabric, by default, expects this filename. If we use a different filename, it will have to be specified explicitly at execution time.
A basic Fabric file will look like:

```python
from fabric.api import sudo, cd, prefix, run

def deploy_app():
    "Deploy to the server specified"
    root_path = '/usr/local/my_env'

    with cd(root_path):
        with prefix("source %s/bin/activate" % root_path):
            with cd('flask_catalog_deployment'):
                run('git pull')
                run('python setup.py install')

            sudo('bin/supervisorctl restart all')
```

Here, we first move into our virtualenv, activate it, and then move into our application. Then, the code is pulled from the Git repository, and the updated application code is installed using setup.py install. After this, we restart the supervisor processes so that the updated application is served.

Most of the commands used here are self-explanatory, except prefix, which wraps all the succeeding commands in its block with the command provided. This means that the command to activate virtualenv will run first, and then all the commands in the with block will execute with virtualenv activated. The virtualenv will be deactivated as soon as control leaves the with block.

How it works…

To run this file, we need to provide the remote server where the script will be executed. So, the command will look something like:

```
$ fab -H my.remote.server deploy_app
```

Here, we specify the address of the remote host where we wish to deploy and the name of the method to be called from the fab script.

There's more…

We can also specify the remote host inside our fab script; this can be a good idea if the deployment server remains the same most of the time. To do this, add the following code to the fab script:

```python
from fabric.api import settings

def deploy_app_to_server():
    "Deploy to the hardcoded server"
    with settings(host_string='my.remote.server'):
        deploy_app()
```

Here, we have hardcoded the host and then called the method created earlier to start the deployment process.
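The way nested cd() and prefix() contexts compose in the fabfile above can be illustrated with a small, self-contained sketch. This is not Fabric itself, just a stand-in class showing how each context contributes a piece of the single shell command that ultimately runs on the remote host.

```python
from contextlib import contextmanager

class FakeShell:
    """Toy model of Fabric's cd()/prefix() command composition."""

    def __init__(self):
        self.prefixes = []
        self.cwd = []

    @contextmanager
    def cd(self, path):
        # Each nested cd() appends a path segment for the duration
        # of its block.
        self.cwd.append(path)
        try:
            yield
        finally:
            self.cwd.pop()

    @contextmanager
    def prefix(self, command):
        # Each prefix() command is chained before every run() inside it.
        self.prefixes.append(command)
        try:
            yield
        finally:
            self.prefixes.pop()

    def run(self, command):
        parts = self.prefixes + [command]
        return "cd %s && %s" % ("/".join(self.cwd), " && ".join(parts))

sh = FakeShell()
with sh.cd('/usr/local/my_env'):
    with sh.prefix('source /usr/local/my_env/bin/activate'):
        with sh.cd('flask_catalog_deployment'):
            print(sh.run('git pull'))
# -> cd /usr/local/my_env/flask_catalog_deployment && source /usr/local/my_env/bin/activate && git pull
```

This is why the virtualenv stays active for every run() inside the prefix block and is "deactivated" simply by leaving the block: the prefix is no longer prepended.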
S3 storage for file uploads

Amazon describes S3 as storage for the Internet, designed to make web-scale computing easier for developers. S3 provides a very simple interface via web services; this makes the storage and retrieval of any amount of data very simple, at any time, from anywhere on the Internet. Until now, in our catalog application, we saw that there were issues in managing the product images uploaded as part of product creation. The whole headache goes away if the images are stored somewhere globally and are easily accessible from anywhere. We will use S3 for this purpose.

Getting ready

Amazon offers boto, a complete Python library that interfaces with Amazon Web Services. Almost all AWS features can be controlled using boto. It can be installed using pip:

```
$ pip install boto
```

How to do it…

Now, we should make some changes to our existing catalog application to support file uploads to, and retrieval from, S3.

First, we need to store the AWS-specific configuration to allow boto to make calls to S3. Add the following statements to the application's configuration file, that is, my_app/__init__.py:

```python
app.config['AWS_ACCESS_KEY'] = 'Amazon Access Key'
app.config['AWS_SECRET_KEY'] = 'Amazon Secret Key'
app.config['AWS_BUCKET'] = 'flask-cookbook'
```

Next, we need to change our views.py file:

```python
from boto.s3.connection import S3Connection
```

This is the import that we need from boto.
Next, replace the following two lines in create_product():

```python
filename = secure_filename(image.filename)
image.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
```

Replace these two lines with:

```python
filename = image.filename
conn = S3Connection(
    app.config['AWS_ACCESS_KEY'], app.config['AWS_SECRET_KEY']
)
bucket = conn.create_bucket(app.config['AWS_BUCKET'])
key = bucket.new_key(filename)
key.set_contents_from_file(image)
key.make_public()
key.set_metadata(
    'Content-Type', 'image/' + filename.split('.')[-1].lower()
)
```

The last change goes into our product.html template, where we need to change the image src path. Replace the original img src statement with the following:

```html
<img src="{{ 'https://s3.amazonaws.com/' + config['AWS_BUCKET'] + '/' + product.image_path }}"/>
```

How it works…

Now, run the application as usual and create a product. When the created product is rendered, the product image will take a bit of time to come up, as it is now being served from S3 (and not from the local machine). If this happens, the integration with S3 has been done successfully.

Deploying with Heroku

Heroku is a cloud application platform that provides an easy and quick way to build and deploy web applications. Heroku manages the servers, deployment, and related operations while developers spend their time on developing applications. Deploying with Heroku is pretty simple with the help of the Heroku toolbelt, a bundle of tools that make deployment with Heroku a cakewalk.

Getting ready

We will proceed with the application from the previous recipe, which has S3 support for uploads. As mentioned earlier, the first step is to download the Heroku toolbelt, which can be downloaded per your OS from https://toolbelt.heroku.com/. Once the toolbelt is installed, a certain set of commands becomes available at the terminal; we will see them later in this recipe.
It is advised that you perform the Heroku deployment from a fresh virtualenv where only the packages required by our application are installed and nothing else. This will make the deployment process faster and easier.

Now, run the following command to log in to your Heroku account and sync your machine's SSH key with the server:

```
$ heroku login
Enter your Heroku credentials.
Email: shalabh7777@gmail.com
Password (typing will be hidden):
Authentication successful.
```

You will be prompted to create a new SSH key if one does not exist. Proceed accordingly.

Remember! Before all this, you need to have a Heroku account, available at https://www.heroku.com/.

How to do it…

Now, we already have an application that needs to be deployed to Heroku. First, Heroku needs to know the command that it needs to run while deploying the application. This is done in a file named Procfile:

```
web: gunicorn -w 4 my_app:app
```

Here, we tell Heroku to run this command to run our web application. There are a lot of other configurations and commands that can go into Procfile; for more details, read the Heroku documentation.

Heroku also needs to know the dependencies that must be installed in order to successfully install and run our application. This is done via the requirements.txt file:

```
Flask==0.10.1
Flask-Restless==0.14.0
Flask-SQLAlchemy==1.0
Flask-WTF==0.10.0
Jinja2==2.7.3
MarkupSafe==0.23
SQLAlchemy==0.9.7
WTForms==2.0.1
Werkzeug==0.9.6
boto==2.32.1
gunicorn==19.1.1
itsdangerous==0.24
mimerender==0.5.4
python-dateutil==2.2
python-geoip==1.2
python-geoip-geolite2==2014.0207
python-mimeparse==0.1.4
six==1.7.3
wsgiref==0.1.2
```

This file contains all the dependencies of our application, the dependencies of those dependencies, and so on. An easy way to generate this file is to use the pip freeze command:

```
$ pip freeze > requirements.txt
```

This creates/updates the requirements.txt file with all the packages installed in virtualenv.

Now, we need to create a Git repo of our application.
For this, we will run the following commands:

$ git init
$ git add .
$ git commit -m "First Commit"

Now, we have a Git repo with all our files added. Make sure that you have a .gitignore file in your repo or at a global level to prevent temporary files such as .pyc from being added to the repo. Now, we need to create a Heroku application and push our application to Heroku:

$ heroku create
Creating damp-tor-6795... done, stack is cedar
http://damp-tor-6795.herokuapp.com/ | git@heroku.com:damp-tor-6795.git
Git remote heroku added

$ git push heroku master

After the last command, a whole lot of output will get printed on the terminal; this indicates all the packages being installed and, finally, the application being launched.

How it works…

After the previous commands have successfully finished, just open up the URL provided by Heroku at the end of deployment in a browser or run the following command:

$ heroku open

This will open up the application's home page. Try creating a new product with an image and see the image being served from Amazon S3. To see the logs of the application, run the following command:

$ heroku logs

There's more…

There is a glitch with the deployment we just did. Every time we update the deployment via the git push command, the SQLite database gets overwritten. The solution to this is to use the Postgres setup provided by Heroku itself. I urge you to try this by yourself.

Deploying with AWS Elastic Beanstalk

In the last recipe, we saw how deployment to servers becomes easy with Heroku. Similarly, Amazon has a service named Elastic Beanstalk, which allows developers to deploy their applications to Amazon EC2 instances as easily as possible. With just a few configuration options, a Flask application can be deployed to AWS using Elastic Beanstalk in a couple of minutes.

Getting ready

We will start with our catalog application from the previous recipe, Deploying with Heroku. The only file that remains the same from that recipe is requirements.txt.
The rest of the files that were added as a part of that recipe can be ignored or discarded for this recipe. Now, the first thing that we need to do is download the AWS Elastic Beanstalk command-line tool library from the Amazon website (http://aws.amazon.com/code/6752709412171743). This will download a ZIP file that needs to be unzipped and placed in a suitable place, preferably your workspace home. The path of this tool should be added to the PATH environment variable so that the commands are available throughout. This can be done via the export command as shown:

$ export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

This can also be added to the ~/.profile or ~/.bash_profile file using:

export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

How to do it…

There are a few conventions that need to be followed in order to deploy using Beanstalk. Beanstalk assumes that there will be a file called application.py, which contains the application object (in our case, the app object). Beanstalk treats this file as the WSGI file, and this is used for deployment. In the Deploying with Apache recipe, we had a file named app.wsgi where we referred to our app object as application because Apache/mod_wsgi needed it to be so. The same thing happens here too because Amazon, by default, deploys using Apache behind the scenes. The contents of this application.py file can be just a few lines as shown here:

from my_app import app as application
import sys, logging
logging.basicConfig(stream = sys.stderr)

Now, create a Git repo in the application and commit with all the files added:

$ git init
$ git add .
$ git commit -m "First Commit"

Make sure that you have a .gitignore file in your repo or at a global level to prevent temporary files such as .pyc from being added to the repo. Now, we need to deploy to Elastic Beanstalk.
Run the following command to do this:

$ eb init

The preceding command initializes the process for the configuration of your Elastic Beanstalk instance. It will ask for the AWS credentials followed by a lot of other configuration options needed for the creation of the EC2 instance, which can be selected as needed. For more help on these options, refer to http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_flask.html. After this is done, run the following command to trigger the creation of servers, followed by the deployment of the application:

$ eb start

Behind the scenes, the preceding command creates the EC2 instance (a volume), assigns an elastic IP, and then runs the following command to push our application to the newly created server for deployment:

$ git aws.push

This will take a few minutes to complete. When done, you can check the status of your application using the following command:

$ eb status --verbose

Whenever you need to update your application, just commit your changes using git and then push them again as follows:

$ git aws.push

How it works…

When the deployment process finishes, it gives out the application URL. Point your browser to it to see the application being served. Yet, you will find a small glitch with the application. The static content, that is, the CSS and JS code, is not being served. This is because the static path is not correctly comprehended by Beanstalk. This can be simply fixed by modifying the application's configuration on your application's monitoring/configuration page in the AWS management console. See the following screenshots to understand this better: Click on the Configuration menu item in the left-hand side menu. Notice the highlighted box in the preceding screenshot. This is what we need to change as per our application. Open Software Settings. Change the virtual path for /static/, as shown in the preceding screenshot.
After this change is made, the environment created by Elastic Beanstalk will be updated automatically, although it will take a bit of time. When done, check the application again to see the static content also being served correctly.

Application monitoring with Pingdom

Pingdom is a website-monitoring tool that has the USP of notifying you as soon as your website goes down. The basic idea behind this tool is to constantly ping the website at a specific interval, say, 30 seconds. If a ping fails, it will notify you via an e-mail, SMS, tweet, or push notification to mobile apps, informing you that your site is down. It will keep on pinging at a faster rate until the site is back up again. There are other monitoring features too, but we will limit ourselves to uptime checks in this book.

Getting ready

As Pingdom is a SaaS service, the first step will be to sign up for an account. Pingdom offers a free trial of 1 month in case you just want to try it out. The website for the service is https://www.pingdom.com. We will use the application deployed to AWS in the Deploying with AWS Elastic Beanstalk recipe to check for uptime. Here, Pingdom will send an e-mail in case the application goes down and will send an e-mail again when it is back up.

How to do it…

After successful registration, create a check for uptime. Have a look at the following screenshot: As you can see, I already added a check for the AWS instance. To create a new check, click on the ADD NEW button. Fill in the details asked by the form that comes up.

How it works…

After the check is successfully created, try to break the application by consciously making a mistake somewhere in the code and then deploying to AWS. As soon as the faulty application is deployed, you will get an e-mail notifying you of this.
This e-mail will look like: Once the application is fixed and put back up again, the next e-mail should look like: You can also check how long the application has been up and the downtime instances from the Pingdom administration panel.

Application performance management and monitoring with New Relic

New Relic is an analytics product that provides near real-time operational and business analytics related to your application. It provides deep analytics on the behavior of the application from various aspects. It does the job of a profiler while also eliminating the need to maintain extra moving parts in the application. It actually works in a scenario where our application sends data to New Relic rather than New Relic asking for statistics from our application.

Getting ready

We will use the application from the last recipe, which is deployed to AWS. The first step will be to sign up with New Relic for an account. Follow the simple signup process, and upon completion and e-mail verification, it will lead to your dashboard. Here, you will have your license key available, which we will use later to connect our application to this account. The dashboard should look like the following screenshot: Here, click on the large button named Reveal your license key.

How to do it…

Once we have the license key, we need to install the newrelic Python library:

$ pip install newrelic

Now, we need to generate a file called newrelic.ini, which will contain details regarding the license key, the name of our application, and so on. This can be done using the following commands:

$ newrelic-admin generate-config LICENSE-KEY newrelic.ini

In the preceding command, replace LICENSE-KEY with the actual license key of your account. Now, we have a new file called newrelic.ini. Open and edit the file for the application name and anything else as needed.
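Since the generated newrelic.ini is a plain INI file, you can also sanity-check it locally with the standard library before involving the validator. The section and key names below reflect a typically generated file (a [newrelic] section with a license_key entry) and should be checked against yours:

```python
import configparser

def has_license_key(path):
    """Return True if the ini file has a non-empty license_key under [newrelic]."""
    config = configparser.ConfigParser()
    if not config.read(path):          # missing or unreadable file
        return False
    return bool(config.get("newrelic", "license_key", fallback="").strip())

# Demonstrate against a minimal stand-in file.
with open("newrelic_example.ini", "w") as f:
    f.write("[newrelic]\nlicense_key = LICENSE-KEY\napp_name = My App\n")
print(has_license_key("newrelic_example.ini"))  # True
```

This only confirms that a key is present, not that it is valid; the newrelic-admin validator in the next step checks validity against the service.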
To check whether the newrelic.ini file is working successfully, run the following command: $ newrelic-admin validate-config newrelic.ini This will tell us whether the validation was successful or not. If not, then check the license key and its validity. Now, add the following lines at the top of the application's configuration file, that is, my_app/__init__.py in our case. Make sure that you add these lines before anything else is imported: import newrelic.agent newrelic.agent.initialize('newrelic.ini') Now, we need to update the requirements.txt file. So, run the following command: $ pip freeze > requirements.txt After this, commit the changes and deploy the application to AWS using the following command: $ git aws.push How it works… Once the application is successfully updated on AWS, it will start sending statistics to New Relic, and the dashboard will have a new application added to it. Open the application-specific page, and a whole lot of statistics will come across. It will also show which calls have taken the most amount of time and how the application is performing. You will also see multiple tabs that correspond to a different type of monitoring to cover all the aspects. Summary In this article, we have seen the various techniques used to deploy and monitor Flask applications. Resources for Article: Further resources on this subject: Understanding the Python regex engine [Article] Exploring Model View Controller [Article] Plotting Charts with Images and Maps [Article]

Building a Beowulf Cluster

Packt
17 Nov 2014
19 min read
A Beowulf cluster is nothing more than a bunch of computers interconnected by Ethernet and running with a Linux or BSD operating system. A key feature is the communication over IP (Internet Protocol) that distributes problems among the boards. The entity of the boards or computers is called a cluster and each board or computer is called a node. In this article, written by Andreas Joseph Reichel, the author of Building a BeagleBone Black Super Cluster, we will first see what is really required for each board to run inside a cluster environment. You will see examples of how to build a cheap and scalable cluster housing and how to modify an ATX power supply in order to use it as a power source. I will then explain the network interconnection of the Beowulf cluster and have a look at its network topology. The article concludes with an introduction to the microSD card usage for installation images and additional swap space as well as external network storage. The following topics will be covered: Describing the minimally required equipment Building a scalable housing Modifying an ATX power source Introducing the Beowulf network topology Managing microSD cards Using external network storage We will first start with a closer look at the utilization of a single BBB and explain the minimal hardware configuration required. (For more resources related to this topic, see here.) Minimal configuration and optional equipment BBB is a single-board computer that has all the components needed to run Linux distributions that support ARMhf platforms. Due to the very powerful network utilities that come with Linux operating systems, it is not necessary to install a mouse or keyboard. Even a monitor is not required in order to install and configure a new BBB. First, we will have a look at the minimal configuration required to use a single board over a network. Minimal configuration A very powerful interface of Linux operating systems is its standard support for SSH. 
SSH is the abbreviation of Secure Shell, and it enables users to establish an authenticated and encrypted network connection to a remote PC that provides a Shell. Its command line can then be utilized to make use of the PC without any local monitor or keyboard. SSH is the secure replacement for the telnet service. The following diagram shows you the typical configuration of a local area network using SSH for the remote control of a BBB board: The minimal configuration for the SSH control SSH is a key feature of Linux and comes preinstalled on most distributions. If you use Microsoft ® Windows™ as your host operating system, you will require additional software such as putty, which is an SSH client that is available at http://www.putty.org. On Linux and Mac OS, there is usually an SSH client already installed, which can be started using the ssh command. Using a USB keyboard It is practical for several boards to be configured using the same network computer and an SSH client. However, if a system does not boot up, it can be hard for a beginner to figure out the reason. If you get stuck with such a problem and don't find a solution using SSH, or the SSH login is not possible for some reason anymore, it might be helpful to use a local keyboard and a local monitor to control the problematic board such as a usual PC. Installing a keyboard is possible with the onboard USB host port. A very practical way is to use a wireless keyboard and mouse combination. In this case, you only need to plug the wireless control adapter into the USB host port. Using the HDMI adapter and monitor The BBB board supports high definition graphics and, therefore, uses a mini HDMI port for the video output. In order to use a monitor, you need an adapter for mini HDMI to HDMI, DVI, or VGA, respectively. Building a scalable board-mounting system The following image shows you the finished board housing with its key components as well as some installed BBBs. 
Here, a indicates the threaded rod with the straw as the spacer, b indicates BeagleBone Black, c indicates the Ethernet cable, d indicates 3.5" hard disc cooling fans, e indicates the 5 V power cable, and f indicates the plate with drilled holes.

The finished casing with installed BBBs

One of the most important things that you have to consider before building a super computer is the space you require. It is not only important to provide stable and practical housing for some BBB boards, but also to keep in mind that you might want to upgrade the system to more boards in the future. This means that you require a scalable system that is easy to upgrade. Also, you need to keep in mind that every single board requires its own power and has to be accessible by hand (reset, boot-selection, and the power button as well as the memory card, and so on). The networking cables also need some place depending on their lengths. There are also flat Ethernet cables that need less space. The tidier the system is built, the easier it will be to track down errors or exchange faulty boards, cables, or memory cards. However, there is a more important point. Although the BBB boards are very power-efficient, they get quite warm depending on their utilization. If you have 20 boards stacked onto each other and do not provide sufficient space for air flow, your system will overheat and suffer from data loss or malfunctions. Insufficient air flow can result in the burning of devices and other permanent hardware damage. Please remember that I'm not liable for any damages resulting from an insufficient cooling system. Depending on your taste, you can spend a lot of money on your server housing and put some lights inside and make it glow like a Christmas tree. However, I will show you very cheap housing, which is easy and fast to build and still robust enough, scalable, and practical to use.
Board-holding rods The key idea of my board installation is to use the existing corner holes of the BBB boards and attach the boards on four rods in order to build a horizontal stack. This stack is then held by two side plates and a base plate. Usually, when I experiment and want to build a prototype, it is helpful not to predefine every single measurement, and then invest money into the sawing and cutting of these parts. Instead, I look around in some hobby markets and see what they have and think about whether I can use these parts. However, drilling some holes is not unavoidable. When you get to drilling holes and using screws and threads, you might know or not know that there are two different systems. One is the metric system and the other is the English system. The BBB board has four holes and their size fits to 1/8" in the English or M3 in the metric system. According to the international standard, this article will only name metric dimensions. For easy and quick installation of the boards, I used four M3 threaded rods that are obtainable at model making or hobby shops. I got mine at Conrad Electronic. For the base plates, I went to a local raw material store. The following diagram shows you the mounting hole positions for the side walls with the dimensions of BBB (dashed line). The measurements are given for the English and metric system. The mounting hole's positions Board spacers As mentioned earlier, it is important to leave enough space between the boards in order to provide finger access to the buttons and, of course, for airflow. First, I mounted each board with eight nuts. However, when you have 16 boards installed and want to uninstall the eighth board from the left, then it will take you a lot of time and nerves to get the nuts along the threaded rods. A simple solution with enough stability is to use short parts of straws. You can buy some thick drinking straws and cut them into equally long parts, each of two or three centimeters in length. 
Then, you can put them between the boards onto the threaded rods in order to use them as spacers. Of course, this is not the most stable way, but it is sufficient, cheap, and widely available. Cooling system One nice possibility I found for cooling the system is to use hard disk fans. They are not so cheap but I had some lying around for years. Usually, they are mounted to the lower side of 3.5" hard discs, and their width is approximately the length of one BBB. So, they are suitable for the base plate of our casing and can provide enough air flow to cool the whole system. I installed two with two fans each for eight boards and a third one for future upgrades. The following image shows you my system with eight boards installed: A board housing with BBBs and cooling system Once you have built housing with a cooling system, you can install your boards. The next step will be the connection of each board to a power source as well as the network interconnection. Both are described in the following sections. Using a low-cost power source I have seen a picture on the Web where somebody powered a dozen older Beagle Boards with a lot of single DC adapters and built everything into a portable case. The result was a huge mess of cables. You should always try to keep your cables well organized in order to save space and improve the cooling performance. Using an ATX power supply with a cable tree can save you a lot of money compared to buying several standalone power supplies. They are stable and can also provide some protection for hardware, which cheap DC adapters don't always do. In the following section, I will explain the power requirements and how to modify an ATX power supply to fit our needs. Power requirements If you do not use an additional keyboard and mouse and only onboard flash memory, one board needs around 500 mA at 5 V voltage, which gives you a total power of 2.5 Watts for one board. 
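The 500 mA figure scales linearly, so the 5 V rail budget for a whole stack is a quick calculation. A small sketch (the per-board current is the nominal value from above; real loads depend on the attached hardware):

```python
def cluster_power_budget(boards, amps_per_board=0.5, volts=5.0):
    """Return (total amps, total watts) drawn on the 5 V rail."""
    amps = boards * amps_per_board
    return amps, amps * volts

for n in (1, 8, 16):
    amps, watts = cluster_power_budget(n)
    print("{0:2d} boards: {1:4.1f} A, {2:5.1f} W".format(n, amps, watts))
```

Even 16 boards stay well inside a 500 Watt ATX supply, as long as that power is actually available on the 5 V rail.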
Depending on the installed memory card or other additional hardware, you might need more. Please note that when running the Linux distribution described in this article, powering the board through the USB client port is not sufficient. You have to use the 5 V power jack.

Power cables

If you want to use an ATX power supply, then you need to build an adapter from the standard PATA or SATA power plugs to a low voltage plug that fits the 5 V jack of the board. You need a 5.5/2.1 mm low voltage plug, and they are obtainable from VOLTCRAFT with cables already attached. I got mine from Conrad Electronics (item number 710344). Once you have got your power cables, you can build a small distribution box.

Modifying the ATX power supply

ATX power supplies are widely available and power-efficient. They cost around 60 dollars, providing more than 500 Watts of output power. For our purpose, we will need most of the power on the 5 V rail and some for fans on the 12 V rail. It is not difficult to modify an ATX supply. The trick is to provide the soft-on signal, because ATX supplies are turned on from the mainboard via a soft-on signal on the green wire. If this green wire is connected to ground, the supply turns on. If the connection is lost, it turns off. The following image shows you which wires of the ATX mainboard plug have to be cut and attached to a manual switch in order to build a manual on/off switch:

The ATX power plug with a green and black wire, as indicated by the red circle, cut and soldered to a switch

As we are using the 5 V and most probably the 12 V rail (for the cooling fans) of the power supply, it is not necessary to add resistors. If the output voltage of the supply is far too low, this means that not enough current is flowing for its internal regulation circuitry. If this happens, you can just add a 1/4 Watt 200 Ohm resistor between any +5 V (red) and GND (neighboring black) pin to drain a current of 25 mA.
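The 200 Ohm / 25 mA pairing is just Ohm's law, and the same numbers show why a 1/4 Watt rating is sufficient. A quick check:

```python
def bleed_resistor_load(volts=5.0, ohms=200.0):
    """Current drawn and power dissipated by the dummy-load resistor."""
    amps = volts / ohms            # I = V / R = 0.025 A
    watts = volts ** 2 / ohms      # P = V^2 / R = 0.125 W
    return amps, watts

amps, watts = bleed_resistor_load()
print("{0:.0f} mA, {1:.3f} W".format(amps * 1000, watts))
```

The resistor dissipates 0.125 W, half of its 0.25 W rating, which leaves a comfortable margin.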
This should never happen when driving the BBB boards, as their power requirements are much higher and the supply should regulate well. The following image shows you the power cable distribution box. I soldered the power cables together with the cut ends of a PATA connector to a PCB board. The power cable distribution box What could happen is that the resistance of one PATA wire is too high and the voltage drop leads to a supply voltage of below 4.5 Volts. If that happens, some of the BBB boards will not power up. Either you need to retry booting these boards separately by their power button later when all others are booted up, or you need to use two PATA wires instead of one to decrease the resistance. Please have a look if this is possible with your power supply and if the two 5 V lines you want to connect do not belong to different regulation circuitries. Setting up the network backbone To interconnect BBB boards via Ethernet, we need a switch or a hub. There is a difference in the functionality between a switch and a hub: With hubs, computers can communicate with each other. Every computer is connected to the hub with a separate Ethernet cable. The hub is nothing more than a multiport repeater. This means that it just repeats all the information it receives for all other ports, and every connected PC has to decide whether the data is for it or not. This produces a lot of network traffic and can slow down the speed. Switches in comparison can control the flow of network traffic based on the address information in each packet. It learns which traffic packets are received by which PC and then forwards them only to the proper port. This allows simultaneous communication across the switch and improves the bandwidth. This is the reason why switches are the preferred choice of network interconnection for our BBB Beowulf cluster. 
The following table summarizes the main differences between a hub and a switch:

                  Hub   Switch
Traffic control   no    yes
Bandwidth         low   high

I bought a 24-port Ethernet switch on eBay with 100 Megabit/s ports. This is enough for the BBB boards. The total bandwidth of the switch is 2.4 Gigabit/s.

The network topology

The typical network topology is a star configuration. This means that every BBB board has its own connection to the switch, and the switch itself is connected to the local area network (LAN). On most Beowulf clusters, there is one special board called the master node. This master node is used to provide the bridge between the cluster and the rest of the LAN. All users (if several people use the cluster) log in to the master node, and it is only responsible for user management and starting the correct programs on specified nodes. It usually doesn't contribute to any calculation tasks. However, as BBB only has one network connector, it is not possible to use it as a bridge, because a bridge requires two network ports:

One connected to the LAN.
The other connected to the switch of the cluster.

Because of this, we only define one node as the master node, providing some special software features but also contributing to the calculations of the cluster. This way, all BBBs contribute to the overall calculation power, and we do not need any special hardware to build a network bridge. Regarding security, we can manage everything with SSH login rules and the kernel firewall, if required. The following diagram shows you the network topology used in this article. Every BBB has its own IP address, and you have to reserve the required amount of IP addresses in your LAN. They do not have to be successive; however, it makes it easier if you note down every IP for every board. You can give the boards hostnames such as node1, node2, node3, and so on to make them easier to follow.
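Noting down every IP is easier if the hostname-to-address mapping is generated in one place, for example as /etc/hosts-style lines. A small sketch (the address range and naming scheme are only examples; as mentioned above, the addresses do not have to be successive):

```python
def hosts_entries(first_ip, count, prefix="node"):
    """Build /etc/hosts lines mapping sequential IPs to node1, node2, ..."""
    stem, last = first_ip.rsplit(".", 1)
    return ["{0}.{1}\t{2}{3}".format(stem, int(last) + i, prefix, i + 1)
            for i in range(count)]

for line in hosts_entries("192.168.1.100", 3):
    print(line)
```

Appending the generated lines to /etc/hosts on every node lets you address the boards by hostname instead of remembering each IP.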
The network topology The RJ45 network cables There is only one thing you have to keep in mind regarding RJ45 Ethernet cables and 100 Megabit/s transmission speed. There are crossover cables and normal ones. The crossover cables have crossed lines regarding data transmission and receiving. This means that one cable can be used to connect two PCs without a hub or switch. Most modern switches can detect when data packets collide, which means when they are received on the transmitting ports and then automatically switch over the lines again. This feature is called auto MDI-X or auto-uplink. If you have a newer switch, you don't need to pay attention to which sort of cable you buy. Usually, normal RJ45 cables without crossover are the preferred choice. The Ethernet multiport switch As described earlier, we use an Ethernet switch rather than a hub. When buying a switch, you have to decide how many ports you want. For future upgrades, you can also buy an 8-port switch, for example, and later, if you want to go from seven boards (one port for the uplink) to 14 boards, you can upgrade with a second 8-port switch and connect both to the LAN. If you want to build a big system from the beginning, you might want to buy a 24-port switch or an even bigger one. The following image shows you my 24-port Ethernet switch with some connected RJ45 cables below the board housing: A 24-port cluster switch The storage memory One thing you might want to think of in the beginning is the amount of space you require for applications and data. The standard version of BBB has 2 GB flash memory onboard and newer ones have 4 GB. A critical feature of computational nodes is the amount of RAM they have installed. On BBB, this is only 512 MB. If you are of the opinion that this is not enough for your tasks, then you can extend the RAM by installing Linux on an external SD card and create a swap partition on it. 
However, you have to keep in mind that the external swap space is much slower than the DDR3 memory (MB/s compared to GB/s). If the software is nicely programmed, data can always be sufficiently distributed on the nodes, and each node does not need much RAM. However, with more complicated libraries and tasks, you might want to upgrade some day. Installing images on microSD cards For the installation of Linux, we will need Linux Root File System Images on the microSD card. It is always a good idea to keep these cards for future repair or extension purposes. I keep one installation SD for the master node and one for all slave nodes. When I upgrade the system to more slave nodes, I can just insert the installation SD and easily incorporate the new system with a few commands. The swap space on an SD card Usually, it should be possible to boot Linux from the internal memory and utilize the external microSD card solely as the swap space. However, I had problems utilizing the additional space as it was not properly recognized by the system. I obtained best results when booting from the same card I want the swap partition on. The external network storage To reduce the size of the used software, it is always a good idea to compile it dynamically. Each node you want to use for computations has to start the same program. Programs have to be accessible by each node, which means that you would have to install every program on every node. This is a lot of work when adding additional boards and is a waste of memory in general. To circumvent this problem, I use external network storage on the basis of Samba. The master node can then access the Samba share and create a share for all the client nodes by itself. This way, each node has access to the same software and data, and upgrades can be performed easily. Also, the need for local storage memory is reduced. 
Important libraries that have to be present on each local filesystem can be introduced by hard links pointing to the network storage location. The following image shows you the storage system of my BBB cluster:

The storage topology

Some of you might wonder why I chose Samba over NFS and think that 100 Megabit networking is too slow for cluster computations. First of all, I chose Samba because I was used to it and it is well known to most hobbyists. It is very easy to install, and I have used it for over 10 years. The only thing you have to keep in mind is that using Samba will cause your filesystem to treat capital and small letters equally. So, your Linux filenames (ext2, ext3, ext4, and so on) will behave like FAT/NTFS filenames. Regarding the network bandwidth, a double value requires 8 bytes of memory; with the switch's total bandwidth of 2.4 Gigabit/s across its 24 ports, you can therefore transfer a maximum of roughly 37.5 million double values per second. Additionally, libraries are optimized to keep the network talk as low as possible and solve as much as possible on the local CPU memory system. Thus, for most applications, the construction as described earlier will be sufficient.

Summary

In this article, you were introduced to the whole cluster concept regarding its hardware and interconnection. You were shown a working system configuration using only the minimally required amount of equipment and also some optional possibilities. A description of very basic housing including a cooling system was given as an example for a cheap yet nicely scalable possibility to mount the boards. You also learned how to build a cost-efficient power supply using a widely available ATX supply, and you were shown how to modify it to power several BBBs. Finally, you were introduced to the network topology and the purpose of network switches. A short description of the storage system used ended this article.
If you interconnect everything as described in this article, you will have created the hardware basis of a supercomputer cluster.

Resources for Article:

Further resources on this subject: Protecting GPG Keys in BeagleBone [article] Making the Unit Very Mobile - Controlling Legged Movement [article] Pulse width modulator [article]
Web Application Testing

Packt
14 Nov 2014
15 min read
This article is written by Roberto Messora, the author of the book Web App Testing Using Knockout.JS. This article will give you an overview of various design patterns used in web application testing. It will also teach you web development using jQuery. (For more resources related to this topic, see here.)

Presentation design patterns in web application testing

The Web has changed a lot since HTML5 made its appearance. We are witnessing a gradual shift from classical full server-side web development to a new architectural asset that moves much of the application logic to the client side. The general objective is to deliver rich internet applications (commonly known as RIAs) with a desktop-like user experience. Think about web applications such as Gmail or Facebook: if you maximize your browser, they look like complete desktop applications in terms of usability, UI effects, responsiveness, and richness.

Once we have established that testing is a pillar of our solutions, we need to understand the best way to proceed in terms of software architecture and development. In this regard, it's very important to determine the basic design principles that allow a proper approach to unit testing. In fact, even though HTML5 is a recent achievement, HTML in general and JavaScript are technologies that have been in use for quite some time. The problem here is that many developers tend to approach modern web development in the same old way. This is a grave mistake because, in the past, client-side JavaScript development was greatly underrated and mostly confined to simple UI graphic management. Client-side development is historically driven by libraries such as Prototype, jQuery, and Dojo, whose primary feature is DOM (HTML Document Object Model, in other words, HTML markup) management. They can work as-is in small web applications, but as soon as these grow in complexity, the code base starts to become unmanageable and unmaintainable.
We can't really think that we can continue to develop JavaScript in the same way we did 10 years ago. In those days, we only had to dynamically apply some UI transformations. Today, we have to deliver complete working applications. We need a better design, but most of all, we need to reconsider client-side JavaScript development and apply advanced design patterns and principles.

jQuery web application development

JavaScript is the programming language of the web, but its native DOM API is rudimentary. We have to write a lot of code to manage and transform HTML markup to bring the UI to life with some dynamic user interaction. Also, the lack of full standardization means that the same code can work differently (or not work at all) in different browsers. Over the past years, developers decided to resolve this situation: JavaScript libraries such as Prototype, jQuery, and Dojo came to light. jQuery is one of the best-known open source JavaScript libraries, first published in 2006. Its huge success is mainly due to:

A simple and detailed API that allows you to manage HTML DOM elements
Cross-browser support
Simple and effective extensibility

Since its appearance, it has been used by thousands of developers as a foundation library. A large amount of JavaScript code all around the world has been built with jQuery in mind. The jQuery ecosystem grew very quickly, and nowadays there are plenty of jQuery plugins that implement virtually everything related to web development. Despite its simplicity, a typical jQuery web application is virtually untestable. There are two main reasons:

User interface items are tightly coupled with the user interface logic
User interface logic is scattered across event handler callback functions

The real problem is that everything passes through a jQuery reference, which is a jQuery("something") call.
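To make this concrete, here is a minimal sketch (the to-do scenario and function names are invented for illustration) of the difference between logic trapped inside a jQuery callback and logic extracted into a plain, testable function:

```javascript
// Untestable style: the validation logic lives inside the event handler,
// so exercising it requires a live page and a real click:
// $("#add-todo").click(function () {
//   var text = $("#new-todo").val();
//   if (text.trim().length > 0) { $("#todo-list").append("<li>" + text + "</li>"); }
// });

// Testable style: the presentation logic is a pure function with no DOM access.
function isValidTodo(text) {
  return typeof text === "string" && text.trim().length > 0;
}

// The thin jQuery layer then only wires DOM events to this function:
// $("#add-todo").click(function () {
//   if (isValidTodo($("#new-todo").val())) { /* update the list */ }
// });
```

The pure function can now be unit-tested without a browser, a server, or a live page.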
This means that we will always need a live reference to the HTML page, otherwise these calls will fail, and this is also true for a unit test case. We can't think about testing a piece of user interface logic by running an entire web application! Large jQuery applications tend to be monolithic because jQuery itself allows callback function nesting too easily and doesn't really promote any particular design strategy. The result is often spaghetti code. jQuery is a good option if you want to develop a specific custom plugin, and we will continue to use this library for pure user interface effects and animations, but we need something different to maintain a large web application's logic.

Presentation design patterns

To move a step forward, we need to decide what's the best option in terms of testable code. The main topic here is application design: in other words, how we can build our code base following a general guideline while keeping testability in mind. In software engineering, there's nothing better than not reinventing the wheel, and we can rely on a safe and reliable resource: design patterns. Wikipedia provides a good definition of the term design pattern (http://en.wikipedia.org/wiki/Software_design_pattern):

In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. A design pattern is not a finished design that can be transformed directly into source or machine code. It is a description or template for how to solve a problem that can be used in many different situations. Patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.

There are tens of specific design patterns, but we also need something related to the presentation layer, because this is where a JavaScript web application belongs.
The most important aspect in terms of design and maintainability of a JavaScript web application is a clear separation between the user interface (basically, the HTML markup) and the presentation logic (the JavaScript code that makes a web page dynamic and responsive to user interaction). This is what we learned digging into a typical jQuery web application. At this point, we need to identify an effective implementation of a presentation design pattern and use it in our web applications. In this regard, I have to admit that the JavaScript community has done an extraordinary job in the last two years: up to the present time, there are literally tens of frameworks and libraries that implement a particular presentation design pattern. We only have to choose the framework that fits our needs; for example, we can start by taking a look at the TodoMVC website (http://todomvc.com/): this is an open source project that shows you how to build the same web application using a different library each time. Most of these libraries implement a so-called MV* design pattern (as Knockout.JS does). MV* means that every such design pattern belongs to a broader family with a common root: Model-View-Controller. The MVC pattern is one of the oldest and most enduring architectural design patterns: originally designed by Trygve Reenskaug, working on Smalltalk-80 back in 1979, it has been heavily refactored since then. Basically, the MVC pattern enforces the isolation of business data (Models) from user interfaces (Views), with a third component (Controllers) that manages the logic and user input. It can be described as follows (Addy Osmani, Learning JavaScript Design Patterns, http://addyosmani.com/resources/essentialjsdesignpatterns/book/#detailmvc):

A Model represented domain-specific data and was ignorant of the user-interface (Views and Controllers). When a model changed, it would inform its observers. A View represented the current state of a Model.
The Observer pattern was used for letting the View know whenever the Model was updated or modified. Presentation was taken care of by the View, but there wasn't just a single View and Controller: a View-Controller pair was required for each section or element being displayed on the screen. The Controller's role in this pair was handling user interaction (such as key-presses and actions, e.g., clicks) and making decisions for the View.

This general definition has slightly changed over the years, not only to adapt its implementation to different technologies and programming languages, but also because changes have been made to the Controller part. Model-View-Presenter and Model-View-ViewModel are the best-known alternatives to the MVC pattern. MV* presentation design patterns are a valid answer to our need: an architectural design guideline that promotes separation of concerns and isolation, the two most important factors needed for software testing. In this way, we can separately test models, views, and the third actor, whatever it is (a Controller, Presenter, ViewModel, and so on). On the other hand, adopting a presentation design pattern doesn't mean at all that we cease to use jQuery. jQuery is a great library; we will continue to add its reference to our pages, but we will also integrate its use wisely into a better design context.

Knockout.JS and Model-View-ViewModel

Knockout.JS is one of the most popular JavaScript presentation libraries; it implements the Model-View-ViewModel design pattern. The most important concepts behind Knockout.JS are:

An HTML fragment (or an entire page) is considered as a View. A View is always associated with a JavaScript object called a ViewModel: this is a code representation of the View that contains the data (model) to be shown (in the form of properties) and the commands that handle View events triggered by the user (in the form of methods).
The association between View and ViewModel is built around the concept of data-binding, a mechanism that provides automatic bidirectional synchronization:

In the View, it's declared by placing data-bind attributes into DOM elements; the attributes' values must follow a specific syntax that specifies the nature of the association and the target ViewModel property/method.
In the ViewModel, methods are considered commands, and properties are defined as special objects called observables: their main feature is the capability to notify subscribers of every state modification.

A ViewModel is a pure-code representation of the View: it contains the data to show and the commands that handle events triggered by the user. It's important to remember that a ViewModel shouldn't have any knowledge about the View and the UI: pure-code representation means that a ViewModel shouldn't contain any reference to HTML markup elements (buttons, textboxes, and so on), but only pure JavaScript properties and methods. Model-View-ViewModel's objective is to promote a clear separation between View and ViewModel; this principle is called Separation of Concerns. Why is this so important? The answer is quite easy: because this way a developer can achieve a real separation of responsibilities: the View is only responsible for presenting data to the user and reacting to her/his inputs, while the ViewModel is only responsible for holding the data and providing the presentation logic. The following diagram from Microsoft MSDN depicts the existing relationships between the three pattern actors very well (http://msdn.microsoft.com/en-us/library/ff798384.aspx):

Thinking about a web application in these terms leads to ViewModel development without any reference to DOM element IDs or any other markup-related code, as in the classic jQuery style.
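The notification mechanism behind observables can be illustrated with a tiny hand-rolled version. This is a simplified sketch for illustration only, not Knockout.JS's actual implementation (whose real API is ko.observable):

```javascript
// A minimal observable: a function that reads/writes a value and
// notifies subscribers on every write, mimicking the Knockout idea.
function observable(initialValue) {
  var value = initialValue;
  var subscribers = [];
  function accessor(newValue) {
    if (arguments.length === 0) { return value; }       // called with no args: read
    value = newValue;                                    // called with an arg: write
    subscribers.forEach(function (fn) { fn(value); });   // notify all subscribers
  }
  accessor.subscribe = function (fn) { subscribers.push(fn); };
  return accessor;
}

// Usage: a ViewModel property that a View could data-bind to.
var firstName = observable("John");
var log = [];
firstName.subscribe(function (v) { log.push(v); });
firstName("Jane");
// log is now ["Jane"], and firstName() returns "Jane"
```

The read/write-through-one-function shape is why Knockout observables are invoked as functions (firstName() to read, firstName("Jane") to write).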
The two main reasons behind this are:

As the web application becomes more complex, the number of DOM elements increases, and it is not uncommon to reach a point where it becomes very difficult to manage all those IDs with the typical jQuery fluent interface style: the JavaScript code base turns into a spaghetti code nightmare very soon.
A clear separation between View and ViewModel allows a new way of working: JavaScript developers can concentrate on the presentation logic, while UX experts, on the other hand, can provide HTML markup that focuses on user interaction and how the web application will look. The two groups can work quite independently and agree on the basic contact points using the data-bind tag attributes.

The key feature of a ViewModel is the observable object: a special object that is capable of notifying its state modifications to any subscribers. There are three types of observable objects:

The basic observable, which is based on JavaScript data types (string, number, and so on)
The computed observable, which is dependent on other observables or computed observables
The observable array, which is a standard JavaScript array with a built-in change notification mechanism

On the View side, we talk about declarative data-binding because we need to place data-bind attributes inside HTML tags and specify what kind of binding is associated with a ViewModel property/command.

MVVM and unit testing

Why is a clear separation between the user interface and presentation logic a real benefit? There are several possible answers, but if we want to remain in the unit testing context, we can assert that we can apply proper unit testing specifications to the presentation logic independently from the concrete user interface. In Model-View-ViewModel, the ViewModel is a pure-code representation of the View. The View itself must remain a thin and simple layer, whose job is to present data and receive user interaction.
This is a great scenario for unit testing: all the logic in the presentation layer is located in the ViewModel, and this is a JavaScript object. We can definitely test almost everything that takes place in the presentation layer. Ensuring a real separation between View and ViewModel means that we need to follow a particular development procedure:

Think about a web application page as a composition of sub-views: we need to embrace the divide et impera principle when we build our user interface; the more specific and simple the sub-views are, the more easily we can test them. Knockout.JS supports this kind of scenario very well.
Write a class for every View and a corresponding class for its ViewModel: the first one is the starting point to instantiate the ViewModel and apply bindings; after all, the user interface (the HTML markup) is what the browser loads initially.
Keep each View class as simple as possible, so simple that it might not even need to be tested; it should be just a container for:
    Its ViewModel instance
    Sub-View instances, in case of a bigger View that is a composition of smaller ones
    Pure user interface code, in case of particular UI JavaScript plugins that cannot take place in the ViewModel and simply provide graphical effects/enrichments (in other words, they don't change the logical functioning)

If we look carefully at a typical ViewModel class implementation, we can see that there are no HTML markup references: no tag names, no tag identifiers, nothing. All of these references are present in the View class implementation. In fact, if we were to test a ViewModel that holds a direct reference to a UI item, we would also need a live instance of the UI; otherwise, accessing that item reference would cause a null reference runtime error during the test.
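As an illustration of this point (the ViewModel and its members are invented for this sketch, with plain values standing in for Knockout observables), a ViewModel that is free of markup references can be exercised directly in a test:

```javascript
// A pure-code ViewModel: no tag names, no IDs, no jQuery calls.
function TodoViewModel() {
  var self = this;
  self.items = [];
  self.addItem = function (text) {
    if (text && text.trim().length > 0) {
      self.items.push(text.trim());   // accept and normalize the new item
      return true;
    }
    return false;                      // reject blank input
  };
}

// A unit test needs no browser, no server, and no live page:
var vm = new TodoViewModel();
vm.addItem("  write tests  ");
vm.addItem("");
// vm.items is ["write tests"]: the blank input was rejected
```

Had addItem read its input from a DOM element instead of a parameter, this test would crash on a null reference, which is exactly the coupling the pattern avoids.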
This is not what we want, because it is very difficult to test presentation logic while having to deal with a live instance of the user interface: there are many reasons, from the need for a web server that delivers the page, to the need for a separate instance of a web browser to load the page. This is not very different from debugging a live page with Mozilla Firebug or Google Chrome Developer Tools. Our objective is test automation, but we also want to run the tests easily and quickly in isolation: we don't want to run the page in any way!

An important application asset is the event bus: this is a global object that works as an event/message broker for all the actors involved in the web page (Views and ViewModels). The event bus is one of the alternative forms of the Event Collaboration design pattern (http://martinfowler.com/eaaDev/EventCollaboration.html):

Multiple components work together by communicating with each other by sending events when their internal state changes (Martin Fowler)

The main aspect of an event bus is that:

The sender is just broadcasting the event, the sender does not need to know who is interested and who will respond, this loose coupling means that the sender does not have to care about responses, allowing us to add behaviour by plugging new components (Martin Fowler)

In this way, we can keep all the different components of a web page completely separated: every View/ViewModel couple sends and receives events, but they don't know anything about all the other couples. Again, every ViewModel is completely decoupled from its View (remember that the View holds a reference to the ViewModel, but not the other way around), and in this case, it can trigger events in order to communicate something to the View. Concerning unit testing, loose coupling means that we can test our presentation logic one component at a time, simply ensuring that events are broadcast when they need to be.
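A minimal event bus along these lines might look as follows (a sketch for illustration; the topic names and API shape are assumptions, not a specific library):

```javascript
// A tiny publish/subscribe broker: senders broadcast events by name,
// without knowing who (if anyone) is listening.
var eventBus = {
  topics: {},
  subscribe: function (topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
  },
  publish: function (topic, payload) {
    (this.topics[topic] || []).forEach(function (h) { h(payload); });
  }
};

// Two decoupled components: the publisher holds no reference to the subscriber.
var received = [];
eventBus.subscribe("todo:added", function (item) { received.push(item); });
eventBus.publish("todo:added", "buy milk");
eventBus.publish("unrelated:event", "ignored");
// received is ["buy milk"]
```

In a test, the bus can be replaced by a fake that records published topics, so each View/ViewModel couple is verified in complete isolation.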
Event buses can also be mocked, so we don't need to rely on a concrete implementation. In real-world development, the production process is an iterative task. Usually, we need to:

Define a View markup skeleton, without any data-bind attributes.
Start developing classes for the View and the ViewModel, which are empty at the beginning.
Start developing the presentation logic, adding observables to the ViewModel and their respective data bindings in the View.
Start writing test specifications.

This process is repeated, adding more presentation logic at every iteration, until we reach the final result.

Summary

In this article, you learned about web development using jQuery, presentation design patterns, and unit testing using MVVM.

Resources for Article:

Further resources on this subject: Big Data Analysis [Article] Advanced Hadoop MapReduce Administration [Article] HBase Administration, Performance Tuning [Article]
The HBase's Data Storage

Packt
13 Nov 2014
9 min read
In this article by Nishant Garg, the author of HBase Essentials, we will look at HBase's data storage from an architectural viewpoint. (For more resources related to this topic, see here.)

For most developers or users, these topics are not of great interest, but for an administrator, it really makes sense to understand how the underlying data is stored or replicated within HBase. Administrators are the people who deal with HBase, starting from its installation to cluster management (performance tuning, monitoring, failure, recovery, data security, and so on). Let's start with data storage in HBase first.

Data storage

In HBase, tables are split into smaller chunks that are distributed across multiple servers. These smaller chunks are called regions, and the servers that host regions are called RegionServers. The master process handles the distribution of regions among RegionServers, and each RegionServer typically hosts multiple regions. In the HBase implementation, the HRegionServer and HRegion classes represent the region server and the region, respectively. HRegionServer contains the set of HRegion instances available to the client and handles two types of files for data storage:

HLog (the write-ahead log file, also known as WAL)
HFile (the real data storage file)

In HBase, there is a system-defined catalog table called hbase:meta that keeps the list of all the regions for user-defined tables. In older versions prior to 0.96.0, HBase had two catalog tables called -ROOT- and .META. The -ROOT- table was used to keep track of the location of the .META table. Version 0.96.0 onwards, the -ROOT- table is removed and the .META table is renamed hbase:meta, whose location is now stored in ZooKeeper. The following is the structure of the hbase:meta table:

Key: the region key of the format ([table],[region start key],[region id]). A region with an empty start key is the first region in a table.
The values are as follows:

info:regioninfo (the serialized HRegionInfo instance for this region)
info:server (server:port of the RegionServer containing this region)
info:serverstartcode (the start time of the RegionServer process that contains this region)

When the table is split, two new columns will be created as info:splitA and info:splitB. These columns represent the two newly created regions, and their values are also serialized HRegionInfo instances. Once the split process is complete, the row that contains the old region information is deleted.

In the case of data reading, the client application first connects to ZooKeeper and looks up the location of the hbase:meta table. Next, the client's HTable instance queries the hbase:meta table, finds out the region that contains the rows of interest, and locates the region server that is serving the identified region. The information about the region and region server is then cached by the client application for future interactions, which avoids the lookup process. If the region is reassigned by the load balancer process or if the region server has expired, a fresh lookup is done on the hbase:meta catalog table to get the new location of the user table region, and the cache is updated accordingly.

At the object level, the HRegionServer class is responsible for creating a connection with the region by creating HRegion objects. This HRegion instance sets up a store instance that has one or more StoreFile instances (wrapped around HFile) and a MemStore. The MemStore accumulates the data edits as they happen and buffers them in memory. This is also important for accessing recent edits of table data. As shown in the preceding diagram, the HRegionServer instance (the region server) contains the map of HRegion instances (regions) and also has an HLog instance that represents the WAL.
There is a single block cache instance at the region-server level, which holds data from all the regions hosted on that region server. A block cache instance is created at the time of region server startup, and it can have an implementation of LruBlockCache, SlabCache, or BucketCache. The block cache also supports multilevel caching; that is, a block cache might have a first-level cache, L1, as LruBlockCache and a second-level cache, L2, as SlabCache or BucketCache. All these cache implementations have their own way of managing memory; for example, LruBlockCache resides on the JVM heap, whereas the other two implementations also use memory outside of the JVM heap.

HLog (the write-ahead log – WAL)

In the case of writing data, when the client calls HTable.put(Put), the data is first written to the write-ahead log file (which contains the actual data and a sequence number, together represented by the HLogKey class) and is also written to the MemStore. Writing data directly into the MemStore alone would be dangerous, as it is a volatile in-memory buffer and always open to the risk of losing data in case of a server failure. Once the MemStore is full, its contents are flushed to disk by creating a new HFile on HDFS. While inserting data from the HBase shell, the flush command can be used to write the in-memory (MemStore) data to the store files. If there is a server failure, the WAL can effectively be replayed to recover everything up to where the server was prior to the crash. Hence, the WAL guarantees that the data is never lost. Also, as another level of assurance, the actual write-ahead log resides on HDFS, which is a replicated filesystem, so any other server holding a replicated copy can open the log. The HLog class represents the WAL. When an HRegion object is instantiated, the single HLog instance is passed as a parameter to the constructor of HRegion.
In the case of an update operation, it saves the data directly to the shared WAL and also keeps track of the changes by incrementing the sequence number for each edit. The WAL uses a Hadoop SequenceFile, which stores records as sets of key-value pairs. Here, the HLogKey instance represents the key, and the key-value represents the rowkey, column family, column qualifier, timestamp, type, and value, along with the region and table name where the data needs to be stored. Also, the structure starts with two fixed-length numbers that indicate the size and value of the key. The following diagram shows the structure of a key-value pair:

The WALEdit class instance takes care of atomicity at the log level by wrapping each update. For example, in the case of a multicolumn update for a row, each column is represented as a separate KeyValue instance. If the server failed after writing only a few of the columns to the WAL, it would end up with a half-persisted row and the remaining updates would not be persisted. Atomicity is guaranteed by wrapping all updates that comprise multiple columns into a single WALEdit instance and writing it in a single operation.

For durability, a log writer's sync() method is called, which gets an acknowledgement from the low-level filesystem on each update. This method also takes care of writing the WAL to the replication servers (from one datanode to another). The log flush time can be set as low as you want, or the log can even be kept in sync for every edit to ensure high durability, but at the cost of performance. To take care of the size of the write-ahead log file, the LogRoller instance runs as a background thread and rolls log files at certain intervals (the default is 60 minutes). Rolling of the log file can also be controlled based on size via hbase.regionserver.logroll.multiplier; if set to 0.9, for example, the log file is rotated when it reaches 90 percent of the block size.

HFile (the real data storage file)

HFile represents the real data storage file.
The files contain a variable number of data blocks and a fixed number of file-info blocks and trailer blocks. The index blocks record the offsets of the data and meta blocks. Each data block contains a magic header and a number of serialized KeyValue instances. The default block size is 64 KB, but blocks can be larger. By comparison, the default block size for files in HDFS is 64 MB, which is 1,024 times the HFile default block size, but there is no correlation between these two block sizes. Each key-value in the HFile is represented as a low-level byte array.

Within the HBase root directory, we have different files available at different levels. Write-ahead log files, represented by the HLog instances, are created in a directory called WALs under the root directory defined by the hbase.rootdir property in hbase-site.xml. This WALs directory also contains a subdirectory for each HRegionServer. In each subdirectory, there are several write-ahead log files (because of log rotation). All regions from that region server share the same HLog files.

In HBase, every table also has its own directory created under the data/default directory, which is located under the root directory defined by the hbase.rootdir property in hbase-site.xml. Each table directory contains a file called .tableinfo within the .tabledesc folder. This .tableinfo file stores the metadata information about the table, such as the table and column family schemas, and is represented as the serialized HTableDescriptor class. Each table directory also has a separate directory for every region comprising the table, and the name of this directory is created using the MD5 hash portion of the region name. The region directory also has a .regioninfo file that contains the serialized information of the HRegionInfo instance for the given region. Once a region exceeds the maximum configured region size, it splits and a matching split directory is created within the region directory.
This size is configured using the hbase.hregion.max.filesize property or via the configuration done at the column-family level using the HColumnDescriptor instance. In the case of multiple flushes by the MemStore, the number of files on disk might increase. The compaction process running in the background combines the files up to the largest configured file size and can also trigger a region split.

Summary

In this article, we have learned about the internals of HBase and how it stores data.

Resources for Article:

Further resources on this subject: Big Data Analysis [Article] Advanced Hadoop MapReduce Administration [Article] HBase Administration, Performance Tuning [Article]
OpenVZ Container Administration

Packt
11 Nov 2014
11 min read
In this article by Mark Furman, the author of OpenVZ Essentials, we will go over various aspects of OpenVZ administration. Some of the things we are going to cover in this article are as follows:

Listing the containers that are running on the server
Starting, stopping, suspending, and resuming containers
Destroying, mounting, and unmounting containers
Turning quotas on and off
Creating snapshots of containers in order to back up and restore a container to another server

(For more resources related to this topic, see here.)

Using vzlist

The vzlist command is used to list the containers on a node. When you run vzlist on its own, without any options, it will only list the containers that are currently running on the system:

vzlist

In the previous example, we used the vzlist command to list the containers that are currently running on the server.

Listing all the containers on the server

If you want to list all the containers on the server instead of just the containers that are currently running, you will need to add -a after vzlist. This tells vzlist to include all of the containers that have been created on the node in its output:

vzlist -a

In the previous example, we used the vzlist command with the -a flag to tell vzlist that we want to list all of the containers that have been created on the server.

The vzctl command

The next command that we are going to cover is the vzctl command. This is the primary command that you are going to use when you want to perform tasks with the containers on the node. The initial functions of the vzctl command that we will go over are how to start, stop, and restart a container.

Starting a container

We use vzctl to start a container on the node.
To start a container, run the following command:

vzctl start 101
Starting Container ...
Setup slm memory limit
Setup slm subgroup (default)
Setting devperms 20002 dev 0x7d00
Adding IP address(es) to pool:
Adding IP address(es): 192.168.2.101
Hostname for Container set: gotham.example.com
Container start in progress...

In the previous example, we used the vzctl command with the start option to start the container 101.

Stopping a container

To stop a container, run the following command:

vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted

In the previous example, we used the vzctl command with the stop option to stop the container 101.

Restarting a container

To restart a container, run the following command:

vzctl restart 101
Stopping Container ...
Container was stopped
Container is unmounted
Starting Container...

In the previous example, we used the vzctl command with the restart option to restart the container 101.

Using vzctl to suspend and resume a container

The following set of commands will use vzctl to suspend and resume a container. When you use vzctl to suspend a container, it creates a save point of the container in a dump file. You can then use vzctl to resume the container to the saved state it was in before it was suspended.

Suspending a container

To suspend a container, run the following command:

vzctl suspend 101

In the previous example, we used the vzctl command with the suspend option to suspend the container 101.

Resuming a container

To resume a container, run the following command:

vzctl resume 101

In the previous example, we used the vzctl command with the resume option to resume operations on the container 101. In order to get resume or suspend to work, you may need to enable several kernel modules by running the following:

modprobe vzcpt
modprobe vzrst

Destroying a container

You can destroy a container that you created by using the destroy argument with vzctl.
This will remove all the files, including the configuration file and the directories created by the container. In order to destroy a container, you must first stop the container from running. To destroy a container, run the following command:

vzctl destroy 101
Destroying container private area: /vz/private/101
Container private area was destroyed.

In the previous example, we used the vzctl command with the destroy option to destroy the container 101.

Using vzctl to mount and unmount a container

You are able to mount and unmount a container's private area, located at /vz/root/ctid, which provides the container with the root filesystem that exists on the server. Mounting and unmounting containers comes in handy when you have trouble accessing the filesystem for your container.

Mounting a container

To mount a container, run the following command:

vzctl mount 101

In the previous example, we used the vzctl command with the mount option to mount the private area for the container 101.

Unmounting a container

To unmount a container, run the following command:

vzctl umount 101

In the previous example, we used the vzctl command with the umount option to unmount the private area for the container 101.

Disk quotas

Disk quotas allow you to define special limits for your container, including the size of the filesystem or the number of inodes that are available for use.

Setting quotaon and quotaoff for a container

You can manually start and stop the container's disk quota by using the quotaon and quotaoff arguments with vzctl.

Turning on disk quota for a container

To turn on disk quota for a container, run the following command:

vzctl quotaon 101

In the previous example, we used the vzctl command with the quotaon option to turn disk quota on for the container 101.
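Commands like quotaon are often applied to every container at once. A minimal sketch of that pattern combines vzlist (covered earlier) with vzctl; it assumes vzlist's -H flag to suppress the header row and -o to restrict the output to the ctid and status fields, and the helper name is purely illustrative:

```shell
#!/bin/sh
# Print the CTIDs of running containers from "ctid status" lines, such
# as those produced by `vzlist -a -H -o ctid,status` (-H drops the
# header row, -o selects the fields).
running_ctids() {
    while read ctid status; do
        [ "$status" = "running" ] && echo "$ctid"
    done
}

# On a real node, this would turn disk quota on for every running
# container:
#   vzlist -a -H -o ctid,status | running_ctids |
#       while read ctid; do vzctl quotaon "$ctid"; done
# Here we exercise the filter with canned output instead:
printf '101 running\n102 stopped\n103 running\n' | running_ctids
```

Keeping the vzctl invocation outside the filter makes the parsing step easy to try out without touching real containers.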
Turning off disk quota for a container

To turn off disk quota for a container, run the following command:

vzctl quotaoff 101

In the previous example, we used the vzctl command with the quotaoff option to turn off disk quota for the container 101.

Setting disk quotas with vzctl set

You are able to set the disk quotas for the containers on your server using the vzctl set command. With this command, you can set the disk space, disk inodes, and the quota time.

To set the disk space for container 101 to 2 GB, use the following command:

vzctl set 101 --diskspace 2000000:2200000 --save

In the previous example, we used the vzctl set command to set the disk space quota to a soft limit of 2 GB and a hard limit of 2.2 GB. The two values, separated by a : symbol, are the soft limit and the hard limit. The soft limit in the example is 2000000 and the hard limit is 2200000. The soft limit can be exceeded up to the value of the hard limit; the hard limit should never be exceeded. OpenVZ refers to soft limits as barriers and hard limits as limits.

To set the disk inodes for container 101 to 1 million inodes, use the following command:

vzctl set 101 --diskinodes 1000000:1100000 --save

In the previous example, we used the vzctl set command to set the disk inode limits to a soft limit (barrier) of 1 million inodes and a hard limit (limit) of 1.1 million inodes.

To set the quota time, that is, the period of time in seconds that the container is allowed to exceed the soft limit values of the disk and inode quotas, use the following command:

vzctl set 101 --quotatime 900 --save

In the previous example, we used the vzctl command to set the quota time to 900 seconds, or 15 minutes. This means that once the container's soft limit is broken, you will be able to exceed the quota up to the value of the hard limit for 15 minutes before the container reports that the value is over quota.
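Since the raw values given to --diskspace are counted in 1 KB blocks, it can help to derive the soft:hard pair from a size in gigabytes. A small sketch follows, assuming the chapter's decimal convention of 1 GB = 1,000,000 blocks and its 10% gap between barrier and limit; the helper name is ours:

```shell
#!/bin/sh
# Build a soft:hard --diskspace argument from a size in gigabytes.
# Values are in 1 KB blocks (2 GB = 2000000, as in the example above),
# with the hard limit set 10% above the soft limit.
diskspace_arg() {
    gb=$1
    soft=$((gb * 1000000))
    hard=$((soft + soft / 10))
    echo "${soft}:${hard}"
}

# For a 2 GB quota this reproduces the example's 2000000:2200000; on a
# real node you would use it as:
#   vzctl set 101 --diskspace "$(diskspace_arg 2)" --save
diskspace_arg 2
```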
Further use of vzctl set

The vzctl set command allows you to make modifications to the container's config file without the need to manually edit the file. We are going to go over a few of the options that are essential to administer the node.

--onboot

The --onboot flag allows you to set whether or not the container will be booted when the node is booted. To set the onboot option, use the following command:

vzctl set 101 --onboot yes --save

In the previous example, we used the vzctl command with the set option and the --onboot flag to enable the container to boot automatically when the server is rebooted, and then saved the option to the container's configuration file.

--bootorder

The --bootorder flag allows you to change the boot order priority of the container. The higher the value given, the sooner the container will start when the node is booted. To set the bootorder option, use the following command:

vzctl set 101 --bootorder 9 --save

In the previous example, we used the vzctl command with the set option and the --bootorder flag to change the priority of the order in which the container is booted, and then saved the option to the container's configuration file.

--userpasswd

The --userpasswd flag allows you to change the password of a user that belongs to the container. If the user does not exist, the user will be created. To set the userpasswd option, use the following command:

vzctl set 101 --userpasswd admin:changeme

In the previous example, we used the vzctl command with the set option and the --userpasswd flag to change the password for the admin user to changeme.

--name

The --name flag allows you to give the container a name that, once assigned, can be used in place of the CTID value when using vzctl. This gives you an easier way to identify your containers: instead of memorizing the container ID, you only need to remember the container name to access the container.
To set the name option, use the following command:

vzctl set 101 --name gotham --save

In the previous example, we used the vzctl command with the set option to set our container 101 to use the name gotham, and then saved the change to the container's configuration file.

--description

The --description flag allows you to add a description to the container to give an idea of what the container is for. To use the description option, use the following command:

vzctl set 101 --description "Web Development Test Server" --save

In the previous example, we used the vzctl command with the set option and the --description flag to add the description "Web Development Test Server" to the container.

--ipadd

The --ipadd flag allows you to add an IP address to the specified container. To use the ipadd option, use the following command:

vzctl set 101 --ipadd 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipadd flag to add the IP address 192.168.2.103 to container 101, and then saved the change to the container's configuration file.

--ipdel

The --ipdel flag allows you to remove an IP address from the specified container. To use the ipdel option, use the following command:

vzctl set 101 --ipdel 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipdel flag to remove the IP address 192.168.2.103 from the container 101, and then saved the change to the container's configuration file.

--hostname

The --hostname flag allows you to set or change the hostname for your container. To use the hostname option, use the following command:

vzctl set 101 --hostname gotham.example.com --save

In the previous example, we used the vzctl command with the set option and the --hostname flag to change the hostname of the container to gotham.example.com.

--disable

The --disable flag allows you to disable a container's startup.
When this option is in place, you will not be able to start the container until the option is removed. To use the disable option, use the following command:

vzctl set 101 --disable yes --save

In the preceding example, we used the vzctl command with the set option and the --disable flag to prevent the container 101 from starting, and then saved the change to the container's configuration file.

--ram

The --ram flag allows you to set the value of the physical page limit of the container, which helps to regulate the amount of memory that is available to the container. To use the ram option, use the following command:

vzctl set 101 --ram 2G --save

In the previous example, we set the physical page limit to 2 GB using the --ram flag.

--swap

The --swap flag allows you to set the amount of swap memory that is available to the container. To use the swap option, use the following command:

vzctl set 101 --swap 1G --save

In the preceding example, we set the swap memory limit for the container to 1 GB using the --swap flag.

Summary

In this article, we learned to administer the containers created on the node using the vzctl command, and to list the containers on the server using the vzlist command. The vzctl command accepts a broad range of flags that allow you to perform many actions on a container: you can start, stop, restart, create, and destroy a container; suspend and resume the current state of the container; mount and unmount a container; and make changes to the container's config file using vzctl set.

Resources for Article:

Further resources on this subject:

Basic Concepts of Proxmox Virtual Environment [article]

A Virtual Machine for a Virtual World [article]

Backups in the VMware View Infrastructure [article]
Packt
11 Nov 2014
21 min read

2D Twin-stick Shooter

This article, written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to build a well-formed game. It also gives people experienced in this field a chance to build something great. (For more resources related to this topic, see here.)

The shoot 'em up genre is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games also come in many categories, based upon their design elements. Elements of a shoot 'em up were first seen in the 1961 Spacewar! game. However, the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has gone through a resurgence in recent years with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter.

Project overview

Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse points at. Enemies and obstacles will spawn towards the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools into Unity.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end.
Here is the outline of our tasks:

- Setting up the project
- Creating our scene
- Adding in player movement
- Adding in shooting functionality
- Creating enemies
- Adding GameController to spawn enemy waves
- Particle systems
- Adding in audio
- Adding in points, score, and wave numbers
- Publishing the game

Prerequisites

Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there. At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes.

Navigate to the preceding URL, and download the Chapter1.zip package and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which will have the art, sound, and font files you'll need for the project, as well as the Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file.

Setting up the project

At this point, I have assumed that you have Unity freshly installed and have started it up.

With Unity started, go to File | New Project. Select Project Location of your choice somewhere on your hard drive, and ensure you have Setup defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. It should look like the following screenshot:

From there, if you see the Welcome to Unity pop up, feel free to close it out as we won't be using it. At this point, you will be brought to the general Unity layout, as follows:

Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html.
Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced to your project. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes of time in the long run and only takes a few seconds, which is what we'll be doing now. Perform the following steps:

Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen. From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder. After the folder is created, you can type in the name for your folder. Once done, press Enter for the folder to be created. We need to create folders for the following directories:

- Animations
- Prefabs
- Scenes
- Scripts
- Sprites

If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected. Once you're done with the aforementioned steps, your project should look something like this:

Creating our scene

Now that we have our project set up, let's get started with creating our player:

Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project.
Once added, confirm that the image is a Sprite by clicking on it and confirming from the Inspector tab that Texture Type is Sprite. If it isn't, simply change it to that, and then click on the Apply button. Have a look at the following screenshot:

If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl.

Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position. Have a look at the following screenshot:

Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene. If you put the background on top of the ship, you'll notice that currently the background is in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that.

Objects on the same Z axis without a sorting layer are considered equal in terms of draw order, so just because a scene looks a certain way this time, it may look different when you reload the level. The only way to guarantee that an object is in front of another one in 2D space is by giving it a different Z value or using sorting layers.

Select your background object, and go to the Sprite Renderer component from the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 a name, Background. Now, create sorting layers for Foreground and GUI.
Have a look at the following screenshot:

Now, place the player ship and the background on the Foreground and Background layers, respectively, by selecting each object and then setting the Sorting Layer property via the drop-down menu. Now, if you play the game, you'll see that the ship is in front of the background, as follows:

At this point, we could just duplicate our background a number of times to create our full background by selecting the object in the Hierarchy, but that is tedious and time-consuming. Instead, we can create all of the duplicates by either using code or creating a tileable texture. For our purposes, we'll just create a texture.

Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply.

Now let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1). If you are using Unity 4.6, you will need to go to Game Object | 3D Object | Cube to create the cube.

Since our camera is at (0, 0, -10) and the player is at (0, 0, 0), putting the object at position (0, 0, 1) will put it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite, it would be one really large image with pixelation, which would look really bad. By using a 3D object, the texture that is applied to the faces of the 3D object is repeated, and since the image is tileable, it looks like one big continuous image.

Remove Box Collider by right-clicking on it and selecting Remove Component. Next, we will need to create a material for our background to use.
To do so, under the Project tab, select Create | Material, and name the material as BackgroundMaterial. Under the Shader property, click on the drop-down menu, and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25. Have a look at the following screenshot: In addition to just selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property. Tiling tells Unity how many times the image should repeat in the x and y positions, respectively. Finally, go back to the Background object in Hierarchy. Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material. Consider the following screenshot: Now, when we play the game, you'll see that we now have a complete background that tiles properly. Scripting 101 In Unity, the behavior of game objects is controlled by the different components that are attached to them in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games. Unity allows you to code in either C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#. C# is an object-oriented programming language—an industry-standard language similar to Java or C++. The majority of plugins from Asset Store are written in C#, and code written in C# can port to other platforms, such as mobile, with very minimal code changes. 
C# is also a strongly-typed language, which means that if there is any issue with the code, it will be identified within Unity and will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I very much prefer to write correct code and solve problems before they escalate to something much worse.

Implementing player movement

Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that now using our player. Perform the following steps:

Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder, and it should already have focus and should be asking you to type a name for the script—call it PlayerBehaviour.

Double-click on the script in Unity, and it will open MonoDevelop, which is an open source integrated development environment (IDE) that is included with your Unity installation.

After MonoDevelop has loaded, you will be presented with the C# stub code that was created automatically for you by Unity when you created the C# script. Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines:

using UnityEngine;
using System.Collections;

The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries.
We will be using a list, so we will change the line to the following:

using System.Collections.Generic;

The next line you'll see is:

public class PlayerBehaviour : MonoBehaviour {

You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them. When Unity created our C# stub code, it took care of that; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same to enable the script component to be attached to a game object.

Next up is the : MonoBehaviour part of the code. The : symbol signifies that we inherit from a particular class; in this case, we'll use MonoBehaviour. All behavior scripts must inherit from MonoBehaviour directly or indirectly by being derived from it. Inheritance is the idea of having an object be based on another object or class, using the same implementation. With this in mind, all the functions and variables that exist inside the MonoBehaviour class will also exist in the PlayerBehaviour class, because PlayerBehaviour is a MonoBehaviour. For more information on the MonoBehaviour class and all the functions and properties it has, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program, and we will do different things based on their values.
Add the following code under the class definition:

// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();

Between the variable definitions, you will notice comments to explain what each variable is and how we'll use it. To write a comment, you can simply add a // to the beginning of a line, and everything after that is commented out so that the compiler/interpreter won't see it. If you want to write something that is longer than one line, you can use /* to start a comment, and everything inside will be commented until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but also document your code for you. For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just having one button, we will store all the possible ways to go up, down, left, or right in their own container. To do this, we are going to use a list, which is a holder for multiple objects that we can add or remove while the game is being played.
For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx One of the things you'll notice is the public and private keywords before the variable type. These are access modifiers that dictate who can and cannot use these variables. The public keyword means that any other class can access that property, while private means that only this class will be able to access this variable. Here, currentSpeed is private because we want our current speed not to be modified or set anywhere else. But, you'll notice something interesting with the public variables that we've created. Go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object. Before going back to the Unity project though, make sure that you save your PlayerBehaviour script. Not saving is a very common mistake made by people working with MonoDevelop. Have a look at the following screenshot: You'll notice now that the public variables that we created are located inside Inspector for the component. This means that we can actually set those variables inside Inspector without having to modify the code, allowing us to tweak values in our code very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention that we are using with each word starting with a capital letter. This convention is called CamelCase (more specifically headlessCamelCase). Now change the Size of each of the Button variables to 2, and fill in the Element 0 value with the appropriate arrow and Element 1 with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot: Now that we have our variables set, go back to MonoDevelop for us to work on the script some more. The line after that is a function definition for a method called Start; it isn't a user method but one that belongs to MonoBehaviour. 
Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about using a function is that once a function is written, it can be used over and over again. Functions can be called from inside other functions:

void Start () {
}

Start is only called once in the lifetime of the behavior when the game starts and is typically used to initialize data.

If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project.

However, for this behavior, we will not need to use the Start function. Perform the following steps:

Delete the Start function and its contents. The next function that we see included is the Update function. Also inherited from MonoBehaviour, this function is called for every frame that the component exists in and for each object that it's attached to. We want to update our player ship's rotation and movement every frame. Inside the Update function (between { and }), put the following lines of code:

// Rotate player to face mouse
Rotation();

// Move the player's body
Movement();

Here, I called two functions, but these functions do not exist, because we haven't created them yet. Let's do that now!
Below the Update function, and before the } that closes the class, put the following function:

// Will rotate the ship to face the mouse.
void Rotation()
{
  // We need to tell where the mouse is relative to the
  // player
  Vector3 worldPos = Input.mousePosition;
  worldPos = Camera.main.ScreenToWorldPoint(worldPos);

  /*
   * Get the differences from each axis (stands for
   * deltaX and deltaY)
   */
  float dx = this.transform.position.x - worldPos.x;
  float dy = this.transform.position.y - worldPos.y;

  // Get the angle between the two objects
  float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

  /*
   * The transform's rotation property uses a Quaternion,
   * so we need to convert the angle in a Vector
   * (The Z axis is for rotation for 2D).
   */
  Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

  // Assign the ship's rotation
  this.transform.rotation = rot;
}

Now if you comment out the Movement line and run the game, you'll notice that the ship will rotate in the direction in which the mouse is. Have a look at the following screenshot:

Below the Rotation function, we now need to add in our Movement function with the following code:

// Will move the player based off of keys pressed
void Movement()
{
  // The movement that needs to occur this frame
  Vector3 movement = new Vector3();

  // Check for input
  movement += MoveIfPressed(upButton, Vector3.up);
  movement += MoveIfPressed(downButton, Vector3.down);
  movement += MoveIfPressed(leftButton, Vector3.left);
  movement += MoveIfPressed(rightButton, Vector3.right);

  /*
   * If we pressed multiple buttons, make sure we're only
   * moving the same length.
   */
  movement.Normalize();

  // Check if we pressed anything
  if (movement.magnitude > 0)
  {
    // If we did, move in that direction
    currentSpeed = playerSpeed;
    this.transform.Translate(movement * Time.deltaTime * playerSpeed, Space.World);
    lastMovement = movement;
  }
  else
  {
    // Otherwise, move in the direction we were going
    this.transform.Translate(lastMovement * Time.deltaTime * currentSpeed, Space.World);
    // Slow down over time
    currentSpeed *= .9f;
  }
}

Now inside this function I've called another function named MoveIfPressed, so we'll need to add that in as well. Below this function, add in the following function:

/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
  // Check each key in our list
  foreach (KeyCode element in keyList)
  {
    if (Input.GetKey(element))
    {
      /*
       * It was pressed so we leave the function
       * with the movement applied.
       */
      return Movement;
    }
  }

  // None of the keys were pressed, so we don't need to move
  return Vector3.zero;
}

Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene. Make sure to save the scene to the Scenes folder we created earlier. Run the game by pressing the play button. Have a look at the following screenshot:

Now you'll see that we can move using the arrow keys or the W A S D keys, and our ship will rotate to face the mouse. Great!

Summary

This article walked you through the start of a 2D twin-stick shooter game, helping you become familiar with the game development features in Unity.

Resources for Article:

Further resources on this subject:

Components in Unity [article]

Customizing skin with GUISkin [article]

What's Your Input? [article]
Michael Ang
10 Nov 2014
6 min read

Making Large-Scale LED Art with FadeCandy

Building projects with programmable LEDs can be very satisfying. With a few lines of code, you can create an awesome animated pattern of light. If you've programmed with an Arduino, you certainly remember the first time you got an LED to blink! From there, you probably wanted to go larger. Arduino is an excellent platform for projects with a small number of LEDs, but at a certain point the small microcontroller reaches its limit.

Ardent Mobile Cloud Platform, the project that sparked FadeCandy. Photo by Aaron Muszalski, used under CC-BY.

FadeCandy is an alternative way to drive potentially thousands of LEDs, and is an excellent way to go when you have a project larger than Arduino can handle. FadeCandy also provides sophisticated techniques to make your animations buttery smooth—very important when you aren't going for a "chunky" look with your lighting. A typical problem when using LEDs is getting a smooth fade to off or to a low brightness. Usually there's a pronounced "stair step" or chunkiness as the LED approaches minimum brightness. The FadeCandy board uses dithering and interpolation to smooth between color values, giving you more nuanced color and smoother animation. With FadeCandy, your light palette can now include "subtle and smooth" as well as "blinky and bright."

Two FadeCandy boards

The FadeCandy board connects to your computer over USB and can drive up to 8 strips of 64 LEDs. FadeCandy uses the popular WS281x RGB LEDs, which are available online, for example, NeoPixels from Adafruit. Multiple boards can be connected to the host computer, and with 512 LEDs per board, you can create quite large light projects! The host computer runs a piece of software called the FadeCandy server (fcserver). The program (called the "client") that creates the light pattern is separate from the server and can be written in a variety of different programming languages.
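At the wire level, clients talk to fcserver using the Open Pixel Control (OPC) protocol: a small header (channel byte, command byte, 16-bit big-endian length) followed by one RGB triple per LED. The sketch below is an illustration of that framing, not code from the article; the host, port, and pixel values are assumptions about a typical setup.

```python
import socket
import struct

def build_opc_frame(channel, pixels):
    """Build an Open Pixel Control 'set pixel colors' message.

    pixels is a list of (r, g, b) tuples, one per LED.
    """
    data = bytes(c for rgb in pixels for c in rgb)
    # Header: channel, command 0 (set pixel colors), 16-bit big-endian length
    header = struct.pack(">BBH", channel, 0, len(data))
    return header + data

def send_frame(host, port, frame):
    # fcserver listens for OPC clients on TCP port 7890 by default
    with socket.create_connection((host, port)) as s:
        s.sendall(frame)

# Example: one strip of 64 LEDs, all dim red, on channel 0
frame = build_opc_frame(0, [(32, 0, 0)] * 64)
# send_frame("127.0.0.1", 7890, frame)  # uncomment with fcserver running
```

A client like this simply rebuilds and resends the frame whenever the animation changes; fcserver takes care of the smoothing and the USB traffic.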
For example, you can write your animation program in Processing, and your Processing sketch will send the colors for the pixels to the FadeCandy server, which sends the data over USB to the FadeCandy hardware boards. It's also possible to make a web page that connects to the FadeCandy server, or to use Python or Node.js. This flexibility means that you can use a powerful desktop programming language that supports, for example, video playback or camera processing. The downside is that you need a host computer with a USB port to drive the FadeCandy hardware boards, but the host computer could be a small one, such as the Raspberry Pi.

FadeCandy connected to the computer via USB and to an LED strip via a breadboard. The LED strip is connected to a separate 5V power supply (not shown).

When using a small number of LEDs with an Arduino, you can get away with powering the LEDs from the same power supply as the Arduino. Since FadeCandy is designed to use a large number of LEDs at once, you'll need a separate power supply to power the LEDs (you can't just power them from the USB connection). This is actually a good thing, since it makes you think about providing enough juice to run all the LEDs. For a full guide on setting up a FadeCandy board and software, I recommend the in-depth tutorial LED Art with FadeCandy.

Once you have the hardware set up, there are two pieces of software you need to run. The first is the FadeCandy server (fcserver). The server connects to the FadeCandy boards over USB and listens for clients to connect over the network. The client software is responsible for generating the pixels that you want to show and then passing this data to the server. The client software is where you create your fancy animation, handle user interaction, analyze audio, or do whatever processing is needed to generate the colors for your LEDs.

Client program in Processing

Let's look at one of the examples included with the FadeCandy source code.
The example is written in Processing and plays back an animation of fire on a strip of 64 LEDs. The Processing code loads a picture of flames and scrolls the image vertically over the strip of LEDs. The white dots in the screenshot represent the LEDs in the strip—at each of the dots, the color from the Processing sketch is sampled and sent to the corresponding LED. The nice thing about using Processing to make the animation is that it's easy to load an image or perform other more complicated operations, and we get to see what's happening onscreen. With the large amount of memory and CPU on a laptop computer, we could also load a video of a fire burning and have that show on the LEDs.

The light from the LEDs in this example is quite pleasing. Projected on a wall or other surface, the light ripples smoothly. It's possible to turn off the dithering and interpolation that FadeCandy provides (for example, with this Python config utility), and you can see that these techniques do lead to smoother animation, especially at lower brightness levels. With the dithering and interpolation turned on, the motion is more fluid, giving more of an illusion of continuous movement rather than individual LEDs changing.

The choice of using FadeCandy or Arduino to control LEDs comes down largely to a question of scale. For projects using a small number of LEDs, using an Arduino makes it easy to make the project standalone and run on battery power. For example, in my Chrysalis light sculpture, I use an Arduino to drive 32 LEDs, interpolating between colors from an image. I was able to fit the image into the onboard memory of the Arduino by making it quite small (31x16 RGB pixels, for a grand total of 1,488 bytes). Getting smooth fading with 32 LEDs on an Arduino is certainly possible, but using hundreds of LEDs would be out of the question.

FadeCandy-driven LEDs in a Polycon light sculpture.

FadeCandy was designed for projects that are too big to fit on a single Arduino.
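The temporal dithering mentioned above can be illustrated numerically. The idea is to alternate between the two nearest hardware brightness levels so that, averaged over a few rapid frames, the eye perceives a fractional value in between. This is a toy error-diffusion sketch of that idea, not FadeCandy's actual firmware algorithm:

```python
def dithered_frames(target, n_frames=8):
    """Approximate a fractional brightness by alternating between the
    two nearest integer hardware levels over successive frames."""
    error = 0.0
    frames = []
    for _ in range(n_frames):
        desired = target + error
        quantized = round(desired)      # what the LED hardware can show
        error = desired - quantized     # carry the rounding error forward
        frames.append(quantized)
    return frames

# A brightness of 10.25 sits between hardware levels 10 and 11
frames = dithered_frames(10.25)
average = sum(frames) / len(frames)
# The eye integrates the rapid flicker, perceiving roughly 10.25
```

Without dithering, the output would sit at a constant 10, which is exactly the "stair step" visible at low brightness when the feature is disabled.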
Where the total memory on an Arduino is measured in kilobytes, the RAM on a Raspberry Pi is hundreds of megabytes, and on a laptop you're talking gigabytes. You can use the processing power of your laptop (or single-board computer) to analyze audio, play back video, or do heavy computation that would be hard on a microcontroller. By providing easy interfacing and smooth fading, FadeCandy really opens up what is possible for artistic expression with programmable lighting. I for one welcome the new age of buttery smooth LED light art!

FadeCandy is a project by Micah Elizabeth Scott produced in collaboration with Adafruit.

About the author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit that bridges the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is tech and artists whose medium is technology.

Into Arduino? Check out our Arduino page for our newest and most popular releases. Begin your adventure through creative hardware today!
Ellison Leao
10 Nov 2014
4 min read

Creating a simple plugin for MelonJS games

If you are not familiar with the great MelonJS game framework, please go to their official page and read about the great things you can do with this awesome tool. In this post I will teach you how to create a simple plugin to use in your MelonJS game.

First, you need to understand the plugin structure:

(function($) {
    myPlugin = me.plugin.Base.extend({
        // minimum melonJS version expected
        version : "1.0.0",

        init : function() {
            // call the parent constructor
            this.parent();
            this.myVar = null;
        },
    });
})(window);

As you can see, there are no real difficulties in creating new plugins. You just have to create a class inheriting from the me.plugin.Base class, passing the minimum melonJS version the plugin expects. If you need to persist some variables, you can override the init method just as in the skeleton above.

For this plugin I will create a Clay.io leaderboard integration for a game. The code for the plugin is as follows:

/*
 * MelonJS Game Engine
 * Copyright (C) 2011 - 2013, Olivier Biot, Jason Oster
 * http://www.melonjs.org
 *
 * Clay.io API plugin
 */
(function($) {

    /**
     * @class
     * @public
     * @extends me.plugin.Base
     * @memberOf me
     * @constructor
     */
    Clayio = me.plugin.Base.extend({
        // minimum melonJS version expected
        version : "1.0.0",
        gameKey: null,
        _leaderboard: null,

        init : function(gameKey, options) {
            // call the parent constructor
            this.parent();
            this.gameKey = gameKey;

            Clay = {};
            Clay.gameKey = this.gameKey;
            Clay.readyFunctions = [];
            Clay.ready = function( fn ) {
                Clay.readyFunctions.push( fn );
            };

            if (options === undefined) {
                options = {
                    debug: false,
                    hideUI: false
                };
            }

            Clay.options = {
                debug: options.debug === undefined ? false : options.debug,
                hideUI: options.hideUI === undefined ? false : options.hideUI,
                fail: options.fail
            };

            window.onload = function() {
                var clay = document.createElement("script");
                clay.async = true;
                clay.src = ( "https:" == document.location.protocol ? "https://" : "http://" ) + "cdn.clay.io/api.js";
                var tag = document.getElementsByTagName("script")[0];
                tag.parentNode.insertBefore(clay, tag);
            };
        },

        leaderboard: function(id, score, callback) {
            if (!id) {
                throw "You must pass a leaderboard id";
            }

            // we can get the score directly from game.data.score
            if (!score) {
                score = game.data.score;
            }

            var leaderboard = new Clay.Leaderboard({id: id});
            this._leaderboard = leaderboard;

            if (callback) {
                this._leaderboard.post({score: score}, callback);
            } else {
                this._leaderboard.post({score: score});
            }
        },

        showLeaderBoard: function(id, options, callback) {
            if (!options) {
                options = {};
            }
            if (options.limit === undefined) {
                options.limit = 10;
            }

            if (!this._leaderboard) {
                if (id === undefined) {
                    throw "The leaderboard was not defined before. You must pass a leaderboard id";
                }
                var leaderboard = new Clay.Leaderboard({id: id});
                this._leaderboard = leaderboard;
            }

            this._leaderboard.show(options, callback);
        }
    });
})(window);

Let me explain how all of this works:

The init method receives your Clay.io game key and loads the Clay.io API script asynchronously.

The leaderboard method receives a Clay.io leaderboard ID and a score value. The method creates a leaderboard instance and posts the passed score to the Clay.io leaderboard. If no ID is passed, the function throws an error.

The showLeaderBoard method shows the Clay.io leaderboard modal on the screen. If you previously called the leaderboard method, there is no need to pass the leaderboard ID again.

To use this plugin in your game, first register the plugin in your game.js file by adding the following line to the game.onload method:

me.plugin.register.defer(this, Clayio, "clay");

Due to a Clay.io bug you need to add the socket.io.js script into the index.html file manually. Place the following code into the file's <head>:

<script src='http://api.clay.io/socket.io/socket.io.js'></script>

Now, if you want to call the leaderboard method, just add the following code into your scene:

me.plugin.clay.leaderboard(leaderboardId);

And that's it! I hope I've shown you how easy it is to create plugins for MelonJS.

About The Author

Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.
Rahmal Conda
07 Nov 2014
9 min read

Configuring Distributed Rails Applications with Chef: Part 2

In my Part 1 post, I gave you the low down about Chef. I covered what it's for and what it's capable of. Now let's get into some real code and take a look at how to install and run Chef Solo and Chef Server.

What we want to accomplish

First let's make a list of some goals. What are we trying to get out of deploying and provisioning with Chef?

Once we have it set up, provisioning a new server should be simple; no more than a few simple commands.

We want it to be platform-agnostic so we can deploy to any VPS provider we choose with the same scripts.

We want it to be easy to follow and understand. Any new developer coming later should have no problem figuring out what's going on.

We want the server to be nearly automated. It should take care of itself as much as possible, and alert us if anything goes wrong.

Before we start, let's decide on a stack. You should feel free to run any stack you choose; this is just what I'm using for this post's setup:

Ubuntu 12.04 LTS
RVM
Ruby 1.9.3+
Rails 3.2+
Postgres 9.3+
Redis 3.1+
Chef
Git

Now that we've got that out of the way, let's get started!

Step 1: Install the tools

First, make sure that all of the packages we download to our VPS are up to date:

~$ sudo apt-get update

Next, we'll install RVM (Ruby Version Manager). RVM is a great tool for installing Ruby. It allows you to use several versions of Ruby on one server. Don't get ahead of yourself though; at this point, we only care about one version. To install RVM, we'll need curl:

~$ sudo apt-get install curl

We also need to install Git. Git is an open source distributed version control system, primarily used to maintain software projects. (If you didn't know that much, you're probably reading the wrong post. But I digress!):

~$ sudo apt-get install git

Now install RVM with this curl command:

~$ curl -sSL https://get.rvm.io | bash -s stable

You'll need to source RVM (you can add this to your bash profile):

~$ source ~/.rvm/scripts/rvm

In order for it to work, RVM has some of its own dependencies that need to be installed. To automatically install them, use the following command:

~$ rvm requirements

Once we have RVM set up, installing Ruby is simple:

~$ rvm install 1.9.3

Ruby 1.9.3 is now installed! Since we'll be accessing it through a tool that can potentially have a variety of Ruby versions loaded, we need to tell the system to use this version as the default:

~$ rvm use 1.9.3 --default

Next we'll make sure that we can install any Ruby gem we need into this new environment. We'll stick with RVM for installing gems as well; this will ensure they get loaded into our Ruby version properly. Run this command:

~$ rvm rubygems current

Don't worry if it seems like you're setting up a lot of things manually now. Once Chef is set up, all of this will be part of your cookbooks, so you'll only have to do this once.

Step 2: Install Chef and friends

First, we'll start off by cloning the Opscode Chef repository:

~$ git clone git://github.com/opscode/chef-repo.git chef

With Ruby and RubyGems set up, we can install some gems! We'll start with a gem called Librarian-Chef. Librarian-Chef is sort of a Rails Bundler for Chef cookbooks. It'll download and manage the cookbooks that you specify in a Cheffile. Many useful cookbooks are published by different sources within the Chef community. You'll want to make use of them as you build out your own Chef environment.

~$ gem install librarian-chef

Initialize Librarian in your Chef repository with this command:

~$ cd chef
~/chef$ librarian-chef init

This command will create a Cheffile in your Chef repository. All of your dependencies should be specified in that file.
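A Cheffile is a plain Ruby DSL: a source site declaration followed by one cookbook declaration per dependency. To make that structure concrete, here is a toy parser (purely illustrative; Librarian-Chef itself is a Ruby tool that also resolves versions and fetches cookbooks, and the sample file contents below are hypothetical):

```python
import re

# A hypothetical Cheffile, for illustration only
CHEFFILE = """\
site 'http://community.opscode.com/api/v1'
cookbook 'sudo'
cookbook 'apt'
cookbook 'git'
"""

def parse_cheffile(text):
    """Extract the source site and the declared cookbook names."""
    site = None
    cookbooks = []
    for line in text.splitlines():
        m = re.match(r"site\s+'([^']+)'", line.strip())
        if m:
            site = m.group(1)
            continue
        m = re.match(r"cookbook\s+'([^']+)'", line.strip())
        if m:
            cookbooks.append(m.group(1))
    return site, cookbooks

site, cookbooks = parse_cheffile(CHEFFILE)
```

Each `cookbook` line becomes an entry that Librarian resolves against the site, pinning the results in Cheffile.lock.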
To deploy the stack we just built, your Cheffile should look like this:

site 'http://community.opscode.com/api/v1'
cookbook 'sudo'
cookbook 'apt'
cookbook 'user'
cookbook 'git'
cookbook 'rvm'
cookbook 'postgresql'
cookbook 'rails'

Now use Librarian to pull these community cookbooks:

~/chef$ librarian-chef install

Librarian will pull the cookbooks you specify, along with their dependencies, into the cookbooks folder and create a Cheffile.lock file. Commit both Cheffile and Cheffile.lock to your repo:

~/chef$ git add Cheffile Cheffile.lock
~/chef$ git commit -m "updated cookbooks list"

There is no need to commit the cookbooks folder, because you can always use the install command and Librarian will pull the same group of cookbooks with the correct versions. You should not touch the cookbooks folder—let Librarian manage it for you. Librarian will overwrite any changes you make inside that folder. If you want to manually create and manage cookbooks outside of Librarian, add a new folder, such as local-cookbooks.

Step 3: Cooking up somethin' good!

Now that you see how to get the cookbooks, you can create your roles. You use roles to determine what part a server instance plays in your server stack, and you specify what that role needs. For instance, your database server role would most likely need a Postgresql server (or your DB of choice), a DB client, and user authorization and management, while your web server role would need Apache (or Nginx), Unicorn, Passenger, and so on. You can also make base roles, to provide a basic configuration that all your servers share.
Given what we've installed so far, our basic configuration might look something like this:

name "base"
description "Basic configuration for all nodes"

run_list(
  'recipe[git]',
  'recipe[sudo]',
  'recipe[apt]',
  'recipe[rvm::user]',
  'recipe[postgresql::client]'
)

override_attributes(
  authorization: {
    sudo: {
      users: ['ubuntu'],
      passwordless: true
    }
  },
  rvm: {
    rubies: ['ruby-1.9.3-p125'],
    default_ruby: 'ruby-1.9.3-p125',
    global_gems: ['bundler', 'rake']
  }
)

Deploying locally with Chef Solo

Chef Solo is a Ruby gem that runs a self-contained Chef instance. Solo is great for running your recipes locally to test them, or to provision development machines. If you don't have a hosted Chef Server set up, you can use Chef Solo to set up remote servers too. If your architecture is still pretty small, this might be just what you need.

We need to create a Chef configuration file, so we'll call it deploy.rb:

root = File.absolute_path(File.dirname(__FILE__))
books = File.join(root, 'cookbooks')
roles = File.join(root, 'roles')

file_cache_path root
cookbook_path books
role_path roles

We'll also need a JSON-formatted configuration file. Let's call this one deploy.json:

{
  "run_list": ["recipe[base]"]
}

Now run Chef with this command:

~/chef$ sudo chef-solo -j deploy.json -c deploy.rb

Deploying to a new Amazon EC2 instance

You'll need the Chef server for this step. First you need to create a new VPS instance for your Chef server and configure it with a static IP or a domain name, if possible. We won't go through that here, but you can find instructions for setting up a server instance on EC2 with a public IP and configuring a domain name in the documentation for your VPS. Once you have your server instance set up, SSH onto the instance and install Chef server.
Start by downloading the deb package using the wget tool:

~$ wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chef-server_11.0.10-1.ubuntu.12.04_amd64.deb

Once the deb package has downloaded, install Chef server like so:

~$ sudo dpkg -i chef-server*

When it completes, it will print an instruction to the screen telling you to run the next command, which configures the service for your specific machine. This command will configure everything automatically:

~$ sudo chef-server-ctl reconfigure

Once the configuration step is complete, the Chef server should be up and running. You can access the web interface immediately by browsing to your server's domain name or IP address.

Now that you've got Chef up and running, install the knife EC2 plugin. This will also install the knife gem as a dependency:

~$ gem install knife-ec2

You now have everything you need! So create another VPS to provision with Chef. Once you do that, you'll need to copy your SSH keys over:

~$ ssh-copy-id root@yourserverip

You can finally start setting up your server! Begin by installing Chef on your new machine:

~$ knife solo prepare root@yourserverip

This will generate a file, nodes/yourserverip.json. You need to change this file to add your own environment settings. For instance, you will need to add a username and password for monit. You will also need to add a password for postgresql to the file. Run the openssl command again to create a password for postgresql, then take the generated password and add it to the file.

Now you can finally provision your server! Start the Chef run:

~$ knife solo cook root@yourserverip

Now just sit back, relax, and watch Chef cook up your tasty app server. This process may take a while. But once it completes, you'll have a server ready for Rails, Postgres, and Redis! I hope these posts helped you get an idea of how much Chef can simplify your life and your deployments.
Here are a couple of links with more information and references about Chef:

Chef community site: http://cookbooks.opscode.com/
Chef Wiki: https://wiki.opscode.com/display/chef/Home
Chef Supermarket: https://community.opscode.com/cookbooks?utf8=%E2%9C%93&q=user
Chef cookbooks for busy Ruby developers: http://teohm.com/blog/2013/04/17/chef-cookbooks-for-busy-ruby-developers/
Deploying Rails apps with Chef and Capistrano: http://www.slideshare.net/SmartLogic/guided-exploration-deploying-rails-apps-with-chef-and-capistrano

About the author

Rahmal Conda is a Software Development Professional and Ruby aficionado from Chicago. After 10 years working in web and application development, he moved out to the Bay Area, eager to join the startup scene. He had a taste of the startup life in Chicago working at a small personal finance company. After that he knew it was the life he had been looking for, so he moved his family out west. Since then he's made a name for himself in the social space at some high-profile Silicon Valley startups. Right now he's one of the co-founders and Platform Architect of Boxes, a mobile marketplace for the world's hidden treasures.
Felix Rabe
07 Nov 2014
6 min read

How to Deploy a Blog with Ghost and Docker

2013 gave birth to two wonderful open source projects: Ghost and Docker. This post will show you what the buzz is all about, and how you can use them together. So what are Ghost and Docker, exactly?

Ghost is an exciting new blogging platform, written in JavaScript and running on Node.js. It features a simple and modern user experience, as well as very transparent and accessible developer communications. This blog post covers Ghost 0.4.2.

Docker is a very useful new development tool to package applications together with their dependencies for automated and portable deployment. It is based on Linux Containers (LXC) for lightweight virtualization, and AUFS for filesystem layering. This blog post covers Docker 1.1.2.

Install Docker

If you are on Windows or Mac OS X, the easiest way to get started using Docker is Boot2Docker. For Linux and more in-depth instructions, consult one of the Docker installation guides. Go ahead and install Docker via one of the above links, then come back and run the following in your terminal to verify your installation:

docker version

If you get about eight lines of detailed version information, the installation was successful. Just running docker will provide you with a list of commands, and docker help <command> will show a command's usage. If you use Boot2Docker, remember to export DOCKER_HOST=tcp://192.168.59.103:2375.

Now, to get the Ubuntu 14.04 base image downloaded (which we'll use in the next sections), run the following command:

docker run --rm ubuntu:14.04 /bin/true

This will take a while, but only for the first time. There are many more Docker images available at the Docker Hub Registry.
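The docker run invocations in this guide share a small set of flags. As a side illustration (a hypothetical helper, not part of the article's toolchain), assembling such a command line programmatically makes the anatomy of each flag explicit:

```python
def docker_run_argv(image, command, rm=True, interactive=False,
                    name=None, detach=False, ports=None):
    """Assemble a 'docker run' argument vector.

    ports maps host port -> container port, e.g. {2368: 2368}.
    """
    argv = ["docker", "run"]
    if rm:
        argv.append("--rm")          # remove the container after it exits
    if interactive:
        argv.append("-ti")           # allocate a TTY and keep stdin open
    if detach:
        argv.append("-d")            # run in the background
    if name:
        argv += ["--name", name]
    for host, container in (ports or {}).items():
        argv += ["-p", f"{host}:{container}"]
    argv.append(image)
    argv += command
    return argv

# Mirrors: docker run --rm ubuntu:14.04 /bin/true
argv = docker_run_argv("ubuntu:14.04", ["/bin/true"])
# subprocess.run(argv) would execute it (requires a running Docker daemon)
```

The same helper can express the later Ghost invocation by passing rm=False, detach=True, name="ghost-container", and ports={2368: 2368}.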
Hello Docker

To give you a quick glimpse into what Docker can do for you, run the following command:

docker run --rm ubuntu:14.04 /bin/echo Hello Docker

This runs /bin/echo Hello Docker in its own virtual Ubuntu 14.04 environment, but since it uses Linux Containers instead of booting a complete operating system in a virtual machine, it takes less than a second to complete. Pretty sweet, huh?

To run Bash, provide the -ti flags for interactivity:

docker run --rm -ti ubuntu:14.04 /bin/bash

The --rm flag makes sure that the container gets removed after use, so any files you create in that Bash session get removed after logging out. For more details, see the Docker Run Reference.

Build the Ghost image

In the previous section, you ran the ubuntu:14.04 image. In this section, we'll build an image for Ghost that we can then use to quickly launch a new Ghost container. While you could get a pre-made Ghost Docker image, for the sake of learning, we'll build our own.

About the terminology: a Docker image is analogous to a program stored on disk, while a Docker container is analogous to a process running in memory.

Now create a new directory, such as docker-ghost, with the following files — you can also find them in this Gist on GitHub:

package.json:

{}

This is the bare minimum actually required, and will be expanded with the current Ghost dependency by the Dockerfile command npm install --save ghost when building the Docker image.

server.js:

#!/usr/bin/env node
var ghost = require('ghost');
ghost({
    config: __dirname + '/config.js'
});

This is all that is required to use Ghost as an NPM module.

config.js:

config = require('./node_modules/ghost/config.example.js');
config.development.server.host = '0.0.0.0';
config.production.server.host = '0.0.0.0';
module.exports = config;

This will make the Ghost server accessible from outside of the Docker container.
Dockerfile:

# DOCKER-VERSION 1.1.2
FROM ubuntu:14.04

# Speed up apt-get according to https://gist.github.com/jpetazzo/6127116
RUN echo "force-unsafe-io" > /etc/dpkg/dpkg.cfg.d/02apt-speedup
RUN echo "Acquire::http {No-Cache=True;};" > /etc/apt/apt.conf.d/no-cache

# Update the distribution
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get upgrade -y

# https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager
RUN apt-get install -y software-properties-common
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
# git is needed by 'npm install'
RUN apt-get install -y python-software-properties python g++ make nodejs git

ADD . /src
RUN cd /src; npm install --save ghost

ENTRYPOINT ["node", "/src/server.js"]

# Override the ubuntu:14.04 CMD directive:
CMD []

EXPOSE 2368

This Dockerfile will create a Docker image with Node.js and the dependencies needed to build the Ghost NPM module, and prepare Ghost to be run via Docker. See the Dockerfile documentation for details on the syntax.

Now build the Ghost image using:

cd docker-ghost
docker build -t ghost-image .

This will take a while, but you might have to Ctrl-C and re-run the command if, for more than a couple of minutes, you are stuck at the following step:

> node-pre-gyp install --fallback-to-build

Run Ghost

Now start the Ghost container:

docker run --name ghost-container -d -p 2368:2368 ghost-image

If you run Boot2Docker, you'll have to figure out its IP address:

boot2docker ip

Usually, that's 192.168.59.103, so by going to http://192.168.59.103:2368, you will see your fresh new Ghost blog. Yay! For the admin interface, go to http://192.168.59.103:2368/ghost.
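Since the container starts in the background, the blog is not necessarily reachable the instant docker run returns. A short polling script can wait for it to come up (an illustration only; the commented-out URL uses the typical Boot2Docker address mentioned above, which is an assumption about your setup):

```python
import time
import urllib.request
import urllib.error

def wait_for_http(url, timeout=60.0, interval=1.0):
    """Poll url until the server answers or the timeout expires.

    Any HTTP response (even an error status) counts as 'up';
    returns False only if the deadline passes with no answer.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=interval)
            return True
        except urllib.error.HTTPError:
            return True  # server answered, just with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; try again shortly
    return False

# wait_for_http("http://192.168.59.103:2368")  # Boot2Docker default IP
```

This is handy in deployment scripts that need Ghost to be serving before running a smoke test.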
Manage the Ghost container

The following commands will come in handy to manage the Ghost container:

# Show all running containers:
docker ps -a

# Show the container logs:
docker logs [-f] ghost-container

# Stop Ghost via a simulated Ctrl-C:
docker kill -s INT ghost-container

# After killing Ghost, this will restart it:
docker start ghost-container

# Remove the container AND THE DATA (!):
docker rm ghost-container

What you'll want to do next

Some steps that are outside the scope of this post, but that you might want to pursue next, are:

Copy and change the Ghost configuration that currently resides in node_modules/ghost/config.js.

Move the Ghost content directory into a separate Docker volume to allow for upgrades and data backups.

Deploy the Ghost image to production on your public server at your hosting provider. You might also want to change the Ghost configuration to match your domain and change the port to 80.

How I use Ghost with Docker

I run Ghost in Docker successfully over at Named Data Education, a new blog about Named Data Networking. I like the fact that I can replicate an isolated setup identically on that server as well as on my own laptop.

Ghost resources

Official docs: The Ghost Guide, and the FAQ- / How-To-like User Guide. How To Install Ghost, Ghost for Beginners and All About Ghost are a collection of sites that provide more in-depth material on operating a Ghost blog. By the same guys: All Ghost Themes. Ghost themes on ThemeForest is also a great collection of themes.

Docker resources

The official documentation provides many guides and references. Docker volumes are explained here and in this post by Michael Crosby.

About the Author

Felix Rabe has been programming and working with different technologies and companies at different levels since 1993.
Currently he is researching and promoting Named Data Networking (http://named-data.net/), an evolution of the Internet architecture that currently relies on the host-bound Internet Protocol. You can find our very best Docker content on our dedicated Docker page. Whatever you do with software, Docker will help you do it better.
Mike Ball
07 Nov 2014
11 min read

Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Part 1: Getting up and running with Middleman

Many of today's most prominent web frameworks, such as Ruby on Rails, Django, WordPress, Drupal, Express, and Spring MVC, rely on a server-side language to process HTTP requests, query data at runtime, and serve back dynamically constructed HTML. These platforms are great, yet developers of dynamic web applications often face complex performance challenges under heavy user traffic, independent of the underlying technology. High traffic, and frequent requests, may exploit processing-intensive code or network latency, in effect yielding a poor user experience or a production outage.

Static site generators such as Middleman, Jekyll, and Wintersmith offer developers an elegant, highly scalable alternative to complex, dynamic web applications. Such tools perform dynamic processing and HTML construction during build time rather than runtime. These tools produce a directory of static HTML, CSS, and JavaScript files that can be deployed directly to a web server such as Nginx or Apache. This architecture reduces complexity and encourages a sensible separation of concerns; if necessary, user-specific customization can be handled via client-side interaction with third-party satellite services.

In this three-part series, we'll walk through how to get started developing a Middleman site, some basics of Middleman blogging, how to migrate content from an existing WordPress blog, and how to deploy a Middleman blog to production. We will also learn how to create automated tests, continuous integration, and automated deployments.

In this part, we'll cover the following:

Creating a basic Middleman project
Middleman configuration basics
A quick overview of the Middleman template system
Creating a basic Middleman blog

Why should you use Middleman?

Middleman is a mature, full-featured static site generator.
It supports a strong templating system, numerous Ruby-based HTML templating tools such as ERb and HAML, as well as a Sprockets-based asset pipeline used to manage CSS, JavaScript, and third-party client-side code. Middleman also integrates well with CoffeeScript, SASS, and Compass.

Environment

For this tutorial, I'm using an RVM-installed Ruby 2.1.2. I'm on Mac OS X 10.9.4.

Installing Middleman

Install Middleman via RubyGems:

$ gem install middleman

Create a basic Middleman project called middleman-demo:

$ middleman init middleman-demo

This results in a middleman-demo directory with the following layout:

├── Gemfile
├── Gemfile.lock
├── config.rb
└── source
   ├── images
   │   ├── background.png
   │   └── middleman.png
   ├── index.html.erb
   ├── javascripts
   │   └── all.js
   ├── layouts
   │   └── layout.erb
   └── stylesheets
       ├── all.css
       └── normalize.css

There are 5 directories and 10 files.

A quick tour

Here are a few notes on the middleman-demo layout:

The Gemfile cites Ruby gem dependencies; Gemfile.lock cites the full dependency chain, including middleman-demo's dependencies' dependencies
The config.rb file houses middleman-demo's configuration
The source directory houses middleman-demo's source code: the templates, style sheets, images, JavaScript, and other source files required by the middleman-demo site

While a Middleman production build is simply a directory of static HTML, CSS, JavaScript, and image files, Middleman sites can be run via a simple web server in development. Run the middleman-demo development server:

$ middleman

Now, the middleman-demo site can be viewed in your web browser at http://localhost:4567.

Set up live-reloading

Middleman comes with the middleman-livereload gem. The gem detects source code changes and automatically reloads the Middleman app.
Activate middleman-livereload by uncommenting the following code in config.rb:

# Reload the browser automatically whenever files change
configure :development do
  activate :livereload
end

Restart the Middleman server to allow the configuration change to take effect. Now, middleman-demo should automatically reload on changes to config.rb, and your web browser should automatically refresh when you edit the source/* code.

Customize the site's appearance

Middleman offers a mature HTML templating system. The source/layouts directory contains layouts, the common HTML surrounding individual pages and shared across your site. middleman-demo uses ERb as its template language, though Middleman supports other options such as HAML and Slim. Also note that Middleman supports the ability to embed metadata within templates via frontmatter. Frontmatter allows page-specific variables to be embedded via YAML or JSON. These variables are available in a current_page.data namespace. For example, source/index.html.erb contains the following frontmatter specifying a title; it's available to ERb templates as current_page.data.title:

---
title: Welcome to Middleman
---

Currently, middleman-demo is a default Middleman installation. Let's customize things a bit. First, remove all the contents of source/stylesheets/all.css to remove the default Middleman styles. Next, edit source/index.html.erb to be the following:

---
title: Welcome to Middleman Demo
---

<h1>Middleman Demo</h1>

When viewing middleman-demo at http://localhost:4567, you'll now see a largely unstyled HTML document with a single Middleman Demo heading.

Install the middleman-blog plugin

The middleman-blog plugin offers blog functionality to Middleman applications. We'll use middleman-blog in middleman-demo.
Add the middleman-blog version 3.5.3 gem dependency to middleman-demo by adding the following to the Gemfile:

gem "middleman-blog", "3.5.3"

Re-install the middleman-demo gem dependencies, which now include middleman-blog:

$ bundle install

Activate middleman-blog and specify a URL pattern at which to serve blog posts by adding the following to config.rb:

activate :blog do |blog|
  blog.prefix = "blog"
  blog.permalink = "{year}/{month}/{day}/{title}.html"
end

Write a quick blog post

Now that all has been configured, let's write a quick blog post to confirm that middleman-blog works. First, create a directory to house the blog posts:

$ mkdir source/blog

The source/blog directory will house markdown files containing blog post content and any necessary metadata. These markdown files highlight a key feature of Middleman: rather than query a relational database within which content is stored, a Middleman application typically reads data from flat files, simple text files (usually markdown) stored within the site's source code repository. Create a markdown file for middleman-demo's first post:

$ touch source/blog/2014-08-20-new-blog.markdown

Next, add the required frontmatter and content to source/blog/2014-08-20-new-blog.markdown:

---
title: New Blog
date: 2014/08/20
tags: middleman, blog
---

Hello world from Middleman!

Features

Rich templating system
Built-in helpers
Easy configuration
Asset pipeline
Lots more

Note that the content is authored in markdown, a plain text syntax, which is evaluated by Middleman as HTML. You can also embed HTML directly in the markdown post files. GitHub's documentation provides a good overview of markdown.
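The frontmatter-and-body split in post files like the one above can be illustrated with a short sketch. The toy parser below is written in Python purely to show the concept; it is not Middleman's implementation, and real frontmatter supports full YAML or JSON rather than the flat key: value pairs handled here:

```python
# Minimal sketch of how a frontmatter-style post file splits into metadata
# and body. Illustrative only -- NOT Middleman's actual parser, and real
# frontmatter is full YAML/JSON, not just flat key: value pairs.
def parse_frontmatter(text):
    """Split a post into (metadata dict, markdown body)."""
    meta = {}
    if text.startswith("---"):
        # Everything between the first two "---" fences is metadata.
        _, header, body = text.split("---", 2)
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        return meta, body.lstrip("\n")
    return meta, text

post = """---
title: New Blog
date: 2014/08/20
tags: middleman, blog
---
Hello world from Middleman!
"""

meta, body = parse_frontmatter(post)
print(meta["title"])          # -> New Blog
print(body.splitlines()[0])   # -> Hello world from Middleman!
```

In Middleman, the parsed metadata is what surfaces through the current_page.data namespace, while the body is rendered to HTML.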
Next, add the following ERb template code to source/index.html.erb to display a list of blog posts on middleman-demo's home page:

<ul>
  <% blog.articles.each do |article| %>
    <li>
      <%= link_to article.title, article.path %>
    </li>
  <% end %>
</ul>

Now, when running middleman-demo and visiting http://localhost:4567, a link to the new blog post is listed on middleman-demo's home page. Clicking the link renders the permalink for the New Blog blog post at blog/2014/08/20/new-blog.html, as is specified in the blog configuration in config.rb.

A few notes on the template code

Note the use of the link_to method. This is a built-in Middleman template helper. Middleman provides template helpers to simplify many common template tasks, such as rendering an anchor tag. In this case, we pass the link_to method two arguments: the intended anchor tag text and the intended href value. In turn, link_to generates the necessary HTML. Also note the use of a blog variable, whose articles method houses an array of all blog posts. Where did this come from? middleman-demo is an instance of Middleman::Application; blog is a method on this instance. To explore other Middleman::Application methods, open middleman-demo via the built-in Middleman console by entering the following in your terminal:

$ middleman console

To view all the methods on blog, including the aforementioned articles method, enter the following within the console:

2.1.2 :001 > blog.methods

To view all the additional methods, beyond blog, available to the Middleman::Application instance, enter the following within the console:

2.1.2 :001 > self.methods

More can be read about all these methods in Middleman::Application's class documentation on rdoc.info.

Cleaner URLs

Note that the current new blog URL ends in .html. Let's customize middleman-demo to omit .html from URLs.
Add the following to config.rb:

activate :directory_indexes

Now, rather than generating files such as /blog/2014/08/20/new-blog.html, middleman-demo generates files such as /blog/2014/08/20/new-blog/index.html, thus enabling the page to be served by most web servers at a /blog/2014/08/20/new-blog/ path.

Adjusting the templates

Let's adjust the middleman-demo ERb templates a bit. First, note that <h1>Middleman Demo</h1> only displays on the home page; let's make it render on all of the site's pages. Move <h1>Middleman Demo</h1> from source/index.html.erb to source/layouts/layout.erb. Put it just inside the <body> tag:

<body class="<%= page_classes %>">
  <h1>Middleman Demo</h1>
  <%= yield %>
</body>

Next, let's create a custom blog post template. Create the template file:

$ touch source/layouts/post.erb

Add the following to source/layouts/post.erb to extend the site-wide functionality of source/layouts/layout.erb:

<% wrap_layout :layout do %>
  <h2><%= current_article.title %></h2>
  <p>Posted <%= current_article.date.strftime('%B %e, %Y') %></p>
  <%= yield %>
  <ul>
    <% current_article.tags.each do |tag| %>
      <li><a href="/blog/tags/<%= tag %>/"><%= tag %></a></li>
    <% end %>
  </ul>
<% end %>

Note the use of the wrap_layout ERb helper. The wrap_layout helper takes two arguments. The first is the name of the layout to wrap, in this case :layout. The second argument is a Ruby block; the contents of the block are evaluated within the <%= yield %> call of source/layouts/layout.erb. Next, instruct middleman-demo to use source/layouts/post.erb in serving blog posts by adding the necessary configuration to config.rb:

page "blog/*", :layout => :post

Now, when restarting the Middleman server and visiting http://localhost:4567/blog/2014/08/20/new-blog/, middleman-demo renders a more comprehensive blog template that includes the post's title, date published, and tags. Let's add a simple template to render a tags page that lists relevant tagged content.
First, create the template:

$ touch source/tag.html.erb

And add the necessary ERb to list the relevant posts assigned a given tag:

<h2>Posts tagged <%= tagname %></h2>
<ul>
  <% page_articles.each do |post| %>
    <li>
      <a href="<%= post.url %>"><%= post.title %></a>
    </li>
  <% end %>
</ul>

Specify the blog's tag template by editing the blog configuration in config.rb:

activate :blog do |blog|
  blog.prefix = 'blog'
  blog.permalink = "{year}/{month}/{day}/{title}.html"
  # tag template:
  blog.tag_template = "tag.html"
end

Edit config.rb to configure middleman-demo's tag template to use source/layouts/layout.erb rather than source/layouts/post.erb:

page "blog/tags/*", :layout => :layout

Now, when visiting http://localhost:4567/blog/2014/08/20/new-blog/, you should see a linked list of New Blog's tags. Clicking a tag should correctly render the tags page.

Part 1 recap

Thus far, middleman-demo serves as a basic Middleman-based blog example. It demonstrates Middleman templating, how to set up the middleman-blog plugin, and how to author markdown-based blog posts in Middleman. In part 2, we'll cover migrating content from an existing WordPress blog. We'll also step through establishing an Amazon S3 bucket, building middleman-demo, and deploying to production. In part 3, we'll cover how to create automated tests, continuous integration, and automated deployments.

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.

article-image-alfresco-web-scrpits
Packt
06 Nov 2014
15 min read

Alfresco Web Scripts

Packt
In this article by Ramesh Chauhan, the author of Learning Alfresco Web Scripts, we will cover the following topics:

Reasons to use web scripts
Executing a web script from a standalone Java program
Invoking a web script from Alfresco Share
DeclarativeWebScript versus AbstractWebScript

(For more resources related to this topic, see here.)

Reasons to use web scripts

It's now time to discover the answer to the next question: why web scripts? There are various alternate approaches available to interact with the Alfresco repository, such as CMIS, SOAP-based web services, and web scripts. Generally, web scripts are chosen as the preferred option among developers and architects when it comes to interacting with the Alfresco repository from an external application. Let's take a look at the various reasons behind choosing a web script as an option instead of CMIS and SOAP-based web services. In comparison with CMIS, web scripts can be explained as follows:

In general, CMIS is a generic implementation, and it basically provides a common set of services to interact with any content repository. It does not attempt to incorporate the services that expose all the features of each and every content repository. It basically tries to cover a basic common set of functionalities for interacting with any content repository and provides the services to access such functionalities. Alfresco provides an implementation of CMIS for interacting with the Alfresco repository. With only a common set of repository functionalities exposed via the CMIS implementation, it is possible that CMIS will sometimes not do everything you are aiming to do when working with the Alfresco repository. With web scripts, it will be possible to do the things you are planning to implement and access the Alfresco repository as required. Hence, one of the best alternatives in this case is to use Alfresco web scripts and develop custom APIs as required.
Another important thing to note is that, with the transaction support of web scripts, it is possible to perform a set of operations together in a web script, whereas in CMIS, there is a limitation on transaction usage. It is possible to execute each operation individually, but it is not possible to execute a set of operations together in a single transaction, as is possible in web scripts.

SOAP-based web services are not preferable for the following reasons:

They take a long time to develop
They depend on SOAP
They have heavier client-side requirements
They need to maintain the resource directory
Scalability is a challenge
They only support XML

In comparison, web scripts have the following properties:

There are no complex specifications
There is no dependency on SOAP
There is no need to maintain the resource directory
They are more scalable, as there is no need to maintain session state
They are a lightweight implementation
They are simple and easy to develop
They support multiple formats

In a developer's opinion:

They can be easily developed using any text editor
No compilation is required when using a scripting language
No server restarts are needed when using a scripting language
No complex installations are required

In essence:

Web scripts are a REST-based and powerful option to interact with the Alfresco repository in comparison to the traditional SOAP-based web services and CMIS alternatives
They provide RESTful access to the content residing in the Alfresco repository and provide uniform access to a wide range of client applications
They are easy to develop and provide some of the most useful features such as no server restarts, no compilation, no complex installations, and no need of a specific tool to develop them
All these points make web scripts the most preferred choice among developers and architects when it comes to interacting with the Alfresco repository

Executing a web script from a standalone Java program

There are different options to invoke a web script from a Java
program. Here, we will take a detailed walkthrough of the Apache commons HttpClient API with code snippets to understand how a web script can be executed from a Java program, and will briefly mention some other alternatives that can also be used to invoke web scripts from Java programs.

HttpClient

One way of executing a web script is to invoke it using the org.apache.commons.httpclient.HttpClient API. This class is available in commons-httpclient-3.1.jar. Executing a web script with the HttpClient API also requires commons-logging-*.jar and commons-codec-*.jar as supporting JARs. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. You will need to include them in the build path for your project. We will try to execute the hello world web script using HttpClient from a standalone Java program. While using HttpClient, here are the steps in general you need to follow:

Create a new instance of HttpClient.
Create an instance of a method (we will use GetMethod). The URL needs to be passed in the constructor of the method.
Set any arguments if required.
Provide the authentication details if required.
Ask HttpClient to execute the method.
Read the response status code and response.
Finally, release the connection.

Understanding how to invoke a web script using HttpClient

Let's take a look at the following code snippet considering the previously mentioned steps. In order to test this, you can create a standalone Java program with a main method, put the following code snippet in the Java program, and then modify the web script URLs/credentials as required. Comments are provided in the following code snippet for you to easily correlate the previous steps with the code:

// Create a new instance of HttpClient
HttpClient objHttpClient = new HttpClient();

// Create a new method instance as required. Here it is GetMethod.
GetMethod objGetMethod = new GetMethod("http://localhost:8080/alfresco/service/helloworld");

// Set querystring parameters if required.
objGetMethod.setQueryString(new NameValuePair[] {
  new NameValuePair("name", "Ramesh")});

// Set the credentials if authentication is required.
Credentials defaultcreds = new UsernamePasswordCredentials("admin", "admin");
objHttpClient.getState().setCredentials(new AuthScope("localhost", 8080,
  AuthScope.ANY_REALM), defaultcreds);

try {
  // Now, execute the method using HttpClient.
  int statusCode = objHttpClient.executeMethod(objGetMethod);
  if (statusCode != HttpStatus.SC_OK) {
    System.err.println("Method invocation failed: " + objGetMethod.getStatusLine());
  }
  // Read the response body.
  byte[] responseBody = objGetMethod.getResponseBody();
  // Print the response body.
  System.out.println(new String(responseBody));
} catch (HttpException e) {
  System.err.println("Http exception: " + e.getMessage());
  e.printStackTrace();
} catch (IOException e) {
  System.err.println("IO exception transport error: " + e.getMessage());
  e.printStackTrace();
} finally {
  // Release the method connection.
  objGetMethod.releaseConnection();
}

Note that the Apache commons client is a legacy project now and is not being developed anymore. This project has been replaced by the Apache HttpComponents project in its HttpClient and HttpCore modules. We have used HttpClient from the Apache commons client here to get an overall understanding. Some of the other options that you can use to invoke web scripts from a Java program are mentioned in subsequent sections.

URLConnection

One option to execute a web script from a Java program is by using java.net.URLConnection. For more details, you can refer to http://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html.

Apache HTTP components

Another option to execute a web script from a Java program is to use Apache HTTP components, which are the latest available APIs for HTTP communication.
These components offer better performance and more flexibility and are available in httpclient-*.jar and httpcore-*.jar. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. For more details, refer to https://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html to get an understanding of how to execute HTTP calls from a Java program.

RestTemplate

Another alternative would be to use org.springframework.web.client.RestTemplate, available in org.springframework.web-*.jar located at tomcat/webapps/alfresco/WEB-INF/lib inside your Alfresco installation directory. If you are using Alfresco community 5, the RestTemplate class is available in spring-web-*.jar. Generally, RestTemplate is used in Spring-based services to invoke an HTTP communication.

Calling web scripts from Spring-based services

If you need to invoke an Alfresco web script from Spring-based services, then you need to use RestTemplate to invoke HTTP calls. This is the most commonly used technique to execute HTTP calls from Spring-based classes. In order to do this, the following are the steps to be performed. The code snippets are also provided:

Define RestTemplate in your Spring context file:

<bean id="restTemplate" class="org.springframework.web.client.RestTemplate" />

In the Spring context file, inject restTemplate into your Spring class as shown in the following example (note the ref attribute, which references the restTemplate bean):

<bean id="httpCommService" class="com.test.HTTPCallService">
  <property name="restTemplate" ref="restTemplate" />
</bean>

In the Java class, define the setter method for restTemplate as follows:

private RestTemplate restTemplate;

public void setRestTemplate(RestTemplate restTemplate) {
  this.restTemplate = restTemplate;
}

In order to invoke a web script that has an authentication level set as user authentication, you can use RestTemplate in your Java class as shown in the following code snippet.
The following code snippet is an example of invoking the hello world web script using RestTemplate from a Spring-based service:

// Set up authentication
String plainCredentials = "admin:admin";
byte[] plainCredBytes = plainCredentials.getBytes();
byte[] base64CredBytes = Base64.encodeBase64(plainCredBytes);
String base64Credentials = new String(base64CredBytes);

// Set up request headers
HttpHeaders reqHeaders = new HttpHeaders();
reqHeaders.add("Authorization", "Basic " + base64Credentials);
HttpEntity<String> requestEntity = new HttpEntity<String>(reqHeaders);

// Execute method
ResponseEntity<String> responseEntity = restTemplate.exchange(
  "http://localhost:8080/alfresco/service/helloworld?name=Ramesh",
  HttpMethod.GET, requestEntity, String.class);
System.out.println("Response:" + responseEntity.getBody());

Invoking a web script from Alfresco Share

When working on customizing Alfresco Share, you will need to make calls to Alfresco repository web scripts. In Alfresco Share, you can invoke repository web scripts from two places. One is at the component level, via the presentation web scripts, and the other is client-side JavaScript.

Calling a web script from a presentation web script's JavaScript controller

Alfresco Share renders the user interface using presentation web scripts. These presentation web scripts make a call to the repository web script to render the repository data. The repository web script is called before the component rendering file (for example, get.html.ftl) loads. In an out-of-the-box Alfresco installation, you should be able to see the components' presentation web scripts available under tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts. When developing a custom component, you will be required to write a presentation web script. A presentation web script will make a call to the repository web script.
You can make a call to the repository web script as follows:

var response = remote.call("url of web script as defined in description document");
var obj = eval('(' + response + ')');

In the preceding code snippet, we have used the out-of-the-box remote object to make a repository web script call. The important thing to notice is that we have to provide the URL of the web script as defined in the description document. There is no need to provide the initial part, such as the host or port name, application name, and service path, the way we do while calling a web script from a web browser. Once the response is received, the web script response can be parsed with the use of the eval function. In the out-of-the-box code of Alfresco Share, you can find the presentation web scripts invoking the repository web scripts, as we have seen in the previous code snippet. For example, take a look at the main() method in the site-members.get.js file, which is available at the tomcat/webapps/share/components/site-members location inside your Alfresco installation directory. You can take a look at the other JavaScript controller implementations for out-of-the-box presentation web scripts, available at tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts, making repository web script calls using the previously mentioned technique. When specifying the path to provide references to the out-of-the-box web scripts, it is mentioned starting with tomcat/webapps. This location is available in your Alfresco installation directory.

Invoking a web script from client-side JavaScript

The client-side JavaScript control files can be associated with components in Alfresco Share. If you need to make a repository web script call, you can do this from the client-side JavaScript control files generally located at tomcat/webapps/share/components. There are different ways you can make a repository web script call using a YUI-based client-side JavaScript file.
The following are some of the ways to invoke a web script from client-side JavaScript files. References are also provided along with each of the ways to look in the Alfresco out-of-the-box implementation to understand its usage practically:

Alfresco.util.Ajax.request: Take a look at tomcat/webapps/share/components/console/groups.js and refer to the _removeUser function.
Alfresco.util.Ajax.jsonRequest: Take a look at tomcat/webapps/share/components/documentlibrary/documentlist.js and refer to the onOptionSelect function.
Alfresco.util.Ajax.jsonGet: To directly make a call to a GET web script, take a look at tomcat/webapps/share/components/console/groups.js and refer to the getParentGroups function.
YAHOO.util.Connect.asyncRequest: Take a look at tomcat/webapps/share/components/documentlibrary/tree.js and refer to the _sortNodeChildren function.

In alfresco.js, located at tomcat/webapps/share/js, a wrapper implementation of YAHOO.util.Connect.asyncRequest is provided, and the various methods we saw in the preceding list, such as Alfresco.util.Ajax.request, Alfresco.util.Ajax.jsonRequest, and Alfresco.util.Ajax.jsonGet, can be found in alfresco.js. Hence, the first three options in the previous list internally make a call using YAHOO.util.Connect.asyncRequest (the last option in the previous list) only.

Calling a web script from the command line

Sometimes while working on your project, it might be required that you invoke a web script from a Linux machine or create a shell script to invoke a web script. It is possible to invoke a web script from the command line using cURL, which is a valuable tool to use while working on web scripts. You can install cURL on Linux, Mac, or Windows and execute a web script from the command line. Refer to http://curl.haxx.se/ for more details on cURL. You will be required to install cURL first. On Linux, you can install cURL using apt-get.
On Mac, you should be able to install cURL through MacPorts, and on Windows, you can install cURL using Cygwin. Once cURL is installed, you can invoke a web script from the command line as follows:

curl -u admin:admin "http://localhost:8080/alfresco/service/helloworld?name=Ramesh"

This will display the web script response.

DeclarativeWebScript versus AbstractWebScript

The web script framework in Alfresco provides two different helper classes from which the Java-backed controller can be derived. It's important to understand the difference between them. The first helper class is the one we used while developing the web script in this article, org.springframework.extensions.webscripts.DeclarativeWebScript. The second one is org.springframework.extensions.webscripts.AbstractWebScript. DeclarativeWebScript in turn extends the AbstractWebScript class. If the Java-backed controller is derived from DeclarativeWebScript, then execution assistance is provided by the DeclarativeWebScript class. This helper class basically encapsulates the execution of the web script and checks whether any controller written in JavaScript is associated with the web script. If any JavaScript controller is found for the web script, then this helper class will execute it. This class will locate the associated response template of the web script for the requested format and will pass the populated model object to the response template. For a controller extending DeclarativeWebScript, the controller logic for a web script should be provided in the Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) method. Most of the time while developing a Java-backed web script, the controller will extend DeclarativeWebScript only. AbstractWebScript does not provide execution assistance in the way DeclarativeWebScript does. It gives full control over the entire execution process to the derived class and allows the extending class to decide how the output is to be rendered.
One good example of this is the DeclarativeWebScript class itself. It extends the AbstractWebScript class and provides a mechanism to render the response using FTL templates. In a scenario like streaming content, there won't be any need for a response template; instead, the content itself needs to be rendered directly. In this case, the Java-backed controller class can extend from AbstractWebScript. If a web script has both a JavaScript-based controller and a Java-backed controller, then:

If the Java-backed controller is derived from DeclarativeWebScript, then the Java-backed controller will get executed first and control will be passed to the JavaScript-backed controller prior to returning the model object to the response template.
If the Java-backed controller is derived from AbstractWebScript, then only the Java-backed controller will be executed. The JavaScript controller will not get executed.

Summary

In this article, we took a look at the reasons for using web scripts. Then we executed a web script from a standalone Java program and moved on to invoking a web script from Alfresco Share. Lastly, we saw the difference between DeclarativeWebScript and AbstractWebScript.

Resources for Article:

Further resources on this subject:

Alfresco 3 Business Solutions: Types of E-mail Integration [article]
Alfresco 3: Writing and Executing Scripts [article]
Overview of REST Concepts and Developing your First Web Script using Alfresco [article]
article-image-postmodel-workflow
Packt
04 Nov 2014
23 min read

Postmodel Workflow

Packt
This article, written by Trent Hauck, the author of scikit-learn Cookbook, Packt Publishing, will cover the following recipes:

K-fold cross validation
Automatic cross validation
Cross validation with ShuffleSplit
Stratified k-fold
Poor man's grid search
Brute force grid search
Using dummy estimators to compare results

(For more resources related to this topic, see here.)

Even though, by design, the articles are unordered, you could argue that, by virtue of the art of data science, we've saved the best for last. For the most part, each recipe within this article is applicable to the various models we've worked with. In some ways, you can think about this article as tuning the parameters and features. Ultimately, we need to choose some criteria to determine the "best" model. We'll use various measures to define best. Then, in the Cross validation with ShuffleSplit recipe, we will randomize the evaluation across subsets of the data to help avoid overfitting.

K-fold cross validation

In this recipe, we'll create, quite possibly, the most important post-model validation exercise: cross validation. We'll talk about k-fold cross validation in this recipe. There are several varieties of cross validation, each with slightly different randomization schemes. K-fold is perhaps one of the most well-known randomization schemes.

Getting ready

We'll create some data and then fit a classifier on the different folds. It's probably worth mentioning that if you can keep a holdout set, then that would be best. For example, say we have a dataset where N = 1000. We hold out 200 data points, then use cross validation on the other 800 points to determine the best parameters.

How to do it...
First, we'll create some fake data, then we'll examine the parameters, and finally, we'll look at the size of the resulting dataset:

>>> N = 1000
>>> holdout = 200
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(1000, shuffle=True)

Now that we have the data, let's hold out 200 points, and then go through the fold scheme like we normally would:

>>> X_h, y_h = X[:holdout], y[:holdout]
>>> X_t, y_t = X[holdout:], y[holdout:]
>>> from sklearn.cross_validation import KFold

K-fold gives us the option of choosing how many folds we want, whether we want the values to be indices or Booleans, whether we want to shuffle the dataset, and finally, the random state (this is mainly for reproducibility). Indices will actually be removed in later versions; it's assumed to be True. Let's create the cross validation object:

>>> kfold = KFold(len(y_t), n_folds=4)

Now, we can iterate through the k-fold object:

>>> output_string = "Fold: {}, N_train: {}, N_test: {}"
>>> for i, (train, test) in enumerate(kfold):
        print output_string.format(i, len(y_t[train]), len(y_t[test]))

Fold: 0, N_train: 600, N_test: 200
Fold: 1, N_train: 600, N_test: 200
Fold: 2, N_train: 600, N_test: 200
Fold: 3, N_train: 600, N_test: 200

Each iteration should return the same split size.

How it works...

It's probably clear, but k-fold works by iterating through the folds and holding out 1/n_folds * N, where N for us was len(y_t). From a Python perspective, the cross validation objects have an iterator that can be accessed by using the in operator. Oftentimes, it's useful to write a wrapper around a cross validation object that will iterate a subset of the data. For example, we may have a dataset that has repeated measures for data points, or we may have a dataset with patients, each patient having measures.
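The patient scenario just described, where all of a patient's rows must land entirely in train or entirely in test, can be sketched in a few lines of plain Python. This is illustrative only, not scikit-learn's implementation; newer scikit-learn versions (0.18+) ship this behavior as model_selection.GroupKFold:

```python
# Sketch of group-wise splitting: folds are made over unique group ids so
# that all rows for one patient land entirely in train or entirely in test.
# Illustrative only; scikit-learn >= 0.18 provides GroupKFold for this.
def group_kfold(groups, n_folds):
    """Yield (train_row_idx, test_row_idx), keeping each group intact."""
    unique = sorted(set(groups))
    for fold in range(n_folds):
        test_groups = set(unique[fold::n_folds])  # every n_folds-th group
        train, test = [], []
        for row, g in enumerate(groups):
            (test if g in test_groups else train).append(row)
        yield train, test

# 100 patients with 8 measurements each, as in the recipe:
groups = [pid for pid in range(100) for _ in range(8)]
for i, (train, test) in enumerate(group_kfold(groups, 4)):
    print("Fold: {}, N_train: {}, N_test: {}".format(i, len(train), len(test)))
# Each fold prints: N_train: 600, N_test: 200
```

Because folds are drawn over the 100 patient ids rather than the 800 rows, no patient's measurements ever leak from train into test.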
We're going to mix it up and use pandas for this part:

>>> import numpy as np
>>> import pandas as pd
>>> patients = np.repeat(np.arange(0, 100, dtype=np.int8), 8)
>>> measurements = pd.DataFrame({'patient_id': patients,
                   'ys': np.random.normal(0, 1, 800)})

Now that we have the data, we only want to hold out certain customers instead of data points:

>>> custids = np.unique(measurements.patient_id)
>>> customer_kfold = KFold(custids.size, n_folds=4)
>>> output_string = "Fold: {}, N_train: {}, N_test: {}"
>>> for i, (train, test) in enumerate(customer_kfold):
       train_cust_ids = custids[train]
       training = measurements[measurements.patient_id.isin(
                 train_cust_ids)]
       testing = measurements[~measurements.patient_id.isin(
                 train_cust_ids)]
       print output_string.format(i, len(training), len(testing))
Fold: 0, N_train: 600, N_test: 200
Fold: 1, N_train: 600, N_test: 200
Fold: 2, N_train: 600, N_test: 200
Fold: 3, N_train: 600, N_test: 200

Automatic cross validation

We've looked at using the cross validation iterators that scikit-learn comes with, but we can also use a helper function to perform cross validation for us automatically. This is similar to how other objects in scikit-learn are wrapped by helper functions, pipeline for instance.

Getting ready

First, we'll need to create a sample classifier; this can really be anything, a decision tree, a random forest, whatever. For us, it'll be a random forest. We'll then create a dataset and use the cross validation functions.

How to do it...
First, import the ensemble module and we'll get started:

>>> from sklearn import ensemble
>>> rf = ensemble.RandomForestRegressor(max_features='auto')

Okay, so now, let's create some regression data:

>>> from sklearn import datasets
>>> X, y = datasets.make_regression(10000, 10)

Now that we have the data, we can import the cross_validation module and get access to the functions we'll use:

>>> from sklearn import cross_validation
>>> scores = cross_validation.cross_val_score(rf, X, y)
>>> print scores
[ 0.86823874 0.86763225 0.86986129]

How it works...

For the most part, this will delegate to the cross validation objects. One nice thing is that the function will handle performing the cross validation in parallel. We can activate the verbose mode to get a play-by-play:

>>> scores = cross_validation.cross_val_score(rf, X, y, verbose=3,
             cv=4)
[CV] no parameters to be set
[CV] no parameters to be set, score=0.872866 - 0.7s
[CV] no parameters to be set
[CV] no parameters to be set, score=0.873679 - 0.6s
[CV] no parameters to be set
[CV] no parameters to be set, score=0.878018 - 0.7s
[CV] no parameters to be set
[CV] no parameters to be set, score=0.871598 - 0.6s
[Parallel(n_jobs=1)]: Done 1 jobs | elapsed: 0.7s
[Parallel(n_jobs=1)]: Done 4 out of 4 | elapsed: 2.6s finished

As we can see, during each iteration, we scored the function. We also get an idea of how long the model runs. It's also worth knowing that we can choose the scoring function based on the kind of model we're trying to fit.

Cross validation with ShuffleSplit

ShuffleSplit is one of the simplest cross validation techniques. This cross validation technique will simply take a sample of the data for the number of iterations specified.

Getting ready

ShuffleSplit is another cross validation technique that is very simple. We'll specify the total elements in the dataset, and it will take care of the rest. We'll walk through an example of estimating the mean of a univariate dataset.
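As a plain-Python preview of that idea (all names below are our own, and the exact numbers differ from the recipe's), we can repeatedly sample half of a dataset, estimate the mean on each sample, and pool the per-sample estimates:

```python
import random

# Preview of the ShuffleSplit idea: draw several random subsets,
# estimate the mean on each, and average the estimates. This is an
# illustrative sketch, not the scikit-learn implementation.
random.seed(42)
true_loc, true_scale, N = 1000, 10, 1000
dataset = [random.gauss(true_loc, true_scale) for _ in range(N)]

estimates = []
for _ in range(10):
    subset = random.sample(dataset, N // 2)      # a random half of the data
    estimates.append(sum(subset) / float(len(subset)))
pooled = sum(estimates) / float(len(estimates))  # close to true_loc
```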
This is somewhat similar to resampling, but it'll illustrate one reason why we want to use cross validation while showing cross validation. How to do it... First, we need to create the dataset. We'll use NumPy to create a dataset, where we know the underlying mean. We'll sample half of the dataset to estimate the mean and see how close it is to the underlying mean: >>> import numpy as np>>> true_loc = 1000>>> true_scale = 10>>> N = 1000>>> dataset = np.random.normal(true_loc, true_scale, N)>>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.hist(dataset, color='k', alpha=.65, histtype='stepfilled');>>> ax.set_title("Histogram of dataset");>>> f.savefig("978-1-78398-948-5_06_06.png") NumPy will give the following output: Now, let's take the first half of the data and guess the mean: >>> from sklearn import cross_validation>>> holdout_set = dataset[:500]>>> fitting_set = dataset[500:]>>> estimate = fitting_set[:N/2].mean()>>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.set_title("True Mean vs Regular Estimate")>>> ax.vlines(true_loc, 0, 1, color='r', linestyles='-', lw=5,             alpha=.65, label='true mean')>>> ax.vlines(estimate, 0, 1, color='g', linestyles='-', lw=5,             alpha=.65, label='regular estimate')>>> ax.set_xlim(999, 1001)>>> ax.legend()>>> f.savefig("978-1-78398-948-5_06_07.png") We'll get the following output: Now, we can use ShuffleSplit to fit the estimator on several smaller datasets: >>> from sklearn.cross_validation import ShuffleSplit>>> shuffle_split = ShuffleSplit(len(fitting_set))>>> mean_p = []>>> for train, _ in shuffle_split:       mean_p.append(fitting_set[train].mean())       shuf_estimate = np.mean(mean_p)>>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.vlines(true_loc, 0, 1, color='r', linestyles='-', lw=5,             alpha=.65, label='true mean')>>> ax.vlines(estimate, 0, 1, color='g', linestyles='-', lw=5,             
alpha=.65, label='regular estimate')
>>> ax.vlines(shuf_estimate, 0, 1, color='b', linestyles='-', lw=5,
             alpha=.65, label='shufflesplit estimate')
>>> ax.set_title("All Estimates")
>>> ax.set_xlim(999, 1001)
>>> ax.legend(loc=3)

The output will be as follows:

As we can see, we got an estimate that was similar to what we expected, but we were able to take many samples to get that estimate.

Stratified k-fold

In this recipe, we'll quickly look at stratified k-fold validation. We've walked through different recipes where the class representation was unbalanced in some manner. Stratified k-fold is nice because its scheme is specifically designed to maintain the class proportions.

Getting ready

We're going to create a small dataset. In this dataset, we will then use stratified k-fold validation. We want it small so that we can see the variation. For larger samples, it probably won't be as big of a deal. We'll then plot the class proportions at each step to illustrate how the class proportions are maintained:

>>> from sklearn import datasets
>>> X, y = datasets.make_classification(n_samples=int(1e3),
           weights=[1./11])

Let's check the overall class weight distribution:

>>> y.mean()
0.90300000000000002

Roughly 90.3 percent of the samples are 1, with the balance 0.

How to do it...

Let's create a stratified k-fold object and iterate through each fold. We'll measure the proportion of y values that are 1. After that, we'll plot the proportion of classes by the split number to see how and if it changes. This code will hopefully illustrate how this is beneficial.
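The stratification scheme itself can be sketched in plain Python before the plotting code; the stratified_folds helper below is our own illustration, not the scikit-learn implementation. The idea is to split each class's indices across the folds separately, so every fold inherits the overall class proportions:

```python
def stratified_folds(labels, n_folds):
    # Distribute each class's indices round-robin across the folds, so
    # every fold keeps roughly the overall class proportions.
    folds = [[] for _ in range(n_folds)]
    for label in sorted(set(labels)):
        idx = [i for i, v in enumerate(labels) if v == label]
        for k, i in enumerate(idx):
            folds[k % n_folds].append(i)
    return folds

labels = [1] * 90 + [0] * 10          # ~90 percent of the samples are 1
for fold in stratified_folds(labels, 5):
    proportion = sum(labels[i] for i in fold) / float(len(fold))
```

With 90 positives and 10 negatives split over 5 folds, every fold receives 18 positives and 2 negatives, so the per-fold proportion stays at exactly 0.9.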
We'll also plot this code against a basic ShuffleSplit: >>> from sklearn import cross_validation>>> n_folds = 50>>> strat_kfold = cross_validation.StratifiedKFold(y,                 n_folds=n_folds)>>> shuff_split = cross_validation.ShuffleSplit(n=len(y),                 n_iter=n_folds)>>> kfold_y_props = []>>> shuff_y_props = []>>> for (k_train, k_test), (s_train, s_test) in zip(strat_kfold,         shuff_split):        kfold_y_props.append(y[k_train].mean())       shuff_y_props.append(y[s_train].mean()) Now, let's plot the proportions over each fold: >>> import matplotlib.pyplot as plt>>> f, ax = plt.subplots(figsize=(7, 5))>>> ax.plot(range(n_folds), shuff_y_props, label="ShuffleSplit",           color='k')>>> ax.plot(range(n_folds), kfold_y_props, label="Stratified",           color='k', ls='--')>>> ax.set_title("Comparing class proportions.")>>> ax.legend(loc='best') The output will be as follows: We can see that the proportion of each fold for stratified k-fold is stable across folds. How it works... Stratified k-fold works by taking the y value. First, getting the overall proportion of the classes, then intelligently splitting the training and test set into the proportions. This will generalize to multiple labels: >>> import numpy as np>>> three_classes = np.random.choice([1,2,3], p=[.1, .4, .5],                   size=1000)>>> import itertools as it>>> for train, test in cross_validation.StratifiedKFold(three_classes, 5):       print np.bincount(three_classes[train])[ 0 90 314 395][ 0 90 314 395][ 0 90 314 395][ 0 91 315 395][ 0 91 315 396] As we can see, we got roughly the sample sizes of each class for our training and testing proportions. Poor man's grid search In this recipe, we're going to introduce grid search with basic Python, though we will use sklearn for the models and matplotlib for the visualization. 
Getting ready

In this recipe, we will perform the following tasks:

Design a basic search grid in the parameter space
Iterate through the grid and check the loss/score function at each point in the parameter space for the dataset
Choose the point in the parameter space that minimizes/maximizes the evaluation function

Also, the model we'll fit is a basic decision tree classifier. Our parameter space will be 2-dimensional to help us with the visualization: the criteria, {gini, entropy}, and the maximum features, {auto, log2, None}. The parameter space will then be the Cartesian product of those two sets. We'll see in a bit how we can iterate through this space with itertools. Let's create the dataset and then get started:

>>> from sklearn import datasets
>>> X, y = datasets.make_classification(n_samples=2000, n_features=10)

How to do it...

Earlier we said that we'd use grid search to tune two parameters, criteria and max_features. We need to represent those as Python sets, and then use itertools product to iterate through them:

>>> criteria = {'gini', 'entropy'}
>>> max_features = {'auto', 'log2', None}
>>> import itertools as it
>>> parameter_space = it.product(criteria, max_features)

Great! So now that we have the parameter space, let's iterate through it and check the accuracy of each model as specified by the parameters. Then, we'll store that accuracy so that we can compare different parameter spaces.
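Before fitting real models, the search loop itself can be sketched with a deterministic stand-in for the scoring step. The toy_score function below is entirely made up; it only stands in for "fit a model with these parameters and measure held-out accuracy", so its numbers carry no meaning beyond illustrating the mechanics:

```python
import itertools as it

criteria = ['gini', 'entropy']
max_features = ['auto', 'log2', None]

def toy_score(criterion, max_feature):
    # Hypothetical stand-in for fitting and scoring a model.
    return len(criterion) * 0.1 + (0.05 if max_feature is None else 0.0)

# Score every point in the Cartesian product and keep the best one.
scores = {(c, m): toy_score(c, m)
          for c, m in it.product(criteria, max_features)}
best_params = max(scores, key=scores.get)   # the highest-scoring pair
```

Swapping toy_score for a real fit-and-score routine gives exactly the loop used in the recipe.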
We'll also use a test and train split of 50, 50:

import numpy as np
train_set = np.random.choice([True, False], size=len(y))
from sklearn.tree import DecisionTreeClassifier
accuracies = {}
for criterion, max_feature in parameter_space:
   dt = DecisionTreeClassifier(criterion=criterion,
         max_features=max_feature)
   dt.fit(X[train_set], y[train_set])
   accuracies[(criterion, max_feature)] = (dt.predict(X[~train_set])
                                         == y[~train_set]).mean()

>>> accuracies
{('entropy', None): 0.974609375, ('entropy', 'auto'): 0.9736328125,
('entropy', 'log2'): 0.962890625, ('gini', None): 0.9677734375,
('gini', 'auto'): 0.9638671875, ('gini', 'log2'): 0.96875}

So we now have the accuracies and their performance. Let's visualize the performance:

>>> from matplotlib import pyplot as plt
>>> from matplotlib import cm
>>> cmap = cm.RdBu_r
>>> f, ax = plt.subplots(figsize=(7, 4))
>>> ax.set_xticklabels([''] + list(criteria))
>>> ax.set_yticklabels([''] + list(max_features))
>>> plot_array = []
>>> for max_feature in max_features:
       m = []
       for criterion in criteria:
           m.append(accuracies[(criterion, max_feature)])
       plot_array.append(m)
>>> colors = ax.matshow(plot_array, vmin=np.min(accuracies.values())
             - 0.001, vmax=np.max(accuracies.values()) + 0.001,
             cmap=cmap)
>>> f.colorbar(colors)

The following is the output:

It's fairly easy to see which one performed best here. Hopefully, you can see how this process can be taken further with a brute force method.

How it works...

This works fairly simply; we just have to perform the following steps:

Choose a set of parameters.
Iterate through them and find the accuracy of each step.
Find the best performer by visual inspection.

Brute force grid search

In this recipe, we'll do an exhaustive grid search through scikit-learn. This is basically the same thing we did in the previous recipe, but we'll utilize built-in methods.
We'll also walk through an example of performing randomized optimization. This is an alternative to brute force search. Essentially, we're trading computer cycles to make sure that we search the entire space. We were fairly calm in the last recipe. However, you could imagine a model that has several steps: first, imputation to fix missing data, then PCA to reduce the dimensionality, and then classification. Your parameter space could get very large, very fast; therefore, it can be advantageous to only search a part of that space.

Getting ready

To get started, we'll need to perform the following steps:

Create some classification data.
We'll then create a LogisticRegression object that will be the model we're fitting.
After that, we'll create the search objects, GridSearch and RandomizedSearchCV.

How to do it...

Run the following code to create some classification data:

>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(1000, n_features=5)

Now, we'll create our logistic regression object:

>>> from sklearn.linear_model import LogisticRegression
>>> lr = LogisticRegression(class_weight='auto')

We need to specify the parameters we want to search. For GridSearch, we can just specify the ranges that we care about, but for RandomizedSearchCV, we'll need to actually specify the distribution over the same space from which to sample:

>>> lr.fit(X, y)
LogisticRegression(C=1.0, class_weight={0: 0.25, 1: 0.75},
                   dual=False, fit_intercept=True,
                   intercept_scaling=1, penalty='l2',
                   random_state=None, tol=0.0001)
>>> grid_search_params = {'penalty': ['l1', 'l2'],
                          'C': [1, 2, 3, 4]}

The only change we'll need to make is to describe the C parameter as a probability distribution. We'll keep it simple right now, though we will use scipy to describe the distribution:

>>> import scipy.stats as st
>>> import numpy as np
>>> random_search_params = {'penalty': ['l1', 'l2'],
                            'C': st.randint(1, 4)}

How it works...
Now, we'll fit the classifier. This works by passing lr to the parameter search objects: >>> from sklearn.grid_search import GridSearchCV, RandomizedSearchCV>>> gs = GridSearchCV(lr, grid_search_params) GridSearchCV implements the same API as the other models: >>> gs.fit(X, y)GridSearchCV(cv=None, estimator=LogisticRegression(C=1.0,             class_weight='auto', dual=False, fit_intercept=True,             intercept_scaling=1, penalty='l2', random_state=None,             tol=0.0001), fit_params={}, iid=True, loss_func=None,             n_jobs=1, param_grid={'penalty': ['l1', 'l2'], 'C':             [1, 2, 3, 4]}, pre_dispatch='2*n_jobs', refit=True,             score_func=None, scoring=None, verbose=0) As we can see with the param_grid parameter, our penalty and C are both arrays. To access the scores, we can use the grid_scores_ attribute of the grid search. We also want to find the optimal set of parameters. We can also look at the marginal performance of the grid search: >>> gs.grid_scores_[mean: 0.90300, std: 0.01192, params: {'penalty': 'l1', 'C': 1},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 1},mean: 0.90200, std: 0.01117, params: {'penalty': 'l1', 'C': 2},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 2},mean: 0.90200, std: 0.01117, params: {'penalty': 'l1', 'C': 3},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 3},mean: 0.90100, std: 0.01258, params: {'penalty': 'l1', 'C': 4},mean: 0.90100, std: 0.01258, params: {'penalty': 'l2', 'C': 4}] We might want to get the max score: >>> gs.grid_scores_[1][1]0.90100000000000002>>> max(gs.grid_scores_, key=lambda x: x[1])mean: 0.90300, std: 0.01192, params: {'penalty': 'l1', 'C': 1} The parameters obtained are the best choices for our logistic regression. Using dummy estimators to compare results This recipe is about creating fake estimators; this isn't the pretty or exciting stuff, but it is worthwhile to have a reference point for the model you'll eventually build. 
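As a quick preview of why such a reference point matters, the strongest trivial baseline on imbalanced labels is simply the majority class. This plain-Python sketch (our own variable names) computes that baseline directly:

```python
from collections import Counter

# The "most frequent" baseline: always predict the majority class and
# measure the accuracy that buys us. With a 5 percent minority class,
# the do-nothing baseline is already 95 percent accurate.
labels = [0] * 95 + [1] * 5
majority = Counter(labels).most_common(1)[0][0]
baseline_accuracy = sum(1 for v in labels if v == majority) / float(len(labels))
```

Any real model that cannot beat baseline_accuracy here is not adding value, which is exactly the comparison the dummy estimators formalize.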
Getting ready

In this recipe, we'll perform the following tasks:

Create some random data.
Fit the various dummy estimators.

We'll perform these two steps for regression data and classification data.

How to do it...

First, we'll create the random data:

>>> from sklearn.datasets import make_regression, make_classification
# classification is for later
>>> X, y = make_regression()
>>> from sklearn import dummy
>>> dumdum = dummy.DummyRegressor()
>>> dumdum.fit(X, y)
DummyRegressor(constant=None, strategy='mean')

By default, the estimator will predict by just taking the mean of the values and predicting the mean values:

>>> dumdum.predict(X)[:5]
array([ 2.23297907, 2.23297907, 2.23297907, 2.23297907, 2.23297907])

There are two other strategies we can try. We can predict a supplied constant (refer to constant=None in the preceding command). We can also predict the median value. Supplying a constant will only be considered if strategy is "constant". Let's have a look:

>>> predictors = [("mean", None),
                 ("median", None),
                 ("constant", 10)]
>>> for strategy, constant in predictors:
       dumdum = dummy.DummyRegressor(strategy=strategy,
                 constant=constant)
       dumdum.fit(X, y)
       print "strategy: {}".format(strategy), ",".join(map(str,
             dumdum.predict(X)[:5]))
strategy: mean 2.23297906733,2.23297906733,2.23297906733,2.23297906733,2.23297906733
strategy: median 20.38535248,20.38535248,20.38535248,20.38535248,20.38535248
strategy: constant 10.0,10.0,10.0,10.0,10.0

We actually have four options for classifiers.
These strategies are similar to the continuous case, it's just slanted toward classification problems: >>> predictors = [("constant", 0),                 ("stratified", None),                 ("uniform", None),                 ("most_frequent", None)] We'll also need to create some classification data: >>> X, y = make_classification()>>> for strategy, constant in predictors:       dumdum = dummy.DummyClassifier(strategy=strategy,                 constant=constant)       dumdum.fit(X, y)       print "strategy: {}".format(strategy), ",".join(map(str,             dumdum.predict(X)[:5]))strategy: constant 0,0,0,0,0strategy: stratified 1,0,0,1,0strategy: uniform 0,0,0,1,1strategy: most_frequent 1,1,1,1,1 How it works... It's always good to test your models against the simplest models and that's exactly what the dummy estimators give you. For example, imagine a fraud model. In this model, only 5 percent of the data set is fraud. Therefore, we can probably fit a pretty good model just by never guessing any fraud. We can create this model by using the stratified strategy, using the following command. We can also get a good example of why class imbalance causes problems: >>> X, y = make_classification(20000, weights=[.95, .05])>>> dumdum = dummy.DummyClassifier(strategy='most_frequent')>>> dumdum.fit(X, y)DummyClassifier(constant=None, random_state=None, strategy='most_frequent')>>> from sklearn.metrics import accuracy_score>>> print accuracy_score(y, dumdum.predict(X))0.94575 We were actually correct very often, but that's not the point. The point is that this is our baseline. If we cannot create a model for fraud that is more accurate than this, then it isn't worth our time. Summary This article taught us how we can take a basic model produced from one of the recipes and tune it so that we can achieve better results than we could with the basic model. 
Resources for Article: Further resources on this subject: Specialized Machine Learning Topics [article] Machine Learning in IPython with scikit-learn [article] Our First Machine Learning Method – Linear Classification [article]
Packt
31 Oct 2014
36 min read

Creating Our First Animation in AngularJS

In this article by Richard Keller, author of the book Learning AngularJS Animations, we will learn how to apply CSS animations within the context of AngularJS by creating animations using CSS transitions and CSS keyframe animations that are integrated with AngularJS native directives using the ngAnimate module. In this article, we will learn:

The ngAnimate module setup and usage
AngularJS directives with support for out-of-the-box animation
AngularJS animations with the CSS transition
AngularJS animations with CSS keyframe animations
The naming convention of the CSS animation classes
Animation of the ngMessage and ngMessages directives

(For more resources related to this topic, see here.)

The ngAnimate module setup and usage

AngularJS is a module-based framework; if we want our AngularJS application to have the animation feature, we need to add the animation module (ngAnimate). We have to include this module in the application by adding the module as a dependency in our AngularJS application. However, before that, we should include the JavaScript angular-animate.js file in HTML. Both files are available on the Google content distribution network (CDN), Bower, Google Code, and https://angularjs.org/. The Google developers' CDN hosts many versions of AngularJS, as listed here: https://developers.google.com/speed/libraries/devguide#angularjs

Currently, AngularJS Version 1.3 is the latest stable version, so we will use AngularJS Version 1.3.0 in all sample files of this book; we can get them from https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js and https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js. You might want to use Bower. To do so, check out this great video article at https://thinkster.io/egghead/intro-to-bower/, explaining how to use Bower to get AngularJS. We include the JavaScript files of AngularJS and the ngAnimate module, and then we include the ngAnimate module as a dependency of our app.
This is shown in the following sample, using the Google CDN and the minified versions of both files:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
<title>AngularJS animation installation</title>
</head>
<body>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
<script>
   var app = angular.module('myApp', ['ngAnimate']);
</script>
</body>
</html>

Here, we already have an AngularJS web app configured to use animations. Now, we will learn how to animate using AngularJS directives.

AngularJS directives with native support for animations

AngularJS has the purpose of changing the way web developers and designers manipulate the Document Object Model (DOM). We don't directly manipulate the DOM when developing controllers, services, and templates. AngularJS does all the DOM manipulation work for us. The only place where an application touches the DOM is within directives. For most DOM manipulation requirements, AngularJS already provides built-in directives that fit our needs. There are many important AngularJS directives that already have built-in support for animations, and they use the ngAnimate module. This is why this module is so useful; it allows us to use animations within AngularJS directives' DOM manipulation. This way, we don't have to replicate native directives by extending them just to add animation functionality. The ngAnimate module provides us a way to hook animations in between AngularJS directives' execution. It even allows us to hook on custom directives. As we are dealing with animations between DOM manipulations, we can have animations before and after an element is added to or removed from the DOM, after an element changes (by adding or removing classes), and before and after an element is moved in the DOM. These events are the moments when we might add animations.
Fade animations using AngularJS Now that we already know how to install a web app with the ngAnimate module enabled, let's create fade-in and fade-out animations to get started with AngularJS animations. We will use the same HTML from the installation topic and add a simple controller, just to change an ngShow directive model value and add a CSS transition. The ngShow directive shows or hides the given element based on the expression provided to the ng-show attribute. For this sample, we have a Toggle fade button that changes the ngShow model value, so we can see what happens when the element fades in and fades out from the DOM. The ngShow directive shows and hides an element by adding and removing the ng-hide class from the element that contains the directive, shown as follows: <!DOCTYPE html> <html ng-app="myApp"> <head> <title>AngularJS animation installation</title> </head> <body> <style type="text/css">    .firstSampleAnimation.ng-hide-add,    .firstSampleAnimation.ng-hide-remove {     -webkit-transition: 1s ease-in-out opacity;     transition: 1s ease-in-out opacity;     opacity: 1;  } .firstSampleAnimation.ng-hide { opacity: 0; } </style> <div> <div ng-controller="animationsCtrl"> <h1>ngShow animation</h1> <button ng-click="fadeAnimation = !fadeAnimation">Toggle fade</button> fadeAnimation value: {{fadeAnimation}} <div class="firstSampleAnimation" ng-show="fadeAnimation"> This element appears when the fadeAnimation model is true </div> </div> </div> <script src="//ajax.googleapis.com/ajax/libs/angularjs/ 1.3.0/angular.min.js"></script> <script src="//ajax.googleapis.com/ajax/libs/angularjs/ 1.3.0/angular-animate.min.js"></script> <script> var app = angular.module('myApp', ['ngAnimate']); app.controller('animationsCtrl', function ($scope) { $scope.fadeAnimation = false; }); </script> </body> </html> In the CSS code, we declared an opacity transition to elements with the firstAnimationSample and ng-hide-add classes, or elements with the firstAnimationSample 
and ng-hide-remove classes. We also added the firstAnimationSample class to the same element that has the ng-show directive attribute. The fadeAnimation model is initially false, so the element with the ngShow directive is initially hidden, as the ngShow directive adds the ng-hide class to the element to set the display property as none. When we first click on the Toggle fade button, the fadeAnimation model will become true. Then, the ngShow directive will remove the ng-hide class to display the element. But before that, the ngAnimate module knows there is a transition declared for this element. Because of that, the ngAnimate module will append the ng-hide-remove class to trigger the hide animation start. Then, ngAnimate will add the ng-hide-remove-active class that can contain the final state of the animation to the element and remove the ng-hide class at the same time. Both classes will last until the animation (1 second in this sample) finishes, and then they are removed. This is the fade-in animation; ngAnimate triggers animations by adding and removing the classes that contain the animations; this is why we say that AngularJS animations are class based. This is where the magic happens. All that we did to create this fade-in animation was declare a CSS transition with the class name, ng-hide-remove. This class name means that it's appended when the ng-hide class is removed. The fade-out animation will happen when we click on the Toggle fade button again, and then, the fadeAnimation model will become false. The ngShow directive will add the ng-hide class to remove the element, but before this, the ngAnimate module knows that there is a transition declared for that element too. The ngAnimate module will append the ng-hide-add class and then add the ng-hide and ng-hide-add-active classes to the element at the same time. 
Both classes will last until the animation (1 second in this sample) finishes, then they are removed, and only the ng-hide class is kept, to hide the element. The fade-out animation was created by just declaring the CSS transition with the class name of ng-hide-add. It is easy to understand that this class is appended to the element when the ng-hide class is about to be added. The AngularJS animations convention As this article is intended to teach you how to create animations with AngularJS, you need to know which directives already have built-in support for AngularJS animations to make our life easier. Here, we have a table of directives with the directive names and the events of the directive life cycle when animation hooks are supported. The first row means that the ngRepeat directive supports animation on enter, leave, and move event times. All events are relative to DOM manipulations, for example, when an element enters or leaves DOM, or when a class is added to or removed from an element. Directive Supported animations ngRepeat Enter, leave, and move ngView Enter and leave ngInclude Enter and leave ngSwitch Enter and leave ngIf Enter and leave ngClass Add and remove ngShow and ngHide Add and remove form and ngModel Add and remove ngMessages Add and remove ngMessage Enter and leave Perhaps, the more experienced AngularJS users have noticed that the most frequently used directives are attended in this list. This is great; it means that animating with AngularJS isn't hard for most use cases. AngularJS animation with CSS transitions We need to know how to bind the CSS animation as well as the AngularJS directives listed in the previous table. The ngIf directive, for example, has support for the enter and leave animations. When the value of the ngIf model is changed to true, it triggers the animation by adding the ng-enter class to the element just after the ngIf DOM element is created and injected. 
This triggers the animation, and the classes are kept until the transition ends. Then, the ng-enter class is removed. When the value of ngIf is changed to false, the ng-leave class is added to the element just before the ngIf content is removed from the DOM, and so, the animation is triggered while the element still exists. To illustrate the AngularJS ngIf directive and ngAnimate module behavior, let's see what happens in a sample. First, we have to declare a button that toggles the value of the fadeAnimation model, and one div tag that uses ng-if="fadeAnimation", so we can see what happens when the element is removed and added back. Here, we create the HTML code using the HTML template we used in the last topic to install the ngAnimate module:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
<title>AngularJS ngIf sample</title>
</head>
<body>
<style>
/* ngIf animation */
.animationIf.ng-enter,
.animationIf.ng-leave {
  -webkit-transition: opacity ease-in-out 1s;
  transition: opacity ease-in-out 1s;
}
.animationIf.ng-enter,
.animationIf.ng-leave.ng-leave-active {
  opacity: 0;
}
.animationIf.ng-leave,
.animationIf.ng-enter.ng-enter-active {
  opacity: 1;
}
</style>
<div ng-controller="animationsCtrl">
<h1>ngIf animation</h1>
<div>
fadeAnimation value: {{fadeAnimation}}
</div>
<button ng-click="fadeAnimation = !fadeAnimation">
Toggle fade</button>
<div ng-if="fadeAnimation" class="animationIf">
This element appears when the fadeAnimation model is true
</div>
</div>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
<script>
var app = angular.module('myApp', ['ngAnimate']);
app.controller('animationsCtrl', function ($scope) {
  $scope.fadeAnimation = false;
});
</script>
</body>
</html>

So, let's see what happens in the DOM just after we click on the Toggle fade button.
We will use Chrome Developer Tools (Chrome DevTools) to check the HTML in each animation step. It's a native tool that comes with the Chrome browser. To open Chrome DevTools, you just need to right-click on any part of the page and click on Inspect Element. The ng-enter class Our CSS declaration added an animation to the element with the animationIf and ng-enter classes. So, the transition is applied when the element has the ng-enter class too. This class is appended to the element when the element has just entered the DOM. It's important to add the specific class of the element you want to animate in the selector, which in this case is the animationIf class, because many other elements might trigger animation and add the ng-enter class too. We should be careful to use the specific target element class. Until the animation is completed, the resulting HTML fragment will be as follows: Consider the following snippet: <div ng-if="fadeAnimation" class="animationIf ng-scope ng-animate ng-enter ng-enter-active"> fadeAnimation value: true </div> We can see that the ng-animate, ng-enter, and ng-enter-active classes were added to the element. After the animation is completed, the DOM will have the animation classes removed as the next screenshot shows: As you can see, the animation classes are removed: <div ng-if="fadeAnimation" class="animationIf ng-scope"> This element appears when the fadeAnimation model is true </div> The ng-leave class We added the same transition of the ng-enter class to the element with the animationIf and ng-leave classes. The ng-leave class is added to the element before the element leaves the DOM. So, before the element vanishes, it will display the fade effect too. 
If we click on the Toggle fade button again, the leave animation is displayed and the following HTML fragment is rendered:

<div ng-if="fadeAnimation" class="animationIf ng-scope ng-animate ng-leave ng-leave-active">
  This element appears when the fadeAnimation model is true
</div>

We can notice that the ng-animate, ng-leave, and ng-leave-active classes were added to the element. Finally, after the element is removed from the DOM, the rendered result will be as follows:

<div ng-controller="animationsCtrl" class="ng-scope">
  <div class="ng-binding">
    fadeAnimation value: false
  </div>
  <button ng-click="fadeAnimation = !fadeAnimation">Toggle fade</button>
  <!-- ngIf: fadeAnimation -->
</div>

Furthermore, there are the ng-enter-active and ng-leave-active classes, which are appended to the element too. The -active classes define the destination CSS, so we can create a transition between the start and the end of an event. For example, ng-enter is the initial class of the enter event and ng-enter-active is the final class: ng-enter determines the style applied when the animation starts, and ng-enter-active the final style, displayed once the transition completes its cycle. A use case of the -active classes is when we want to set an initial color and a final color using a CSS transition. In the last sample, the ng-leave class has opacity set to 1 and the ng-leave-active class has opacity set to 0; so, the element fades away at the end of the animation.

Great, we just created our first animation using AngularJS and CSS transitions.

AngularJS animation with CSS keyframe animations

We created an animation using the ngIf directive and CSS transitions. Now we are going to create an animation using ngRepeat and CSS animations (keyframes).
As we saw in the earlier table on directives and the supported animation events, the ngRepeat directive supports animation on the enter, leave, and move events. We already used the enter and leave events in the last sample. The move event is triggered when an item is moved around on the list of items.

For this sample, we will create three functions on the controller scope: one to add elements to the list in order to execute the enter event, one to remove an item from the list in order to execute the leave event, and one to sort the elements so that we can see the move event. Here is the JavaScript with the functions; $scope.items is the array that we will use on the ngRepeat directive:

var app = angular.module('myApp', ['ngAnimate']);
app.controller('animationsCtrl', function ($scope) {
  $scope.items = [{ name: 'Richard' }, { name: 'Bruno' }, { name: 'Jobson' }];
  $scope.counter = 0;
  $scope.addItem = function () {
    var name = 'Item' + $scope.counter++;
    $scope.items.push({ name: name });
  };
  $scope.removeItem = function () {
    var length = $scope.items.length;
    var indexRemoved = Math.floor(Math.random() * length);
    $scope.items.splice(indexRemoved, 1);
  };
  $scope.sortItems = function () {
    $scope.items.sort(function (a, b) {
      // compare the name property of each item
      return a.name < b.name ? -1 : 1;
    });
  };
});

The HTML is as follows; it is without the CSS styles because we will see them later, separating each animation block:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngRepeat sample</title>
</head>
<body>
  <div ng-controller="animationsCtrl">
    <h1>ngRepeat Animation</h1>
    <div>
      <div ng-repeat="item in items" class="repeatItem">
        {{item.name}}
      </div>
      <button ng-click="addItem()">Add item</button>
      <button ng-click="removeItem()">Remove item</button>
      <button ng-click="sortItems()">Sort items</button>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
</body>
</html>

We will add an animation to the element with the repeatItem and ng-enter classes, and we will declare the from and to keyframes. So, when an element appears, it starts with opacity set to 0 and color set as red, and animates for 1 second until opacity is 1 and color is black. This is seen when an item is added to the ngRepeat array. The enter animation definition is declared as follows:

/* ngRepeat ng-enter animation */
.repeatItem.ng-enter {
  -webkit-animation: 1s ng-enter-repeat-animation;
  animation: 1s ng-enter-repeat-animation;
}
@-webkit-keyframes ng-enter-repeat-animation {
  from { opacity: 0; color: red; }
  to { opacity: 1; color: black; }
}
@keyframes ng-enter-repeat-animation {
  from { opacity: 0; color: red; }
  to { opacity: 1; color: black; }
}

The move animation, declared next, is triggered when we move an item of ngRepeat. We will add a keyframe animation to the element with the repeatItem and ng-move classes, and we will declare the from and to keyframes.
So, when an element moves, it starts with opacity set to 1 and color set as black, and animates for 1 second until opacity is 0.5 and color is blue, shown as follows:

/* ngRepeat ng-move animation */
.repeatItem.ng-move {
  -webkit-animation: 1s ng-move-repeat-animation;
  animation: 1s ng-move-repeat-animation;
}
@-webkit-keyframes ng-move-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0.5; color: blue; }
}
@keyframes ng-move-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0.5; color: blue; }
}

The leave animation is declared next and is triggered when we remove an item of ngRepeat. We will add a keyframe animation to the element with the repeatItem and ng-leave classes, declaring the from and to keyframes; so, when an element leaves the DOM, it starts with opacity set to 1 and color set as black, and animates for 1 second until opacity is 0 and color is red, shown as follows:

/* ngRepeat ng-leave animation */
.repeatItem.ng-leave {
  -webkit-animation: 1s ng-leave-repeat-animation;
  animation: 1s ng-leave-repeat-animation;
}
@-webkit-keyframes ng-leave-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0; color: red; }
}
@keyframes ng-leave-repeat-animation {
  from { opacity: 1; color: black; }
  to { opacity: 0; color: red; }
}

We can see that the ng-enter-active and ng-leave-active classes aren't used in this sample, as the keyframe animation already determines the initial and final property states. When we use CSS keyframes, the classes with the -active suffix are not needed, although for CSS transitions they are essential to set the animation destination.

The CSS naming convention

In the last few sections, we saw how to create animations using AngularJS, CSS transitions, and CSS keyframe animations.
Creating animations using both CSS transitions and CSS animations is very similar because all animations in AngularJS are class based, and AngularJS animations have a well-defined class name pattern. We must follow the CSS naming convention by adding a specific class to the directive element so that we can determine the element animation. Otherwise, the ngAnimate module will not be able to recognize which element the animation applies to. We already know that both ngIf and ngRepeat use the ng-enter, ng-enter-active, ng-leave, and ng-leave-active classes that are added to the element in the enter and leave events. It's the same naming convention used by the ngInclude, ngSwitch, ngMessage, and ngView directives. The ngHide and ngShow directives follow a different convention. They add the ng-hide-add and ng-hide-add-active classes when the element is going to be hidden. When the element is going to be shown, they add the ng-hide-remove and ng-hide-remove-active classes. These class names are more intuitive for the purpose of hiding and showing elements. There is also the ngClass directive convention that uses the class name added to create the animation classes with the -add, -add-active, -remove, and -remove-active suffixes, similar to the ngHide directive. The ngRepeat directive uses the ng-move and ng-move-active classes when elements move their position in the DOM, as we already saw in the last sample. The ngClass directive animation sample The ngClass directive allows us to dynamically set CSS classes. So, we can programmatically add and remove CSS from DOM elements. Classes are already used to change element styles, so it's very good to see how useful animating the ngClass directive is. Let's see a sample of ngClass so that it's easier to understand. 
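Before we build the sample, the naming convention above can be condensed into a small illustrative helper. This is not an AngularJS API, just a summary of the patterns described in this section:

```javascript
// Event-based directives (ngIf, ngRepeat, ngView, ngSwitch, ngInclude,
// ngMessage) use ng-<event> plus the -active suffix.
function eventAnimationClasses(event) {
  return ['ng-' + event, 'ng-' + event + '-active'];
}

// Class-based animations (ngClass, and ngShow/ngHide's ng-hide class)
// use <class>-add/-remove plus the -active suffix.
function classAnimationClasses(className, adding) {
  var suffix = adding ? '-add' : '-remove';
  return [className + suffix, className + suffix + '-active'];
}

console.log(eventAnimationClasses('enter'));
// [ 'ng-enter', 'ng-enter-active' ]
console.log(classAnimationClasses('ng-hide', true));
// [ 'ng-hide-add', 'ng-hide-add-active' ]
```

The second function already tells us which selectors the ngClass sample below will need: animationClass-add and animationClass-remove.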
We will create the HTML code with a Toggle ngClass button that will add and remove the animationClass class from the element with the initialClass class, through the ngClass directive:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngClass sample</title>
</head>
<body>
  <link href="ngClassSample.css" rel="stylesheet" />
  <div>
    <h1>ngClass Animation</h1>
    <div>
      <button ng-click="toggleNgClass = !toggleNgClass">Toggle ngClass</button>
      <div class="initialClass" ng-class="{'animationClass' : toggleNgClass}">
        This element has class 'initialClass' and the ngClass directive is declared as ng-class="{'animationClass' : toggleNgClass}"
      </div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
  </script>
</body>
</html>

For this sample, we will use two basic classes: an initial class, and the class that the ngClass directive will add to and remove from the element:

/* ngClass animation */
/* This is the initialClass, which stays on the element */
.initialClass {
  background-color: white;
  color: black;
  border: 1px solid black;
}
/* This is the animationClass, which is added or removed by the ngClass expression */
.animationClass {
  background-color: black;
  color: white;
  border: 1px solid white;
}

To create the animation, we will define a CSS animation using keyframes; so, we only need to use the animationClass-add and animationClass-remove classes to add animations:

@-webkit-keyframes ng-class-animation {
  from { background-color: white; color: black; border: 1px solid black; }
  to { background-color: black; color: white; border: 1px solid white; }
}
@keyframes ng-class-animation {
  from { background-color: white; color: black; border: 1px solid black; }
  to { background-color: black; color: white; border: 1px solid white; }
}

The initial state is shown as follows:

So, we want to display an animation when animationClass is added to the element with the initialClass class by the ngClass directive. This way, our animation selector will be:

.initialClass.animationClass-add {
  -webkit-animation: 1s ng-class-animation;
  animation: 1s ng-class-animation;
}

After 500 ms, the result should be a completely gray div tag, because the text, border, and background colors are halfway through the transition between black and white, as we can see in this screenshot:

After a second of animation, this is the result:

The remove animation, which occurs when animationClass is removed, is similar to the enter animation. However, this animation should be the reverse of the enter animation, so the CSS selector of the animation will be:

.initialClass.animationClass-remove {
  -webkit-animation: 1s ng-class-animation reverse;
  animation: 1s ng-class-animation reverse;
}

The animation result will be the same as we saw in the previous screenshots, but in the reverse order.

The ngHide and ngShow animation sample

Let's see one sample of the ngHide animation; ngHide is a directive that shows and hides the given HTML code based on an expression, just like the ngShow directive. We will use this directive to create a success notification message that fades in and out. To have a lean CSS file in this sample, we will use the Bootstrap CSS library, which is a great library to use with AngularJS. There is an AngularJS version of this library created by the Angular UI team, available at http://angular-ui.github.io/bootstrap/. The Twitter Bootstrap library is available at http://getbootstrap.com/. For this sample, we will use the Microsoft CDN; you can check out the Microsoft CDN libraries at http://www.asp.net/ajax/cdn.
Consider the following HTML:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngHide sample</title>
</head>
<body>
  <link href="http://ajax.aspnetcdn.com/ajax/bootstrap/3.2.0/css/bootstrap.css" rel="stylesheet" />
  <style>
    /* ngHide animation */
    .ngHideSample {
      padding: 10px;
    }
    .ngHideSample.ng-hide-add {
      -webkit-transition: all linear 0.3s;
      -moz-transition: all linear 0.3s;
      -ms-transition: all linear 0.3s;
      -o-transition: all linear 0.3s;
      transition: all linear 0.3s;
      opacity: 1;
    }
    .ngHideSample.ng-hide-add-active {
      opacity: 0;
    }
    .ngHideSample.ng-hide-remove {
      -webkit-transition: all linear 0.3s;
      -moz-transition: all linear 0.3s;
      -ms-transition: all linear 0.3s;
      -o-transition: all linear 0.3s;
      transition: all linear 0.3s;
      opacity: 0;
    }
    .ngHideSample.ng-hide-remove-active {
      opacity: 1;
    }
  </style>
  <div>
    <h1>ngHide animation</h1>
    <div>
      <button ng-click="disabled = !disabled">Toggle ngHide animation</button>
      <div ng-hide="disabled" class="ngHideSample bg-success">
        This element has the ng-hide directive.
      </div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
  </script>
</body>
</html>

In this sample, we created an animation in which, when the element is about to hide, its opacity transitions to 0. When the element appears again, its opacity transitions back to 1, as we can see in the following sequence of screenshots.
In the initial state, the output is as follows:

After we click on the button, the notification message starts to fade:

After the add (ng-hide-add) animation has completed, the output is as follows:

Then, if we toggle again, we will see the success message fading in:

After the animation has completed, it returns to the initial state:

The ngShow directive uses the same convention; the only difference is that each directive has the opposite behavior for the model value. When the model is true, ngShow removes the ng-hide class and ngHide adds the ng-hide class, as we saw in the first sample of this article.

The ngModel directive and form animations

We can easily animate form controls such as input, select, and textarea on ngModel changes. Form controls already work with validation CSS classes such as ng-valid, ng-invalid, ng-dirty, and ng-pristine. These classes are appended to form controls by AngularJS, based on validations and the current form control status. We are able to animate the add and remove events of those classes.

So, let's see an example of how to change the input color to red when a field becomes invalid. This helps users to check for errors while filling in the form, before it is submitted, and the animation eases the validation error experience. For this sample, a valid input will contain only digits and will become invalid once a character is entered. Consider the following HTML:

<h1>ngModel and form animation</h1>
<div>
  <form>
    <input ng-model="ngModelSample" ng-pattern="/^\d+$/" class="inputSample" />
  </form>
</div>

The ng-pattern directive uses the regular expression to validate that the ngModelSample model is a number. So, if we want to warn the user when the input is invalid, we will set the input text color to red using a CSS transition.
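As an aside, before looking at that CSS: the /^\d+$/ pattern accepts only strings made entirely of digits. A quick plain-JavaScript check of the same expression (independent of AngularJS) shows which values ng-pattern would treat as valid here:

```javascript
// The same regular expression used by ng-pattern above: the whole value
// must consist of one or more digits.
var onlyDigits = /^\d+$/;

console.log(onlyDigits.test('12345')); // true  -> the input stays valid
console.log(onlyDigits.test('123a5')); // false -> ng-invalid is added
console.log(onlyDigits.test(''));      // false (no digits at all)
```

The moment the regular expression stops matching, AngularJS swaps ng-valid for ng-invalid on the input, which is the class change our animation hooks into.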
Consider the following CSS:

/* ngModel animation */
.inputSample.ng-invalid-add {
  -webkit-transition: 1s linear all;
  transition: 1s linear all;
  color: black;
}
.inputSample.ng-invalid {
  color: red;
}
.inputSample.ng-invalid-add-active {
  color: red;
}

We followed the same pattern as ngClass. So, when the ng-invalid class is added, the ng-invalid-add class is appended and the transition changes the text color to red in one second; the text then stays red, as we have defined the ng-invalid color as red too. The test is easy; we just need to type one non-numeric character into the input and it will display the animation.

The ngMessage and ngMessages directive animations

Both the ngMessage and ngMessages directives are complementary, but you can choose which one you want to animate, or even animate both of them. They were separated from the core module, so we have to add the ngMessages module as a dependency of our AngularJS application. These directives were added to AngularJS in version 1.3, and they are useful to display messages based on the state of the model of a form control. So, we can easily display a custom message if an input has a specific validation error, for example, when the input is required but is not filled in yet. Without these directives, we would rely on JavaScript code and/or complex ngIf statements to accomplish the same result.
For this sample, we will create three different error messages for three different validations of a password field, as described in the following HTML:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>ngMessages animation</title>
</head>
<body>
  <link href="ngMessageAnimation.css" rel="stylesheet" />
  <h1>ngMessage and ngMessages animation</h1>
  <div>
    <form name="messageAnimationForm">
      <label for="modelSample">Password validation input</label>
      <div>
        <input ng-model="ngModelSample" id="modelSample" name="modelSample"
               type="password" ng-pattern="/^\d+$/" ng-minlength="5"
               ng-maxlength="10" required class="ngMessageSample" />
        <div ng-messages="messageAnimationForm.modelSample.$error"
             class="ngMessagesClass" ng-messages-multiple>
          <div ng-message="pattern" class="ngMessageClass">* This field is invalid, only numbers are allowed</div>
          <div ng-message="minlength" class="ngMessageClass">* It's mandatory at least 5 characters</div>
          <div ng-message="maxlength" class="ngMessageClass">* It's mandatory at most 10 characters</div>
        </div>
      </div>
    </form>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-messages.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate', 'ngMessages']);
  </script>
</body>
</html>

We included the ngMessages file too, as it's required for this sample. For the ngMessages directive, that is, the container of the ngMessage directives, we included an animation on ng-active-add that changes the container background color from white to red, and on ng-inactive-add that does the opposite, changing the background color from red to white. This works because the ngMessages directive appends the ng-active class when there is any message to be displayed. When there is no message, it appends the ng-inactive class to the element.
Let's see the ngMessages animation's declaration:

.ngMessagesClass {
  height: 50px;
  width: 350px;
}
.ngMessagesClass.ng-active-add {
  transition: 0.3s linear all;
  background-color: red;
}
.ngMessagesClass.ng-active {
  background-color: red;
}
.ngMessagesClass.ng-inactive-add {
  transition: 0.3s linear all;
  background-color: white;
}
.ngMessagesClass.ng-inactive {
  background-color: white;
}

For the ngMessage directive, which contains a message, we created an animation that changes the color of the error message from transparent to white when the message enters the DOM, and from white to transparent when the message leaves the DOM, shown as follows:

.ngMessageClass {
  color: white;
}
.ngMessageClass.ng-enter {
  transition: 0.3s linear all;
  color: transparent;
}
.ngMessageClass.ng-enter-active {
  color: white;
}
.ngMessageClass.ng-leave {
  transition: 0.3s linear all;
  color: white;
}
.ngMessageClass.ng-leave-active {
  color: transparent;
}

This sample illustrates two animations for two directives that are related to each other. The initial result, before we add a password, is as follows:

We can see both animations being triggered when we type the a character, for example, in the password input. Between 0 and 300 ms of the animation, we will see both the background and the text appearing for two validation messages:

After 300 ms, the animation has completed, and the output is as follows:

The ngView directive animation

The ngView directive is used to add a template to the main layout. It supports animation for both the enter and leave events. It's nice to have an animation for ngView, so that the user has a better sense that we are switching views. For this directive sample, we need to add the ngRoute JavaScript file to the HTML and the ngRoute module as a dependency of our app.
We will create a sample that slides the content of the current view out to the left, while the new view also slides in from right to left, so that we can see the current view leaving and the next view appearing. Consider the following HTML:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngView sample</title>
</head>
<body>
  <style>
    .ngViewRelative {
      position: relative;
      height: 300px;
    }
    .ngViewContainer {
      position: absolute;
      width: 500px;
      display: block;
    }
    .ngViewContainer.ng-enter, .ngViewContainer.ng-leave {
      -webkit-transition: 600ms linear all;
      transition: 600ms linear all;
    }
    .ngViewContainer.ng-enter {
      transform: translateX(500px);
    }
    .ngViewContainer.ng-enter-active {
      transform: translateX(0px);
    }
    .ngViewContainer.ng-leave {
      transform: translateX(0px);
    }
    .ngViewContainer.ng-leave-active {
      transform: translateX(-1000px);
    }
  </style>
  <h1>ngView sample</h1>
  <div class="ngViewRelative">
    <a href="#/First">First page</a>
    <a href="#/Second">Second page</a>
    <div ng-view class="ngViewContainer">
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-route.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate', 'ngRoute']);
    app.config(['$routeProvider', function ($routeProvider) {
      $routeProvider
        .when('/First', { templateUrl: 'first.html' })
        .when('/Second', { templateUrl: 'second.html' })
        .otherwise({ redirectTo: '/First' });
    }]);
  </script>
</body>
</html>

We need to configure the routes on config, as the JavaScript shows us. We then create the two HTML templates in the same directory. The content of the templates is just plain lorem ipsum. The first.html file content is shown as follows:

<div>
  <h2>First page</h2>
  <p>
    Lorem ipsum dolor sit amet, consectetur adipiscing elit.
    Cras consectetur dui nunc, vel feugiat lectus imperdiet et. In hac habitasse platea dictumst. In rutrum malesuada justo, sed porttitor dolor rutrum eu. Sed condimentum tempus est at euismod. Donec in faucibus urna. Fusce fermentum in mauris at pretium. Aenean ut orci nunc. Nulla id velit interdum nibh feugiat ultricies eu fermentum dolor. Pellentesque lobortis rhoncus nisi, imperdiet viverra leo ullamcorper sed. Donec condimentum tincidunt mollis. Curabitur lorem nibh, mattis non euismod quis, pharetra eu nibh.
  </p>
</div>

The second.html file content is shown as follows:

<div>
  <h2>Second page</h2>
  <p>
    Ut eu metus vel ipsum tristique fringilla. Proin hendrerit augue quis nisl pellentesque posuere. Aliquam sollicitudin ligula elit, sit amet placerat augue pulvinar eget. Aliquam bibendum pulvinar nisi, quis commodo lorem volutpat in. Donec et felis sit amet mauris venenatis feugiat non id metus. Fusce leo elit, egestas non turpis sed, tincidunt consequat tellus. Fusce quis auctor neque, a ultricies urna. Cras varius purus id sagittis luctus. Sed id lectus tristique, euismod ipsum ut, congue augue.
  </p>
</div>

Great, we now have our app set up with ngView and routes enabled. The animation was defined by adding transitions to the enter and leave events, using translateX(). The new view enters 500 px to the right of its final position and animates until its x position is 0, settling at the left corner; the leaving view goes from its initial position to -1000 px on the x-axis and then leaves the DOM. This creates a sliding effect, and the leaving view moves faster, as it has to cover double the distance of the entering view in the same animation duration. We can change the translation to use the y-axis to change the animation direction, creating the same sliding effect but with different aesthetics.
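For instance, a vertical variant of the same effect only swaps the transform axis. This is an illustrative sketch, assuming the same .ngViewContainer markup and transition declarations as above:

```css
/* Hypothetical vertical variant: views slide from bottom to top. */
.ngViewContainer.ng-enter {
  transform: translateY(300px);   /* the new view starts below the container */
}
.ngViewContainer.ng-enter-active {
  transform: translateY(0px);
}
.ngViewContainer.ng-leave {
  transform: translateY(0px);
}
.ngViewContainer.ng-leave-active {
  transform: translateY(-600px);  /* the leaving view exits above, covering double the distance */
}
```

The 300px/-600px values here mirror the horizontal sample's 500px/-1000px ratio, scaled to the container's 300 px height.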
The ngSwitch directive animation

The ngSwitch directive is used to conditionally swap the DOM structure based on an expression. It supports animation on the enter and leave events, just like the ngView directive. For this sample, we will create the same sliding effect as the ngView sample, but this time sliding from top to bottom instead of right to left. This animation helps the user to understand that one item is being replaced by another. The ngSwitch sample HTML is shown as follows:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngSwitch sample</title>
</head>
<body>
  <div ng-controller="animationsCtrl">
    <h1>ngSwitch sample</h1>
    <p>Choose an item:</p>
    <select ng-model="ngSwitchSelected" ng-options="item for item in ngSwitchItems"></select>
    <p>Selected item:</p>
    <div class="switchItemRelative" ng-switch on="ngSwitchSelected">
      <div class="switchItem" ng-switch-when="item1">Item 1</div>
      <div class="switchItem" ng-switch-when="item2">Item 2</div>
      <div class="switchItem" ng-switch-when="item3">Item 3</div>
      <div class="switchItem" ng-switch-default>Default Item</div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
    app.controller('animationsCtrl', function ($scope) {
      $scope.ngSwitchItems = ['item1', 'item2', 'item3'];
    });
  </script>
</body>
</html>

In the JavaScript controller, we added the ngSwitchItems array to the scope, and the animation CSS is defined as follows:

/* ngSwitch animation */
.switchItemRelative {
  position: relative;
  height: 25px;
  overflow: hidden;
}
.switchItem {
  position: absolute;
  width: 500px;
  display: block;
}
/* The transition is added when the switch item is about to enter or about to leave the DOM */
.switchItem.ng-enter, .switchItem.ng-leave {
  -webkit-transition: 300ms linear all;
  -moz-transition: 300ms linear all;
  -ms-transition: 300ms linear all;
  -o-transition: 300ms linear all;
  transition: 300ms linear all;
}
/* When the element is about to enter the DOM */
.switchItem.ng-enter {
  bottom: 100%;
}
/* When the element completes the enter transition */
.switchItem.ng-enter-active {
  bottom: 0;
}
/* When the element is about to leave the DOM */
.switchItem.ng-leave {
  bottom: 0;
}
/* When the element ends the leave transition */
.switchItem.ng-leave-active {
  bottom: -100%;
}

This is almost the same CSS as the ngView sample; we just used the bottom property, added a different height to the switchItemRelative class, and included overflow: hidden.

The ngInclude directive sample

The ngInclude directive is used to fetch, compile, and include an HTML fragment; it supports animations for the enter and leave events, such as the ngView and ngSwitch directives. For this sample, we will use both templates created in the last ngView sample, first.html and second.html.
The ngInclude animation sample HTML, with the JavaScript and CSS included, is shown as follows:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <title>AngularJS ngInclude sample</title>
</head>
<body>
  <style>
    .ngIncludeRelative {
      position: relative;
      height: 500px;
      overflow: hidden;
    }
    .ngIncludeItem {
      position: absolute;
      width: 500px;
      display: block;
    }
    .ngIncludeItem.ng-enter, .ngIncludeItem.ng-leave {
      -webkit-transition: 300ms linear all;
      transition: 300ms linear all;
    }
    .ngIncludeItem.ng-enter {
      top: 100%;
    }
    .ngIncludeItem.ng-enter-active {
      top: 0;
    }
    .ngIncludeItem.ng-leave {
      top: 0;
    }
    .ngIncludeItem.ng-leave-active {
      top: -100%;
    }
  </style>
  <div ng-controller="animationsCtrl">
    <h1>ngInclude sample</h1>
    <p>Choose one template</p>
    <select ng-model="ngIncludeSelected" ng-options="item.name for item in ngIncludeTemplates"></select>
    <p>ngInclude:</p>
    <div class="ngIncludeRelative">
      <div class="ngIncludeItem" ng-include="ngIncludeSelected.url"></div>
    </div>
  </div>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular.min.js"></script>
  <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.0/angular-animate.min.js"></script>
  <script>
    var app = angular.module('myApp', ['ngAnimate']);
    app.controller('animationsCtrl', function ($scope) {
      $scope.ngIncludeTemplates = [{ name: 'first', url: 'first.html' }, { name: 'second', url: 'second.html' }];
    });
  </script>
</body>
</html>

In the JavaScript controller, we included the templates array. Finally, we can animate ngInclude using CSS. In this sample, we animate by sliding the templates with the top property, using the enter and leave event animations. To test this sample, just change the selected template value.
Do it yourself exercises

The following are some exercises that will help you understand the concepts of this article better:

1. Create a spinning loading animation, using the ngShow or ngHide directives, that appears when the scope controller variable $scope.isLoading is equal to true.
2. Using exercise 1, create a gray background layer with opacity 0.5 that smoothly fills the entire page behind the loading spin and covers all the content until isLoading becomes false. The effect should be that of a drop of ink that is dropped on a piece of paper and spreads until the paper is completely stained.
3. Create a success notification animation, similar to the ngShow example, but instead of using the fade animation, use a slide-down animation, so the success message starts with height: 0px. Check http://api.jquery.com/slidedown/ for the expected animation effect.
4. Copy any animation from the http://capptivate.co/ website, using AngularJS and CSS animations.

Summary

In this article, we learned how to animate AngularJS native directives using the CSS transitions and CSS keyframe concepts. This article taught you how to create animations on AngularJS web apps.

Resources for Article:

Further resources on this subject:
- Important Aspect of AngularJS UI Development [article]
- Setting Up The Rig [article]
- AngularJS Project [article]