How to Simplify Your Development Workflow with Gulp

Brian Hough
21 Sep 2015
10 min read
The use of task runners is a fairly recent addition to the front-end developer's toolbox. If you are already using a solution like Gulp, you are ahead of the game. CSS compilation, JavaScript linting, and image optimization are powerful tools. However, once you start leveraging a task runner to enhance your workflow, your Gulp file can quickly get out of control. It is very common to end up with a gulpfile that looks something like this:

```
var gulp = require('gulp');
var compass = require('gulp-compass');
var autoprefixer = require('gulp-autoprefixer');
var uglify = require('gulp-uglify');
var imagemin = require('gulp-imagemin');
var plumber = require('gulp-plumber');
var notify = require('gulp-notify');
var watch = require('gulp-watch');

// JS minification
gulp.task('js-uglify', function() {
  return gulp.src('./src/js/**/*.js')
    .pipe(plumber({ errorHandler: notify.onError("ERROR: JS Compilation Failed") }))
    .pipe(uglify())
    .pipe(gulp.dest('./dist/js'));
});

// Sass compilation
gulp.task('sass-compile', function() {
  return gulp.src('./src/scss/main.scss')
    .pipe(plumber({ errorHandler: notify.onError("ERROR: CSS Compilation Failed") }))
    .pipe(compass({
      style: 'compressed',
      css: './dist/css',
      sass: './src/scss',
      image: './src/img'
    }))
    .pipe(autoprefixer('> 1%', 'last 2 versions', 'Firefox ESR', 'Opera 12.1'))
    .pipe(gulp.dest('./dist/css'));
});

// Image optimization
gulp.task('image-minification', function() {
  return gulp.src('./src/img/**/*')
    .pipe(plumber({ errorHandler: notify.onError("ERROR: Image Minification Failed") }))
    .pipe(imagemin({ optimizationLevel: 3, progressive: true, interlaced: true }))
    .pipe(gulp.dest('./dist/img'));
});

// Watch task
gulp.task('watch', function() {
  // Builds JavaScript
  watch('./src/js/**/*.js', function() { gulp.start('js-uglify'); });
  // Builds CSS
  watch('./src/scss/**/*.scss', function() { gulp.start('sass-compile'); });
  // Optimizes images
  watch(['./src/img/**/*.jpg', './src/img/**/*.png', './src/img/**/*.svg'], function() { gulp.start('image-minification'); });
});

// Default task triggers watch
gulp.task('default', function() {
  gulp.start('watch');
});
```

While this works, it is not very maintainable, especially as you add more and more tasks. The goal of our workflow tooling is to be as easy and unobtrusive as possible. Let's look at some ways we can make our tasks easier to maintain as our workflow needs scale.

Gulp Load Plugins

Like most Node-based projects, there are a lot of dependencies to maintain when using Gulp. Every new task often requires several new plugins to get up and running, making the giant list at the top of the gulpfile a maintenance nightmare. Luckily, there is an easy way to address this, thanks to gulp-load-plugins. gulp-load-plugins loads any Gulp plugins listed in your package.json automatically, without you needing to require them manually. Each plugin can then be used as before, without having to add it to the list at the top. To get started, let's first add gulp-load-plugins to our package.json file:

```
npm install --save-dev gulp-load-plugins
```

Once we've done this, we can remove that giant list of dependencies from the top of our gulpfile.js. Instead, we replace it with just two dependencies:

```
var gulp = require('gulp');
var plugins = require('gulp-load-plugins')();
```

We now have a single object, plugins, that will contain all the plugins our project depends on.
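For reference, gulp-load-plugins discovers plugins by scanning the dependency sections of package.json, so after the install commands above the relevant section might look roughly like the following. The version numbers are illustrative only, not taken from the original article:

```
{
  "devDependencies": {
    "gulp": "^3.9.0",
    "gulp-autoprefixer": "^3.0.0",
    "gulp-compass": "^2.1.0",
    "gulp-imagemin": "^2.3.0",
    "gulp-load-plugins": "^1.0.0",
    "gulp-notify": "^2.2.0",
    "gulp-plumber": "^1.0.1",
    "gulp-uglify": "^1.4.0",
    "gulp-watch": "^4.3.5"
  }
}
```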
We just need to update our code to reflect that our plugins are part of this new object:

```
var gulp = require('gulp');
var plugins = require('gulp-load-plugins')();

// JS minification
gulp.task('js-uglify', function() {
  return gulp.src('./src/js/**/*.js')
    .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: JS Compilation Failed") }))
    .pipe(plugins.uglify())
    .pipe(gulp.dest('./dist/js'));
});

// Sass compilation
gulp.task('sass-compile', function() {
  return gulp.src('./src/scss/main.scss')
    .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: CSS Compilation Failed") }))
    .pipe(plugins.compass({
      style: 'compressed',
      css: './dist/css',
      sass: './src/scss',
      image: './src/img'
    }))
    .pipe(plugins.autoprefixer('> 1%', 'last 2 versions', 'Firefox ESR', 'Opera 12.1'))
    .pipe(gulp.dest('./dist/css'));
});

// Image optimization
gulp.task('image-minification', function() {
  return gulp.src('./src/img/**/*')
    .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: Image Minification Failed") }))
    .pipe(plugins.imagemin({ optimizationLevel: 3, progressive: true, interlaced: true }))
    .pipe(gulp.dest('./dist/img'));
});

// Watch task
gulp.task('watch', function() {
  // Builds JavaScript
  plugins.watch('./src/js/**/*.js', function() { gulp.start('js-uglify'); });
  // Builds CSS
  plugins.watch('./src/scss/**/*.scss', function() { gulp.start('sass-compile'); });
  // Optimizes images
  plugins.watch(['./src/img/**/*.jpg', './src/img/**/*.png', './src/img/**/*.svg'], function() { gulp.start('image-minification'); });
});

// Default task triggers watch
gulp.task('default', function() {
  gulp.start('watch');
});
```

Now, each time we add a new plugin, this object will be updated with it automatically, making plugin maintenance a breeze.

Centralized Configuration

Going over our gulpfile.js, you have probably noticed that we repeat a lot of references, specifically items like source and destination folders, as well as plugin configuration values. As our task list grows, changes to these can become troublesome to maintain. Moving these items to a centralized configuration object can be a lifesaver if you ever need to update one of these values. To get started, let's create a new file called config.json:

```
{
  "scssSrcPath": "./src/scss",
  "jsSrcPath": "./src/js",
  "imgSrcPath": "./src/img",
  "cssDistPath": "./dist/css",
  "jsDistPath": "./dist/js",
  "imgDistPath": "./dist/img",
  "browserList": ["> 1%", "last 2 versions", "Firefox ESR", "Opera 12.1"]
}
```

What we have here is a basic JSON file that contains the most common, repeated configuration values. We have a source and destination path for the Sass, JavaScript, and image files, as well as a list of supported browsers for Autoprefixer.
Now let's add this configuration file to our gulpfile.js:

```
var gulp = require('gulp');
var config = require('./config.json');
var plugins = require('gulp-load-plugins')();

// JS minification
gulp.task('js-uglify', function() {
  return gulp.src(config.jsSrcPath + '/**/*.js')
    .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: JS Compilation Failed") }))
    .pipe(plugins.uglify())
    .pipe(gulp.dest(config.jsDistPath));
});

// Sass compilation
gulp.task('sass-compile', function() {
  return gulp.src(config.scssSrcPath + '/main.scss')
    .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: CSS Compilation Failed") }))
    .pipe(plugins.compass({
      style: 'compressed',
      css: config.cssDistPath,
      sass: config.scssSrcPath,
      image: config.imgSrcPath
    }))
    .pipe(plugins.autoprefixer({ browsers: config.browserList }))
    .pipe(gulp.dest(config.cssDistPath));
});

// Image optimization
gulp.task('image-minification', function() {
  return gulp.src(config.imgSrcPath + '/**/*')
    .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: Image Minification Failed") }))
    .pipe(plugins.imagemin({ optimizationLevel: 3, progressive: true, interlaced: true }))
    .pipe(gulp.dest(config.imgDistPath));
});

// Watch task
gulp.task('watch', function() {
  // Builds JavaScript
  plugins.watch(config.jsSrcPath + '/**/*.js', function() { gulp.start('js-uglify'); });
  // Builds CSS
  plugins.watch(config.scssSrcPath + '/**/*.scss', function() { gulp.start('sass-compile'); });
  // Optimizes images
  plugins.watch([config.imgSrcPath + '/**/*.jpg', config.imgSrcPath + '/**/*.png', config.imgSrcPath + '/**/*.svg'], function() { gulp.start('image-minification'); });
});

// Default task triggers watch
gulp.task('default', function() {
  gulp.start('watch');
});
```

First, we require our config file so that all our tasks have access to the object. Then we update each task to use our configuration values, including all our file paths and our browser support list. Now, any time these values are updated, we only have to do it in one place. This approach will come in especially handy with our next step, which is modularizing our tasks.

Modular Tasks

You've probably noticed that we have leveraged Node's module-loading capabilities to achieve our results so far. However, we can take this one step further by modularizing the tasks themselves. Placing each task in its own file gives our workflow code structure and makes it easier to maintain. The same benefits we gain from modularized code in our projects extend to our workflow as well. Our first step is to pull our tasks into individual files.
Create a folder named tasks and create the following four files.

tasks/js-uglify.js:

```
module.exports = function(gulp, plugins, config) {
  gulp.task('js-uglify', function() {
    return gulp.src(config.jsSrcPath + '/**/*.js')
      .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: JS Compilation Failed") }))
      .pipe(plugins.uglify())
      .pipe(gulp.dest(config.jsDistPath));
  });
};
```

tasks/sass-compile.js:

```
module.exports = function(gulp, plugins, config) {
  gulp.task('sass-compile', function() {
    return gulp.src(config.scssSrcPath + '/main.scss')
      .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: CSS Compilation Failed") }))
      .pipe(plugins.compass({
        style: 'compressed',
        css: config.cssDistPath,
        sass: config.scssSrcPath,
        image: config.imgSrcPath
      }))
      .pipe(plugins.autoprefixer({ browsers: config.browserList }))
      .pipe(gulp.dest(config.cssDistPath));
  });
};
```

tasks/image-minification.js:

```
module.exports = function(gulp, plugins, config) {
  gulp.task('image-minification', function() {
    return gulp.src(config.imgSrcPath + '/**/*')
      .pipe(plugins.plumber({ errorHandler: plugins.notify.onError("ERROR: Image Minification Failed") }))
      .pipe(plugins.imagemin({ optimizationLevel: 3, progressive: true, interlaced: true }))
      .pipe(gulp.dest(config.imgDistPath));
  });
};
```

tasks/watch.js:

```
module.exports = function(gulp, plugins, config) {
  gulp.task('watch', function() {
    // Builds JavaScript
    plugins.watch(config.jsSrcPath + '/**/*.js', function() { gulp.start('js-uglify'); });
    // Builds CSS
    plugins.watch(config.scssSrcPath + '/**/*.scss', function() { gulp.start('sass-compile'); });
    // Optimizes images
    plugins.watch([config.imgSrcPath + '/**/*.jpg', config.imgSrcPath + '/**/*.png', config.imgSrcPath + '/**/*.svg'], function() { gulp.start('image-minification'); });
  });
};
```

Here we are wrapping each individual task as a module and preparing to pass it three parameters: gulp will, of course, contain the Gulp code base, plugins will pass our task the full plugins object, and config will contain all our configuration values. Beyond this, our tasks remain unchanged. Next, we need to pull our tasks back into our gulpfile.js. Let's start by adding a line at the end of our config.json:

```
"tasksPath": "./tasks"
```

This will help us keep our code a bit cleaner, and if we ever move our tasks, we can simply update this reference. Now we just need to require our individual tasks:

```
var gulp = require('gulp');
var config = require('./config.json');
var plugins = require('gulp-load-plugins')();

// JS minification
require(config.tasksPath + '/js-uglify')(gulp, plugins, config);
// Sass compilation
require(config.tasksPath + '/sass-compile')(gulp, plugins, config);
// Image optimization
require(config.tasksPath + '/image-minification')(gulp, plugins, config);
// Watch task
require(config.tasksPath + '/watch')(gulp, plugins, config);

// Default task triggers watch
gulp.task('default', function() {
  gulp.start('watch');
});
```

We have now required our four individual tasks from our gulpfile.js, passing each the previously discussed parameters (gulp, plugins, config). Nothing changes about how we use these tasks; they are simply now self-contained within our code base. You will notice that our watch task is even able to access the other tasks required in the same way.
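If the list of require calls itself starts to feel repetitive, one optional refinement (not part of the workflow above) is to load every file in the tasks folder automatically. A minimal sketch, assuming the tasks directory contains only task modules with the signature shown earlier:

```
var fs = require('fs');
var path = require('path');

// Require every task module in the tasks folder and hand it gulp, plugins, and config.
fs.readdirSync(config.tasksPath).forEach(function(file) {
  if (path.extname(file) === '.js') {
    require(path.resolve(config.tasksPath, file))(gulp, plugins, config);
  }
});
```

This would replace the four explicit require lines, at the cost of making the set of registered tasks slightly less obvious at a glance.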
Conclusion

As our front-end toolbox gets larger and larger, how we maintain that side of our code becomes increasingly important. It is possible to apply the same best practices we use on our project code to our workflow code as well. This further helps our tools get out of the way and lets us focus on coding. JavaScript developers of the world, unite!

For more JavaScript tutorials and extra content, visit our dedicated page here.

About the Author

Brian Hough is a Front-End Architect, Designer, and Product Manager at Piqora. By day, he is working to prove that the days of bad enterprise user experiences are a thing of the past. By night, he obsesses about ways to bring designers and developers together using technology. He blogs about his early-stage startup experience at lostinpixelation.com, or you can read his general musings on Twitter @b_hough.

Deploying Highly Available OpenStack

Packt
21 Sep 2015
17 min read
In this article by Arthur Berezin, the author of the book OpenStack Configuration Cookbook, we will cover the following topics:

Installing Pacemaker
Installing HAProxy
Configuring Galera cluster for MariaDB
Installing RabbitMQ with mirrored queues
Configuring highly available OpenStack services

(For more resources related to this topic, see here.)

Many organizations choose OpenStack for its distributed architecture and its ability to deliver an Infrastructure as a Service (IaaS) platform for mission-critical applications. In such environments, it is crucial to configure all OpenStack services in a highly available configuration to provide as much uptime as possible for the control plane services of the cloud. A highly available control plane for OpenStack can be achieved in various configurations; each configuration serves a certain set of demands and introduces a growing set of prerequisites. Pacemaker is used to create active-active clusters that guarantee the services' resilience to possible faults, and it is also used to create a virtual IP address for each of the services. HAProxy serves as a load balancer for incoming calls to the services' APIs. This article discusses neither high availability of virtual machine instances nor of the hypervisor's Nova-Compute service.

Most OpenStack services are stateless; they store persistent data in a SQL database, which is a potential single point of failure that we should make highly available. In this article, we will deploy a highly available database using MariaDB and Galera, which implements multi-master replication. To ensure availability of the message bus, we will configure RabbitMQ with mirrored queues. This article discusses configuring each service separately on a three-controller layout that runs the OpenStack controller services, including Neutron, the database, and the RabbitMQ message bus. All services can be configured on several controller nodes, or each service could be implemented on its own separate set of hosts.

Installing Pacemaker

All OpenStack services run as Linux system services. The first step in ensuring service availability is to configure Pacemaker clusters for each service, so that Pacemaker monitors the services. In case of failure, Pacemaker restarts the failed service. In addition, we will use Pacemaker to create a virtual IP address for each of OpenStack's services to ensure that the services remain accessible via the same IP address when failures occur and the actual service has relocated to another host. In this section, we will install Pacemaker and prepare it to configure highly available OpenStack services.

Getting ready

To ensure maximum availability, we will install and configure three hosts to serve as controller nodes. Prepare three controller hosts with identical hardware and network layout. We will base our configuration for most of the OpenStack services on the configuration used in a single-controller layout, and we will deploy the Neutron network services on all three controller nodes.
How to do it…

Run the following steps on all three highly available controller nodes.

Install the Pacemaker packages:

```
[root@controller1 ~]# yum install -y pcs pacemaker corosync fence-agents-all resource-agents
```

Enable and start the pcsd service:

```
[root@controller1 ~]# systemctl enable pcsd
[root@controller1 ~]# systemctl start pcsd
```

Set a password for the hacluster user; the password should be identical on all the nodes:

```
[root@controller1 ~]# echo 'password' | passwd --stdin hacluster
```

We will use the hacluster password throughout the HAProxy configuration.

Authenticate all controller nodes, using the -u and -p options to supply on the command line the same user and password you set in the previous step:

```
[root@controller1 ~]# pcs cluster auth controller1 controller2 controller3 -u hacluster -p password --force
```

At this point, you may run pcs commands from a single controller node instead of running commands on each node separately.

There's more...

You can find the complete Pacemaker documentation, which includes installation documentation, a complete configuration reference, and examples, on the Cluster Labs website at http://clusterlabs.org/doc/.

Installing HAProxy

Addressing high availability for OpenStack includes avoiding high load on a single host and ensuring that incoming TCP connections to all API endpoints are balanced across the controller hosts. We will use HAProxy, an open source load balancer that is particularly suited for HTTP load balancing as it supports session persistence and layer 7 processing.

Getting ready

In this section, we will install HAProxy on all controller hosts, configure a Pacemaker cluster for the HAProxy services, and prepare for the OpenStack services configuration.

How to do it...
Run the following steps on all controller nodes.

Install the HAProxy package:

```
# yum install -y haproxy
```

Enable the nonlocal binding kernel parameter:

```
# echo net.ipv4.ip_nonlocal_bind=1 >> /etc/sysctl.d/haproxy.conf
# echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
```

Configure the HAProxy load balancer settings for the Galera database, RabbitMQ, and Keystone services by editing /etc/haproxy/haproxy.cfg with the following configuration:

```
global
    daemon

defaults
    mode tcp
    maxconn 10000
    timeout connect 2s
    timeout client 10s
    timeout server 10s

frontend vip-db
    bind 192.168.16.200:3306
    timeout client 90s
    default_backend db-vms-galera

backend db-vms-galera
    option httpchk
    stick-table type ip size 2
    stick on dst
    timeout server 90s
    server rhos5-db1 192.168.16.58:3306 check inter 1s port 9200
    server rhos5-db2 192.168.16.59:3306 check inter 1s port 9200
    server rhos5-db3 192.168.16.60:3306 check inter 1s port 9200

frontend vip-rabbitmq
    bind 192.168.16.213:5672
    timeout client 900m
    default_backend rabbitmq-vms

backend rabbitmq-vms
    balance roundrobin
    timeout server 900m
    server rhos5-rabbitmq1 192.168.16.61:5672 check inter 1s
    server rhos5-rabbitmq2 192.168.16.62:5672 check inter 1s
    server rhos5-rabbitmq3 192.168.16.63:5672 check inter 1s

frontend vip-keystone-admin
    bind 192.168.16.202:35357
    default_backend keystone-admin-vms

backend keystone-admin-vms
    balance roundrobin
    server rhos5-keystone1 192.168.16.64:35357 check inter 1s
    server rhos5-keystone2 192.168.16.65:35357 check inter 1s
    server rhos5-keystone3 192.168.16.66:35357 check inter 1s

frontend vip-keystone-public
    bind 192.168.16.202:5000
    default_backend keystone-public-vms

backend keystone-public-vms
    balance roundrobin
    server rhos5-keystone1 192.168.16.64:5000 check inter 1s
    server rhos5-keystone2 192.168.16.65:5000 check inter 1s
    server rhos5-keystone3 192.168.16.66:5000 check inter 1s
```

This configuration file is an example of configuring HAProxy as a load balancer for the MariaDB, RabbitMQ, and Keystone services.

To configure all nodes from one point, we need to authenticate on all nodes before we are allowed to change the configuration. Use the previously configured hacluster user and password to do this:

```
# pcs cluster auth controller1 controller2 controller3 -u hacluster -p password --force
```

Create a Pacemaker cluster for the HAProxy service as follows (note that you can now run pcs commands from a single controller node):

```
# pcs cluster setup --name ha-controller controller1 controller2 controller3
# pcs cluster enable --all
# pcs cluster start --all
```

Finally, using the pcs resource create command, create a cloned systemd resource that will run a highly available, active-active HAProxy service on all controller hosts:

```
# pcs resource create lb-haproxy systemd:haproxy op monitor start-delay=10s --clone
```

Create the virtual IP address for each of the services:

```
# pcs resource create vip-db IPaddr2 ip=192.168.16.200
# pcs resource create vip-rabbitmq IPaddr2 ip=192.168.16.213
# pcs resource create vip-keystone IPaddr2 ip=192.168.16.202
```

You may use the pcs status command to verify whether all resources are running successfully:

```
# pcs status
```

Configuring Galera cluster for MariaDB

Galera is a multi-master cluster for MariaDB, which is based on synchronous replication between all cluster nodes. Effectively, Galera treats a cluster of MariaDB nodes as one single master node that reads and writes to all nodes.
Galera replication happens at transaction commit time, by broadcasting the transaction write set to the cluster for application. The client connects directly to the DBMS and experiences behavior close to that of the native DBMS. The wsrep API (write set replication API) defines the interface between Galera replication and the DBMS.

Getting ready

In this section, we will install the Galera cluster packages for MariaDB on our three controller nodes, and then configure Pacemaker to monitor all Galera services. Pacemaker can be stopped on all cluster nodes, as shown, if it is still running from the previous steps:

```
# pcs cluster stop --all
```

How to do it...

Perform the following steps on all controller nodes.

Install the Galera packages for MariaDB:

```
# yum install -y mariadb-galera-server xinetd resource-agents
```

Edit /etc/sysconfig/clustercheck and add the following lines:

```
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="password"
MYSQL_HOST="localhost"
```

Edit the Galera configuration file /etc/my.cnf.d/galera.cnf with the following lines. Make sure to enter the host's IP address in the bind-address parameter:

```
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=[host-IP-address]
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
```

You can learn more about each of Galera's default options on the documentation page at http://galeracluster.com/documentation-webpages/configuration.html.

Add the following lines to the xinetd configuration file /etc/xinetd.d/galera-monitor:

```
service galera-monitor
{
    port            = 9200
    disable         = no
    socket_type     = stream
    protocol        = tcp
    wait            = no
    user            = root
    group           = root
    groups          = yes
    server          = /usr/bin/clustercheck
    type            = UNLISTED
    per_source      = UNLIMITED
    log_on_success  =
    log_on_failure  = HOST
    flags           = REUSE
}
```

Start and enable the xinetd and pcsd services:

```
# systemctl enable xinetd
# systemctl start xinetd
# systemctl enable pcsd
# systemctl start pcsd
```

Authenticate on all nodes. Use the previously configured hacluster user and password to do this as follows:

```
# pcs cluster auth controller1 controller2 controller3 -u hacluster -p password --force
```

Now commands can be run from a single controller node. Create a Pacemaker cluster for the Galera service:

```
# pcs cluster setup --name controller-db controller1 controller2 controller3
# pcs cluster enable --all
# pcs cluster start --all
```

Add the Galera service resource to the Galera Pacemaker cluster:

```
# pcs resource create galera galera enable_creation=true wsrep_cluster_address="gcomm://controller1,controller2,controller3" meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master
```

Create a user for the clustercheck xinetd service:

```
mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY 'password';"
```
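With the Galera resource running on all three controllers, a quick sanity check (not part of the original recipe) is to ask any node for the current cluster size, which should report 3 once all nodes have joined:

```
# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
```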
See also

You can find the complete Galera documentation, which includes installation documentation, a complete configuration reference, and examples, on the Galera Cluster website at http://galeracluster.com/documentation-webpages/.

Installing RabbitMQ with mirrored queues

RabbitMQ is used as a message bus for services to communicate with each other. By default, the queues are located on a single node, which makes the RabbitMQ service a single point of failure. To avoid this, we will configure RabbitMQ to use mirrored queues across multiple nodes. Each mirrored queue consists of one master and one or more slaves, with the oldest slave being promoted to the new master if the old master disappears for any reason. Messages published to the queue are replicated to all slaves.

Getting ready

In this section, we will install the RabbitMQ packages on our three controller nodes and configure RabbitMQ to mirror its queues across all controller nodes; then we will configure Pacemaker to monitor all RabbitMQ services.

How to do it...

Perform the following steps on all controller nodes.

Install the RabbitMQ packages on all controller nodes:

```
# yum -y install rabbitmq-server
```

Start, and then stop, the rabbitmq-server service:

```
# systemctl start rabbitmq-server
# systemctl stop rabbitmq-server
```

RabbitMQ cluster nodes use a cookie to determine whether they are allowed to communicate with each other; for nodes to be able to communicate, they must have the same cookie. Copy erlang.cookie from controller1 to controller2 and controller3:

```
[root@controller1 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@controller2:/var/lib/rabbitmq/
[root@controller1 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@controller3:/var/lib/rabbitmq/
```

Start and enable Pacemaker on all nodes:

```
# systemctl enable pcsd
# systemctl start pcsd
```

Since we already authenticated all nodes of the cluster in the previous section, we can now run the following commands from controller1. Create a new Pacemaker cluster for the RabbitMQ service as follows:

```
[root@controller1 ~]# pcs cluster setup --name rabbitmq controller1 controller2 controller3
[root@controller1 ~]# pcs cluster enable --all
[root@controller1 ~]# pcs cluster start --all
```

Add a systemd resource for the RabbitMQ service to the Pacemaker cluster:

```
[root@controller1 ~]# pcs resource create rabbitmq-server systemd:rabbitmq-server op monitor start-delay=20s --clone
```

Since all RabbitMQ nodes must join the cluster one at a time, stop RabbitMQ on controller2 and controller3:

```
[root@controller2 ~]# rabbitmqctl stop_app
[root@controller3 ~]# rabbitmqctl stop_app
```

Join controller2 to the cluster and start RabbitMQ on it:

```
[root@controller2 ~]# rabbitmqctl join_cluster rabbit@controller1
[root@controller2 ~]# rabbitmqctl start_app
```

Now join controller3 to the cluster as well and start RabbitMQ on it:

```
[root@controller3 ~]# rabbitmqctl join_cluster rabbit@controller1
[root@controller3 ~]# rabbitmqctl start_app
```

At this point, the cluster should be configured, and we need to set RabbitMQ's HA policy to mirror the queues to all RabbitMQ cluster nodes as follows:

```
[root@controller1 ~]# rabbitmqctl set_policy HA '^(?!amq.).*' '{"ha-mode": "all"}'
```

There's more...

The RabbitMQ cluster should now be configured with all the queues mirrored to all controller nodes.
To verify the cluster's state, you can use the rabbitmqctl cluster_status and rabbitmqctl list_policies commands on each of the controller nodes as follows:

```
[root@controller1 ~]# rabbitmqctl cluster_status
[root@controller1 ~]# rabbitmqctl list_policies
```

To verify Pacemaker's cluster status, you may use the pcs status command as follows:

```
[root@controller1 ~]# pcs status
```

See also

For complete documentation on how RabbitMQ implements the mirrored queues feature, and for additional configuration options, refer to the project's documentation pages at https://www.rabbitmq.com/clustering.html and https://www.rabbitmq.com/ha.html.

Configuring highly available OpenStack services

Most OpenStack services are stateless web services that keep persistent data in a SQL database and use a message bus for inter-service communication. We will use Pacemaker and HAProxy to run the OpenStack services in an active-active, highly available configuration, so that traffic for each of the services is load balanced across all controller nodes and the cloud can be easily scaled out to more controller nodes if needed. We will configure Pacemaker clusters for each of the services that will run on all controller nodes. We will also use Pacemaker to create a virtual IP address for each of OpenStack's services, so that rather than addressing a specific node, services are addressed by their corresponding virtual IP address. We will use HAProxy to load balance incoming requests to the services across all controller nodes.

Getting ready

In this section, we will use the virtual IP addresses we created for the services with Pacemaker and HAProxy in the previous sections. We will also configure the OpenStack services to use the highly available Galera-clustered database and RabbitMQ with mirrored queues. This is an example for the Keystone service; please refer to the Packt website for the complete configuration of all OpenStack services.

How to do it...

Perform the following steps on all controller nodes.

Install the Keystone service on all controller nodes:

```
yum install -y openstack-keystone openstack-utils openstack-selinux
```

Generate a Keystone service token on controller1 and copy it to controller2 and controller3 using scp:

```
[root@controller1 ~]# export SERVICE_TOKEN=$(openssl rand -hex 10)
[root@controller1 ~]# echo $SERVICE_TOKEN > ~/keystone_admin_token
[root@controller1 ~]# scp ~/keystone_admin_token root@controller2:~/keystone_admin_token
[root@controller1 ~]# scp ~/keystone_admin_token root@controller3:~/keystone_admin_token
```

Export the Keystone service token on controller2 and controller3 as well:

```
[root@controller2 ~]# export SERVICE_TOKEN=$(cat ~/keystone_admin_token)
[root@controller3 ~]# export SERVICE_TOKEN=$(cat ~/keystone_admin_token)
```

Note: perform the following commands on all controller nodes.
Configure the Keystone service on all controller nodes to use the RabbitMQ virtual IP:

```
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
# openstack-config --set /etc/keystone/keystone.conf DEFAULT rabbit_host vip-rabbitmq
```

Configure the Keystone service endpoints to point to the Keystone virtual IP:

```
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_endpoint 'http://vip-keystone:%(admin_port)s/'
# openstack-config --set /etc/keystone/keystone.conf DEFAULT public_endpoint 'http://vip-keystone:%(public_port)s/'
```

Configure Keystone to connect to the SQL database using the Galera cluster virtual IP:

```
# openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:keystonetest@vip-mysql/keystone
# openstack-config --set /etc/keystone/keystone.conf database max_retries -1
```

On controller1, create the Keystone PKI setup and sync the database:

```
[root@controller1 ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller1 ~]# chown -R keystone:keystone /var/log/keystone /etc/keystone/ssl/
[root@controller1 ~]# su keystone -s /bin/sh -c "keystone-manage db_sync"
```

Copy the Keystone SSL certificates from controller1 to controller2 and controller3 using rsync:

```
[root@controller1 ~]# rsync -av /etc/keystone/ssl/ controller2:/etc/keystone/ssl/
[root@controller1 ~]# rsync -av /etc/keystone/ssl/ controller3:/etc/keystone/ssl/
```

Make sure the keystone user is the owner of the newly copied files on controller2 and controller3:

```
[root@controller2 ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@controller3 ~]# chown -R keystone:keystone /etc/keystone/ssl/
```

Create a systemd resource for the Keystone service; use --clone to ensure it runs in an active-active configuration:

```
[root@controller1 ~]# pcs resource create keystone systemd:openstack-keystone op monitor start-delay=10s --clone
```

Create the endpoint and user account for Keystone with the Keystone VIP as given:

```
[root@controller1 ~]# export SERVICE_ENDPOINT="http://vip-keystone:35357/v2.0"
[root@controller1 ~]# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
[root@controller1 ~]# keystone endpoint-create --service keystone --publicurl 'http://vip-keystone:5000/v2.0' --adminurl 'http://vip-keystone:35357/v2.0' --internalurl 'http://vip-keystone:5000/v2.0'
[root@controller1 ~]# keystone user-create --name admin --pass keystonetest
[root@controller1 ~]# keystone role-create --name admin
[root@controller1 ~]# keystone tenant-create --name admin
[root@controller1 ~]# keystone user-role-add --user admin --role admin --tenant admin
```

On all controller nodes, create a keystonerc_admin file with the OpenStack admin credentials, using the Keystone VIP:

```
cat > ~/keystonerc_admin << EOF
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://vip-keystone:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
EOF
```

Source the keystonerc_admin credentials file to be able to run authenticated OpenStack commands:

```
[root@controller1 ~]# source ~/keystonerc_admin
```

At this point, you should be able to execute Keystone commands and create the Services tenant:

```
[root@controller1 ~]# keystone tenant-create --name services --description "Services Tenant"
```

Summary

In this article, we have covered the installation of Pacemaker and HAProxy, the configuration of a Galera cluster for MariaDB, the installation of RabbitMQ with mirrored queues, and the configuration of highly available OpenStack services.
Resources for Article:

Further resources on this subject:

Using the OpenStack Dashboard [article]
Installing OpenStack Swift [article]
Architecture and Component Overview [article]

Finding Your Way

Packt
21 Sep 2015
19 min read
This article by Ray Barrera, the author of Unity AI Game Programming Second Edition, covers the following topics:

The A* pathfinding algorithm
A custom A* pathfinding implementation

(For more resources related to this topic, see here.)

A* Pathfinding

We'll implement the A* algorithm in a Unity environment using C#. The A* pathfinding algorithm is widely used in games and interactive applications, even though other algorithms such as Dijkstra's exist, because of its simplicity and effectiveness.

Revisiting the A* algorithm

Let's review the A* algorithm again before we proceed to implement it in the next section. First, we'll need to represent the map in a traversable data structure. While many structures are possible, for this example we will use a 2D grid array. We'll implement the GridManager class later to handle this map information. Our GridManager class will keep a list of Node objects, which are basically the tiles of the 2D grid. So, we need to implement the Node class to handle things such as the node type (whether it's a traversable node or an obstacle), the cost to pass through it, the cost to reach the goal node, and so on.

We'll have two variables to store the nodes that have been processed and the nodes that still have to be processed. We'll call them the closed list and the open list, respectively. We'll implement that list type in the PriorityQueue class. Finally, the A* algorithm itself will be implemented in the AStar class. Let's take a look at it:

1. We begin at the starting node and put it in the open list.
2. As long as the open list has some nodes in it, we perform the following process:
   - Pick the first node from the open list and keep it as the current node. (This assumes that we've sorted the open list so that the first node has the least cost value, which will be mentioned at the end of the code.)
   - Get the neighboring nodes of this current node that are not obstacle types, such as a wall or canyon that can't be passed through.
   - For each neighbor node, check whether it is already in the closed list. If not, calculate the total cost (F) for the neighbor node using the formula F = G + H, where G is the total cost from the previous node to this node and H is the total cost from this node to the final target node.
   - Store this cost data in the neighbor node object. Also store the current node as its parent node; later, we'll use this parent data to trace back the actual path.
   - Put this neighbor node in the open list, and sort the open list in ascending order of the total cost to reach the target node.
   - If there are no more neighbor nodes to process, put the current node in the closed list and remove it from the open list.
   - Go back to step 2.

Once you have completed this process, your current node should be at the target goal node position, but only if there's an obstacle-free path from the start node to the goal node. If it is not at the goal node, there is no available path to the target node from the current node position. If there is a valid path, all we have to do now is trace back from the current node's parent node until we reach the start node again. This gives us a path list of all the nodes that we chose during the pathfinding process, ordered from the target node to the start node. We then just reverse this path list, since we want to know the path from the start node to the target goal node. This is a general overview of the algorithm we're going to implement in Unity using C#, so let's get started.
Implementation

We'll implement the preliminary classes that were mentioned before, namely the Node, GridManager, and PriorityQueue classes. Then, we'll use them in our main AStar class.

Implementing the Node class

The Node class will handle each tile object in our 2D grid, representing the map, as shown in the Node.cs file:

```
using UnityEngine;
using System.Collections;
using System;

public class Node : IComparable {
    public float nodeTotalCost;
    public float estimatedCost;
    public bool bObstacle;
    public Node parent;
    public Vector3 position;

    public Node() {
        this.estimatedCost = 0.0f;
        this.nodeTotalCost = 1.0f;
        this.bObstacle = false;
        this.parent = null;
    }

    public Node(Vector3 pos) {
        this.estimatedCost = 0.0f;
        this.nodeTotalCost = 1.0f;
        this.bObstacle = false;
        this.parent = null;
        this.position = pos;
    }

    public void MarkAsObstacle() {
        this.bObstacle = true;
    }
```

The Node class has properties such as the cost values (G and H), a flag to mark whether it is an obstacle, its position, and its parent node. nodeTotalCost is G, the movement cost from the starting node to this node so far, and estimatedCost is H, the total estimated cost from this node to the target goal node. We also have two simple constructors and a wrapper method to mark the node as an obstacle. Then, we implement the CompareTo method as shown in the following code:

```
    public int CompareTo(object obj) {
        Node node = (Node)obj;
        // A negative value means this object comes before obj in the sort order.
        if (this.estimatedCost < node.estimatedCost)
            return -1;
        // A positive value means this object comes after obj in the sort order.
        if (this.estimatedCost > node.estimatedCost)
            return 1;
        return 0;
    }
}
```

This method is important. Our Node class inherits from IComparable because we want to implement this CompareTo method. If you recall what we discussed in the previous algorithm section, you'll notice that we need to sort our list of nodes based on the total estimated cost. The ArrayList type has a method called Sort, which looks for the CompareTo method implemented on the objects in the list (in this case, our Node objects). So, we implement this method to sort the node objects based on their estimatedCost value. The IComparable.CompareTo method, which is a .NET framework feature, is documented at http://msdn.microsoft.com/en-us/library/system.icomparable.compareto.aspx.

Establishing the priority queue

The PriorityQueue class is a short and simple class that makes handling the nodes' ArrayList easier, as shown in the following PriorityQueue.cs class:

```
using UnityEngine;
using System.Collections;

public class PriorityQueue {
    private ArrayList nodes = new ArrayList();

    public int Length {
        get { return this.nodes.Count; }
    }

    public bool Contains(object node) {
        return this.nodes.Contains(node);
    }

    public Node First() {
        if (this.nodes.Count > 0) {
            return (Node)this.nodes[0];
        }
        return null;
    }

    public void Push(Node node) {
        this.nodes.Add(node);
        this.nodes.Sort();
    }

    public void Remove(Node node) {
        this.nodes.Remove(node);
        // Ensure the list is sorted
        this.nodes.Sort();
    }
}
```

The preceding code listing should be easy to understand. One thing to notice is that after adding or removing a node from the nodes' ArrayList, we call the Sort method. This calls the Node object's CompareTo method and sorts the nodes accordingly by their estimatedCost value.

Setting up our grid manager

The GridManager class handles all the properties of the grid representing the map.
We'll keep a singleton instance of the GridManager class, as we need only one object to represent the map, as shown in the following GridManager.cs file:

```
using UnityEngine;
using System.Collections;

public class GridManager : MonoBehaviour {
    private static GridManager s_Instance = null;

    public static GridManager instance {
        get {
            if (s_Instance == null) {
                s_Instance = FindObjectOfType(typeof(GridManager)) as GridManager;
                if (s_Instance == null)
                    Debug.Log("Could not locate a GridManager object. " +
                        "You have to have exactly one GridManager in the scene.");
            }
            return s_Instance;
        }
    }
```

We look for the GridManager object in our scene and, if found, keep it in the s_Instance static variable:

```
    public int numOfRows;
    public int numOfColumns;
    public float gridCellSize;
    public bool showGrid = true;
    public bool showObstacleBlocks = true;
    private Vector3 origin = new Vector3();
    private GameObject[] obstacleList;

    public Node[,] nodes { get; set; }

    public Vector3 Origin {
        get { return origin; }
    }
```

Next, we declare all the variables we'll need to represent our map, such as the number of rows and columns, the size of each grid tile, some Boolean flags to visualize the grid and obstacles, and an array to store all the nodes present in the grid, as shown in the following code:

```
    void Awake() {
        obstacleList = GameObject.FindGameObjectsWithTag("Obstacle");
        CalculateObstacles();
    }

    // Find all the obstacles on the map
    void CalculateObstacles() {
        nodes = new Node[numOfColumns, numOfRows];
        int index = 0;
        for (int i = 0; i < numOfColumns; i++) {
            for (int j = 0; j < numOfRows; j++) {
                Vector3 cellPos = GetGridCellCenter(index);
                Node node = new Node(cellPos);
                nodes[i, j] = node;
                index++;
            }
        }
        if (obstacleList != null && obstacleList.Length > 0) {
            // For each obstacle found on the map, record it in our list
            foreach (GameObject data in obstacleList) {
                int indexCell = GetGridIndex(data.transform.position);
                int col = GetColumn(indexCell);
                int row = GetRow(indexCell);
                nodes[row, col].MarkAsObstacle();
            }
        }
    }
```

We look for all the game objects with an Obstacle tag and put them in our obstacleList property. Then we set up our nodes' 2D array in the CalculateObstacles method. First, we just create the node objects with default properties. Right after that, we examine our obstacleList, convert each obstacle's position into row-column data, and mark the node at that index as an obstacle.

The GridManager class has a couple of helper methods to traverse the grid and get the grid cell data. The following are some of them with a brief description of what they do. The implementation is simple, so we won't go into the details.
The GetGridCellCenter method returns the position of the grid cell in world coordinates from the cell index, as shown in the following code:

```
    public Vector3 GetGridCellCenter(int index) {
        Vector3 cellPosition = GetGridCellPosition(index);
        cellPosition.x += (gridCellSize / 2.0f);
        cellPosition.z += (gridCellSize / 2.0f);
        return cellPosition;
    }

    public Vector3 GetGridCellPosition(int index) {
        int row = GetRow(index);
        int col = GetColumn(index);
        float xPosInGrid = col * gridCellSize;
        float zPosInGrid = row * gridCellSize;
        return Origin + new Vector3(xPosInGrid, 0.0f, zPosInGrid);
    }
```

The GetGridIndex method returns the grid cell index for a given world position:

```
    public int GetGridIndex(Vector3 pos) {
        if (!IsInBounds(pos)) {
            return -1;
        }
        pos -= Origin;
        int col = (int)(pos.x / gridCellSize);
        int row = (int)(pos.z / gridCellSize);
        return (row * numOfColumns + col);
    }

    public bool IsInBounds(Vector3 pos) {
        float width = numOfColumns * gridCellSize;
        float height = numOfRows * gridCellSize;
        return (pos.x >= Origin.x && pos.x <= Origin.x + width &&
                pos.z <= Origin.z + height && pos.z >= Origin.z);
    }
```

The GetRow and GetColumn methods return the row and column of the grid cell from the given index:

```
    public int GetRow(int index) {
        int row = index / numOfColumns;
        return row;
    }

    public int GetColumn(int index) {
        int col = index % numOfColumns;
        return col;
    }
```

Another important method is GetNeighbours, which is used by the AStar class to retrieve the neighboring nodes of a particular node:

```
    public void GetNeighbours(Node node, ArrayList neighbors) {
        Vector3 neighborPos = node.position;
        int neighborIndex = GetGridIndex(neighborPos);
        int row = GetRow(neighborIndex);
        int column = GetColumn(neighborIndex);

        // Bottom
        int leftNodeRow = row - 1;
        int leftNodeColumn = column;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);

        // Top
        leftNodeRow = row + 1;
        leftNodeColumn = column;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);

        // Right
        leftNodeRow = row;
        leftNodeColumn = column + 1;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);

        // Left
        leftNodeRow = row;
        leftNodeColumn = column - 1;
        AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);
    }

    void AssignNeighbour(int row, int column, ArrayList neighbors) {
        if (row != -1 && column != -1 && row < numOfRows && column < numOfColumns) {
            Node nodeToAdd = nodes[row, column];
            if (!nodeToAdd.bObstacle) {
                neighbors.Add(nodeToAdd);
            }
        }
    }
```

First, we retrieve the neighboring nodes of the current node in all four directions: left, right, top, and bottom. Then, inside the AssignNeighbour method, we check whether the node is an obstacle. If it's not, we push that neighbor node into the referenced array list, neighbors.
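As an aside (not part of the book's listing), this grid is 4-connected. If diagonal movement were wanted, one could append four more calls at the end of GetNeighbours, reusing the existing bounds and obstacle checks; the step cost computed in the AStar class is the Euclidean distance between the two nodes, so diagonal steps would automatically cost more than orthogonal ones:

```
        // Hypothetical extension: also consider the four diagonal neighbors.
        AssignNeighbour(row - 1, column - 1, neighbors);
        AssignNeighbour(row - 1, column + 1, neighbors);
        AssignNeighbour(row + 1, column - 1, neighbors);
        AssignNeighbour(row + 1, column + 1, neighbors);
```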
The next method is a debugging aid to visualize the grid and obstacle blocks:

```
    void OnDrawGizmos() {
        if (showGrid) {
            DebugDrawGrid(transform.position, numOfRows, numOfColumns, gridCellSize, Color.blue);
        }
        Gizmos.DrawSphere(transform.position, 0.5f);
        if (showObstacleBlocks) {
            Vector3 cellSize = new Vector3(gridCellSize, 1.0f, gridCellSize);
            if (obstacleList != null && obstacleList.Length > 0) {
                foreach (GameObject data in obstacleList) {
                    Gizmos.DrawCube(GetGridCellCenter(GetGridIndex(data.transform.position)), cellSize);
                }
            }
        }
    }

    public void DebugDrawGrid(Vector3 origin, int numRows, int numCols, float cellSize, Color color) {
        float width = (numCols * cellSize);
        float height = (numRows * cellSize);

        // Draw the horizontal grid lines
        for (int i = 0; i < numRows + 1; i++) {
            Vector3 startPos = origin + i * cellSize * new Vector3(0.0f, 0.0f, 1.0f);
            Vector3 endPos = startPos + width * new Vector3(1.0f, 0.0f, 0.0f);
            Debug.DrawLine(startPos, endPos, color);
        }

        // Draw the vertical grid lines
        for (int i = 0; i < numCols + 1; i++) {
            Vector3 startPos = origin + i * cellSize * new Vector3(1.0f, 0.0f, 0.0f);
            Vector3 endPos = startPos + height * new Vector3(0.0f, 0.0f, 1.0f);
            Debug.DrawLine(startPos, endPos, color);
        }
    }
}
```

Gizmos can be used to draw visual debugging and setup aids inside the editor's Scene view. The OnDrawGizmos method is called every frame by the engine, so if the debug flags showGrid and showObstacleBlocks are checked, we simply draw the grid with lines and the obstacles with cube gizmos. We won't go through the DebugDrawGrid method, which is quite simple. You can learn more about gizmos in the Unity reference documentation at http://docs.unity3d.com/Documentation/ScriptReference/Gizmos.html.

Diving into our A* implementation

The AStar class is the main class that will utilize the classes we have implemented so far. You can go back to the algorithm section if you want to review it. We start with our openList and closedList declarations, which are of the PriorityQueue type, as shown in the AStar.cs file:

```
using UnityEngine;
using System.Collections;

public class AStar {
    public static PriorityQueue closedList, openList;
```

Next, we implement a method called HeuristicEstimateCost to calculate the cost between two nodes. The calculation is simple: we find the direction vector between the two by subtracting one position vector from the other. The magnitude of this resultant vector gives the straight-line distance from the current node to the goal node:

```
    private static float HeuristicEstimateCost(Node curNode, Node goalNode) {
        Vector3 vecCost = curNode.position - goalNode.position;
        return vecCost.magnitude;
    }
```
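The Euclidean magnitude used above is only one possible heuristic. Because GetNeighbours returns only the four orthogonal neighbors, a Manhattan-distance heuristic is also a common fit for this kind of grid; as a hypothetical alternative (not part of the book's listing), it could look like the following:

```
    // Hypothetical alternative heuristic: Manhattan distance on the XZ plane.
    // On a 4-connected grid this never overestimates the remaining path length,
    // since any orthogonal path must cover at least |dx| + |dz| world units.
    private static float ManhattanEstimateCost(Node curNode, Node goalNode) {
        Vector3 diff = curNode.position - goalNode.position;
        return Mathf.Abs(diff.x) + Mathf.Abs(diff.z);
    }
```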
Next, we have our main FindPath method:

```
    public static ArrayList FindPath(Node start, Node goal) {
        openList = new PriorityQueue();
        openList.Push(start);
        start.nodeTotalCost = 0.0f;
        start.estimatedCost = HeuristicEstimateCost(start, goal);
        closedList = new PriorityQueue();
        Node node = null;
```

We initialize our open and closed lists and put the start node in the open list. Then we start processing the open list:

```
        while (openList.Length != 0) {
            node = openList.First();
            // Check if the current node is the goal node
            if (node.position == goal.position) {
                return CalculatePath(node);
            }
            // Create an ArrayList to store the neighboring nodes
            ArrayList neighbours = new ArrayList();
            GridManager.instance.GetNeighbours(node, neighbours);
            for (int i = 0; i < neighbours.Count; i++) {
                Node neighbourNode = (Node)neighbours[i];
                if (!closedList.Contains(neighbourNode)) {
                    float cost = HeuristicEstimateCost(node, neighbourNode);
                    float totalCost = node.nodeTotalCost + cost;
                    float neighbourNodeEstCost = HeuristicEstimateCost(neighbourNode, goal);
                    neighbourNode.nodeTotalCost = totalCost;
                    neighbourNode.parent = node;
                    neighbourNode.estimatedCost = totalCost + neighbourNodeEstCost;
                    if (!openList.Contains(neighbourNode)) {
                        openList.Push(neighbourNode);
                    }
                }
            }
            // Push the current node to the closed list
            closedList.Push(node);
            // and remove it from the open list
            openList.Remove(node);
        }
        if (node.position != goal.position) {
            Debug.LogError("Goal Not Found");
            return null;
        }
        return CalculatePath(node);
    }
```

This implementation closely resembles the algorithm that we discussed previously, so you can refer back to it if anything is unclear:

1. Get the first node from our openList. Remember, our openList is re-sorted every time a new node is added, so the first node is always the one with the least estimated cost to the goal node.
2. Check whether the current node is already at the goal node. If so, exit the while loop and build the path array.
3. Create an array list to store the neighboring nodes of the current node being processed. Use the GetNeighbours method to retrieve the neighbors from the grid.
4. For every node in the neighbors array, check whether it's already in closedList. If not, calculate the cost values, update the node properties with the new cost values and the parent node data, and put it in openList.
5. Push the current node to closedList and remove it from openList. Go back to step 1.

If there are no more nodes in openList, our current node should be at the target node, provided a valid path exists. Then, we just call the CalculatePath method with the current node as its parameter:

```
    private static ArrayList CalculatePath(Node node) {
        ArrayList list = new ArrayList();
        while (node != null) {
            list.Add(node);
            node = node.parent;
        }
        list.Reverse();
        return list;
    }
}
```

The CalculatePath method traces through each node's parent node object and builds an array list, giving a list of nodes from the target node to the start node. Since we want a path array from the start node to the target node, we just call the Reverse method. So, this is our AStar class. Next, we'll write a test script to exercise all of this and then set up a scene to use it in.

Implementing a TestCode class

This class will use the AStar class to find the path from the start node to the goal node, as shown in the following TestCode.cs file:

```
using UnityEngine;
using System.Collections;

public class TestCode : MonoBehaviour {
    private Transform startPos, endPos;
    public Node startNode { get; set; }
    public Node goalNode { get; set; }
    public ArrayList pathArray;
    GameObject objStartCube, objEndCube;
    private float elapsedTime = 0.0f;
    // Interval time between pathfinding runs
    public float intervalTime = 1.0f;
```

First, we set up the variables that we'll need to reference.
The pathArray will store the node array returned by the AStar.FindPath method:

```
    void Start() {
        objStartCube = GameObject.FindGameObjectWithTag("Start");
        objEndCube = GameObject.FindGameObjectWithTag("End");
        pathArray = new ArrayList();
        FindPath();
    }

    void Update() {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= intervalTime) {
            elapsedTime = 0.0f;
            FindPath();
        }
    }
```

In the Start method, we look for objects with the Start and End tags and initialize our pathArray. We then try to find a new path at every interval set in our intervalTime property, in case the positions of the start and end nodes have changed. Then, we call the FindPath method:

```
    void FindPath() {
        startPos = objStartCube.transform;
        endPos = objEndCube.transform;
        startNode = new Node(GridManager.instance.GetGridCellCenter(
            GridManager.instance.GetGridIndex(startPos.position)));
        goalNode = new Node(GridManager.instance.GetGridCellCenter(
            GridManager.instance.GetGridIndex(endPos.position)));
        pathArray = AStar.FindPath(startNode, goalNode);
    }
```

Since we implemented our pathfinding algorithm in the AStar class, finding a path has now become a lot simpler. First, we take the positions of our start and end game objects. Then, we create new Node objects using the GridManager helper methods GetGridIndex and GetGridCellCenter to locate their respective cells inside the grid. Once we have those, we just call the AStar.FindPath method with the start node and goal node and store the returned array list in the local pathArray property. Next, we implement the OnDrawGizmos method to draw and visualize the path found:

```
    void OnDrawGizmos() {
        if (pathArray == null)
            return;
        if (pathArray.Count > 0) {
            int index = 1;
            foreach (Node node in pathArray) {
                if (index < pathArray.Count) {
                    Node nextNode = (Node)pathArray[index];
                    Debug.DrawLine(node.position, nextNode.position, Color.green);
                    index++;
                }
            }
        }
    }
}
```

We loop through our pathArray and use the Debug.DrawLine method to draw lines connecting the nodes from the pathArray. With this, we'll be able to see a green line connecting the nodes from start to end, forming the path, when we run and test our program.

Setting up our sample scene

We are going to set up a scene that looks something like the following screenshot:

A sample test scene

We'll have a directional light, the start and end game objects, a few obstacle objects, a plane entity to be used as the ground, and two empty game objects to which we attach our GridManager and TestAStar scripts. This is our scene hierarchy:

The scene Hierarchy

Create a bunch of cube entities and tag them as Obstacle. We'll be looking for objects with this tag when running our pathfinding algorithm.

The Obstacle node

Create a cube entity and tag it as Start.

The Start node

Then, create another cube entity and tag it as End.

The End node

Now, create an empty game object and attach the GridManager script. Set its name to GridManager, because we use this name to look for the GridManager object from our script. Here, we can set up the number of rows and columns for our grid, as well as the size of each tile.

The GridManager script

Testing all the components

Let's hit the play button and see our A* pathfinding algorithm in action. By default, once you play the scene, Unity will switch to the Game view. Since our pathfinding visualization code is written for the debug draw in the editor view, you'll need to switch back to the Scene view or enable Gizmos to see the path found.
Found path one

Now, try to move the start or end node around in the scene using the editor's movement gizmo (not in the Game view, but the Scene view).

Found path two

You should see the path updated dynamically, in real time, as long as there's a valid path from the start node to the target goal node. You'll get an error message in the console window if there's no path available.

Summary

In this article, we learned how to implement our own simple A* Pathfinding system. To achieve this, we first implemented the Node class and established the priority queue. Then, we moved on to setting up the grid manager. After that, we dived deeper by implementing a TestCode class and setting up our sample scene. Finally, we tested all the components.

Resources for Article: Further resources on this subject: Saying Hello to Unity and Android [article] Enemy and Friendly AIs [article] Customizing skin with GUISkin [article]

Introducing JAX-RS API

Packt
21 Sep 2015
25 min read
 In this article by Jobinesh Purushothaman, author of the book, RESTful Java Web Services, Second Edition, we will see that there are many tools and frameworks available in the market today for building RESTful web services. There are some recent developments with respect to the standardization of various framework APIs by providing unified interfaces for a variety of implementations. Let's take a quick look at this effort. (For more resources related to this topic, see here.) As you may know, Java EE is the industry standard for developing portable, robust, scalable, and secure server-side Java applications. The Java EE 6 release took the first step towards standardizing RESTful web service APIs by introducing a Java API for RESTful web services (JAX-RS). JAX-RS is an integral part of the Java EE platform, which ensures portability of your REST API code across all Java EE-compliant application servers. The first release of JAX-RS was based on JSR 311. The latest version is JAX-RS 2 (based on JSR 339), which was released as part of the Java EE 7 platform. There are multiple JAX-RS implementations available today by various vendors. Some of the popular JAX-RS implementations are as follows: Jersey RESTful web service framework: This framework is an open source framework for developing RESTful web services in Java. It serves as a JAX-RS reference implementation. You can learn more about this project at https://jersey.java.net. Apache CXF: This framework is an open source web services framework. CXF supports both JAX-WS and JAX-RS web services. To learn more about CXF, refer to http://cxf.apache.org. RESTEasy: This framework is an open source project from JBoss, which provides various modules to help you build a RESTful web service. To learn more about RESTEasy, refer to http://resteasy.jboss.org. Restlet: This framework is a lightweight, open source RESTful web service framework. It has good support for building both scalable RESTful web service APIs and lightweight REST clients, which suits mobile platforms well. You can learn more about Restlet at http://restlet.com. Remember that you are not locked down to any specific vendor here, the RESTful web service APIs that you build using JAX-RS will run on any JAX-RS implementation as long as you do not use any vendor-specific APIs in the code. JAX-RS annotations                                      The main goal of the JAX-RS specification is to make the RESTful web service development easier than it has been in the past. As JAX-RS is a part of the Java EE platform, your code becomes portable across all Java EE-compliant servers. Specifying the dependency of the JAX-RS API To use JAX-RS APIs in your project, you need to add the javax.ws.rs-api JAR file to the class path. If the consuming project uses Maven for building the source, the dependency entry for the javax.ws.rs-api JAR file in the Project Object Model (POM) file may look like the following: <dependency> <groupId>javax.ws.rs</groupId> <artifactId>javax.ws.rs-api</artifactId> <version>2.0.1</version><!-- set the tight version --> <scope>provided</scope><!-- compile time dependency --> </dependency> Using JAX-RS annotations to build RESTful web services Java annotations provide the metadata for your Java class, which can be used during compilation, during deployment, or at runtime in order to perform designated tasks. The use of annotations allows us to create RESTful web services as easily as we develop a POJO class. 
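To get a feel for how little code this involves, here is a minimal, hypothetical example made up of two small classes, each in its own source file. The HelloResource class, the hello path, and the webapi application path are illustrative placeholders rather than part of the book's sample application; a plain Java class becomes a REST resource purely through annotations:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

//A POJO turned into a REST resource purely through annotations
@Path("hello")
public class HelloResource {
  @GET
  @Produces(MediaType.TEXT_PLAIN)
  public String sayHello() {
    return "Hello, JAX-RS!";
  }
}

And the application activator, which needs no overridden methods when relying on annotation scanning:

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

//Registers the JAX-RS runtime under the /webapi base path
@ApplicationPath("webapi")
public class RestApplication extends Application {
}

Deployed on a Java EE 7-compliant server, a GET request to http://host:port/<context-root>/webapi/hello should then return the plain-text greeting.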
Here, we leave the interception of the HTTP requests and representation negotiations to the framework and concentrate on the business rules necessary to solve the problem at hand. If you are not familiar with Java annotations, go through the tutorial available at http://docs.oracle.com/javase/tutorial/java/annotations/. Annotations for defining a RESTful resource REST resources are the fundamental elements of any RESTful web service. A REST resource can be defined as an object that is of a specific type with the associated data and is optionally associated to other resources. It also exposes a set of standard operations corresponding to the HTTP method types such as the HEAD, GET, POST, PUT, and DELETE methods. @Path The @javax.ws.rs.Path annotation indicates the URI path to which a resource class or a class method will respond. The value that you specify for the @Path annotation is relative to the URI of the server where the REST resource is hosted. This annotation can be applied at both the class and the method levels. A @Path annotation value is not required to have leading or trailing slashes (/), as you may see in some examples. The JAX-RS runtime will parse the URI path templates in the same way even if they have leading or trailing slashes. Specifying the @Path annotation on a resource class The following code snippet illustrates how you can make a POJO class respond to a URI path template containing the /departments path fragment: import javax.ws.rs.Path; @Path("departments") public class DepartmentService { //Rest of the code goes here } The /department path fragment that you see in this example is relative to the base path in the URI. The base path typically takes the following URI pattern: http://host:port/<context-root>/<application-path>. Specifying the @Path annotation on a resource class method The following code snippet shows how you can specify @Path on a method in a REST resource class. Note that for an annotated method, the base URI is the effective URI of the containing class. For instance, you will use the URI of the following form to invoke the getTotalDepartments() method defined in the DepartmentService class: /departments/count, where departments is the @Path annotation set on the class. import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; @Path("departments") public class DepartmentService { @GET @Path("count") @Produces("text/plain") public Integer getTotalDepartments() { return findTotalRecordCount(); } //Rest of the code goes here } Specifying variables in the URI path template It is very common that a client wants to retrieve data for a specific object by passing the desired parameter to the server. JAX-RS allows you to do this via the URI path variables as discussed here. The URI path template allows you to define variables that appear as placeholders in the URI. These variables would be replaced at runtime with the values set by the client. The following example illustrates the use of the path variable to request for a specific department resource. The URI path template looks like /departments/{id}. At runtime, the client can pass an appropriate value for the id parameter to get the desired resource from the server. For instance, the URI path of the /departments/10 format returns the IT department details to the caller. The following code snippet illustrates how you can pass the department ID as a path variable for deleting a specific department record. The path URI looks like /departments/10. 
import javax.ws.rs.Path; import javax.ws.rs.DELETE; @Path("departments") public class DepartmentService { @DELETE @Path("{id}") public void removeDepartment(@PathParam("id") short id) { removeDepartmentEntity(id); } //Other methods removed for brevity } In the preceding code snippet, the @PathParam annotation is used for copying the value of the path variable to the method parameter. Restricting values for path variables with regular expressions JAX-RS lets you use regular expressions in the URI path template for restricting the values set for the path variables at runtime by the client. By default, the JAX-RS runtime ensures that all the URI variables match the following regular expression: [^/]+?. The default regular expression allows the path variable to take any character except the forward slash (/). What if you want to override this default regular expression imposed on the path variable values? Good news is that JAX-RS lets you specify your own regular expression for the path variables. For example, you can set the regular expression as given in the following code snippet in order to ensure that the department name variable present in the URI path consists only of lowercase and uppercase alphanumeric characters: @DELETE @Path("{name: [a-zA-Z][a-zA-Z_0-9]}") public void removeDepartmentByName(@PathParam("name") String deptName) { //Method implementation goes here } If the path variable does not match the regular expression set of the resource class or method, the system reports the status back to the caller with an appropriate HTTP status code, such as 404 Not Found, which tells the caller that the requested resource could not be found at this moment. Annotations for specifying request-response media types The Content-Type header field in HTTP describes the body's content type present in the request and response messages. The content types are represented using the standard Internet media types. A RESTful web service makes use of this header field to indicate the type of content in the request or response message body. JAX-RS allows you to specify which Internet media types of representations a resource can produce or consume by using the @javax.ws.rs.Produces and @javax.ws.rs.Consumes annotations, respectively. @Produces The @javax.ws.rs.Produces annotation is used for defining the Internet media type(s) that a REST resource class method can return to the client. You can define this either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can produce are as follows: application/atom+xml application/json application/octet-stream application/svg+xml application/xhtml+xml application/xml text/html text/plain text/xml The following example uses the @Produces annotation at the class level in order to set the default response media type as JSON for all resource methods in this class. At runtime, the binding provider will convert the Java representation of the return value to the JSON format. import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; @Path("departments") @Produces(MediaType.APPLICATION_JSON) public class DepartmentService{ //Class implementation goes here... } @Consumes The @javax.ws.rs.Consumes annotation defines the Internet media type(s) that the resource class methods can accept. 
You can define the @Consumes annotation either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can consume are as follows: application/atom+xml application/json application/octet-stream application/svg+xml application/xhtml+xml application/xml text/html text/plain text/xml multipart/form-data application/x-www-form-urlencoded The following example illustrates how you can use the @Consumes attribute to designate a method in a class to consume a payload presented in the JSON media type. The binding provider will copy the JSON representation of an input message to the Department parameter of the createDepartment() method. import javax.ws.rs.Consumes; import javax.ws.rs.core.MediaType; import javax.ws.rs.POST; @POST @Consumes(MediaType.APPLICATION_JSON) public void createDepartment(Department entity) { //Method implementation goes here… } The javax.ws.rs.core.MediaType class defines constants for all media types supported in JAX-RS. To learn more about the MediaType class, visit the API documentation available at http://docs.oracle.com/javaee/7/api/javax/ws/rs/core/MediaType.html. Annotations for processing HTTP request methods In general, RESTful web services communicate over HTTP with the standard HTTP verbs (also known as method types) such as GET, PUT, POST, DELETE, HEAD, and OPTIONS. @GET A RESTful system uses the HTTP GET method type for retrieving the resources referenced in the URI path. The @javax.ws.rs.GET annotation designates a method of a resource class to respond to the HTTP GET requests. The following code snippet illustrates the use of the @GET annotation to make a method respond to the HTTP GET request type. In this example, the REST URI for accessing the findAllDepartments() method may look like /departments. The complete URI path may take the following URI pattern: http://host:port/<context-root>/<application-path>/departments. //imports removed for brevity @Path("departments") public class DepartmentService { @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartments() { //Find all departments from the data store List<Department> departments = findAllDepartmentsFromDB(); return departments; } //Other methods removed for brevity } @PUT The HTTP PUT method is used for updating or creating the resource pointed by the URI. The @javax.ws.rs.PUT annotation designates a method of a resource class to respond to the HTTP PUT requests. The PUT request generally has a message body carrying the payload. The value of the payload could be any valid Internet media type such as the JSON object, XML structure, plain text, HTML content, or binary stream. When a request reaches a server, the framework intercepts the request and directs it to the appropriate method that matches the URI path and the HTTP method type. The request payload will be mapped to the method parameter as appropriate by the framework. The following code snippet shows how you can use the @PUT annotation to designate the editDepartment() method to respond to the HTTP PUT request. 
The payload present in the message body will be converted and copied to the department parameter by the framework: @PUT @Path("{id}") @Consumes(MediaType.APPLICATION_JSON) public void editDepartment(@PathParam("id") Short id, Department department) { //Updates department entity to data store updateDepartmentEntity(id, department); } @POST The HTTP POST method posts data to the server. Typically, this method type is used for creating a resource. The @javax.ws.rs.POST annotation designates a method of a resource class to respond to the HTTP POST requests. The following code snippet shows how you can use the @POST annotation to designate the createDepartment() method to respond to the HTTP POST request. The payload present in the message body will be converted and copied to the department parameter by the framework: @POST public void createDepartment(Department department) { //Create department entity in data store createDepartmentEntity(department); } @DELETE The HTTP DELETE method deletes the resource pointed by the URI. The @javax.ws.rs.DELETE annotation designates a method of a resource class to respond to the HTTP DELETE requests. The following code snippet shows how you can use the @DELETE annotation to designate the removeDepartment() method to respond to the HTTP DELETE request. The department ID is passed as the path variable in this example. @DELETE @Path("{id}") public void removeDepartment(@PathParam("id") Short id) { //remove department entity from data store removeDepartmentEntity(id); } @HEAD The @javax.ws.rs.HEAD annotation designates a method to respond to the HTTP HEAD requests. This method is useful for retrieving the metadata present in the response headers, without having to retrieve the message body from the server. You can use this method to check whether a URI pointing to a resource is active or to check the content size by using the Content-Length response header field, and so on. The JAX-RS runtime will offer the default implementations for the HEAD method type if the REST resource is missing explicit implementation. The default implementation provided by runtime for the HEAD method will call the method designated for the GET request type, ignoring the response entity retuned by the method. @OPTIONS The @javax.ws.rs.OPTIONS annotation designates a method to respond to the HTTP OPTIONS requests. This method is useful for obtaining a list of HTTP methods allowed on a resource. The JAX-RS runtime will offer a default implementation for the OPTIONS method type, if the REST resource is missing an explicit implementation. The default implementation offered by the runtime sets the Allow response header to all the HTTP method types supported by the resource. Annotations for accessing request parameters You can use this offering to extract the following parameters from a request: a query, URI path, form, cookie, header, and matrix. Mostly, these parameters are used in conjunction with the GET, POST, PUT, and DELETE methods. @PathParam A URI path template, in general, has a URI part pointing to the resource. It can also take the path variables embedded in the syntax; this facility is used by clients to pass parameters to the REST APIs as appropriate. The @javax.ws.rs.PathParam annotation injects (or binds) the value of the matching path parameter present in the URI path template into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. 
Typically, this annotation is used in conjunction with the HTTP method type annotations such as @GET, @POST, @PUT, and @DELETE. The following example illustrates the use of the @PathParam annotation to read the value of the path parameter, id, into the deptId method parameter. The URI path template for this example looks like /departments/{id}: //Other imports removed for brevity javax.ws.rs.PathParam @Path("departments") public class DepartmentService { @DELETE @Path("{id}") public void removeDepartment(@PathParam("id") Short deptId) { removeDepartmentEntity(deptId); } //Other methods removed for brevity } The REST API call to remove the department resource identified by id=10 looks like DELETE /departments/10 HTTP/1.1. We can also use multiple variables in a URI path template. For example, we can have the URI path template embedding the path variables to query a list of departments from a specific city and country, which may look like /departments/{country}/{city}. The following code snippet illustrates the use of @PathParam to extract variable values from the preceding URI path template: @Produces(MediaType.APPLICATION_JSON) @Path("{country}/{city} ") public List<Department> findAllDepartments( @PathParam("country") String countyCode, @PathParam("city") String cityCode) { //Find all departments from the data store for a country //and city List<Department> departments = findAllMatchingDepartmentEntities(countyCode, cityCode ); return departments; } @QueryParam The @javax.ws.rs.QueryParam annotation injects the value(s) of a HTTP query parameter into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example illustrates the use of @QueryParam to extract the value of the desired query parameter present in the URI. This example extracts the value of the query parameter, name, from the request URI and copies the value into the deptName method parameter. The URI that accesses the IT department resource looks like /departments?name=IT: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsByName(@QueryParam("name") String deptName) { List<Department> depts= findAllMatchingDepartmentEntities (deptName); return depts; } @MatrixParam Matrix parameters are another way of defining parameters in the URI path template. The matrix parameters take the form of name-value pairs in the URI path, where each pair is preceded by semicolon (;). For instance, the URI path that uses a matrix parameter to list all departments in Bangalore city looks like /departments;city=Bangalore. The @javax.ws.rs.MatrixParam annotation injects the matrix parameter value into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet demonstrates the use of the @MatrixParam annotation to extract the matrix parameters present in the request. The URI path used in this example looks like /departments;name=IT;city=Bangalore. @GET @Produces(MediaType.APPLICATION_JSON) @Path("matrix") public List<Department> findAllDepartmentsByNameWithMatrix(@MatrixParam("name") String deptName, @MatrixParam("city") String locationCode) { List<Department> depts=findAllDepartmentsFromDB(deptName, city); return depts; } You can use PathParam, QueryParam, and MatrixParam to pass the desired search parameters to the REST APIs. Now, you may ask when to use what? 
Although there are no strict rules here, a very common practice followed by many is to use PathParam to drill down to the entity class hierarchy. For example, you may use the URI of the following form to identify an employee working in a specific department: /departments/{dept}/employees/{id}. QueryParam can be used for specifying attributes to locate the instance of a class. For example, you may use URI with QueryParam to identify employees who have joined on January 1, 2015, which may look like /employees?doj=2015-01-01. The MatrixParam annotation is not used frequently. This is useful when you need to make a complex REST style query to multiple levels of resources and subresources. MatrixParam is applicable to a particular path element, while the query parameter is applicable to the entire request. @HeaderParam The HTTP header fields provide necessary information about the request and response contents in HTTP. For example, the header field, Content-Length: 348, for an HTTP request says that the size of the request body content is 348 octets (8-bit bytes). The @javax.ws.rs.HeaderParam annotation injects the header values present in the request into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example extracts the referrer header parameter and logs it for audit purposes. The referrer header field in HTTP contains the address of the previous web page from which a request to the currently processed page originated: @POST public void createDepartment(@HeaderParam("Referer") String referer, Department entity) { logSource(referer); createDepartmentInDB(department); } Remember that HTTP provides a very wide selection of headers that cover most of the header parameters that you are looking for. Although you can use custom HTTP headers to pass some application-specific data to the server, try using standard headers whenever possible. Further, avoid using a custom header for holding properties specific to a resource, or the state of the resource, or parameters directly affecting the resource. @CookieParam The @javax.ws.rs.CookieParam annotation injects the matching cookie parameters present in the HTTP headers into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet uses the Default-Dept cookie parameter present in the request to return the default department details: @GET @Path("cook") @Produces(MediaType.APPLICATION_JSON) public Department getDefaultDepartment(@CookieParam("Default-Dept") short departmentId) { Department dept=findDepartmentById(departmentId); return dept; } @FormParam The @javax.ws.rs.FormParam annotation injects the matching HTML form parameters present in the request body into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The request body carrying the form elements must have the content type specified as application/x-www-form-urlencoded. Consider the following HTML form that contains the data capture form for a department entity. 
This form allows the user to enter the department entity details: <!DOCTYPE html> <html> <head> <title>Create Department</title> </head> <body> <form method="POST" action="/resources/departments"> Department Id: <input type="text" name="departmentId"> <br> Department Name: <input type="text" name="departmentName"> <br> <input type="submit" value="Add Department" /> </form> </body> </html> Upon clicking on the submit button on the HTML form, the department details that you entered will be posted to the REST URI, /resources/departments. The following code snippet shows the use of the @FormParam annotation for extracting the HTML form fields and copying them to the resource class method parameter: @Path("departments") public class DepartmentService { @POST //Specifies content type as //"application/x-www-form-urlencoded" @Consumes(MediaType.APPLICATION_FORM_URLENCODED) public void createDepartment(@FormParam("departmentId") short departmentId, @FormParam("departmentName") String departmentName) { createDepartmentEntity(departmentId, departmentName); } } @DefaultValue The @javax.ws.rs.DefaultValue annotation specifies a default value for the request parameters accessed using one of the following annotations: PathParam, QueryParam, MatrixParam, CookieParam, FormParam, or HeaderParam. The default value is used if no matching parameter value is found for the variables annotated using one of the preceding annotations. The following REST resource method will make use of the default value set for the from and to method parameters if the corresponding query parameters are found missing in the URI path: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsInRange (@DefaultValue("0") @QueryParam("from") Integer from, @DefaultValue("100") @QueryParam("to") Integer to) { findAllDepartmentEntitiesInRange(from, to); } @Context The JAX-RS runtime offers different context objects, which can be used for accessing information associated with the resource class, operating environment, and so on. You may find various context objects that hold information associated with the URI path, request, HTTP header, security, and so on. Some of these context objects also provide the utility methods for dealing with the request and response content. JAX-RS allows you to reference the desired context objects in the code via dependency injection. JAX-RS provides the @javax.ws.rs.Context annotation that injects the matching context object into the target field. You can specify the @Context annotation on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example illustrates the use of the @Context annotation to inject the javax.ws.rs.core.UriInfo context object into a method variable. The UriInfo instance provides access to the application and request URI information. 
This example uses UriInfo to read the query parameter present in the request URI path template, /departments/IT: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsByName( @Context UriInfo uriInfo){ String deptName = uriInfo.getPathParameters().getFirst("name"); List<Department> depts= findAllMatchingDepartmentEntities (deptName); return depts; } Here is a list of the commonly used classes and interfaces, which can be injected using the @Context annotation: javax.ws.rs.core.Application: This class defines the components of a JAX-RS application and supplies additional metadata javax.ws.rs.core.UriInfo: This interface provides access to the application and request URI information javax.ws.rs.core.Request: This interface provides a method for request processing such as reading the method type and precondition evaluation. javax.ws.rs.core.HttpHeaders: This interface provides access to the HTTP header information javax.ws.rs.core.SecurityContext: This interface provides access to security-related information javax.ws.rs.ext.Providers: This interface offers the runtime lookup of a provider instance such as MessageBodyReader, MessageBodyWriter, ExceptionMapper, and ContextResolver javax.ws.rs.ext.ContextResolver<T>: This interface supplies the requested context to the resource classes and other providers javax.servlet.http.HttpServletRequest: This interface provides the client request information for a servlet javax.servlet.http.HttpServletResponse: This interface is used for sending a response to a client javax.servlet.ServletContext: This interface provides methods for a servlet to communicate with its servlet container javax.servlet.ServletConfig: This interface carries the servlet configuration parameters @BeanParam The @javax.ws.rs.BeanParam annotation allows you to inject all matching request parameters into a single bean object. The @BeanParam annotation can be set on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The bean class can have fields or properties annotated with one of the request parameter annotations, namely @PathParam, @QueryParam, @MatrixParam, @HeaderParam, @CookieParam, or @FormParam. Apart from the request parameter annotations, the bean can have the @Context annotation if there is a need. Consider the example that we discussed for @FormParam. The createDepartment() method that we used in that example has two parameters annotated with @FormParam: public void createDepartment( @FormParam("departmentId") short departmentId, @FormParam("departmentName") String departmentName) Let's see how we can use @BeanParam for the preceding method to give a more logical, meaningful signature by grouping all the related fields into an aggregator class, thereby avoiding too many parameters in the method signature. 
The DepartmentBean class that we use for this example is as follows: public class DepartmentBean { @FormParam("departmentId") private short departmentId; @FormParam("departmentName") private String departmentName; //getter and setter for the above fields //are not shown here to save space } The following code snippet demonstrates the use of the @BeanParam annotation to inject the DepartmentBean instance that contains all the FormParam values extracted from the request message body: @POST public void createDepartment(@BeanParam DepartmentBean deptBean) { createDepartmentEntity(deptBean.getDepartmentId(), deptBean.getDepartmentName()); } @Encoded By default, the JAX-RS runtime decodes all request parameters before injecting the extracted values into the target variables annotated with one of the following annotations: @FormParam, @PathParam, @MatrixParam, or @QueryParam. You can use @javax.ws.rs.Encoded to disable the automatic decoding of the parameter values. With the @Encoded annotation, the value of parameters will be provided in the encoded form itself. This annotation can be used on a class, method, or parameters. If you set this annotation on a method, it will disable decoding for all parameters defined for this method. You can use this annotation on a class to disable decoding for all parameters of all methods. In the following example, the value of the path parameter called name is injected into the method parameter in the URL encoded form (without decoding). The method implementation should take care of the decoding of the values in such cases: @GET @Produces(MediaType.APPLICATION_JSON) public List<Department> findAllDepartmentsByName(@QueryParam("name") String deptName) { //Method body is removed for brevity } URL encoding converts a string into a valid URL format, which may contain alphabetic characters, numerals, and some special characters supported in the URL string. To learn about the URL specification, visit http://www.w3.org/Addressing/URL/url-spec.html. Summary With the use of annotations, the JAX-RS API provides a simple development model for RESTful web service programming. In case you are interested in knowing other Java RESTful Web Services books that Packt has in store for you, here is the link: RESTful Java Web Services, Jose Sandoval RESTful Java Web Services Security, René Enríquez, Andrés Salazar C Resources for Article: Further resources on this subject: The Importance of Securing Web Services[article] Understanding WebSockets and Server-sent Events in Detail[article] Adding health checks [article]

Adding Fog to Your Games

Packt
21 Sep 2015
8 min read
In this article by Muhammad A.Moniem, author of the book Unreal Engine Lighting and Rendering Essentials speaks about rendering without mentioning one of the most and old (but important) rendering features since the rise of the 3D rendering. Fog effects have always been an essential part of any rendering engines regardless of the main goal of that engine. However, in games, it is a must to have this feature, not only because of the ambiance and feel it will give to the game, but because it will minimize the draw distance while rendering the large and open areas, which is great performance wise! The fog effects can be used for a lot of purposes, starting from adding ambiance to the world to setting a global mood (perhaps scary), to simulating a real environment, or even to distracting the players. By the end of this little article, you'll be able to: Understand both the fog types in Unreal Engine Understand the difference between both the fog types Master all the parameters to control the fog types Having said this, let's get started! (For more resources related to this topic, see here.) The fog types Unreal Engine provides the user with two varieties of fog; each has its own set of parameters to modify and provide different results of effects. The two supported fog types are as follows: The Atmospheric Fog The Exponential Height Fog The Atmospheric Fog The Atmospheric Fog gives an approximation of light scattering through a planetary atmosphere. It is the best fog method that can be used with a natural environment scene, such as landscape scenes. One of the most core features of this fog is that it gives your directional light a sun disc effect. Adding it to your game By adding an actor from the Visual Effects section of the Modes panel, or even from the actor's context menu by right-clicking on the scene view, you can install the Atmospheric Fog in your level directly. In the Visual Effects submenu of the Modes panel, you can find both the fog types listed here. In order to be able to control the quality of the final visual look of the recently inserted fog, you will have to do some tweaks for its properties attached to the actor. Sun Multiplier: This is an overall multiplier for the directional light's brightness. Increasing this value will not only brighten the fog color, but will also brighten the sky color as well. Fog Multiplier: This is a multiplier that affects only the fog color (does not affect the directional light). Density Multiplier: This is a fog density multiplier (does not affect the directional light). Density Offset: This is a fog opacity controller. Distance Scale: This is a distance factor that is compared to the Unreal unit scale. This value is more effective for a very small world. As the world size increases, you will need to increase this value too, as larger values cause changes in the fog attenuation to take place faster. Altitude Scale: This is the scale along the z axis. Distance Offset: This is the distance offset, calculated in km, is used to manage the large distances. Ground Offset: This is an offset for the sea level. (normally, the sea level is 0, and as the fog system does not work for regions below the sea level, you need to make sure that all the terrain remains above this value in order to guarantee that the fog works.) Start Distance: This is the distance from the camera lens that the fog will start from. 
Sun Disk Scale: This is the size of the sun disk, but keep in mind that this can't be 0, as earlier there was an option to disable the sun disk, but in order to keep it real, Epic decided to remove this option and keep the sun disk, but it gives you the chance to make it as small as possible. Precompute Params: The properties included in this group need recomputation of precomputed texture data: Density Height: This is the fog density decay height controller. The lower the values, the denser the fog will be, while the higher the values, the less scatter the fog will have. Max Scattering Num: This sets a limit on the number of scattering calculations. Inscatter Altitude Sample Number: This is the number of different altitudes at which you can sample inscatter color. The Exponential Height Fog This type of fog has its own unique requirement. While the Atmospheric Fog can be added anytime or anywhere and it works, the Exponential Height Fog requires a special type of map where there are low and high bounds, as its mechanic includes creating more density in the low places of a map and less density in the high places of the map. Between both these areas, there will be a smooth transition. One of the most interesting features of the Exponential Height Fog is that is has two fog colors: one for the hemisphere facing the dominant directional light and another color for the opposite hemisphere. Adding it to your game As mentioned earlier, to add the volume type from the same Visual Effects section of the Modes panel is very simple. You can select the Exponential Height Fog actor and drag and drop it into the scene. As you can see, even the icon implies the high and low places from the sea level. In order to be able to control the final visual look of the recently inserted fog, you would have to do some tweaks for its properties attached to the actor: Fog Density: This is the global density controller of the fog. Fog Inscattering Color: This is the inscattering color for the fog (the primary color). In the following image, you can see how different values work: Fog Height Falloff: This is the Height density controller that controls how the density increases as the height decreases. Fog Max Opacity: This controls the maximum opacity of the fog. A value of 0 means the fog will be invisible. Start Distance: This is the distance from the camera where the fog will start. Directional Inscattering Exponent: This controls the size of the directional inscattering cone. The higher the value, the clearer vision you get, while the lower the value, the more fog dense you get. Directional Inscattering Start Distance: This controls the start distance from the viewer of the directional inscattering. Directional Inscattering Color: This sets the color for directional inscattering that is used to approximate inscattering from a directional light. Visible: This controls the fog visibility. Actor Hidden in Game: This enables or disables the fog in the game (it will not affect the editing mode). Editor Billboard Scale: This is the scale of the billboard components in the editor. The animated fog Almost like any other thing in Unreal Engine, you can do some animations for it. Some parts of the engine are super responsive to the animation system, while other parts have a limited access. However, speaking of the fog, it has a limited access in order to animate some values. You can use different ways and methods to animate values at runtime or even during the edit mode. 
The color

The height fog color can be changed at runtime using the LinearColor Property Track in the Matinee Editor. By performing the following steps, you can change the height fog color in the game:

1. Create a new Matinee Actor.
2. Open the newly created actor in the Matinee Editor.
3. Create a Height Fog Actor.
4. Create a group in Matinee.
5. Attach the Height Fog Actor from the scene to the group created in the previous step.
6. Create a linear color property track in the group.
7. Choose the FogInscatteringColor or DirectionalInscatteringColor to control its value (using two colors is an advantage of that fog type, remember!).
8. Add keyframes to the track, and set the color for them.

Animating the Exponential Height Fog

In order to animate the Exponential Height Fog, you can use one of the following two ways:

Use Matinee to animate the Exponential Height Fog Actor values
Use a timeline node in the Level Blueprint and control the Exponential Height Fog Actor values

Summary

In this article, you learned about the fog effects and the supported types in the Unreal Editor, the different parameters, and how to use each of the fog types. It is recommended that you now go directly to your editor, start adding some fog, and play with its values. Even better, try animating the parameters as mentioned earlier. Don't test only in the Edit mode; sometimes the results are different when you hit play, and even more different when you cook a build, so feel free to build any level you made into an executable and check the results.

Resources for Article: Further resources on this subject: Exploring and Interacting with Materials using Blueprints [article] Creating a Brick Breaking Game [article] The Unreal Engine [article]

Modeling complex functions with artificial neural networks

Packt
21 Sep 2015
13 min read
 In this article by Sebastian Raschka, the author of Python Machine Learning, we will take a look at the concept of multilayer artificial neural networks, which was inspired by hypotheses and models of how the human brain works to solve complex problem tasks. (For more resources related to this topic, see here.) Although artificial neural networks gained a lot of popularity in the recent years, early studies of neural networks goes back to the 1940s, when Warren McCulloch and Walter Pitt first described the concept of how neurons may work. However, the decades that followed saw the first implementation of the McCulloch-Pitt neuron model, Rosenblatt's perceptron in the 1950s. Many researchers and machine learning practitioners slowly began to lose interest in neural networks, since no one had a good solution for the training of a neural network with multiple layers. Eventually, interest in neural networks was rekindled in 1986 when D.E. Rumelhart, G.E. Hinton, and R.J. Williams were involved in the discovery and popularization of the backpropagation algorithm to train neural networks more efficiently (Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). Learning representations by back-propagating errors. Nature 323 (6088): 533–536). During the last decade, many more major breakthroughs have been made, known as deep learning algorithms. These can be used to create so-called feature detectors from unlabeled data to pre-train deep neural networks—neural networks that are composed of many layers. Neural networks are a hot topic not only in academic research but also in big technology companies such as Facebook, Microsoft, and Google. They invest heavily in artificial neural networks and deep learning research. Today, complex neural networks powered by deep learning algorithms are considered state of the art when it comes to solving complex problems, such as image and voice recognition. Introducing the multilayer neural network architecture In this section, we will connect multiple single neurons to a multilayer feed-forward neural network; this type of network is also called multilayer perceptron (MLP). The following figure illustrates the concept of an MLP consisting of three layers: one input layer, one hidden layer, and one output layer. The units in the hidden layer are fully connected to the input layer, and the output layer is fully connected to the hidden layer, respectively. As shown in the preceding diagram, we denote the ith activation unit in the jth layer as , and the activation units  and  are the bias units, which we set equal to 1. The activation of the units in the input layer is just its input plus the bias unit: Each unit in layer j is connected to all units in layer j + 1 via a weight coefficient; for example, the connection between unit a in layer j and unit b in layer j + 1 would be written as  . Note that the superscript i in  stands for the ith sample, not the ith layer; in the following paragraphs, we will often omit the superscript i for clarity. Activating a neural network via forward propagation In this section, we will describe the process of forward propagation to calculate the output of an MLP model. To understand how it fits into the context of learning an MLP model, let's summarize the MLP learning procedure in three simple steps: Starting at the input layer, we forward propagate the patterns of the training data through the network to generate an output. 
Based on the network's output, we calculate the error we want to minimize using a cost function, which we will describe later. We then backpropagate the error, find its derivative with respect to each weight in the network, and update the model. Finally, after we have repeated steps 1-3 for many epochs and learned the weights of the MLP, we use forward propagation to calculate the network output, and apply a threshold function to obtain the predicted class labels in the one-hot representation, which we described in the previous section. Now, let's walk through the individual steps of forward propagation to generate an output from the patterns in the training data. Since each unit in the hidden unit is connected to all units in the input layers, we first calculate the activation  as follows: Here, is the net input and  is the activation function, which has to be differentiable so as to learn the weights that connect the neurons using a gradient-based approach. To be able to solve complex problems such as image classification, we need non-linear activation functions in our MLP model, for example, the sigmoid (logistic) activation function: The sigmoid function is an "S"-shaped curve that maps the net input z onto a logistic distribution in the range 0 to 1, which passes the origin at z = 0.5 as shown in the following graph: Intuitively, we can think of the neurons in the MLP as logistic regression units that return values in the continuous range between 0 and 1. For purposes of code efficiency and readability, we will now write the activation in a more compact form using the concepts of basic linear algebra, which will allow us to vectorize our code implantation via NumPy rather than writing multiple nested and expensive Python for-loops: Here,  is our [m +1] x 1 dimensional feature vector for a sample  plus bias unit, and  is [m + 1] x h dimensional weight matrix where h is the number of hidden units in our neural network. After matrix-vector multiplication, we obtain the [m + 1] x 1 dimensional net input vector  . Furthermore, we can generalize this computation to all n samples in the training set: is now an n x [m + 1] matrix, and the matrix-matrix multiplication will result in an h x n dimensional net input matrix  . Finally, we apply the activation function g to each value in the net input matrix to get the h x n activation matrix  for the next layer (here, the output layer): Similarly, we can rewrite the activation of the output layer in the vectorized form: Here, we multiply the t x n matrix  (t is the number of output class labels) by the h x n dimensional matrix  to obtain the t x n dimensional matrix  (the columns in this matrix represent the outputs for each sample). Lastly, we apply the sigmoid activation function to obtain the continuous-valued output of our network: Classifying handwritten digits In this section, we will train our first multilayer neural network to classify handwritten digits from the popular MNIST dataset (Mixed National Institute of Standards and Technology database), which has been constructed by Yann LeCun and others (Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998) and serves as a popular benchmark dataset for machine learning algorithms. 
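Because the activation formulas in this section were originally shown as images, a small NumPy sketch of the vectorized forward-propagation step may help make them concrete. The names sigmoid, forward, W_h, and W_out below are illustrative and are not taken from the book's NeuralNetMLP implementation:

import numpy as np

def sigmoid(z):
    # Logistic (sigmoid) activation: maps the net input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W_h, W_out):
    # X:     n_samples x n_features input matrix
    # W_h:   n_hidden x (n_features + 1) weights from input to hidden layer
    # W_out: n_output x (n_hidden + 1) weights from hidden to output layer
    a1 = np.hstack((np.ones((X.shape[0], 1)), X))    # add the bias unit to the input layer
    z2 = W_h.dot(a1.T)                               # net input of the hidden layer
    a2 = sigmoid(z2)                                 # hidden layer activation
    a2 = np.vstack((np.ones((1, a2.shape[1])), a2))  # add the bias unit to the hidden layer
    z3 = W_out.dot(a2)                               # net input of the output layer
    a3 = sigmoid(z3)                                 # continuous-valued output of the network
    return a3

Calling forward with weight matrices of those shapes returns an n_output x n_samples matrix, one column of class activations per input sample, matching the dimensions described above.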
Obtaining the MNIST dataset The MNIST dataset is publicly available at http://yann.lecun.com/exdb/mnist/ and consists of these four parts: Training set images: train-images-idx3-ubyte.gz (9.9 MB, 47 MB unzipped, 60,000 samples) Training set labels: train-labels-idx1-ubyte.gz (29 KB, 60 KB unzipped, 60,000 labels) Test set images: t10k-images-idx3-ubyte.gz (1.6 MB, 7.8 MB, 10,000 samples) Test set labels: t10k-labels-idx1-ubyte.gz (5 KB, 10 KB unzipped, 10,000 labels) In this section, we will only be working with a subset of MNIST. Thus, we only need to download the training set images and training set labels. After downloading the files, I recommend that you unzip the files using the Unix/Linux GZip tool from the terminal for efficiency, for example, using the following command in your local MNIST download directory or, alternatively, your favorite unarchiver tool if you are working with a Microsoft Windows machine: gzip *ubyte.gz -d The images are stored in byte form, and using the following function, we will read them into NumPy arrays, which we will use to train our MLP: >>> import os >>> import struct >>> import numpy as np >>> def load_mnist(path): ... labels_path = os.path.join(path, 'train-labels-idx1-ubyte') ... images_path = os.path.join(path, 'train-images-idx3-ubyte') ... with open(labels_path, 'rb') as lbpath: ... magic, n = struct.unpack('>II', lbpath.read(8)) ... labels = np.fromfile(lbpath, dtype=np.uint8) ... with open(images_path, 'rb') as imgpath: ... magic, num, rows, cols = struct.unpack( ... ">IIII", imgpath.read(16)) ... images = np.fromfile(imgpath, ... dtype=np.uint8).reshape(len(labels), 784) ... return images, labels The load_mnist function returns an n x m dimensional NumPy array (images), where n is the number of samples (60,000), and m is the number of features. The images in the MNIST dataset consist of 28 x 28 pixels, and each pixel is represented by a grayscale intensity value. Here, we unroll the 28 x 28 pixels into 1D row vectors, which represent the rows in our images array (784 per row or image). The load_mnist function returns a second array, labels, which contains the 60,000 class labels (integers 0-9) of the handwritten digits. The way we read in the image might seem a little strange at first: magic, n = struct.unpack('>II', lbpath.read(8)) labels = np.fromfile(lbpath, dtype=np.int8) To understand how these two lines of code work, let's take a look at the dataset description from the MNIST website: [offset] [type] [value] [description] 0000 32 bit integer 0x00000801(2049) magic number (MSB first) 0004 32 bit integer 60000 number of items 0008 unsigned byte ?? label 0009 unsigned byte ?? label ........ xxxx unsigned byte ?? label Using the two lines of the preceding code, we first read in the "magic number," which is a description of the file protocol as well as the "number of items" (n) from the file buffer, before we read the following bytes into a NumPy array using the fromfile method. 
The fmt parameter value >II that we passed as an argument to struct.unpack can be composed of two parts: >: Big-endian (defines the order in which a sequence of bytes is stored) I: Unsigned integer After executing the following code, we should have a label vector of 60,000 instances, that is, a 60,000 × 784 dimensional image matrix: >>> X, y = load_mnist('mnist') >>> print('Rows: %d, columns: %d' % (X.shape[0], X.shape[1])) Rows: 60000, columns: 784 To get a idea of what those images in MNIST look like, let's define a function that reshapes a 784-pixel sample from our feature matrix into the original 28 × 28 image that we can plot via matplotlib's imshow function: >>> import matplotlib.pyplot as plt >>> def plot_digit(X, y, idx): ... img = X[idx].reshape(28,28) ... plt.imshow(img, cmap='Greys', interpolation='nearest') ... plt.title('true label: %d' % y[idx]) ... plt.show() Now let's use the plot_digit function to display an arbitrary digit (here, the fifth digit) from the dataset: >>> plot_digit(X, y, 4) Implementing a multilayer perceptron In this section, we will implement the code of an MLP with one input, one hidden, and one output layer to classify the images in the MNIST dataset. I tried to keep the code as simple as possible. However, it may seem a little complicated at first. If you are not running the code from the IPython notebook, I recommend that you copy it to a Python script file in your current working directory, for example, neuralnet.py, which you can then import into your current Python session via this: from neuralnet import NeuralNetMLP Now, let's initialize a new 784-50-10 MLP, a neural network with 784 input units (n_features), 50 hidden units (n_hidden), and 10 output units (n_output): >>> nn = NeuralNetMLP(n_output=10, ... n_features=X.shape[1], ... n_hidden=50, ... l2=0.1, ... l1=0.0, ... epochs=800, ... eta=0.001, ... alpha=0.001, ... decrease_const=0.00001, ... shuffle=True, ... minibatches=50, ... random_state=1) l2: The  parameter for L2 regularization. This is used to decrease the degree of overfitting; equivalently, l1 is the  for L1 regularization. epochs: The number of passes over the training set. eta: The learning rate . alpha: A parameter for momentum learning used to add a factor of the previous gradient to the weight update for faster learning: (where t is the current time step or epoch). decrease_const: The decrease constant d for an adaptive learning rate  that decreases over time for better convergence . shuffle: Shuffle the training set prior to every epoch to prevent the algorithm from getting stuck in circles. minibatches: Splitting of the training data into k mini-batches in each epoch. The gradient is computed for each mini-batch separately instead of the entire training data for faster learning. Next, we train the MLP using 10,000 samples from the already shuffled MNIST dataset. Note that we only use 10,000 samples to keep the time for training reasonable (up to 5 minutes on standard desktop computer hardware). However, you are encouraged to use more training data for model fitting to increase the predictive accuracy: >>> nn.fit(X[:10000], y[:10000], print_progress=True) Epoch: 800/800 Similar to our earlier Adaline implementation, we save the cost for each epoch in a cost_ list, which we can now visualize, making sure that the optimization algorithm has reached convergence. 
Here, we plot only every 50th step to account for the 50 mini-batches (50 minibatches × 800 epochs): >>> import matplotlib.pyplot as plt >>> plt.plot(range(len(nn.cost_)//50), nn.cost_[::50], color='red') >>> plt.ylim([0, 2000]) >>> plt.ylabel('Cost') >>> plt.xlabel('Epochs') >>> plt.show() As we can see, the optimization algorithm converged after approximately 700 epochs. Now let's evaluate the performance of the model by calculating the prediction accuracy: >>> y_pred = nn.predict(X[:10000]) >>> acc = np.sum(y[:10000] == y_pred, axis=0) / 10000 >>> print('Training accuracy: %.2f%%' % (acc * 100)) Training accuracy: 97.60% As you can see, the model gets most of the training data right. But how does it generalize to data that it hasn't seen before during training? Let's calculate the test accuracy on 5,000 images that were not included in the training set: >>> y_pred = nn.predict(X[10000:15000]) >>> acc = np.sum(y[10000:15000] == y_pred, axis=0) / 5000 >>> print('Test accuracy: %.2f%%' % (acc * 100)) Test accuracy: 92.40% Summary Based on the discrepancy between the training and test accuracy, we can conclude that the model slightly overfits the training data. To decrease the degree of overfitting, we can change the number of hidden units or the values of the regularization parameters, or fit the model on more training data. Resources for Article: Further resources on this subject: Asynchronous Programming with Python[article] The Essentials of Working with Python Collections[article] Python functions – Avoid repeating code [article]

Building Games with HTML5 and Dart

Packt
21 Sep 2015
19 min read
In this article written by Ivo Balbaert, author of the book Learning Dart - Second Edition, you will learn to create a well-known memory game. Also, you will design a model first and work up your way from a modest beginning to a completely functional game, step by step. You will also learn how to enhance the attractiveness of web games with audio and video techniques. The following topics will be covered in this article: The model for the memory game Spiral 1—drawing the board Spiral 2—drawing cells Spiral 3—coloring the cells Spiral 4—implementing the rules Spiral 5—game logic (bringing in the time element) Spiral 6—some finishing touches Spiral 7—using images (For more resources related to this topic, see here.) The model for the memory game When started, the game presents a board with square cells. Every cell hides an image that can be seen by clicking on the cell, but this disappears quickly. You must remember where the images are, because they come in pairs. If you quickly click on two cells that hide the same picture, the cells will "flip over" and the pictures will stay visible. The objective of the game is to turn over all the pairs of matching images in a very short time. After some thinking we came up with the following model, which describes the data handled by the application. In our game, we have a number of pictures, which could belong to a Catalog. For example, a travel catalog with a collection of photos from our trips or something similar. Furthermore, we have a collection of cells and each cell is hiding a picture. Also, we have a structure that we will call memory, and this contains the cells in a grid of rows and columns. We could draw it up as shown in the following figure. You can import the model from the game_memory_json.txt file that contains its JSON representation: A conceptual model of the memory game The Catalog ID is its name, which is mandatory, but the description is optional. The Picture ID consists of the sequence number within the Catalog. The imageUri field stores the location of the image file. width and height are optional properties, since they may be derived from the image file. The size may be small, medium, or large to help select an image. The ID of a Memory is its name within the Catalog, the collection of cells is determined by the memory length, for example, 4 cells per side. Each cell is of the same length cellLength, which is a property of the memory. A memory is recalled when all the image pairs are discovered. Some statistics must be kept, such as recall count, the best recall time in seconds, and the number of cell clicks to recover the whole image (minTryCount). The Cell has the row and column coordinates and also the coordinates of its twin with the same image. Once the model is discussed and improved, model views may be created: a Board would be a view of the Memory concept and a Box would be a view of the Cell concept. The application would be based on the Catalog concept. If there is no need to browse photos of a catalog and display them within a page, there would not be a corresponding view. Now, we can start developing this game from scratch. Spiral 1 – drawing the board The app starts with main() in educ_memory_game.dart: library memory; import 'dart:html'; part 'board.dart'; void main() { // Get a reference to the canvas. CanvasElement canvas = querySelector('#canvas'); (1) new Board(canvas); (2) } We'll draw a board on a canvas element. So, we need a reference that is given in line (1). 
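Before continuing with the Board class below, it can help to restate the conceptual model in a compact form. The following Python sketch is an illustration only; the field and class names mirror the model discussion above, not the book's Dart code:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Cell:
    row: int
    column: int
    color: Optional[str] = None      # or an image URI in the later spirals
    hidden: bool = True
    twin: Optional["Cell"] = None    # the cell hiding the same picture

@dataclass
class Memory:
    length: int                      # cells per side, must be even
    cells: List[Cell] = field(default_factory=list)

    def get_cell(self, row: int, column: int) -> Optional[Cell]:
        for cell in self.cells:
            if cell.row == row and cell.column == column:
                return cell
        return None

With the model in mind, the Dart implementation is built up spiral by spiral, starting with the Board view.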
The Board view is represented in code as its own Board class in the board.dart file. Since everything happens on this board, we construct its object with canvas as an argument (line (2)). Our game board will be periodically drawn as a rectangle in line (4) by using the animationFrame method from the Window class in line (3): part of memory; class Board { CanvasElement canvas; CanvasRenderingContext2D context; num width, height; Board(this.canvas) { context = canvas.getContext('2d'); width = canvas.width; height = canvas.height; window.animationFrame.then(gameLoop); (3) } void gameLoop(num delta) { draw(); window.animationFrame.then(gameLoop); } void draw() { clear(); border(); } void clear() { context.clearRect(0, 0, width, height); } void border() { context..rect(0, 0, width, height)..stroke(); (4) } } This is our first result: The game board Spiral 2 – drawing cells In this spiral, we will give our app code some structure: Board is a view, so board.dart is moved to the view folder. We will also introduce here the Memory class from our model in its own code memory.dart file in the model folder. So, we will have to change the part statements to the following: part 'model/memory.dart'; part 'view/board.dart'; The Board view needs to know about Memory. So, we will include it in the Board class and make its object in the Board constructor: new Board(canvas, new Memory(4)); The Memory class is still very rudimentary with only its length property: class Memory { num length; Memory(this.length); } Our Board class now also needs a method to draw the lines, which we decided to make private because it is specific to Board, as well as the clear() and border()methods: void draw() { _clear(); _border(); _lines(); } The lines method is quite straightforward; first draw it on a piece of paper and translate it to code using moveTo and lineTo. Remember that x goes from top-left to right and y goes from top-left to bottom: void _lines() { var gap = height / memory.length; var x, y; for (var i = 1; i < memory.length; i++) { x = gap * i; y = x; context ..moveTo(x, 0) ..lineTo(x, height) ..moveTo(0, y) ..lineTo(width, y); } } The result is a nice grid: Board with cells Spiral 3 – coloring the cells To simplify, we will start using colors instead of pictures to be shown in the grid. Up until now, we didn't implement the cell from the model. Let's do that in modelcell.dart. We start simple by saying that the Cell class has the row, column, and color properties, and it belongs to a Memory object passed in its constructor: class Cell { int row, column; String color; Memory memory; Cell(this.memory, this.row, this.column); } Because we need a collection of cells, it is a good idea to make a Cells class, which contains List. We give it an add method and also an iterator so that we are able to use a for…in statement to loop over the collection: class Cells { List _list; Cells() { _list = new List(); } void add(Cell cell) { _list.add(cell); } Iterator get iterator => _list.iterator; } We will need colors that are randomly assigned to the cells. We will also need some utility variables and methods that do not specifically belong to the model and don't need a class. Hence, we will code them in a folder called util. To specify the colors for the cells, we will use two utility variables: a List variable of colors (colorList), which has the name colors, and a colorMap variable that maps the names to their RGB values. 
Refer to utilcolor.dart; later on, we can choose some fancier colors: var colorList = ['black', 'blue', //other colors ]; var colorMap = {'black': '#000000', 'blue': '#0000ff', //... }; To generate (pseudo) random values (ints, doubles, or Booleans), Dart has the Random class from dart:math. We will use the nextInt method, which takes an integer (the maximum value) and returns a positive random integer in the range from 0 (inclusive) to max (exclusive). We will build upon this in utilrandom.dart to make methods that give us a random color: int randomInt(int max) => new Random().nextInt(max); randomListElement(List list) => list[randomInt(list.length - 1)]; String randomColor() => randomListElement(colorList); String randomColorCode() => colorMap[randomColor()]; Our Memory class now contains an instance of the Cells class: Cells cells; We build this in the Memory constructor in a nested for loop, where each cell is successively instantiated with a row and column, given a random color, and added to cells: Memory(this.length) { cells = new Cells(); var cell; for (var x = 0; x < length; x++) { for (var y = 0; y < length; y++) { cell = new Cell(this, x, y); cell.color = randomColor(); cells.add(cell); } } } We can draw a rectangle and fill it with a color at the same time. So, we realize that we don't need to draw lines as we did in the previous spiral! The _boxes method is called from the draw animation: with a for…in statement, we loop over the collection of cells and call the _colorBox method that will draw and color the cell for each cell: void _boxes() { for (Cell cell in memory.cells) { _colorBox(cell); } } void _colorBox(Cell cell) { var gap = height / memory.length; var x = cell.row * gap; var y = cell.column * gap; context ..beginPath() ..fillStyle = colorMap[cell.color] ..rect(x, y, gap, gap) ..fill() ..stroke() ..closePath(); } Spiral 4 – implementing the rules However, wait! Our game can only work if the same color appears in only two cells: a cell and its twin cell. Moreover, a cell can be hidden or not: the color can be seen or not? To take care of this, the Cell class gets two new attributes: Cell twin; bool hidden = true; The _colorBox method in the Board class can now show the color of the cell when hidden is false (line (2)); when hidden = true (the default state), a neutral gray color will be used for the cell (line (1)): static const String COLOR_CODE = '#f0f0f0'; We also gave the gap variable a better name, boxSize: void _colorBox(Cell cell) { var x = cell.column * boxSize; var y = cell.row * boxSize; context.beginPath(); if (cell.hidden) { context.fillStyle = COLOR_CODE; (1) } else { context.fillStyle = colorMap[cell.color]; (2) } // same code as in Spiral 3 } The lines (1) and (2) can also be stated more succinctly with the ? ternary operator. Remember that the drawing changes because the _colorBox method is called via draw at 60 frames per second and the board can react to a mouse click. In this spiral, we will show a cell when it is clicked together with its twin cell and then they will stay visible. Attaching an event handler for this is easy. We add the following line to the Board constructor: querySelector('#canvas').onMouseDown.listen(onMouseDown); The onMouseDown event handler has to know on which cell the click occurred. The mouse event e contains the coordinates of the click in its e.offset.x and e.offset.y properties (lines (3) and (4)). 
We will obtain the cell's row and column by using a truncating division ~/ operator dividing the x (which gives the column) and y (which gives the row) values by boxSize: void onMouseDown(MouseEvent e) { int row = e.offset.y ~/ boxSize; (3) int column = e.offset.x ~/ boxSize; (4) Cell cell = memory.getCell(row, column); (5) cell.hidden = false; (6) cell.twin.hidden = false; (7) } Memory has a collection of cells. To get the cell with a specified row and column value, we will add a getCell method to memory and call it in line (5). When we have the cell, we will set its hidden property and that of its twin cell to false (lines (6) to (7)). The getCell method must return the cell at the given row and column. It loops through all the cells in line (8) and checks each cell, whether it is positioned at that row and column (line (9)). If yes, it will return that cell: Cell getCell(int row, int column) { for (Cell cell in cells) { (8) if (cell.intersects(row, column)) { (9) return cell; } } } For this purpose, we will add an intersects method to the Cell class. This checks whether its row and column match the given row and column for the current cell (see line (10)): bool intersects(int row, int column) { if (this.row == row && this.column == column) { (10) return true; } return false; } Now, we have already added a lot of functionality, but the drawing of the board will need some more thinking: How to give a cell (and its twin cell) a random color that is not yet used? How to attach a cell randomly to a twin cell that is not yet used? To end this, we will have to make the constructor of Memory a lot more intelligent: Memory(this.length) { if (length.isOdd) { (1) throw new Exception( 'Memory length must be an even integer: $length.'); } cells = new Cells(); var cell, twinCell; for (var x = 0; x < length; x++) { for (var y = 0; y < length; y++) { cell = getCell(y, x); (2) if (cell == null) { (3) cell = new Cell(this, y, x); cell.color = _getFreeRandomColor(); (4) cells.add(cell); twinCell = _getFreeRandomCell(); (5) cell.twin = twinCell; (6) twinCell.twin = cell; twinCell.color = cell.color; cells.add(twinCell); } } } } The number of pairs given by ((length * length) / 2) must be even. This is only true if the length parameter of Memory itself is even, so we checked it in line (1). Again, we coded a nested loop and got the cell at that row and column. However, as the cell at that position has not yet been made (line (3)), we continued to construct it and assign its color and twin. In line (4), we called _getFreeRandomColor to get a color that is not yet used: String _getFreeRandomColor() { var color; do { color = randomColor(); } while (usedColors.any((c) => c == color)); (7) usedColors.add(color); (8) return color; } The do…while loop continues as long as the color is already in a list of usedColors. On exiting from the loop, we found an unused color, which is added to usedColors in line (8) and also returned. We then had to set everything for the twin cell. We searched for a free one with the _getFreeRandomCell method in line (5). Here, the do…while loop continues until a (row, column) position is found where cell == null is, meaning that we haven't yet created a cell there (line (9)). 
We will promptly do this in line (10): Cell _getFreeRandomCell() { var row, column; Cell cell; do { row = randomInt(length); column = randomInt(length); cell = getCell(row, column); } while (cell != null); (9) return new Cell(this, row, column); (10) } From line (6) onwards, the properties of the twin cell are set and added to the list. This is all we need to produce the following result: Paired colored cells Spiral 5 – game logic (bringing in the time element) Our app isn't playable yet: When a cell is clicked, its color must only show for a short period of time (say one second) When a cell and its twin cell are clicked within a certain time interval, they must remain visible All of this is coded in the mouseDown event handler and we also need a lastCellClicked variable of the Cell type in the Board class. Of course, this is exactly the cell we get in the mouseDown event handler. So, we will set it in line (5) in the following code snippet: void onMouseDown(MouseEvent e) { // same code as in Spiral 4 - if (cell.twin == lastCellClicked && lastCellClicked.shown) { (1) lastCellClicked.hidden = false; (2) if (memory.recalled) memory.hide(); (3) } else { new Timer(const Duration(milliseconds: 1000), () => cell.hidden = true); (4) } lastCellClicked = cell; (5) } In line (1), we checked whether the last clicked cell was the twin cell and whether this is still shown. Then, we made sure in (2) that it stays visible. shown is a new getter in the Cell class to make the code more readable: bool get shown => !hidden;. If at that moment all the cells were shown (the memory is recalled), we again hid them in line (3). If the last clicked cell was not the twin cell, we hid the current cell after one second in line (4). recalled is a simple getter (read-only property) in the Memory class and it makes use of a Boolean variable in Memory that is initialized to false (_recalled = false;): bool get recalled { if (!_recalled) { if (cells.every((c) => c.shown)) { (6) _recalled = true; } } return _recalled; } In line (6), we tested that if every cell is shown, then this variable is set to true (the game is over). every is a new method in the Cells List and a nice functional way to write this is given as follows: bool every(Function f) => list.every(f); The hide method is straightforward: hide every cell and reset the _recalled variable to false: hide() { for (final cell in cells) cell.hidden = true; _recalled = false; } This is it, our game works! Spiral 6 – some finishing touches A working program always gives its developer a sense of joy, and rightfully so. However, this doesn't that mean you can leave the code as it is. On the contrary, carefully review your code for some time to see whether there is room for improvement or optimization. For example, are the names you used clear enough? The color of a hidden cell is now named simply COLOR_CODE in board.dart, renaming it to HIDDEN_CELL_COLOR_CODE makes its meaning explicit. The List object used in the Cells class can indicate that it is List<Cell>, by applying the fact that Dart lists are generic. The parameter of the every method in the Cell class is more precise—it is a function that accepts a cell and returns bool. Our onMouseDown event handler contains our game logic, so it is very important to tune it if possible. 
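Before looking at those refinements below, it is worth condensing the pair-assignment idea from the Memory constructor into a language-neutral form. Here is a small Python sketch of the same algorithm; the function name build_pairs and the dictionary layout are inventions for the illustration:

import random

def build_pairs(length, palette):
    """Return a {(row, col): color} mapping where every color occurs exactly twice.

    palette must contain at least length * length / 2 distinct entries.
    """
    if length % 2:
        raise ValueError('length must be even: %d' % length)
    positions = [(r, c) for r in range(length) for c in range(length)]
    random.shuffle(positions)
    colors = random.sample(palette, len(positions) // 2)
    board = {}
    for color in colors:
        board[positions.pop()] = color   # first cell of the pair
        board[positions.pop()] = color   # its twin
    return board

Shuffling all positions once and popping them two at a time is an alternative to the retry loops used above; both approaches end with every color hidden in exactly two cells.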
After some thought, we see that the code from the previous spiral can be improved; in the following line, the second condition after && is, in fact, unnecessary: if (cell.twin == lastCellClicked && lastCellClicked.shown) {...} When the player has guessed everything correctly, showing the completed screen for a few seconds will be more satisfactory (line (2)). So, this portion of our event handler code will change to: if (cell.twin == lastCellClicked) { (1) lastCellClicked.hidden = false; if (memory.recalled) { // game over new Timer(const Duration(milliseconds: 5000), () => memory.hide()); (2) } } else if (cell.twin.hidden) { new Timer(const Duration(milliseconds: 800), () => cell.hidden = true); } Why don’t we show a "YOU HAVE WON!" banner. We will do this by drawing the text on the canvas (line (3)), so we must do it in the draw() method (otherwise, it would disappear after INTERVAL milliseconds): void draw() { _clear(); _boxes(); if (memory.recalled) { // game over context.font = "bold 25px sans-serif"; context.fillStyle = "red"; context.fillText("YOU HAVE WON !", boxSize, boxSize * 2); (3) } } Then, the same game with the same configuration can be played again. We could make it more obvious that a cell is hidden by decorating it with a small circle in the _colorBox method (line (4)): if (cell.hidden) { context.fillStyle = HIDDEN_CELL_COLOR_CODE; var centerX = cell.column * boxSize + boxSize / 2; var centerY = cell.row * boxSize + boxSize / 2; var radius = 4; context.arc(centerX, centerY, radius, 0, 2 * PI, false); (4) } We do want to give our player a chance to start over by supplying a Play again button. The easiest way will be to simply refresh the screen (line (5)) by adding this code to the startup script: void main() { canvas = querySelector('#canvas'); ButtonElement play = querySelector('#play'); play.onClick.listen(playAgain); new Board(canvas, new Memory(4)); } playAgain(Event e) { window.location.reload(); (5) } Spiral 7 – using images One improvement that certainly comes to mind is the use of pictures instead of colors as shown in the Using images screenshot. How difficult would that be? It turns out that this is surprisingly easy, because we already have the game logic firmly in place! In the images folder, we supply a number of game pictures. Instead of the color property, we give the cell a String property (image), which will contain the name of the picture file. We then replace utilcolor.dart with utilimages.dart, which contains a imageList variable with the image filenames. In utilrandom.dart, we will replace the color methods with the following code: String randomImage() => randomListElement(imageList); The changes to memory.dart are also straightforward: replace the usedColor list with List usedImages = []; and the _getFreeRandomColor method with _getFreeRandomImage, which will use the new list and method: List usedImages = []; String _getFreeRandomImage() { var image; do { image = randomImage(); } while (usedImages.any((i) => i == image)); usedImages.add(image); return image; } In board.dart, we replace _colorBox(cell) with _imageBox(cell). The only new thing is how to draw the image on canvas. For this, we need ImageElement objects. Here, we have to be careful to create these objects only once and not over and over again in every draw cycle, because this produces a flickering screen. 
We will store the ImageElements object in a Map: var imageMap = new Map<String, ImageElement>(); Then, we populate this in the Board constructor with a for…in loop over memory.cells: for (var cell in memory.cells) { ImageElement image = new Element.tag('img'); (1) image.src = 'images/${cell.image}'; (2) imageMap[cell.image] = image; (3) } We create a new ImageElement object in line (1), giving it the complete file path to the image file as a src property in line (2) and store it in imageMap in line (3). The image file will then be loaded into memory only once. We don't do any unnecessary network access to effectively cache the images. In the draw cycle, we will load the image from imageMap and draw it in the current cell with the drawImage method in line (4): if (cell.hidden) { // see previous code } else { ImageElement image = imageMap[cell.image]; context.drawImage(image, x, y); // resize to cell size (4) } Perhaps, you can think of other improvements? Why not let the player specify the game difficulty by asking the number of boxes. It is 16 now. Check whether the input is a square of an even number. Do you have enough colors to choose from? Perhaps, dynamically building a list with enough random colors would be a better idea. Calculating and storing the statistics discussed in the model would also make the game more attractive. Another enhancement from the model is to support different catalogs of pictures. Go ahead and exercise your Dart skills! Summary By thoroughly investigating two games applying all of Dart we have already covered, your Dart star begins to shine. For other Dart games, visit http://www.builtwithdart.com/projects/games/. You can find more information at http://www.dartgamedevs.org/ on building games. Resources for Article: Further resources on this subject: Slideshow Presentations [article] Dart with JavaScript [article] Practical Dart [article]

Creating Controllers with Blueprints

Packt
21 Sep 2015
8 min read
In this article by Jack Stouffer, author of the book Mastering Flask, the more complex and powerful versions will be introduced, and we will turn our disparate view functions in cohesive wholes. We will also discuss the internals of how Flask handles the lifetime of an HTTP request and advanced ways to define Flask views. (For more resources related to this topic, see here.) Request setup, teardown, and application globals In some cases, a request-specific variable is needed across all view functions and needs to be accessed from the template as well. To achieve this, we can use Flask's decorator function @app.before_request and the object g. The function @app.before_request is executed every time before a new request is made. The Flask object g is a thread-safe store of any data that needs to be kept for each specific request. At the end of the request, the object is destroyed, and a new object is spawned at the start of a new request. For example, this code checks whether the Flask session variable contains an entry for a logged in user; if it exists, it adds the User object to g: from flask import g, session, abort, render_template @app.before_request def before_request(): if 'user_id' in session: g.user = User.query.get(session['user_id']) @app.route('/restricted') def admin(): if g.user is None: abort(403) return render_template('admin.html') Multiple functions can be decorated with @app.before_request, and they all will be executed before the requested view function is executed. There also exists a decorator @app.teardown_request, which is called after the end of every request. Keep in mind that this method of handling user logins is meant as an example and is not secure. Error pages Displaying browser's default error pages to the end user is jarring as the user loses all context of your app, and they must hit the back button to return to your site. To display your own templates when an error is returned with the Flask abort() function, use the errorhandler decorator function: @app.errorhandler(404) def page_not_found(error): return render_template('page_not_found.html'), 404 The errorhandler is also useful to translate internal server errors and HTTP 500 code into user friendly error pages. The app.errorhandler() function may take either one or many HTTP status code to define which code it will act on. The returning of a tuple instead of just an HTML string allows you to define the HTTP status code of the Response object. By default, this is set to 200. Class-based views In most Flask apps, views are handled by functions. However, when many views share common functionality or there are pieces of your code that could be broken out into separate functions, it would be useful to implement our views as classes to take advantage of inheritance. For example, if we have views that render a template, we could create a generic view class that keeps our code DRY: from flask.views import View class GenericView(View): def __init__(self, template): self.template = template super(GenericView, self).__init__() def dispatch_request(self): return render_template(self.template) app.add_url_rule( '/', view_func=GenericView.as_view( 'home', template='home.html' ) ) The first thing to note about this code is the dispatch_request() function in our view class. This is the function in our view that acts as the normal view function and returns an HTML string. The app.add_url_rule() function mimics the app.route() function as it ties a route to a function call. 
The first argument defines the route of the function, and the view_func parameter defines the function that handles the route. The View.as_view() method is passed to the view_func parameter because it transforms the View class into a view function. The first argument defines the name of the view function, so functions such as url_for() can route to it. The remaining parameters are passed to the __init__ function of the View class. Like the normal view functions, HTTP methods other than GET must be explicitly allowed for the View class. To allow other methods, a class variable containing the list of methods named methods must be added: class GenericView(View): methods = ['GET', 'POST'] … def dispatch_request(self): if request.method == 'GET': return render_template(self.template) elif request.method == 'POST': … Method class views Often, when functions handle multiple HTTP methods, the code can become difficult to read due to large sections of code nested within if statements: @app.route('/user', methods=['GET', 'POST', 'PUT', 'DELETE']) def users(): if request.method == 'GET': … elif request.method == 'POST': … elif request.method == 'PUT': … elif request.method == 'DELETE': … This can be solved with the MethodView class. MethodView allows each method to be handled by a different class method to separate concerns: from flask.views import MethodView class UserView(MethodView): def get(self): … def post(self): … def put(self): … def delete(self): … app.add_url_rule( '/user', view_func=UserView.as_view('user') ) Blueprints In Flask, a blueprint is a method of extending an existing Flask app. They provide a way of combining groups of views with common functionality and allow developers to break their app down into different components. In our architecture, the blueprints will act as our controllers. Views are registered to a blueprint; a separate template and static folder can be defined for it, and when it has all the desired content on it, it can be registered on the main Flask app to add blueprints' content. A blueprint acts much like a Flask app object, but is not actually a self-contained app. This is how Flask extensions provide views function. To get an idea of what blueprints are, here is a very simple example: from flask import Blueprint example = Blueprint( 'example', __name__, template_folder='templates/example', static_folder='static/example', url_prefix="/example" ) @example.route('/') def home(): return render_template('home.html') The blueprint takes two required parameters—the name of the blueprint and the name of the package—which are used internally in Flask, and passing __name__ to it will suffice. The other parameters are optional and define where the blueprint will look for files. Because templates_folder was specified, the blueprint will not look in the default template folder, and the route will render templates/example/home.html and not templates/home.html. The url_prefix option automatically adds the provided URI to the start of every route in the blueprint. So, the URL for the home view is actually /example/. The url_for() function will now have to be told which blueprint the requested route is in: {{ url_for('example.home') }} Also, the url_for() function will now have to be told whether the view is being rendered from within the same blueprint: {{ url_for('.home') }} The url_for() function will also look for static files in the specified static folder as well. 
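Endpoint names are the part of blueprints that most often trips people up, so a minimal, self-contained sketch can make the naming concrete. The route and return value below are made up for the example; registering the blueprint is covered in the next step and is done here only so that url_for() can be exercised inside a test request context:

from flask import Flask, Blueprint, url_for

app = Flask(__name__)

example = Blueprint('example', __name__, url_prefix='/example')

@example.route('/')
def home():
    return 'blueprint home'

app.register_blueprint(example)

with app.test_request_context():
    # The endpoint is '<blueprint name>.<view name>'; the URL gets the prefix.
    print(url_for('example.home'))   # -> /example/

Note that url_for() needs an application or request context, which test_request_context() provides outside of a real request.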
To add the blueprint to our app: app.register_blueprint(example) Let's transform our current app to one that uses blueprints. We will first need to define our blueprint before all of our routes: blog_blueprint = Blueprint( 'blog', __name__, template_folder='templates/blog', url_prefix="/blog" ) Now, because the templates folder was defined, we need to move all of our templates into a subfolder of the templates folder named blog. Next, all of our routes need to have the @app.route function changed to @blog_blueprint.route, and any class view assignments now need to be registered to blog_blueprint. Remember that the url_for() function calls in the templates will also have to be changed to have a period prepended to then to indicate that the route is in the same blueprint. At the end of the file, right before the if __name__ == '__main__': statement, add the following: app.register_blueprint(blog_blueprint) Now all of our content is back on the app, which is registered under the blueprint. Because our base app no longer has any views, let's add a redirect on the base URL: @app.route('/') def index(): return redirect(url_for('blog.home')) Why blog and not blog_blueprint? Because blog is the name of the blueprint and the name is what Flask uses internally for routing. blog_blueprint is the name of the variable in the Python file. Summary We now have our app working inside a blueprint, but what does this give us? Let's say that we wanted to add a photo sharing function to our site, we would be able to group all the view functions into one blueprint with its own templates, static folder, and URL prefix without any fear of disrupting the functionality of the rest of the site. Resources for Article: Further resources on this subject: More about Julia [article] Optimization in Python [article] Symbolizers [article]

Networking in Qt

Packt
21 Sep 2015
21 min read
In this article from the book Game Programming using Qt by authors Witold Wysota and Lorenz Haas, you will be taught how to communicate with the Internet servers and with sockets in general. First, we will have a look at QNetworkAccessManager, which makes sending network requests and receiving replies really easy. Building on this basic knowledge, we will then use Google's Distance API to get information about the distance between two locations and the time it would take to get from one location to the other. (For more resources related to this topic, see here.) QNetworkAccessManager The easiest way to access files on the Internet is to use Qt's Network Access API. This API is centered on QNetworkAccessManager, which handles the complete communication between your game and the Internet. When we develop and test a network-enabled application, it is recommended that you use a private, local network if feasible. This way, it is possible to debug both ends of the connection and the errors will not expose sensitive data. If you are not familiar with setting up a web server locally on your machine, there are luckily a number of all-in-one installers that are freely available. These will automatically configure Apache2, MySQL, PHP, and much more on your system. On Windows, for example, you could use XAMPP (http://www.apachefriends.org/en) or the Uniform Server (http://www.uniformserver.com); on Apple computers there is MAMP (http://www.mamp.info/en); and on Linux, you normally don't have to do anything since there is already localhost. If not, open your preferred package manager, search for a package called apache2 or similar, and install it. Alternatively, have a look at your distribution's documentation. Before you go and install Apache on your machine, think about using a virtual machine like VirtualBox (http://www.virtualbox.org) for this task. This way, you keep your machine clean and you can easily try different settings of your test server. With multiple virtual machines, you can even test the interaction between different instances of your game. If you are on UNIX, Docker (http://www.docker.com) might be worth to have a look at too. Downloading files over HTTP For downloading files over HTTP, first set up a local server and create a file called version.txt in the root directory of the installed server. The file should contain a small text like "I am a file on localhost" or something similar. To test whether the server and the file are correctly set up, start a web browser and open http://localhost/version.txt. You then should see the file's content. Of course, if you have access to a domain, you can also use that. Just alter the URL used in the example correspondingly. If you fail, it may be the case that your server does not allow to display text files. Instead of getting lost in the server's configuration, just rename the file to version .html. This should do the trick! Result of requesting http://localhost/version.txt on a browser As you might have guessed, because of the filename, the real-life scenario could be to check whether there is an updated version of your game or application on the server. To get the content of a file, only five lines of code are needed. Time for action – downloading a file First, create an instance of QNetworkAccessManager: QNetworkAccessManager *m_nam = new QNetworkAccessManager(this); Since QNetworkAccessManager inherits QObject, it takes a pointer to QObject, which is used as a parent. Thus, you do not have to take care of deleting the manager later on. 
Furthermore, one single instance of QNetworkAccessManager is enough for an entire application. So, either pass a pointer to the network access manager in your game around or, for ease of use, create a singleton pattern and access the manager through that. A singleton pattern ensures that a class is instantiated exactly once. The pattern is useful for accessing application-wide configurations or—in our case—an instance of QNetworkAccessManager. On the wiki pages for qtcentre.org and qt-project.org, you will find examples for different singleton patterns. A simple template-based approach would look like this (as a header file): template <class T> class Singleton { public: static T& Instance() { static T _instance; return _instance; } private: Singleton(); ~Singleton(); Singleton(const Singleton &); Singleton& operator=(const Singleton &); }; In the source code, you would include this header file and acquire a singleton of a class called MyClass with: MyClass *singleton = &Singleton<MyClass>::Instance(); If you are using Qt Quick, you can directly use the view instance of QNetworkAccessManager: QQuickView *view = new QQuickView; QNetworkAccessManager *m_nam = view->engine()->networkAccessManager(); Secondly, we connect the manager's finished() signal to a slot of our choice. For example, in our class, we have a slot called downloadFinished(): connect(m_nam, SIGNAL(finished(QNetworkReply*)), this, SLOT(downloadFinished(QNetworkReply*))); Then, it actually request's the version.txt file from localhost: m_nam->get(QNetworkRequest(QUrl("http://localhost/version.txt"))); With get(), a request to get the contents of the file, specified by the URL, is posted. The function expects QNetworkRequest, which defines all the information needed to send a request over the network. The main information of such a request is naturally the URL of the file. This is the reason why QNetworkRequest takes a QUrl as an argument in its constructor. You can also set the URL with setUrl() to a request. If you like to define some additional headers, you can either use setHeader() for the most common header or use setRawHeader() to be fully flexible. If you want to set, for example, a custom user agent to the request, the call would look like: QNetworkRequest request; request.setUrl(QUrl("http://localhost/version.txt")); request.setHeader(QNetworkRequest::UserAgentHeader, "MyGame"); m_nam->get(request); The setHeader() function takes two arguments, the first is a value of the enumeration QNetworkRequest::KnownHeaders, which holds the most common—self-explanatory—headers such as LastModifiedHeader or ContentTypeHeader, and the second is the actual value. You could also have written the header by using of setRawHeader(): request.setRawHeader("User-Agent", "MyGame"); When you use setRawHeader(), you have to write the header field names yourself. Beside that, it behaves like setHeader(). A list of all available headers for the HTTP protocol Version 1.1 can be found in section 14 at http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14. With the get() function we requested the version.txt file from localhost. All we have to do from now on is to wait for the server to reply. As soon as the server's reply is finished, the slot downloadFinished() will be called. That was defined by the previous connection statement. 
As an argument the reply of type QNetworkReply is transferred to the slot and we can read the reply's data and set it to m_edit, an instance of QPlainTextEdit, using the following code: void FileDownload::downloadFinished(QNetworkReply *reply) { const QByteArray content = reply->readAll(); m_edit->setPlainText(content); reply->deleteLater(); } Since QNetworkReply inherits QIODevice, there are also other possibilities to read the contents of the reply including QDataStream or QTextStream to either read and interpret binary data or textual data. Here, as fourth command, QIODevice::readAll() is used to get the complete content of the requested file in a QByteArray. The responsibility for the transferred pointer to the corresponding QNetworkReply lies with us, so we need to delete it at the end of the slot. This would be the fifth line of code needed to download a file with Qt. However, be careful and do not call delete on the reply directly. Always use deleteLater() as the documentation suggests! Have a go hero – extending the basic file downloader If you haven't set up a localhost, just alter the URL in the source code to download another file. Of course, having to alter the source code in order to download another file is far from an ideal approach. So try to extend the dialog, by adding a line edit where you can specify the URL you want to download. Also, you can offer a file dialog to choose the location to where the downloaded file should be saved. Error handling If you do not see the content of the file, something went wrong. Just as in real life, this can always happen so we better make sure, that there is good error handling in such cases to inform the user what is going on. Time for action – displaying a proper error message Fortunately QNetworkReply offers several possibilities to do this. In the slot called downloadFinished() we first want to check if an error occurred: if (reply->error() != QNetworkReply::NoError) {/* error occurred */} The function QNetworkReply::error() returns the error that occurred while handling the request. The error is encoded as a value of type QNetworkReply::NetworkError. The two most common errors are probably these: Error code Meaning ContentNotFoundError This error indicates that the URL of the request could not be found. It is similar to the HTTP error code 404. ContentAccessDenied This error indicates that you do not have the permission to access the requested file. It is similar to the HTTP error 401. You can look up the other 23 error codes in the documentation. But normally you do not need to know exactly what went wrong. You only need to know if everything worked out—QNetworkReply::NoError would be the return value in this case—or if something went wrong. Since QNetworkReply::NoError has the value 0, you can shorten the test phrase to check if an error occurred to: if (reply->error()) { // an error occurred } To provide the user with a meaningful error description you can use QIODevice::errorString(). 
The text is already set up with the corresponding error message and we only have to display it: if (reply->error()) { const QString error = reply->errorString(); m_edit->setPlainText(error); return; } In our example, assuming we had an error in the URL and wrote versions.txt by mistake, the application would look like this: If the request was a HTTP request and the status code is of interest, it could be retrieved by QNetworkReply::attribute(): reply->attribute(QNetworkRequest::HttpStatusCodeAttribute) Since it returns QVariant, you can either use QVariant::toInt() to get the code as an integer or QVariant::toString() to get the number as a QString. Beside the HTTP status code you can query through attribute() a lot of other information. Have a look at the description of the enumeration QNetworkRequest::Attribute in the documentation. There you also will find QNetworkRequest::HttpReasonPhraseAttribute which holds a human readable reason phrase of the HTTP status code. For example "Not Found" if an HTTP error 404 occurred. The value of this attribute is used to set the error text for QIODevice::errorString(). So you can either use the default error description provided by errorString() or compose your own by interpreting the reply's attributes. If a download failed and you want to resume it or if you only want to download a specific part of a file, you can use the range header: QNetworkRequest req(QUrl("...")); req.setRawHeader("Range", "bytes=300-500"); QNetworkReply *reply = m_nam->get(req); In this example only the bytes 300 to 500 would be downloaded. However, the server must support this. Downloading files over FTP As simple as it is to download files over HTTP, as simple it is to download a file over FTP. If it is an anonymous FTP server for which you do not need an authentication, just use the URL like we did earlier. Assuming there is again a file called version.txt on the FTP server on localhost, type: m_nam->get(QNetworkRequest(QUrl("ftp://localhost/version.txt"))); That is all, everything else stays the same. If the FTP server requires an authentication you'll get an error, for example: Setting the user name and the user password to access an FTP server is likewise easy. Either write it in the URL or use QUrl functions setUserName() and setPassword(). If the server does not use a standard port, you can set the port explicitly with QUrl::setPort(). To upload a file to a FTP server use QNetworkAccessManager::put() which takes as first argument a QNetworkRequest, calling a URL that defines the name of the new file on the server, and as second argument the actual data, that should be uploaded. For small uploads, you can pass the content as a QByteArray. For larger contents, better use a pointer to a QIODevice. Make sure the device is open and stays available until the upload is done. Downloading files in parallel A very important note on QNetworkAccessManager: it works asynchronously. This means you can post a network request without blocking the main event loop and this is what keeps the GUI responsive. If you post more than one request, they are put on the manager's queue. Depending on the protocol used they get processed in parallel. If you are sending HTTP requests, normally up to six requests will be handled at a time. This will not block the application. Therefore, there is really no need to encapsulate QNetworkAccessManager in a thread, unfortunately, this unnecessary approach is frequently recommended all over the Internet. QNetworkAccessManager already threads internally. 
Really, don't move QNetworkAccessManager to a thread—unless you know exactly what you are doing. If you send multiple requests, the slot connected to the manager's finished() signal is called in an arbitrary order depending on how quickly a request gets a reply from the server. This is why you need to know to which request a reply belongs. This is one reason why every QNetworkReply carries its related QNetworkRequest. It can be accessed through QNetworkReply::request(). Even if the determination of the replies and their purpose may work for a small application in a single slot, it will quickly get large and confusing if you send a lot of requests. This problem is aggravated by the fact that all replies are delivered to only one slot. Since most probably there are different types of replies that need different treatments, it would be better to bundle them to specific slots, specialized for a special task. Fortunately this can be achieved very easily. QNetworkAccessManager::get() returns a pointer to the QNetworkReply which will get all information about the request you post with get(). By using this pointer, you can then connect specific slots to the reply's signals. For example if you have several URLs and you want to save all linked images from these sites to the hard drive, then you would request all web pages via QNetworkAccessManager::get() and connect their replies to a slot specialized for parsing the received HTML. If links to images are found, this slot would request them again with get(). However, this time the replies to these requests would be connected to a second slot, which is designed for saving the images to the disk. Thus you can separate the two tasks, parsing HTML and saving data to a local drive. The most important signals of QNetworkReply are. The finished signal The finished() signal is equivalent with the QNetworkAccessManager::finished() signal we used earlier. It is triggered as soon as a reply has been returned—successfully or not. After this signal has been emitted, neither the reply's data nor its metadata will be altered anymore. With this signal you are now able to connect a reply to a specific slot. This way you can realize the scenario outlined previously. However, one problem remains: if you post simultaneous requests, you do not know which one has finished and thus called the connected slot. Unlike QNetworkAccessManager::finished(), QNetworkReply::finished() does not pass a pointer to QNetworkReply; this would actually be a pointer to itself in this case. A quick solution to solve this problem is to use sender(). It returns a pointer to the QObject instance that has called the slot. Since we know that it was a QNetworkReply, we can write: QNetworkReply *reply = qobject_cast<QNetworkReply*>(sender()); if (!reply) return; This was done by casting sender() to a pointer of type QNetworkReply. Whenever casting classes that inherit QObject, use qobject_cast. Unlike dynamic_cast it does not use RTTI and works across dynamic library boundaries. Although we can be pretty confident the cast will work, do not forget to check if the pointer is valid. If it is a null pointer, exit the slot. Time for action – writing OOP conform code by using QSignalMapper A more elegant way that does not rely on sender(), would be to use QSignalMapper and a local hash, in which all replies that are connected to that slot are stored. So whenever you call QNetworkAccessManager::get() store the returned pointer in a member variable of type QHash<int, QNetworkReply*> and set up the mapper. 
Let's assume that we have following member variables and that they are set up properly: QNetworkAccessManager *m_nam; QSignalMapper *m_mapper; QHash<int, QNetworkReply*> m_replies; Then you would connect the finished() signal of a reply this way: QNetworkReply *reply = m_nam->get(QNetworkRequest(QUrl(/*...*/))); connect(reply, SIGNAL(finished()), m_mapper, SLOT(map())); int id = /* unique id, not already used in m_replies*/; m_replies.insert(id, reply); m_mapper->setMapping(reply, id); What just happened? First we post the request and fetch the pointer to the QNetworkReply with reply. Then we connect the reply's finished signal to the mapper's slot map(). Next we have to find a unique ID which must not already be in use in the m_replies variable. One could use random numbers generated with qrand() and fetch numbers as long as they are not unique. To determine if a key is already in use, call QHash::contains(). It takes the key as an argument against which it should be checked. Or even simpler: count up another private member variable. Once we have a unique ID we insert the pointer to QNetworkReply in the hash using the ID as a key. Last, with setMapping(), we set up the mapper's mapping: the ID's value corresponds to the actual reply. At a prominent place, most likely the constructor of the class, we already have connected the mappers map() signal to a custom slot. For example: connect(m_mapper, SIGNAL(mapped(int)), this, SLOT(downloadFinished(int))); When the slot downloadFinished() is called, we can get the corresponding reply with: void SomeClass::downloadFinished(int id) { QNetworkReply *reply = m_replies.take(id); // do some stuff with reply here reply->deleteLater(); } QSignalMapper also allows to map with QString as an identifier instead of an integer as used above. So you could rewrite the example and use the URL to identify the corresponding QNetworkReply; at least as long as the URLs are unique. The error signal If you download files sequentially, you can swap the error handling out. Instead of dealing with errors in the slot connected to the finished() signal, you can use the reply's signal error() which passes the error of type QNetworkReply::NetworkError to the slot. After the error() signal has been emitted, the finished() signal will most likely also be emitted shortly. The readyRead signal Until now, we used the slot connected to the finished() signal to get the reply's content. That works perfectly if you deal with small files. However, this approach is unsuitable when dealing with large files since they would unnecessarily bind too many resources. For larger files it is better to read and save transferred data as soon as it is available. We get informed by QIODevice::readyRead() whenever new data is available to be read. So for large files you should type in the following: connect(reply, SIGNAL(readyRead()), this, SLOT(readContent())); file.open(QIODevice::WriteOnly); This will help you connect the reply's signal readyRead() to a slot, set up QFile and open it. In the connected slot, type in the following snippet: const QByteArray ba = reply->readAll(); file.write(ba); file.flush(); Now you can fetch the content, which was transferred so far, and save it to the (already opened) file. This way the needed resources are minimized. Don't forget to close the file after the finished() signal was emitted. In this context it would be helpful if one could know upfront the size of the file one wants to download. Therefore, we can use QNetworkAccessManager::head(). 
It behaves like the get() function, but does not transfer the content of the file. Only the headers are transferred. And if we are lucky, the server sends the "Content-Length" header, which holds the file size in bytes. To get that information we type: reply->head(QNetworkRequest::ContentLengthHeader).toInt(); With this information, we could also check upfront if there is enough space left on the disk. The downloadProgress method Especially when a big file is being downloaded, the user usually wants to know how much data has already been downloaded and how long it will approximately take for the download to finish. Time for action – showing the download progress In order to achieve this we can use the reply's downloadProgress() signal. As a first argument it passes the information on how many bytes have already been received and as a second argument how many there are in total. This gives us the possibility to indicate the progress of the download with QProgressBar. As the passed arguments are of type qint64 we can't use them directly with QProgressBar since it only accepts int. So in the connected slot we first calculate the percentage of the download progress: void SomeClass::downloadProgress(qint64 bytesReceived, qint64 bytesTotal) { qreal progress = (bytesTotal < 1) ? 1.0 : bytesReceived * 100.0 / bytesTotal; progressBar->setValue(progress * progressBar->maximum()); } What just happened? With the percentage we set the new value for the progress bar where progressBar is the pointer to this bar. However, what value will progressBar->maximum() have and where do we set the range for the progress bar? What is nice is that you do not have to set it for every new download. It is only done once, for example in the constructor of the class containing the bar. As range values I would recommend: progressBar->setRange(0, 2048); The reason is that if you take for example a range of 0 to 100 and the progress bar is 500 pixels wide, the bar would jump 5 pixels forward for every value change. This will look ugly. To get a smooth progression where the bar expands by 1 pixel at a time, a range of 0 to 99.999.999 would surely work but would be highly inefficient. This is because the current value of the bar would change a lot without any graphical depiction. So the best value for the range would be 0 to the actual bar's width in pixel. Unfortunately, the width of the bar can change depending on the actual widget width and frequently querying the actual size of the bar every time the value change is also not a good solution. Why 2048, then? The idea behind this value is the resolution of the screen. Full HD monitors normally have a width of 1920 pixels, thus taking 2^11, aka 2048, ensure that a progress bar runs smoothly, even if it is fully expanded. So 2048 isn't the perfect number but a fairly good compromise. If you are targeting smaller devices, choose a smaller, more appropriate number. To be able to calculate the remaining time for the download to finish you have to start a timer. In this case use QElapsedTimer. After posting the request with QNetworkAccessManager::get() start the timer by calling QElapsedTimer::start(). Assuming the timer is called m_timer, the calculation would be: qint64 total = m_timer.elapsed() / progress; qint64 remaining = (total – m_timer.elapsed()) / 1000; QElapsedTimer::elapsed() returns the milliseconds counting from the moment when the timer was started. This value divided by the progress equals the estimated total download time. 
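The same arithmetic is easy to check outside of Qt. Below is a plain Python sketch, assuming progress is expressed as a fraction between 0 and 1 and elapsed_ms comes from a monotonic timer; the function name is made up, and the final step of the derivation is spelled out in the next sentence for the Qt version:

def remaining_seconds(elapsed_ms, bytes_received, bytes_total):
    # Estimate the remaining download time in seconds.
    if bytes_total < 1 or bytes_received < 1:
        return None                                   # nothing meaningful to report yet
    progress = bytes_received / float(bytes_total)    # fraction in (0, 1]
    total_ms = elapsed_ms / progress                  # estimated total duration
    return (total_ms - elapsed_ms) / 1000.0

# 30 seconds elapsed with 25% downloaded leaves roughly 90 seconds:
print(remaining_seconds(30000, 25, 100))   # 90.0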
If you subtract the elapsed time and divide the result by 1000, you'll get the remaining time in seconds. Using a proxy If you like to use a proxy you first have to set up a QNetworkProxy. You have to define the type of the proxy with setType(). As arguments you most likely want to pass QNetworkProxy::Socks5Proxy or QNetworkProxy::HttpProxy. Then set up the host name with setHostName(), the user name with setUserName() and the password with setPassword(). The last two properties are, of course, only needed if the proxy requires an authentication. Once the proxy is set up you can set it to the access manager via QNetworkAccessManager::setProxy(). Now, all new requests will use that proxy. Summary In this article you familiarized yourself with QNetworkAccessManager. This class is at the heart of your code whenever you want to download or upload files to the Internet. After having gone through the different signals that you can use to fetch errors, to get notified about new data or to show the progress, you should now know everything you need on that topic. Resources for Article: Further resources on this subject: GUI Components in Qt 5[article] Code interlude – signals and slots [article] Configuring Your Operating System [article]
Replacing 2D Sprites with 3D Models

Packt
21 Sep 2015
21 min read
In this article by Maya Posch, author of the book Mastering AndEngine Game Development, we look at using 3D models in a 2D engine. When using a game engine that limits itself to handling scenes in two dimensions, it seems obvious that you would use two-dimensional images here, better known as sprites. After all, you won't need that third dimension, right? It is when you get into more advanced games and scenes that you notice that, with animations and also with the usage of existing assets, there are many advantages to using a three-dimensional model in a two-dimensional scene. In this article we will cover these topics:

Using 3D models directly with AndEngine
Loading of 3D models with an AndEngine game

(For more resources related to this topic, see here.)

Why 3D in a 2D game makes sense

The reasons we want to use 3D models in our 2D scene include the following:

Recycling of assets: You can use the same models as used for a 3D engine project, as well as countless others.
Broader base of talent: You'll be able to use a 3D modeler for your 2D game, as good sprite artists are so rare.
Ease of animation: Good animation with sprites is hard. With 3D models, you can use various existing utilities to get smooth animations with ease.

As for the final impact it has on the game's looks, it's no silver bullet but should ease the development somewhat. The quality of the used models and produced animations, as well as the way they are integrated into a scene, will determine the final look.

2D and 3D compared

In short:

Definition: a 2D sprite is defined using a 2D grid of pixels; a 3D model is defined using vertices in a 3D grid.
Views: a sprite offers only a single front view; a model can be rotated to observe any desired side.
Resources: sprites are resource-efficient; models are resource-intensive.

A sprite is an image, or, if it's animated, a series of images. Within the boundaries of its resolution (for example, 64 x 64 pixels), the individual pixels make up the resulting image. This is a proven low-tech method, and it has been in use since the earliest video games. Even the first 3D games, such as Wolfenstein 3D and Doom, used sprites instead of models, as the former are easy to implement and require very few resources to render. With the available memory and processing capabilities of video consoles and personal computers until the later part of the 1990s, sprites were everywhere. It wasn't until the appearance of dedicated vertex graphics processors for consumer systems from companies such as 3dfx, Nvidia, and ATI that sprites would be largely replaced by vertex (3D) models. This is not to say that 3D models were totally new by then, of course. The technology had been in commercial use since the 1970s, when it was used for movie CGI and engineering in particular. In essence, both sprites and models are a representation of the same object; it's just that one contains more information than the other. Once rendered on the screen, the resulting image contains roughly the same amount of data. The biggest difference between sprites and models is the total amount of information that they can contain. For a sprite, there is no side or back. A model, on the other hand, has information about every part of its surface. It can be rotated in front of a camera to obtain a rendering of each of those orientations. A sprite is thus equivalent to a single orientation of a model.

Dealing with the third dimension

The first question that is likely to come to mind when it is suggested to use 3D models in what is advertised as a 2D engine is whether or not this will make the game engine into a 3D engine. The brief answer here is "No."
The longer answer is that despite the presence of these models, the engine's camera and other features are not aware of this third dimension, and so they will not be able to deal with it. It's not unlike the ray-casting engine employed by titles such as Wolfenstein 3D, which always operated in a horizontal plane and, by default, was not capable of tilting the camera to look up or down. This does imply that AndEngine can be turned into a 3D engine if all of its classes are adapted to deal with another dimension. We're not going that far here, however. All that we are interested in right now is integrating 3D model support into the existing framework. For this, we need a number of things. The most important one is to be able to load these models. The second is to render them in such a way that we can use them within the AndEngine framework. As we explored earlier, the way of integrating 3D models into a 2D scene is by realizing that a model is just a very large collection of possible sprites. What we need is a camera so that we can orient it relatively to the model, similar to how the camera in a 3D engine works. We can then display the model from the orientation. Any further manipulations, such as scaling and scene-wide transformations, are performed on the model's camera configuration. The model is only manipulated to obtain a new orientation or frame of an animation. Setting up the environment We first need to load the model from our resources into the memory. For this, we require logic that fetches the file, parses it, and produces the output, which we can use in the following step of rendering an orientation of the model. To load the model, we can either write the logic for it ourselves or use an existing library. The latter approach is generally preferred, unless you have special needs that are not yet covered by an existing library. As we have no such special needs, we will use an existing library. Our choice here is the open Asset Import Library, or assimp for short. It can import numerous 3D model files in addition to other kinds of resource files, which we'll find useful later on. Assimp is written in C++, which means that we will be using it as a native library (.a or .so). To accomplish this, we first need to obtain its source code and compile it for Android. The main Assimp site can be found at http://assimp.sf.net/, and the Git repository is at https://github.com/assimp/assimp. From the latter, we obtain the current source for Assimp and put it into a folder called assimp. We can easily obtain the Assimp source by either downloading an archive file containing the full repository or by using the Git client (from http://git-scm.com/) and cloning the repository using the following command in an empty folder (the assimp folder mentioned): git clone https://github.com/assimp/assimp.git This will create a local copy of the remote Git repository. An advantage of this method is that we can easily keep our local copy up to date with the Assimp project's version simply by pulling any changes. As Assimp uses CMake for its build system, we will also need to obtain the CMake version for Android from http://code.google.com/p/android-cmake/. Android-Cmake contains the toolchain file that we will need to set up the cross-compilation from our host system to Android/ARM. Assuming that we put Android-cmake into the android-cmake folder, we can then find this toolchain file under android-cmake/toolchain/android.toolchain.cmake. 
We now need to either set the following environmental variable or make sure we have properly set it: ANDROID_NDK: This points to the root folder where the Android NDK is placed At this point, we can use either the command-line-based CMake tool or the cross-platform CMake GUI. We choose the latter for sheer convenience. Unless you are quite familiar with the working of CMake, the use of the GUI tool can make the experience significantly more intuitive, not to mention faster and more automated. Any commands we use in the GUI tool will, however, easily translate to the command-line tool. The first thing we do after opening the CMake GUI utility is specify the location of the source—the assimp source folder—and the output for the CMake-generated files. For this path to the latter, we will create a new folder called buildandroid inside the Assimp source folder and specify it as the build folder. We now need to set a variable inside the CMake GUI: CMAKE_MAKE_PROGRAM: This variable specifies the path to the Make executable. For Linux/BSD, use GNU Make or similar; for Windows, use MinGW Make. Next, we will want to click on the Configure button where we can set the type of Make files generated as well as specify the location of the toolchain file. For the Make file type, you will generally want to pick Unix makefiles on Linux or similar and MinGW makefiles on Windows. Next, pick the option that allows you to specify the cross-compile toolchain file and select this file inside the Android-cmake folder as detailed earlier. After this, the CMake GUI should output Configuring done. What has happened now is that the toolchain file that we linked to has configured CMake to use the NDK's compiler, which targets ARM as well as sets other configuration options. If we want, we can change some options here, such as the following: CMAKE_BUILD_TYPE: We can specify the type of build we want here, which includes the Debug and Release strings. ASSIMP_BUILD_STATIC_LIB: This is a boolean value. Setting it to true (or checking the box in the GUI) will generate only a library file for static linking and no .so file. Whether we want to build statically or not depends on our ultimate goals and distribution details. As static linking of external libraries is quite convenient and also reduces the total file size on the platform, which is generally already strapped for space, it seems obvious to link statically. The resulting .a library for a release build should be in the order of 16 megabytes, while a debug build is about 68 megabytes. When linking the final application, only those parts of the library that we'll use will be included in our application, shrinking the total file size once more. We are now ready to click on the Generate button, which should generate a Generating done output. If you get an error along the lines of Could not uniquely determine machine name for compiler, you should look at the paths used by CMake and check whether they exist. For the NDK toolchain on Windows, for example, the path may contain the windows part, whereas the NDK only has a folder called windows-x86_64. If we look into the buildandroid folder after this, we can see that CMake has generated a makefile and additional relevant files. We only need the central Make file in the buildandroid folder, however. In a terminal window, we navigate to this folder and execute the following command: make This should start the execution of the Make files that CMake generated and result in a proper build. 
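For reference, and purely as a sketch, the GUI configuration described above corresponds roughly to the following command-line invocation; the exact paths to the toolchain file and to Make are assumptions that depend on where you placed android-cmake and on your host system:

# Run from inside assimp/buildandroid (paths are placeholders)
cmake .. \
  -DCMAKE_TOOLCHAIN_FILE=../../android-cmake/toolchain/android.toolchain.cmake \
  -DCMAKE_MAKE_PROGRAM=/usr/bin/make \
  -DCMAKE_BUILD_TYPE=Release \
  -DASSIMP_BUILD_STATIC_LIB=ON
make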
At the end of this compilation sequence, we should have a library file in assimp/libs/armeabi-v7a/ called libassimp.a. For our project, we need this library and the Assimp include files. We can find them under assimp/include/assimp. We copy the folder with the include files to our project's /jni folder. The .a library is placed in the /jni folder as well. As this is a relatively simple NDK project, a simple file structure is fine. For a more complex project, we would want to have a separate /jni/libs folder, or something similar. Importing a model The Assimp library provides conversion tools for reading resource files, such as those for 3D mesh models, and provides a generic format on the application's side. For a 3D mesh file, Assimp provides us with an aiScene object that contains all the meshes and related data as described by the imported file. After importing a model, we need to read the sets of data that we require for rendering. These are the types of data: Vertices (positions) Normals Texture mapping (UV) Indices Vertices might be obvious; they are the positions of points between which lines of basic geometric shapes are drawn. Usually, three vertices are used to form a triangular face, which forms the basic shape unit for a model. Normals indicate the orientation of the vertex. We have one normal per vertex. Texture mapping is provided using so-called UV coordinates. Each vertex has a UV coordinate if texture mapping information is provided with the model. Finally, indices are values provided per face, indicating which vertices should be used. This is essentially a compression technique, allowing the faces to define the vertices that they will use so that shared vertices have to be defined only once. During the drawing process, these indices are used by OpenGL to find the vertices to draw. We start off our importer code by first creating a new file called assimpImporter.cpp in the /jni folder. We require the following include: #include "assimp/Importer.hpp" // C++ importer interface #include "assimp/scene.h" // output data structure #include "assimp/postprocess.h" // post processing flags // for native asset manager #include <sys/types.h> #include <android/asset_manager.h> #include <android/asset_manager_jni.h> The Assimp include give us access to the central Importer object, which we'll use for the actual import process, and the scene object for its output. The postprocess include contains various flags and presets for post-processing information to be used with Importer, such as triangulation. The remaining includes are meant to give us access to the Android Asset Manager API. The model file is stored inside the /assets folder, which once packaged as an APK is only accessible during runtime via this API, whether in Java or in native code. Moving on, we will be using a single function in our native code to perform the importing and processing. As usual, we have to first declare a C-style interface so that when our native library gets compiled, our Java code can find the function in the library: extern "C" { JNIEXPORT jboolean JNICALL Java_com_nyanko_andengineontour_MainActivity_getModelData(JNIEnv* env, jobject obj, jobject model, jobject assetManager, jstring filename); }; The JNIEnv* parameter and the first jobject parameter are standard in an NDK/JNI function, with the former being a handy pointer to the current JVM environment, offering a variety of utility functions. 
Our own parameters are the following: model assetManager filename The model is a basic Java class with getters/setters for the arrays of vertex, normal, UV and index data of which we create an instance and pass a reference via the JNI. The next parameter is the Asset Manager instance that we created in the Java code. Finally, we obtain the name of the file that we are supposed to load from the assets containing our mesh. One possible gotcha in the naming of the function we're exporting is that of underscores. Within the function name, no underscores are allowed, as underscores are used to indicate to the NDK what the package name and class names are. Our getModelData function gets parsed as being in the MainActivity class of the package com.nyanko.andengineontour. If we had tried to use, for example, get_model_data as the function name, it would have tried to find function data in the model class of the com.nyanko.andengineontour.get package. Next, we can begin the actual importing process. First, we define the aiScene instance, that will contain the imported scene, and the arrays for the imported data, as well as the Assimp Importer instance: const aiScene* scene = 0; jfloat* vertexArray; jfloat* normalArray; jfloat* uvArray; jshort* indexArray; Assimp::Importer importer; In order to use a Java string in native code, we have to use the provided method to obtain a reference via the env parameter: const char* utf8 = env->GetStringUTFChars(filename, 0); if (!utf8) { return JNI_FALSE; } We then create a reference to the Asset Manager instance that we created in Java: AAssetManager* mgr = AAssetManager_fromJava(env, assetManager); if (!mgr) { return JNI_FALSE; } We use this to obtain a reference to the asset we're looking for, being the model file: AAsset* asset = AAssetManager_open(mgr, utf8, AASSET_MODE_UNKNOWN); if (!asset) { return JNI_FALSE; } Finally, we release our reference to the filename string before moving on to the next stage: env->ReleaseStringUTFChars(filename, utf8); With access to the asset, we can now read it from the memory. While it is, in theory, possible to directly read a file from the assets, you will have to write a new I/O manager to allow Assimp to do this. This is because asset files, unfortunately, cannot be passed as a standard file handle reference on Android. For smaller models, however, we can read the entire file from the memory and pass this data to the Assimp importer. First, we get the size of the asset, create an array to store its contents, and read the file in it: int count = (int) AAsset_getLength(asset); char buf[count + 1]; if (AAsset_read(asset, buf, count) != count) { return JNI_FALSE; } Finally, we close the asset reference: AAsset_close(asset); We are now done with the asset manager and can move on to the importing of this model data: const aiScene* scene = importer.ReadFileFromMemory(buf, count, aiProcessPreset_TargetRealtime_Fast); if (!scene) { return JNI_FALSE; } The importer has a number of possible ways to read in the file data, as mentioned earlier. Here, we read from a memory buffer (buf) that we filled in earlier with the count parameter, indicating the size in bytes. The last parameter of the import function is the post-processing parameters. Here, we use the aiProcessPreset_TargetRealtime_Fast preset, which performs triangulation (converting non-triangle faces to triangles), and other sensible presets. The resulting aiScene object can contain multiple meshes. In a complete importer, you'd want to import all of them into a loop. 
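As a quick illustration (this loop is not part of the article's importer), iterating over every mesh in the scene could look like the following sketch, where processMesh() is a hypothetical helper standing in for the per-mesh extraction shown next:

// Sketch only: visit every mesh contained in the imported scene.
for (unsigned int m = 0; m < scene->mNumMeshes; ++m) {
    aiMesh* mesh = scene->mMeshes[m];
    processMesh(mesh); // hypothetical helper; the article processes a single mesh inline
}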
We'll just look at importing the first mesh into the scene here. First, we get the mesh:

aiMesh* mesh = scene->mMeshes[0];

This aiMesh object contains all of the information on the data we're interested in. First, however, we need to create our arrays:

int vertexArraySize = mesh->mNumVertices * 3;
int normalArraySize = mesh->mNumVertices * 3;
int uvArraySize = mesh->mNumVertices * 2;
int indexArraySize = mesh->mNumFaces * 3;
vertexArray = new float[vertexArraySize];
normalArray = new float[normalArraySize];
uvArray = new float[uvArraySize];
indexArray = new jshort[indexArraySize];

For the vertex, normal, and texture mapping (UV) arrays, we use the number of vertices as defined in the aiMesh object, since normals and UVs are defined per vertex. The former two have three components (x, y, z) and the UVs have two (x, y). Finally, indices are defined per vertex of a face, so we use the face count from the mesh multiplied by three, the number of vertices per (triangulated) face. Everything but the indices uses floats for its components. The jshort type is a short integer type defined by the NDK. It's generally a good idea to use the NDK types for values that are sent to and from the Java side. Reading the data from the aiMesh object into the arrays is fairly straightforward:

for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
    aiVector3D pos = mesh->mVertices[i];
    vertexArray[3 * i + 0] = pos.x;
    vertexArray[3 * i + 1] = pos.y;
    vertexArray[3 * i + 2] = pos.z;

    aiVector3D normal = mesh->mNormals[i];
    normalArray[3 * i + 0] = normal.x;
    normalArray[3 * i + 1] = normal.y;
    normalArray[3 * i + 2] = normal.z;

    aiVector3D uv = mesh->mTextureCoords[0][i];
    uvArray[2 * i + 0] = uv.x;
    uvArray[2 * i + 1] = uv.y;
}

for (unsigned int i = 0; i < mesh->mNumFaces; i++) {
    const aiFace& face = mesh->mFaces[i];
    indexArray[3 * i + 0] = face.mIndices[0];
    indexArray[3 * i + 1] = face.mIndices[1];
    indexArray[3 * i + 2] = face.mIndices[2];
}

To access the correct part of the array to write to, we use an index that is the number of elements (floats or shorts) times the current iteration, plus an offset to reach the next available slot. Doing things this way instead of pointer incrementation has the benefit that we do not have to reset the array pointer after we're done writing. There! We have now read in all of the data that we want from the model. Next is arguably the hardest part of using the NDK: passing data via the JNI. This involves quite a lot of reference magic and type-matching, which can be rather annoying and lead to confusing errors. To make things as easy as possible, we used the generic Java class instance so that we already had an object to put our data into from the native side. We still have to find the methods in this class instance, however, using what is essentially Java reflection:

jclass cls = env->GetObjectClass(model);
if (!cls) {
    return JNI_FALSE;
}

The first goal is to get a jclass reference. For this, we use the jobject model variable, as it already contains our instantiated class instance:

jmethodID setVA = env->GetMethodID(cls, "setVertexArray", "([F)V");
jmethodID setNA = env->GetMethodID(cls, "setNormalArray", "([F)V");
jmethodID setUA = env->GetMethodID(cls, "setUvArray", "([F)V");
jmethodID setIA = env->GetMethodID(cls, "setIndexArray", "([S)V");

We then obtain the method references for the setters in the class as jmethodID variables. The parameters of each call are the class reference we created, the name of the method, and its signature, such as a float array ([F) parameter and a void (V) return type; the index setter takes a short array ([S) instead.
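For reference, the Java-side Model class whose setters we are looking up is assumed to look roughly like the following sketch. The actual class in the project may differ, but its setter signatures have to match the ([F)V and ([S)V descriptors used above:

// Sketch of the assumed Java data holder; not taken from the article itself.
public class Model {
    private float[] vertexArray;
    private float[] normalArray;
    private float[] uvArray;
    private short[] indexArray;

    public void setVertexArray(float[] data) { vertexArray = data; }
    public void setNormalArray(float[] data) { normalArray = data; }
    public void setUvArray(float[] data)     { uvArray = data; }
    public void setIndexArray(short[] data)  { indexArray = data; }
}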
Finally, we create our native Java arrays to pass back via the JNI:

jfloatArray jvertexArray = env->NewFloatArray(vertexArraySize);
env->SetFloatArrayRegion(jvertexArray, 0, vertexArraySize, vertexArray);
jfloatArray jnormalArray = env->NewFloatArray(normalArraySize);
env->SetFloatArrayRegion(jnormalArray, 0, normalArraySize, normalArray);
jfloatArray juvArray = env->NewFloatArray(uvArraySize);
env->SetFloatArrayRegion(juvArray, 0, uvArraySize, uvArray);
jshortArray jindexArray = env->NewShortArray(indexArraySize);
env->SetShortArrayRegion(jindexArray, 0, indexArraySize, indexArray);

This code uses the env JNIEnv* reference to create the Java arrays and allocate memory for them in the JVM. Finally, we call the setter functions in the class to set our data. These essentially call the methods on the Java class inside the JVM, providing the parameter data as Java types:

env->CallVoidMethod(model, setVA, jvertexArray);
env->CallVoidMethod(model, setNA, jnormalArray);
env->CallVoidMethod(model, setUA, juvArray);
env->CallVoidMethod(model, setIA, jindexArray);

We only have to return JNI_TRUE now, and we're done.

Building our library

To build our code, we write the Android.mk and Application.mk files. Next, we go to the top level of our project in a terminal window and execute the ndk-build command. This will compile the code and place a library in the /libs folder of our project, inside a folder that indicates the CPU architecture it was compiled for. For further details on the ndk-build tool, you can refer to the official documentation at https://developer.android.com/ndk/guides/ndk-build.html. Our Android.mk file looks as follows:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libassimp
LOCAL_SRC_FILES := libassimp.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := assimpImporter
#LOCAL_MODULE_FILENAME := assimpImporter
LOCAL_SRC_FILES := assimpImporter.cpp
LOCAL_LDLIBS := -landroid -lz -llog
LOCAL_STATIC_LIBRARIES := libassimp libgnustl_static
include $(BUILD_SHARED_LIBRARY)

The only things worthy of notice here are the inclusion of the Assimp library we compiled earlier and the use of the gnustl_static library. Since we only have a single native library in the project, we don't have to share the STL library, so we link it statically into our library. Finally, we have the Application.mk file:

APP_PLATFORM := android-9
APP_STL := gnustl_static

There's not much to see here beyond the required specification of the STL runtime that we wish to use and the Android revision we are aiming for. After executing the build command, we are ready to build the actual application that performs the rendering of our model data.

Summary

With our code added, we can now load 3D models from a variety of formats, import them into our application, and create objects out of them, which we can use together with AndEngine. As implemented now, we essentially have an embedded rendering pipeline for 3D assets that extends the basic AndEngine 2D rendering pipeline. This provides a solid platform for the next stages in extending these basics even further to provide the texturing, lighting, and physics effects that we need to create an actual game.

Resources for Article:

Further resources on this subject:
Cross-platform Building [article]
Getting to Know LibGDX [article]
Nodes [article]
Overview of Unreal Engine 4

Packt
18 Sep 2015
2 min read
In this article by Katax Emperor and Devin Sherry, author of the book Unreal Engine Physics Essentials, we will discuss and evaluate the basic 3D physics and mathematics concepts in an effort to gain a basic understanding of Unreal Engine 4 physics and real-world physics. To start with, we will discuss the units of measurement, what they are, and how they are used in Unreal Engine 4. In addition, we will cover the following topics: The scientific notation 2D and 3D coordinate systems Scalars and vectors Newton's laws or Newtonian physics concepts Forces and energy For the purpose of this chapter, we will want to open Unreal Engine 4 and create a simple project using the First Person template by following these steps. (For more resources related to this topic, see here.) Launching Unreal Engine 4 When we first open Unreal Engine 4, we will see the Unreal Engine Launcher, which contains a News tab, a Learn tab, a Marketplace tab, and a Library tab. As the first title suggests, the News tab provides you with the latest news from Epic Games, ranging from Marketplace Content releases to Unreal Dev Grant winners, Twitch Stream Recaps, and so on. The Learn tab provides you with numerous resources to learn more about Unreal Engine 4, such as Written Documentation, Video Tutorials, Community Wikis, Sample Game Projects, and Community Contributions. The Marketplace tab allows you to purchase content, such as FX, Weapons Packs, Blueprint Scripts, Environmental Assets, and so on, from the community and Epic Games. Lastly, the Library tab is where you can download the newest versions of Unreal Engine 4, open previously created projects, and manage your project files. Let's start by first launching the Unreal Engine Launcher and choosing Launch from the Library tab, as seen in the following image: For the sake of consistency, we will use the latest version of the editor. At the time of writing this book, the version is 4.7.6. Next, we will select the New Project tab that appears at the top of the window, select the First Person project template with Starter Content, and name the project Unreal_PhyProject: Summary In this article we had an an overview of Unreal Engine 4 and how to launch Unreal Engine 4. Resources for Article: Further resources on this subject: Exploring and Interacting with Materials using Blueprints [article] Unreal Development Toolkit: Level Design HQ [article] Configuration and Handy Tweaks for UDK [article]
A Development Workflow with Docker

Xavier Bruhiere
18 Sep 2015
8 min read
In this post, we're going to explore the sacred developer workflow, and how we can leverage modern technologies to craft a very opinionated and trendy setup. As such a topic might involve a lot of personal taste, we will mostly focus on ideas that have the potential to increase developer happiness, productivity, and software quality. The tools used in this article made my life easier, but feel free to pick what you like and swap what you don't with your own arsenal. While it is a good idea to stick with mature tools and seriously learn how to master them, you should keep an open mind and periodically monitor what's new. Software development evolves at an intense pace and smart people regularly come up with new projects that can help us to be better at what we do. To keep things concrete and challenge our hypotheses, we're going to develop a development tool. Our small command line application will manage the creation, listing and destruction of project tickets. We will write it in node.js to enjoy a scripting language, a very large ecosystem and a nice integration with yeoman. This last reason foreshadows future features and probably a post about them.

Code Setup

The code has been tested under Ubuntu 14.10, io.js version 1.8.1 and npm version 2.8.3. As this post focuses on the workflow, rather than on the code, I'll keep everything as simple as possible and assume you have a basic knowledge of docker and developing with node. Now let's build the basic structure of a new node project.

code/ ➜ tree
.
├── package.json
├── bin
│   └── iago.js
├── lib
│   └── notebook.js
└── test
    ├── mocha.opts
    └── notebook.js

Some details:

bin/iago.js is the command line entry point.
lib/notebook.js exports the methods to interact with tickets.
test/ uses mocha and chai for unit-testing.
package.json provides information on the project:

{
  "name": "iago",
  "version": "0.1.0",
  "description": "Ticket management",
  "bin": {
    "iago": "./bin/iago.js"
  }
}

Build Automation

As TDD advocates, let's start with a failing test.

// test/notebook.js
// Mocha - the fun, simple, flexible JavaScript test framework
// Chai - assertion library
var expect = require('chai').expect;
var notebook = require('../lib/notebook');

describe('new note', function() {
  beforeEach(function(done) {
    // Reset the database used to store tickets before each test, to keep tests independent
    notebook.backend.remove();
    done();
  })

  it('should be empty', function() {
    expect(notebook.backend.size()).to.equal(0);
  });
});

In order to run it, we first need to install node, npm, mocha and chai. Ideally, we share the same software versions as the rest of the team, on the same OS. Hopefully, it won't clash with other projects we might develop on the same machine, and the production environment is exactly the same. Or we could use docker and not bother.

# -it --rm       : start a new container, automatically removed once done
# --volume       : make our code available from within the container
# --workdir /app : set the default working dir in the project's root
# iojs           : use the official io.js image
# npm install    : install the test libraries and save them in package.json
$ docker run -it --rm \
    --volume $PWD:/app \
    --workdir /app \
    iojs \
    npm install --save-dev mocha chai

This one-liner installs mocha and chai locally in node_modules/. With nothing more than docker installed, we can now run tests.

$ docker run -it --rm --volume $PWD:/app --workdir /app iojs node_modules/.bin/mocha

Having dependencies bundled along with the project lets us use the stack container as is. This approach extends to other languages remarkably well: Ruby has Bundler and Go has Godep.
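To avoid retyping that rather long command, you could wrap it yourself; for instance (purely a convenience suggestion, not something this post prescribes), with a shell alias:

# Optional convenience alias; an assumption, not part of the original setup
alias iojs-run='docker run -it --rm --volume $PWD:/app --workdir /app iojs'

# Tests then become:
iojs-run node_modules/.bin/mocha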
Let's make the test pass with the following implementation of our notebook.

/*jslint node: true */
'use strict';

var path = require('path');
// Flat JSON file database built on the lodash API
var low = require('lowdb');
// Pretty unicode tables for the CLI with Node.js
var table = require('cli-table');

/**
 * Storage with sane defaults
 * @param {string} dbPath - Flat (json) file Lowdb will use
 * @param {string} dbName - Lowdb database name
 */
function db(dbPath, dbName) {
  dbPath = dbPath || process.env.HOME + '/.iago.json';
  dbName = dbName || 'notebook';
  console.log('using', dbPath, 'storage');
  return low(dbPath)(dbName);
}

module.exports = {
  backend: db(),

  write: function(title, content, owner, labels) {
    var note = {
      meta: {
        project: path.basename(process.cwd()),
        date: new Date(),
        status: 'created',
        owner: owner,
        labels: labels,
      },
      title: title,
      ticket: content,
    };
    console.log('writing new note:', title);
    this.backend.push(note);
  },

  list: function() {
    var i = 0;
    var grid = new table({head: ['title', 'note', 'author', 'date']});
    var dump = db().cloneDeep();
    for (; i < dump.length; i++) {
      grid.push([
        dump[i].title,
        dump[i].ticket,
        dump[i].meta.owner,
        dump[i].meta.date
      ]);
    }
    console.log(grid.toString());
  },

  done: function(title) {
    var notes = db().remove({title: title});
    console.log('note', notes[0].title, 'removed');
  }
};

Again we install the dependencies and re-run the tests.

# Install lowdb and cli-table locally
docker run -it --rm --volume $PWD:/app --workdir /app iojs npm install lowdb cli-table

# Successful tests
docker run -it --rm --volume $PWD:/app --workdir /app iojs node_modules/.bin/mocha

To sum up, so far:

The iojs container gives us a consistent node stack.
When mapping the code as a volume and bundling the dependencies locally, we can run tests or execute anything.

In the second part, we will try to automate the process and integrate those ideas smoothly in our workflow.

Coding Environment

Containers provide a consistent way to package environments and distribute them. This is ideal for setting up a development machine and sharing it with the team / world. The following Dockerfile builds such an artifact:

# Save it as provision/Dockerfile
FROM ruby:latest

RUN apt-get update && apt-get install -y tmux vim zsh
RUN gem install tmuxinator
ENV EDITOR "vim"

# Inject development configuration
ADD workspace.yml /root/.tmuxinator/workspace.yml

ENTRYPOINT ["tmuxinator"]
CMD ["start", "workspace"]

Tmux is a popular terminal multiplexer, and tmuxinator lets us easily control how terminal windows are organized and navigated. The configuration below sets up a single window split into three panes:

The main pane, where we can move around and edit files
The test pane, where tests continuously run on file changes
The repl pane, with a running interpreter

# Save as provision/workspace.yml
name: workspace
# We find the same code path as earlier
root: /app
windows:
  - workspace:
      layout: main-vertical
      panes:
        - zsh
        # Watch files and rerun tests
        - docker exec -it code_worker_1 node_modules/.bin/mocha --watch
  - repl:
      # In case the worker container is still bootstrapping
      - sleep 3
      - docker exec -it code_worker_1 node

Let's dig into what's behind docker exec -it code_worker_1 node_modules/.bin/mocha --watch.

Workflow Deployment

This command supposes an iojs container, named code_worker_1, is running. So we have two containers to orchestrate, and docker compose is a very elegant solution for that. The configuration file below describes how to run them.
# This container has the necessary tech stack
worker:
  image: iojs
  volumes:
    - .:/app
  working_dir: /app
  # Just hang around
  # The other container will be in charge of running interesting commands
  command: "while true; do echo hello world; sleep 10; done"

# This one is our development environment
workspace:
  # Build the dockerfile we described earlier
  build: ./provision
  # Make the docker client available within the container
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
  # Make the code available within the container
  volumes_from:
    - worker
  stdin_open: true
  tty: true

Yaml gives us a very declarative expression of our machines. Let's infuse some life into them.

$ # Run in detached mode
$ docker-compose up -d
$ # ...
$ docker-compose ps
Name              Command                        State
-----------------------------------------------------
code_worker_1     while true; do echo hello w    Up
code_workspace_1  tmuxinator start workspace     Up

The code stack and the development environment are ready. We can reach them with docker attach code_workspace_1, and find a tmux session as configured above, with tests and repl in place. Once done, press ctrl-p + ctrl-q to detach the session from the container, and run docker-compose stop to stop both machines. The next time we develop on this project, a simple docker-compose up -d will bring back the entire stack and our favorite tools.

What's Next

We combined a lot of tools, but most of them use configuration files we can tweak. Actually, this is only the beginning of a really promising approach. Indeed, we could easily consider more sophisticated development environments, with personal dotfiles and a better provisioning system. This is also true for the stack container, which could be dedicated to android code and run on a powerful 16GB RAM remote server. Containers unlock new potential for deployment, but also for development. The consistency these technologies bring to the table should encourage best practices and automation, and help us write more reliable code, faster. Otherwise: [xkcd comic, courtesy of xkcd]

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.
OpenCV: Detecting Edges, Lines, and Shapes

Oli Huggins
17 Sep 2015
19 min read
Edges play a major role in both human and computer vision. We, as humans, can easily recognize many object types and their positions just by seeing a backlit silhouette or a rough sketch. Indeed, when art emphasizes edges and pose, it often seems to convey the idea of an archetype, such as Rodin's The Thinker or Joe Shuster's Superman. Software, too, can reason about edges, poses, and archetypes. This OpenCV tutorial has been taken from Learning OpenCV 3 Computer Vision with Python.

OpenCV provides many edge-finding filters, including Laplacian(), Sobel(), and Scharr(). These filters are supposed to turn non-edge regions to black, while turning edge regions to white or saturated colors. However, they are prone to misidentifying noise as edges. This flaw can be mitigated by blurring an image before trying to find its edges. OpenCV also provides many blurring filters, including blur() (simple average), medianBlur(), and GaussianBlur(). The arguments for the edge-finding and blurring filters vary, but always include ksize, an odd whole number that represents the width and height (in pixels) of the filter's kernel. For the purpose of blurring, let's use medianBlur(), which is effective in removing digital video noise, especially in color images. For the purpose of edge-finding, let's use Laplacian(), which produces bold edge lines, especially in grayscale images. After applying medianBlur(), but before applying Laplacian(), we should convert the image from BGR to grayscale. Once we have the result of Laplacian(), we can invert it to get black edges on a white background. Then, we can normalize it (so that its values range from 0 to 1) and multiply it with the source image to darken the edges. Let's implement this approach in filters.py:

def strokeEdges(src, dst, blurKsize = 7, edgeKsize = 5):
    if blurKsize >= 3:
        blurredSrc = cv2.medianBlur(src, blurKsize)
        graySrc = cv2.cvtColor(blurredSrc, cv2.COLOR_BGR2GRAY)
    else:
        graySrc = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
    cv2.Laplacian(graySrc, cv2.CV_8U, graySrc, ksize = edgeKsize)
    normalizedInverseAlpha = (1.0 / 255) * (255 - graySrc)
    channels = cv2.split(src)
    for channel in channels:
        channel[:] = channel * normalizedInverseAlpha
    cv2.merge(channels, dst)

Note that we allow kernel sizes to be specified as arguments to strokeEdges(). The blurKsize argument is used as ksize for medianBlur(), while edgeKsize is used as ksize for Laplacian(). With my webcams, I find that a blurKsize value of 7 and an edgeKsize value of 5 look best. Unfortunately, medianBlur() is expensive with a large ksize, such as 7.

Note: If you encounter performance problems when running strokeEdges(), try decreasing the blurKsize value. To turn off the blur option, set it to a value less than 3.
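As a quick usage sketch (not from the book), strokeEdges() could be applied to a single still image like this; the file names are placeholders:

import cv2
from filters import strokeEdges  # assumes the function lives in filters.py, as above

src = cv2.imread('input.jpg')    # placeholder input image (BGR)
dst = src.copy()                 # destination buffer of the same shape and type
strokeEdges(src, dst)            # default kernel sizes: blurKsize=7, edgeKsize=5
cv2.imwrite('edges_darkened.jpg', dst)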
Custom kernels – getting convoluted

As we have just seen, many of OpenCV's predefined filters use a kernel. Remember that a kernel is a set of weights that determine how each output pixel is calculated from a neighborhood of input pixels. Another term for a kernel is a convolution matrix. It mixes up or convolves the pixels in a region. Similarly, a kernel-based filter may be called a convolution filter. OpenCV provides a very versatile function, filter2D(), which applies any kernel or convolution matrix that we specify. To understand how to use this function, let's first learn the format of a convolution matrix. This is a 2D array with an odd number of rows and columns. The central element corresponds to a pixel of interest and the other elements correspond to this pixel's neighbors. Each element contains an integer or floating point value, which is a weight that gets applied to an input pixel's value. Consider this example:

kernel = numpy.array([[-1, -1, -1],
                      [-1,  9, -1],
                      [-1, -1, -1]])

Here, the pixel of interest has a weight of 9 and its immediate neighbors each have a weight of -1. For the pixel of interest, the output color will be nine times its input color, minus the input colors of all eight adjacent pixels. If the pixel of interest was already a bit different from its neighbors, this difference becomes intensified. The effect is that the image looks sharper as the contrast between neighbors is increased. Continuing our example, we can apply this convolution matrix to a source and destination image, respectively, as follows:

cv2.filter2D(src, -1, kernel, dst)

The second argument specifies the per-channel depth of the destination image (such as cv2.CV_8U for 8 bits per channel). A negative value (as used here) means that the destination image has the same depth as the source image.

Note: For color images, filter2D() applies the kernel equally to each channel. To use different kernels on different channels, we would also have to use the split() and merge() functions.

Based on this simple example, let's add two classes to filters.py. One class, VConvolutionFilter, will represent a convolution filter in general. A subclass, SharpenFilter, will specifically represent our sharpening filter. Let's edit filters.py to implement these two new classes as follows:

class VConvolutionFilter(object):
    """A filter that applies a convolution to V (or all of BGR)."""

    def __init__(self, kernel):
        self._kernel = kernel

    def apply(self, src, dst):
        """Apply the filter with a BGR or gray source/destination."""
        cv2.filter2D(src, -1, self._kernel, dst)

class SharpenFilter(VConvolutionFilter):
    """A sharpen filter with a 1-pixel radius."""

    def __init__(self):
        kernel = numpy.array([[-1, -1, -1],
                              [-1,  9, -1],
                              [-1, -1, -1]])
        VConvolutionFilter.__init__(self, kernel)

Note that the weights sum up to 1. This should be the case whenever we want to leave the image's overall brightness unchanged. If we modify a sharpening kernel slightly so that its weights sum up to 0 instead, then we have an edge detection kernel that turns edges white and non-edges black. For example, let's add the following edge detection filter to filters.py:

class FindEdgesFilter(VConvolutionFilter):
    """An edge-finding filter with a 1-pixel radius."""

    def __init__(self):
        kernel = numpy.array([[-1, -1, -1],
                              [-1,  8, -1],
                              [-1, -1, -1]])
        VConvolutionFilter.__init__(self, kernel)

Next, let's make a blur filter. Generally, for a blur effect, the weights should sum up to 1 and should be positive throughout the neighborhood. For example, we can take a simple average of the neighborhood as follows:

class BlurFilter(VConvolutionFilter):
    """A blur filter with a 2-pixel radius."""

    def __init__(self):
        kernel = numpy.array([[0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04],
                              [0.04, 0.04, 0.04, 0.04, 0.04]])
        VConvolutionFilter.__init__(self, kernel)

Our sharpening, edge detection, and blur filters use kernels that are highly symmetric. Sometimes, though, kernels with less symmetry produce an interesting effect.
Let's consider a kernel that blurs on one side (with positive weights) and sharpens on the other (with negative weights). It will produce a ridged or embossed effect. Here is an implementation that we can add to filters.py: class EmbossFilter(VConvolutionFilter): """An emboss filter with a 1-pixel radius.""" def __init__(self): kernel = numpy.array([[-2, -1, 0], [-1, 1, 1], [ 0, 1, 2]]) VConvolutionFilter.__init__(self, kernel) This set of custom convolution filters is very basic. Indeed, it is more basic than OpenCV's ready-made set of filters. However, with a bit of experimentation, you will be able to write your own kernels that produce a unique look. Modifying an application Now that we have high-level functions and classes for several filters, it is trivial to apply any of them to the captured frames in Cameo. Let's edit cameo.py and add the lines that appear in bold face in the following excerpt: import cv2 import filters from managers import WindowManager, CaptureManager class Cameo(object): def __init__(self): self._windowManager = WindowManager('Cameo', self.onKeypress) self._captureManager = CaptureManager( cv2.VideoCapture(0), self._windowManager, True) self._curveFilter = filters.BGRPortraCurveFilter() def run(self): """Run the main loop.""" self._windowManager.createWindow() while self._windowManager.isWindowCreated: self._captureManager.enterFrame() frame = self._captureManager.frame filters.strokeEdges(frame, frame) self._curveFilter.apply(frame, frame) self._captureManager.exitFrame() self._windowManager.processEvents() Here, I have chosen to apply two effects: stroking the edges and emulating Portra film colors. Feel free to modify the code to apply any filters you like. Here is a screenshot from Cameo, with stroked edges and Portra-like colors: Edge detection with Canny OpenCV also offers a very handy function, called Canny, (after the algorithm's inventor, John F. Canny) which is very popular not only because of its effectiveness, but also the simplicity of its implementation in an OpenCV program as it is a one-liner: import cv2 import numpy as np img = cv2.imread("../images/statue_small.jpg", 0) cv2.imwrite("canny.jpg", cv2.Canny(img, 200, 300)) cv2.imshow("canny", cv2.imread("canny.jpg")) cv2.waitKey() cv2.destroyAllWindows() The result is a very clear identification of the edges: The Canny edge detection algorithm is quite complex but also interesting: it's a five-step process that denoises the image with a Gaussian filter, calculates gradients, applies nonmaximum suppression (NMS) on edges and a double threshold on all the detected edges to eliminate false positives, and, lastly, analyzes all the edges and their connection to each other to keep the real edges and discard weaker ones. Contours detection Another vital task in computer vision is contour detection, not only because of the obvious aspect of detecting contours of subjects contained in an image or video frame, but because of the derivative operations connected with identifying contours. These operations are, namely computing bounding polygons, approximating shapes, and, generally, calculating regions of interest, which considerably simplifies the interaction with image data. This is because a rectangular region with numpy is easily defined with an array slice. We will be using this technique a lot when exploring the concept of object detection (including faces) and object tracking. 
Let's go in order and familiarize ourselves with the API first with an example: import cv2 import numpy as np img = np.zeros((200, 200), dtype=np.uint8) img[50:150, 50:150] = 255 ret, thresh = cv2.threshold(img, 127, 255, 0) image, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) color = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) img = cv2.drawContours(color, contours, -1, (0,255,0), 2) cv2.imshow("contours", color) cv2.waitKey() cv2.destroyAllWindows() Firstly, we create an empty black image that is 200x200 pixels size. Then, we place a white square in the center of it, utilizing ndarray's ability to assign values for a slice. We then threshold the image, and call the findContours() function. This function takes three parameters: the input image, hierarchy type, and the contour approximation method. There are a number of aspects of particular interest about this function: The function modifies the input image, so it would be advisable to use a copy of the original image (for example, by passing img.copy()). Secondly, the hierarchy tree returned by the function is quite important: cv2.RETR_TREE will retrieve the entire hierarchy of contours in the image, enabling you to establish "relationships" between contours. If you only want to retrieve the most external contours, use cv2.RETR_EXTERNAL. This is particularly useful when you want to eliminate contours that are entirely contained in other contours (for example, in a vast majority of cases, you won't need to detect an object within another object of the same type). The findContours function returns three elements: the modified image, contours, and their hierarchy. We use the contours to draw on the color version of the image (so we can draw contours in green) and eventually display it. The result is a white square, with its contour drawn in green. Spartan, but effective in demonstrating the concept! Let's move on to more meaningful examples. Contours – bounding box, minimum area rectangle and minimum enclosing circle Finding the contours of a square is a simple task; irregular, skewed, and rotated shapes bring the best out of the cv2.findContours utility function of OpenCV. Let's take a look at the following image: In a real-life application, we would be most interested in determining the bounding box of the subject, its minimum enclosing rectangle, and circle. 
The cv2.findContours function in conjunction with another few OpenCV utilities makes this very easy to accomplish: import cv2 import numpy as np img = cv2.pyrDown(cv2.imread("hammer.jpg", cv2.IMREAD_UNCHANGED)) ret, thresh = cv2.threshold(cv2.cvtColor(img.copy(), cv2.COLOR_BGR2GRAY) , 127, 255, cv2.THRESH_BINARY) image, contours, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) for c in contours: # find bounding box coordinates x,y,w,h = cv2.boundingRect(c) cv2.rectangle(img, (x,y), (x+w, y+h), (0, 255, 0), 2) # find minimum area rect = cv2.minAreaRect(c) # calculate coordinates of the minimum area rectangle box = cv2.boxPoints(rect) # normalize coordinates to integers box = np.int0(box) # draw contours cv2.drawContours(img, [box], 0, (0,0, 255), 3) # calculate center and radius of minimum enclosing circle (x,y),radius = cv2.minEnclosingCircle(c) # cast to integers center = (int(x),int(y)) radius = int(radius) # draw the circle img = cv2.circle(img,center,radius,(0,255,0),2) cv2.drawContours(img, contours, -1, (255, 0, 0), 1) cv2.imshow("contours", img) After the initial imports, we load the image, and then apply a binary threshold on a grayscale version of the original image. By doing this, we operate all find-contours calculations on a grayscale copy, but we draw on the original so that we can utilize color information. Firstly, let's calculate a simple bounding box: x,y,w,h = cv2.boundingRect(c) This is a pretty straightforward conversion of contour information to x and y coordinates, plus the height and width of the rectangle. Drawing this rectangle is an easy task: cv2.rectangle(img, (x,y), (x+w, y+h), (0, 255, 0), 2) Secondly, let's calculate the minimum area enclosing the subject: rect = cv2.minAreaRect(c) box = cv2.boxPoints(rect) box = np.int0(box) The mechanism here is particularly interesting: OpenCV does not have a function to calculate the coordinates of the minimum rectangle vertexes directly from the contour information. Instead, we calculate the minimum rectangle area, and then calculate the vertexes of this rectangle. Note that the calculated vertexes are floats, but pixels are accessed with integers (you can't access a "portion" of a pixel), so we'll need to operate this conversion. Next, we draw the box, which gives us the perfect opportunity to introduce the cv2.drawContours function: cv2.drawContours(img, [box], 0, (0,0, 255), 3) Firstly, this function—like all drawing functions—modifies the original image. Secondly, it takes an array of contours in its second parameter so that you can draw a number of contours in a single operation. So, if you have a single set of points representing a contour polygon, you need to wrap this into an array, exactly like we did with our box in the preceding example. The third parameter of this function specifies the index of the contour array that we want to draw: a value of -1 will draw all contours; otherwise, a contour at the specified index in the contour array (the second parameter) will be drawn. Most drawing functions take the color of the drawing and its thickness as the last two parameters. 
The last bounding contour we're going to examine is the minimum enclosing circle:

(x,y),radius = cv2.minEnclosingCircle(c)
center = (int(x),int(y))
radius = int(radius)
img = cv2.circle(img,center,radius,(0,255,0),2)

The only peculiarity of the cv2.minEnclosingCircle function is that it returns a two-element tuple, of which the first element is a tuple itself, representing the coordinates of the circle's center, and the second element is the radius of this circle. After converting all these values to integers, drawing the circle is quite a trivial operation. The final result on the original image looks like this:

Contours – convex contours and the Douglas-Peucker algorithm

Most of the time, when working with contours, subjects will have the most diverse shapes, including convex ones. A convex shape is one in which there are no two points whose connecting line goes outside the perimeter of the shape itself. The first facility OpenCV offers to calculate the approximate bounding polygon of a shape is cv2.approxPolyDP. This function takes three parameters:

A contour.
An "epsilon" value representing the maximum discrepancy between the original contour and the approximated polygon (the lower the value, the closer the approximated value will be to the original contour).
A boolean flag signifying that the polygon is closed.

The epsilon value is of vital importance to obtain a useful contour, so let's understand what it represents. Epsilon is the maximum difference between the approximated polygon's perimeter and the perimeter of the original contour. The lower this difference is, the more similar the approximated polygon will be to the original contour. You may ask yourself why we need an approximate polygon when we have a contour that is already a precise representation. The answer is that a polygon is a set of straight lines, and the importance of being able to define polygons in a region for further manipulation and processing is paramount in many computer vision tasks. Now that we know what an epsilon is, we need to obtain contour perimeter information as a reference value; this is obtained with the cv2.arcLength function of OpenCV:

epsilon = 0.01 * cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, epsilon, True)

Effectively, we're instructing OpenCV to calculate an approximated polygon whose perimeter can only differ from the original contour by an epsilon ratio. OpenCV also offers a cv2.convexHull function to obtain processed contour information for convex shapes, and this is a straightforward one-line expression:

hull = cv2.convexHull(cnt)

Let's combine the original contour, approximated polygon contour, and the convex hull in one image to observe the difference. To simplify things, I've applied the contours to a black image so that the original subject is not visible, but its contours are. As you can see, the convex hull surrounds the entire subject, the approximated polygon is the innermost polygon shape, and in between the two is the original contour, mainly composed of arcs.
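The code that produces that comparison image isn't shown in this excerpt; a minimal sketch of how it might be put together, assuming a thresholded single-channel image named thresh (as in the earlier examples), could look like this:

import cv2
import numpy as np

# Sketch only: draw the original contour, the approximated polygon, and the convex hull
# together on a black canvas; the color choices are arbitrary.
image, contours, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                              cv2.CHAIN_APPROX_SIMPLE)
black = np.zeros((thresh.shape[0], thresh.shape[1], 3), dtype=np.uint8)
for cnt in contours:
    epsilon = 0.01 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    hull = cv2.convexHull(cnt)
    cv2.drawContours(black, [cnt], -1, (0, 255, 0), 2)       # original contour
    cv2.drawContours(black, [approx], -1, (255, 255, 0), 2)  # approximated polygon
    cv2.drawContours(black, [hull], -1, (0, 0, 255), 2)      # convex hull
cv2.imshow("hull", black)
cv2.waitKey()
cv2.destroyAllWindows()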
Detecting lines and circles

Detecting edges and contours is not only a common and important task; it also constitutes the basis for other, more complex, operations. Line and shape detection walk hand in hand with edge and contour detection, so let's examine how OpenCV implements these. The theory behind line and shape detection has its foundations in a technique called the Hough transform, invented by Richard Duda and Peter Hart, extending (generalizing) the work done by Paul Hough in the early 1960s. Let's take a look at OpenCV's API for Hough transforms.

Line detection

First of all, let's detect some lines, which is done with the HoughLines and HoughLinesP functions. The only difference between the two functions is that one uses the standard Hough transform, and the second uses the probabilistic Hough transform (hence the P in the name). The probabilistic version is called as such because it only analyzes a subset of the points and estimates the probability that these points all belong to the same line. This implementation is an optimized version of the standard Hough transform; it is less computationally intensive and executes faster. Let's take a look at a very simple example:

import cv2
import numpy as np

img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 120)
minLineLength = 20
maxLineGap = 5
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow("edges", edges)
cv2.imshow("lines", img)
cv2.waitKey()
cv2.destroyAllWindows()

Note that minLineLength and maxLineGap are passed as keyword arguments, because they are not the fifth and sixth positional parameters of cv2.HoughLinesP. The crucial point of this simple script, aside from the HoughLinesP function call, is the setting of the minimum line length (shorter lines will be discarded) and the maximum line gap, which is the maximum size of a gap in a line before the two segments start being considered as separate lines. Also, note that the HoughLinesP function takes a single-channel binary image, processed through the Canny edge detection filter. Canny is not a strict requirement, but an image that's been denoised and only represents edges is the ideal source for a Hough transform, so you will find this to be a common practice. The parameters of HoughLinesP are the image; rho and theta, which refer to the geometrical representation of the lines and are usually 1 and np.pi/180; threshold, which represents the threshold below which a line is discarded; and minLineLength and maxLineGap, which we mentioned previously. The Hough transform works with a system of bins and votes, with each bin representing a line, so any line with a minimum of <threshold> votes is retained, and the rest are discarded.

Circle detection

OpenCV also has a function used to detect circles, called HoughCircles. It works in a very similar fashion to HoughLines, but where minLineLength and maxLineGap were the parameters used to discard or retain lines, HoughCircles has a minimum distance between the circles' centers and the minimum and maximum radius of the circles.
Circle detection

OpenCV also has a function for detecting circles, called HoughCircles. It works in a very similar fashion to HoughLines, but where minLineLength and maxLineGap were the parameters used to discard or retain lines, HoughCircles takes a minimum distance between the circles' centers and the minimum and maximum radius of the circles. Here's the obligatory example:

import cv2
import numpy as np

planets = cv2.imread('planet_glow.jpg')
gray_img = cv2.cvtColor(planets, cv2.COLOR_BGR2GRAY)
img = cv2.medianBlur(gray_img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 120,
                           param1=100, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))

for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(planets, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(planets, (i[0], i[1]), 2, (0, 0, 255), 3)

cv2.imwrite("planets_circles.jpg", planets)
cv2.imshow("HoughCircles", planets)
cv2.waitKey()
cv2.destroyAllWindows()

Here's a visual representation of the result:

Detecting shapes

The detection of shapes with the Hough transform is limited to circles; however, we've already implicitly explored the detection of shapes of any kind, specifically when we talked about approxPolyDP. That function approximates polygons, so if your image contains polygons, they will be detected quite accurately by combining cv2.findContours and cv2.approxPolyDP.

Summary

At this point, you should have gained a good understanding of color spaces, the Fourier transform, and the several kinds of filters made available by OpenCV to process images. You should also be proficient in detecting edges, lines, circles, and shapes in general; additionally, you should be able to find contours and exploit the information they provide about the subjects contained in an image. These concepts will serve as the ideal background for exploring the topics in the next chapter, Image Segmentation and Depth Estimation.

Further resources on this subject:
OpenCV: Basic Image Processing
OpenCV: Camera Calibration
OpenCV: Tracking Faces with Haar Cascades

Virtualization

Packt
16 Sep 2015
16 min read
This article by Skanda Bhargav, the author of Troubleshooting Ubuntu Server, deals with virtualization techniques: why virtualization is important and how administrators can install and serve users with services via virtualization. We will learn about KVM, Xen, and Qemu. So sit back and let's take a spin into the virtual world of Ubuntu.

What is virtualization?

Virtualization is a technique by which you can convert a set of files into a live, running machine with an OS. It is easy to set up one machine, and much easier to clone and replicate the same machine across hardware. Also, each of the clones can be customized based on requirements. We will look at setting up a virtual machine using Kernel-based Virtual Machine, Xen, and Qemu in the sections that follow.

Today, people are using the power of virtualization in different situations and environments. Developers use virtualization in order to have an independent environment in which to safely test and develop applications without affecting other working environments. Administrators use virtualization to separate services and to commission or decommission services as and when required or requested.

By default, Ubuntu supports the Kernel-based Virtual Machine (KVM), which has built-in extensions for AMD and Intel-based processors. Xen and Qemu are the suggested options when your hardware does not have virtualization extensions.

libvirt

The libvirt library is an open source library that is helpful for interfacing with different virtualization technologies. One small task before starting with libvirt is to check whether your hardware supports the extensions required by KVM. The command to do so is as follows:

kvm-ok

You will see a message stating whether or not your CPU supports hardware virtualization. An additional task is to verify that virtualization is activated in the BIOS settings.

Installation

Use the following command to install the packages for KVM and libvirt:

sudo apt-get install kvm libvirt-bin

Next, you will need to add your user to the libvirtd group. This will ensure that the user gets the additional options for networking. The command is as follows:

sudo adduser $USER libvirtd

We are now ready to install a guest OS. Its installation is very similar to installing a normal OS on hardware. If your virtual machine needs a graphical user interface (GUI), you can make use of an application called virt-viewer and connect to the virtual machine's console using VNC. We will discuss virt-viewer and its uses in the later sections of this article.

virt-install

virt-install is a part of the python-virtinst package.
The command to install this package is as follows:

sudo apt-get install python-virtinst

One of the ways of using virt-install is as follows:

sudo virt-install -n new_my_vm -r 256 -f new_my_vm.img -s 4 -c jeos.iso --accelerate --connect=qemu:///system --vnc --noautoconsole -v

Let's understand the preceding command part by part:

-n: This specifies the name of the virtual machine that will be created
-r: This specifies the amount of RAM in MB
-f: This is the path of the virtual disk
-s: This specifies the size of the virtual disk
-c: This is the file to be used as a virtual CD; it can be an .iso file as well
--accelerate: This makes use of kernel acceleration technologies
--connect: This specifies the hypervisor to connect to
--vnc: This exports the guest console over VNC
--noautoconsole: This disables autoconnect to the virtual machine console
-v: This creates a fully virtualized guest

Once virt-install is launched, you may connect to the console with the virt-viewer utility, either from a remote connection or locally using the GUI.

virt-clone

One of the applications used to clone one virtual machine into another is virt-clone. Cloning is the process of creating an exact replica of the virtual machine that you currently have. Cloning is helpful when you need a lot of virtual machines with the same configuration. Here is an example of cloning a virtual machine:

sudo virt-clone -o my_vm -n new_vm_clone -f /path/to/new_vm_clone.img --connect=qemu:///system

Let's understand the preceding command part by part:

-o: This is the original virtual machine that you want to clone
-n: This is the name of the new virtual machine
-f: This is the file path of the new virtual machine
--connect: This specifies the hypervisor to be used

Managing the virtual machine

Let's see how to manage the virtual machine we installed using virt.

virsh

Numerous utilities are available for managing virtual machines and libvirt; virsh is one such utility that can be used from the command line. Here are a few examples:

The following command lists the running virtual machines:
virsh -c qemu:///system list

The following command starts a virtual machine:
virsh -c qemu:///system start my_new_vm

The following command starts a virtual machine at boot:
virsh -c qemu:///system autostart my_new_vm

The following command restarts a virtual machine:
virsh -c qemu:///system reboot my_new_vm

You can save the state of a virtual machine to a file so that it can be restored later. Note that once you save the virtual machine, it will not be running anymore. The following command saves the state of the virtual machine:
virsh -c qemu:///system save my_new_vm my_new_vm-290615.state

The following command restores a virtual machine from a saved state:
virsh -c qemu:///system restore my_new_vm-290615.state

The following command shuts down a virtual machine:
virsh -c qemu:///system shutdown my_new_vm

The following command mounts a CD-ROM in the virtual machine:
virsh -c qemu:///system attach-disk my_new_vm /dev/cdrom /media/cdrom
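If you prefer to script these operations rather than call virsh, the same libvirt library discussed earlier also ships Python bindings (the libvirt-python package). The short sketch below is only an illustration of that API, not part of the original article; the domain name my_new_vm is reused from the examples above.

import libvirt

# Connect to the local KVM/QEMU hypervisor (read-write connection)
conn = libvirt.open('qemu:///system')

# List every defined domain and report whether it is running
for dom in conn.listAllDomains():
    state = 'running' if dom.isActive() else 'shut off'
    print(dom.name(), state)

# Start a domain by name if it is not already running
dom = conn.lookupByName('my_new_vm')
if not dom.isActive():
    dom.create()  # equivalent to: virsh start my_new_vm

conn.close()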
The virtual machine manager

A GUI utility for managing virtual machines is virt-manager. You can manage both local and remote virtual machines. The command to install the package is as follows:

sudo apt-get install virt-manager

virt-manager works in a GUI environment, so it is advisable to install it on a machine other than the production cluster, as the production cluster should be reserved for its main tasks. The command to connect virt-manager to a local server running libvirt is as follows:

virt-manager -c qemu:///system

If you want to connect virt-manager from a different machine, you first need SSH connectivity; this is required because libvirt will ask for a password on the machine. Once you have set up passwordless authentication, use the following command to connect the manager to the server:

virt-manager -c qemu+ssh://virtnode1.ubuntuserver.com/system

Here, the virtualization server is identified by the hostname virtnode1.ubuntuserver.com.

The virtual machine viewer

A utility for connecting to your virtual machine's console is virt-viewer. This requires a GUI to work with the virtual machine. Use the following command to install virt-viewer:

sudo apt-get install virt-viewer

Now, connect to your virtual machine's console from your workstation using the following command:

virt-viewer -c qemu:///system my_new_vm

You may also connect to a remote host over SSH with passwordless authentication by using the following command:

virt-viewer -c qemu+ssh://virtnode4.ubuntuserver.com/system my_new_vm

JeOS

JeOS, short for Just Enough Operating System, is pronounced "juice" and is an operating system in the Ubuntu flavor. It is specially built for running virtual applications. JeOS is no longer available as a downloadable ISO CD-ROM; however, you can pick either of the following approaches:

Get a server ISO of the Ubuntu OS. While installing, hit F4 on your keyboard. You will see a list of items; select the one that reads Minimal installation. This will install the JeOS variant.
Build your own copy with vmbuilder from Ubuntu.

The kernel of JeOS is specifically tuned to run in virtual environments. It is stripped of unwanted packages and has only the base ones. JeOS takes advantage of the technological advancements in VMware products. A powerful combination of limited size and performance optimization is what makes JeOS preferable to a full server OS in a large virtual installation. Also, because this OS is so light, updates and security patches will be small and limited to this variant, so users who run their virtual applications on JeOS will have less maintenance to worry about than with a full server OS installation.

vmbuilder

The second way of getting JeOS is by building your own copy of Ubuntu; you need not download any ISO from the Internet. The beauty of vmbuilder is that it fetches the packages and tools based on your requirements and builds a virtual machine with them, and the whole process is quick and easy. Essentially, vmbuilder is a script that automates the process of creating a virtual machine that can be easily deployed. Currently, virtual machines built with vmbuilder are supported on the KVM and Xen hypervisors.

Using command-line arguments, you can specify which additional packages you require, remove the ones that you feel aren't necessary for your needs, select the Ubuntu version, and do much more. Some developers and admins contributed to vmbuilder and changed the design specifics, but kept the commands the same. Some of the goals were as follows:

Reusability by other distributions
A plugin feature for interactions, so people can add logic for other environments
A web interface along with the CLI for easy access and maintenance

Setup

Firstly, we will need to set up libvirt and KVM before we use vmbuilder. libvirt was covered in the previous section. Let's now look at setting up KVM on your server.
We will install some additional packages along with the KVM package, one of which is for enabling X server on the machine. The command that you will need to run on your Ubuntu server is as follows:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

Let's look at what each of the packages means:

libvirt-bin: This provides libvirtd, which is used for the administration of KVM and Qemu
qemu-kvm: This runs in the background
ubuntu-vm-builder: This is a tool for building virtual machines from the command line
bridge-utils: This enables networking for the various virtual machines

Adding users to groups

You will have to add the user to the libvirtd group; this will enable the user to run virtual machines. The command to add the current user is as follows:

sudo adduser `id -un` libvirtd

Installing vmbuilder

Download the latest vmbuilder, called python-vm-builder. You may also use the older ubuntu-vm-builder, but there are slight differences in the syntax. The command to install python-vm-builder is as follows:

sudo apt-get install python-vm-builder

Defining the virtual machine

While defining the virtual machine that you want to build, you need to take care of the following two important points:

Do not assume that the end user will know how to extend the disk size of the virtual machine if the need arises. Either have a large virtual disk so that the application can grow, or document the process to do so. However, it would be better to have your data stored on an external storage device.
Allocating RAM is fairly simple, but remember that you should allocate your virtual machine an amount of RAM that is safe to run your application.

To check the list of parameters that vmbuilder provides, use the following command:

vmbuilder --help

The two main parameters are the virtualization technology, also known as the hypervisor, and the targeted distribution. The distribution we are using is Ubuntu 14.04, codenamed trusty. The command to check the release version is as follows:

lsb_release -a

Let's build a virtual machine on the same version of Ubuntu. Here's an example of building a virtual machine with vmbuilder:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system

Now, let's discuss what the parameters mean:

--suite: This specifies which Ubuntu release we want the virtual machine built on
--flavour: This specifies which virtual kernel to use to build the JeOS image
--arch: This specifies the processor architecture (64 bit or 32 bit)
-o: This overwrites the previous version of the virtual machine image
--libvirt: This adds the virtual machine to the list of available virtual machines

Now that we have created a virtual machine, let's look at the next steps.

JeOS installation

We will examine the settings that are required to get our virtual machine up and running.

IP address

A good practice for assigning IP addresses to virtual machines is to set a fixed IP address, usually from the private pool, and then include this information as part of the documentation.
We will define an IP address with the following parameters:

--ip (address): This is the IP address in dotted form
--mask (value): This is the IP mask in dotted form (default is 255.255.255.0)
--net (value): This is the IP net address (default is X.X.X.0)
--bcast (value): This is the IP broadcast address (default is X.X.X.255)
--gw (address): This is the gateway address (default is X.X.X.1)
--dns (address): This is the name server address (default is X.X.X.1)

Our command now looks like this:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10

You may have noticed that we have assigned only the IP; all the others will take their default values.

Enabling the bridge

We will have to enable a bridge for our virtual machines, as various remote hosts will have to access the applications. We will configure libvirt and modify the vmbuilder template to do so. First, create the template hierarchy and copy the default template into this folder:

mkdir -p VMBuilder/plugins/libvirt/templates
cp /etc/vmbuilder/libvirt/* VMBuilder/plugins/libvirt/templates/

Use your favorite editor and modify the following lines in the VMBuilder/plugins/libvirt/templates/libvirtxml.tmpl file:

<interface type='network'>
<source network='default'/>
</interface>

Replace these lines with the following lines:

<interface type='bridge'>
<source bridge='br0'/>
</interface>

Partitions

You have to allocate partitions to applications for their data storage and working space. It is normal to have a separate storage space for each application in /var. The option provided by vmbuilder for this is --part:

--part PATH

vmbuilder will read the file given by the PATH parameter and treat each line as a separate partition. Each line has two entries, mountpoint and size, where size is defined in MB and is the maximum limit for that mountpoint. For this particular exercise, we will create a new file named vmbuilder.partition and enter the following lines to create the partitions:

root 6000
swap 4000
---
/var 16000

Note that different disks are identified by the --- delimiter.

Now, the command should look like this:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition

Setting the user and password

We have to define a user and a password so that the user can log in to the virtual machine after startup. For now, let's use a generic user called user and the password password; we can ask the user to change the password after the first login. The following parameters are used to set the username and password:

--user (username): This sets the username (default is ubuntu)
--name (fullname): This sets a name for the user (default is ubuntu)
--pass (password): This sets the password for the user (default is ubuntu)

So, now our command will be as follows:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password

Final steps in the installation – first boot

There are certain things that need to be done at the first boot of a machine. We will install openssh-server at first boot. This ensures that each virtual machine gets its own unique key. If we had done this earlier in the setup phase, all virtual machines would have been given the same key, which might have posed a security issue.
Let's create a script called first_boot.sh and run it at the first boot of every new virtual machine:

# This script will run the first time the virtual machine boots
# It is run as root
apt-get update
apt-get install -qqy --force-yes openssh-server

Then, add the following option to the command line:

--firstboot first_boot.sh

Final steps in the installation – first login

Remember that we specified a default password for the virtual machine. This means that all the machines installed from this image will have the same password. We will prompt the user to change the password at first login. For this, we will use a shell script named first_login.sh. Add the following lines to the file:

# This script is run the first time a user logs in.
echo "Almost at the end of setting up your machine"
echo "As a security precaution, please change your password"
passwd

Then, add the parameter to your command line:

--firstlogin first_login.sh

Auto updates

You can make your virtual machine update itself at regular intervals. To enable this feature, add the package named unattended-upgrades to the command line:

--addpkg unattended-upgrades

ACPI handling

ACPI handling will enable your virtual machine to handle the shutdown and restart events it receives from a remote machine. For this, we will install the acpid package:

--addpkg acpid

The complete command

So, the final command with the parameters that we discussed previously looks like this:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password --firstboot first_boot.sh --firstlogin first_login.sh --addpkg unattended-upgrades --addpkg acpid

Summary

In this article, we discussed various virtualization techniques, along with the tools and packages that help in creating and running a virtual machine. We also learned about the ways we can view, manage, connect to, and make use of the applications running on a virtual machine. Then, we saw the lightweight version of Ubuntu that is fine-tuned to run virtualization and applications on a virtual platform. In the later stages of this article, we covered how to build a virtual machine from the command line, how to add packages, how to set up user profiles, and the steps for first boot and first login.

Resources for Article:
Further resources on this subject:
Introduction to OpenVPN [article]
Speeding up Gradle builds for Android [article]
Installing Red Hat CloudForms on Red Hat OpenStack [article]

How to Deploy a Simple Django App Using AWS

Liz Tom
16 Sep 2015
6 min read
So you've written your first Django app and now you want to show the world your awesome To Do List. If you're like me, your first Django app was built from the excellent tutorial on the Django site. You may have heard of AWS, but what exactly does it mean, and how does it pertain to getting your app out there?

AWS is Amazon Web Services. They have many different products, but we're just going to focus on using one today: Elastic Compute Cloud (EC2) - scalable virtual private servers.

So you have your Django app and it runs beautifully locally. The goal is to reproduce everything, but on Amazon's servers. Note: there are many different ways to set up your servers; this is just one way. You can and should experiment to see what works best for you.

Application Server

First up, we're going to need to spin up a server to host your application. Let's go back, since the very first step would actually be to sign up for an AWS account. Please make sure to do that first. Now that we're back on track, log into your account and go to your management dashboard. Click on EC2 under Compute, then click "Launch Instance".

Now choose your operating system. I use Ubuntu because that's what we use at work. Basically, you should choose an operating system that is as close as possible to the operating system you develop on.

Step 2 has you choosing an instance type. Since this is a small app and I want to stay in the free tier, the t2.micro will do. When you have a production-ready app, you can read up more on EC2 instance types; basically, you can add more power to your EC2 instance as you move up.

Step 3: Click Next: Configure Instance Details. For a simple app we don't need to change anything on this page. One thing to note is the purchasing option. There are three different types of EC2 purchasing options: Spot Instances, Reserved Instances, and Dedicated Instances. Since we're still on the free tier, let's not worry about these for now.

Step 4: Click Next: Add Storage. You don't need to change anything here, but this is where you'd click Next: Tag Instance (Step 5). You also don't need to change anything there, but if you're managing a lot of EC2 instances, it's probably a good idea to tag your instances.

Step 6: Click Next: Configure Security Group. Under Type, select HTTP and the rest should autofill. Otherwise, you will spend hours wondering why Nginx hates you and doesn't want to work. Finally, click Launch.

A modal should have popped up prompting you to select an existing key pair or create a new key pair. Unless you already have an existing key pair, select Create a new key pair and give it a name. You have to download this file and make sure to keep it somewhere safe and somewhere you will remember. You won't be able to download this file again, but you can always spin up another EC2 instance and create a new key.

Click Launch Instances! You did it! You launched an EC2 instance!

Configuring your EC2 Instance

I'm sorry to tell you that your journey is not over. You'll still need to configure your server with everything it needs to run your Django app. Click View Instances. This should bring you to a panel that shows whether your instance is running or not. You'll need to grab your public IP address from here.

Do you remember that private key you downloaded? You'll be needing it for this step. Open your terminal:

cd path/to/your/secret/key
chmod 400 your_key-pair_name.pem

chmod 400 your_key-pair_name.pem sets the permissions on the key so that only you can read it.
Now let's SSH to your instance:

ssh -i path/to/your/secret/key/your_key-pair_name.pem ubuntu@IP-ADDRESS

Since we're running Ubuntu and will be using apt, we need to make sure that apt is up to date:

sudo apt-get update

Then you need your web server (nginx):

sudo apt-get install nginx

Since we installed Ubuntu 14.04, nginx starts up automatically. You should be able to visit your public IP address and see a screen that says Welcome to nginx! Great, nginx was installed correctly and is all booted up. Let's get your app on there!

Since this is a Django project, you'll need to install Django on your server, along with pip, virtualenv, and git (note that git is installed with apt, not pip):

sudo apt-get install python-pip
sudo pip install virtualenv
sudo apt-get install git

Pull your project down from GitHub:

git clone my-git-hub-url

In your project's root directory, make sure you have at a minimum a requirements.txt file with the following:

django
gunicorn

Side note: gunicorn is a Python WSGI HTTP server for UNIX.

Make a virtualenv and install your pip requirements using:

pip install -r requirements.txt

Now you should have Django and gunicorn installed. Since nginx starts automatically, you'll want to shut it down:

sudo service nginx stop

Now turn on gunicorn by running:

gunicorn app-name.wsgi

Now that gunicorn is up and running, it's time to turn nginx back on:

cd /etc/nginx
sudo vi nginx.conf

Within the http block, either at the top or the bottom, you'll want to insert this block:

server {
    listen 80;
    server_name public-ip-address;
    access_log /var/log/nginx-access.log;
    error_log /var/log/nginx-error.log;
    root /home/ubuntu/project-root;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Now start up nginx again:

sudo service nginx start

Go to your public IP address and you should see your lovely app on the Internet.

The End

Congratulations! You did it. You just deployed your awesome Django app using AWS. Do a little dance, pat yourself on the back, and feel good about what you just accomplished! One note, though: as soon as you close your connection and terminate gunicorn, your app will no longer be running. You'll need to set up something like Upstart to keep your app running all the time. Hope you had fun!

About the author

Liz Tom is a Creative Technologist at iStrategyLabs in Washington D.C. Liz’s passion for full stack development and digital media makes her a natural fit at ISL. Before joining iStrategyLabs, she worked in the film industry doing everything from mopping blood off of floors to managing budgets. When she’s not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.