Extending Chef

Packt
07 Jul 2015
34 min read
In this article by Mayank Joshi, the author of Mastering Chef, we'll learn how to go about building custom Knife plugins, and we'll also see how we can write custom handlers that help us extend the functionality provided by a chef-client run to report any issues with that run.

Custom Knife plugins

Knife is one of the most widely used tools in the Chef ecosystem. Be it managing your clients, nodes, cookbooks, environments, roles, and users, or handling tasks such as provisioning machines in cloud environments such as Amazon AWS, Microsoft Azure, and so on, there is a way to do all of these things through Knife. However, Knife, as provided during the installation of Chef, isn't capable of performing all these tasks on its own. It comes with a basic set of functionalities, which provide an interface between the local Chef repository, the workstation, and the Chef server. The following functionalities are provided by default by the Knife executable:

- Management of nodes
- Management of clients and users
- Management of cookbooks, roles, and environments
- Installation of chef-client on the nodes through bootstrapping
- Searching for data that is indexed on the Chef server

However, apart from these functions, there are plenty more functions that can be performed using Knife; all of this is possible through the use of plugins. Knife plugins are a set of one (or more) subcommands that can be added to Knife to support additional functionality that is not built into the base set of Knife subcommands. Most Knife plugins are initially built by users such as you, and over a period of time, they are incorporated into the official Chef code base.

A Knife plugin is usually installed into the ~/.chef/plugins/knife directory, from where it can be executed just like any other Knife subcommand. It can also be loaded from the .chef/plugins/knife directory in the Chef repository or, if it's installed through RubyGems, it can be loaded from the path where the executable is installed. Ideally, a plugin should be kept in the ~/.chef/plugins/knife directory so that it's reusable across projects, and also in the .chef/plugins/knife directory of the Chef repository so that its code can be shared with other team members. For distribution purposes, it should ideally be distributed as a Ruby gem.

The skeleton of a Knife plugin

A Knife plugin is structured somewhat like this:

require 'chef/knife'

module ModuleName
  class ClassName < Chef::Knife

    deps do
      require 'chef/dependencies'
    end

    banner "knife subcommand argument VALUE (options)"

    option :name_of_option,
      :short => "-l value",
      :long => "--long-option-name value",
      :description => "The description of the option",
      :proc => Proc.new { code_to_be_executed },
      :boolean => true | false,
      :default => default_value

    def run
      # Code
    end
  end
end

Let's look at this skeleton, one piece at a time:

- require: This is used to require other Knife plugins required by a new plugin.
- module ModuleName: This defines the namespace in which the plugin will live. Every Knife plugin lives in its own namespace.
- class ClassName < Chef::Knife: This declares that a plugin is a subclass of Knife.
- deps do: This defines a list of dependencies.
- banner: This is used to display a message when a user enters knife subcommand --help.
- option :name_of_option: This defines all the different command-line options available for this new subcommand.
- def run: This is the place in which we specify the Ruby code that needs to be executed.

Here are the command-line options:

- :short defines the short option name
- :long defines the long option name
- :description defines a description that is displayed when a user enters knife subclassName --help
- :boolean defines whether an option is true or false; if the :short and :long names define a value, then this attribute should not be used
- :proc defines the code that determines the value for this option
- :default defines a default value

The following example shows a part of a Knife plugin named knife-windows:

require 'chef/knife'
require 'chef/knife/winrm_base'

class Chef
  class Knife
    class Winrm < Knife

      include Chef::Knife::WinrmBase

      deps do
        require 'readline'
        require 'chef/search/query'
        require 'em-winrm'
      end

      attr_writer :password

      banner "knife winrm QUERY COMMAND (options)"

      option :attribute,
        :short => "-a ATTR",
        :long => "--attribute ATTR",
        :description => "The attribute to use for opening the connection - default is fqdn",
        :default => "fqdn"

      ... # more options

      def session
        session_opts = {}
        session_opts[:logger] = Chef::Log.logger if Chef::Log.level == :debug
        @session ||= begin
          s = EventMachine::WinRM::Session.new(session_opts)
          s.on_output do |host, data|
            print_data(host, data)
          end
          s.on_error do |host, err|
            print_data(host, err, :red)
          end
          s.on_command_complete do |host|
            host = host == :all ? 'All Servers' : host
            Chef::Log.debug("command complete on #{host}")
          end
          s
        end
      end

      ... # more def blocks
    end
  end
end

Namespace

As we saw with the skeleton, a Knife plugin should have its own namespace, and the namespace is declared using the module method as follows:

require 'chef/knife'
# Any other require, if needed

module NameSpace
  class SubclassName < Chef::Knife

Here, the plugin is available under the namespace called NameSpace. One should keep in mind that Knife loads the subcommand irrespective of the namespace to which it belongs.

Class name

The class name declares a plugin as a subclass of Chef::Knife. For example:

class SubclassName < Chef::Knife

The capitalization of the name is very important. The capitalization pattern can be used to define the word grouping that makes the best sense for the use of a plugin. For example, if we want our plugin subcommand to work as follows:

knife bootstrap hdfs

we should name our class BootstrapHdfs. If, say, we used a class name such as BootStrapHdfs, then our subcommand would be as follows:

knife boot strap hdfs

It's important to remember that a plugin can override an existing Knife subcommand. For example, we already know about commands such as knife cookbook upload. If you want to override the current functionality of this command, all you need to do is create a new plugin with the following name:

class CookbookUpload < Chef::Knife

Banner

Whenever a user enters the knife --help command, he/she is presented with a list of available subcommands.
For example:

$ knife --help
Usage: knife sub-command (options)
    -s, --server-url URL             Chef Server URL

Available subcommands: (for details, knife SUB-COMMAND --help)

** BACKUP COMMANDS **
knife backup export [COMPONENT [COMPONENT ...]] [-D DIR] (options)
knife backup restore [COMPONENT [COMPONENT ...]] [-D DIR] (options)

** BOOTSTRAP COMMANDS **
knife bootstrap FQDN (options)
....

Let's say we are creating a new plugin and we want Knife to be able to list it when a user enters the knife --help command. To accomplish this, we need to make use of banner. For example, let's say we have a plugin called BootstrapHdfs with the following code:

module NameSpace
  class BootstrapHdfs < Chef::Knife
    ...
    banner "knife bootstrap hdfs (options)"
    ...
  end
end

Now, when a user enters the knife --help command, he'll see the following output:

** BOOTSTRAPHDFS COMMANDS **
knife bootstrap hdfs (options)

Dependencies

Reusability is one of the key paradigms in development, and the same is true for Knife plugins. If you want the functionality of one Knife plugin to be available in another, you can use the deps method to ensure that all the necessary files are available. The deps method acts like a lazy loader, and it ensures that dependencies are loaded only when a plugin that requires them is executed. This is one of the reasons for using deps over require: the overhead of loading classes is reduced, resulting in code with a lower memory footprint and, hence, faster execution. One can use the following syntax to specify dependencies:

deps do
  require 'chef/knife/name_of_command'
  require 'chef/search/query'
  # Other requires to fulfill dependencies
end

Requirements

One can acquire the functionality available in other Knife plugins using the require method. This method can also be used to require the functionality available in other external libraries. This method can be used right at the beginning of the plugin script; however, it's always wise to use it inside deps, or else the libraries will be loaded even when they are not being put to use. The syntax to use require is fairly simple:

require 'path_from_where_to_load_library'

Let's say we want to use some functionality provided by the bootstrap plugin. In order to accomplish this, we first need to require the plugin:

require 'chef/knife/bootstrap'

Next, we need to create an object of that plugin:

obj = Chef::Knife::Bootstrap.new

Once we have the object, we can use it to pass arguments or options to that object. This is accomplished by changing the object's config and name_args variables. For example:

obj.config[:use_sudo] = true

Finally, we can run the plugin using the run method as follows:

obj.run

Options

Almost every Knife plugin accepts some command-line option or other. These options can be added to a Knife subcommand using the option method. An option can have a Boolean value or a string value, or we can even write a piece of code to determine the value of an option.
Let's see each of them in action. Here is an option with a Boolean value (true/false):

option :true_or_false,
  :short => "-t",
  :long => "--true-or-false",
  :description => "True/False?",
  :boolean => true | false,
  :default => true

Here is an option with a string value:

option :some_string_value,
  :short => "-s VALUE",
  :long => "--some-string-value VALUE",
  :description => "String value",
  :default => "xyz"

Here is an option where code is used to determine the option's value:

option :tag,
  :short => "-T T=V[,T=V,...]",
  :long => "--tags Tag=Value[,Tag=Value,...]",
  :description => "A list of tags",
  :proc => Proc.new { |tags| tags.split(',') }

Here, the :proc attribute will convert a list of comma-separated values into an array.

All the options that are sent to the Knife subcommand through the command line are available in the form of a hash, which can be accessed using the config method. For example, say we had an option:

option :option1,
  :short => "-s VALUE",
  :long => "--some-string-value VALUE",
  :description => "Some string value for option1",
  :default => "option1"

Now, while issuing the Knife subcommand, say a user entered something like this:

$ knife subcommand --option1 "option1_value"

We can access this value for option1 in our Knife plugin's run method using config[:option1].

When a user enters the knife --help command, the description attributes are displayed as part of the help. For example:

** EXAMPLE COMMANDS **
knife example
-s, --some-type-of-string-value    This is not a random string value.
-t, --true-or-false                Is this value true? Or is this value false?
-T, --tags                         A list of tags associated with the virtual machine.

Arguments

A Knife plugin can also accept command-line arguments that aren't specified using the option flag, for example, knife node show NODE. These arguments are added using the name_args method:

require 'chef/knife'

module MyPlugin
  class ShowMsg < Chef::Knife
    banner 'knife show msg MESSAGE'

    def run
      unless name_args.size == 1
        puts "You need to supply a string as an argument."
        show_usage
        exit 1
      end
      msg = name_args.join(" ")
      puts msg
    end
  end
end

Let's see this in action:

$ knife show msg
You need to supply a string as an argument.
USAGE: knife show msg MESSAGE
    -s, --server-url URL             Chef Server URL
        --chef-zero-host HOST        Host to start chef-zero on
...

Here, we didn't pass any argument to the subcommand and, rightfully, Knife sent back a message saying "You need to supply a string as an argument." Now, let's pass a string as an argument to the subcommand and see how it behaves:

$ knife show msg "duh duh"
duh duh

Under the hood, what's happening is that name_args is an array that gets populated with the arguments we pass on the command line. In the last example, the name_args array contained a single entry ("duh duh"), since the quoted string is one argument. We use the join method of the Array class to create a string out of these entries and, finally, print the string.

The run method

Every Knife plugin will have a run method, which contains the code that is executed when the user executes the subcommand. This code contains the Ruby statements that are executed upon invocation of the subcommand. This code can access the option values using the config[:option_hash_symbol_name] method.

Search inside a custom Knife plugin

Search is perhaps one of the most powerful and most used functionalities provided by Chef.
By incorporating search functionality in our custom Knife plugin, we can accomplish a lot of tasks that would otherwise take a lot of effort. For example, say we have classified our infrastructure into multiple environments and we want a plugin that allows us to upload a particular file or folder to all the instances in a particular environment on an ad hoc basis, without invoking a full chef-client run. This is very much doable by incorporating search functionality into the plugin and using it to find the right set of nodes on which you want to perform a certain operation. We'll look at one such plugin in the next section.

To be able to use Chef's search functionality, all you need to do is require Chef's query class and use an object of the Chef::Search::Query class to execute a query against the Chef server. For example:

require 'chef/search/query'

query_object = Chef::Search::Query.new
query = 'chef_environment:production'
query_object.search('node', query) do |node|
  puts "Node name = #{node.name}"
end

Since the name of a node is generally its FQDN, you can use the values returned in node.name to connect to remote machines and use a library such as net-scp to allow users to upload their files/folders to a remote machine. We'll try to accomplish this task when we write our custom plugin at the end of this article.

We can also use this information to edit nodes. For example, say we had a set of machines acting as web servers. Initially, all these machines were running Apache as a web server. However, as the requirements changed, we wanted to switch over to Nginx. We can run the following piece of code to accomplish this task:

require 'chef/search/query'

query_object = Chef::Search::Query.new
query = 'run_list:*recipe\[apache2\]*'
query_object.search('node', query) do |node|
  ui.msg "Changing run_list to recipe[nginx] for #{node.name}"
  node.run_list("recipe[nginx]")
  node.save
  ui.msg "New run_list: #{node.run_list}"
end

knife.rb settings

Some of the settings defined by a Knife plugin can be configured so that they can be set inside the knife.rb script. There are two ways to go about doing this:

- By using the :proc attribute of the option method and code that references Chef::Config[:knife][:setting_name]
- By specifying the configuration setting directly within the def Ruby blocks using either Chef::Config[:knife][:setting_name] or config[:setting_name]

An option that is defined in this way can be configured in knife.rb by using the following syntax:

knife[:setting_name]

This approach is especially useful when a particular setting is used a lot. The precedence order for a Knife option is:

1. The value passed via the command line
2. The value saved in knife.rb
3. The default value

The following example shows how the knife bootstrap command uses a value in knife.rb using the :proc attribute:

option :ssh_port,
  :short => '-p PORT',
  :long => '--ssh-port PORT',
  :description => 'The ssh port',
  :proc => Proc.new { |key| Chef::Config[:knife][:ssh_port] = key }

Here, Chef::Config[:knife][:ssh_port] tells Knife to check the knife.rb file for a knife[:ssh_port] setting.
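As a quick illustration, a user could then set this value once in knife.rb and drop the -p flag from subsequent commands (the port number shown here is purely an example):

# ~/.chef/knife.rb
knife[:ssh_port] = 2222

Any code that reads Chef::Config[:knife][:ssh_port], such as the bootstrap command discussed here, will pick this value up automatically.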
The following example shows how the knife bootstrap command calls the knife ssh subcommand for the actual SSH part of running a bootstrap operation:

def knife_ssh
  ssh = Chef::Knife::Ssh.new
  ssh.ui = ui
  ssh.name_args = [ server_name, ssh_command ]
  ssh.config[:ssh_user] = Chef::Config[:knife][:ssh_user] || config[:ssh_user]
  ssh.config[:ssh_password] = config[:ssh_password]
  ssh.config[:ssh_port] = Chef::Config[:knife][:ssh_port] || config[:ssh_port]
  ssh.config[:ssh_gateway] = Chef::Config[:knife][:ssh_gateway] || config[:ssh_gateway]
  ssh.config[:identity_file] = Chef::Config[:knife][:identity_file] || config[:identity_file]
  ssh.config[:manual] = true
  ssh.config[:host_key_verify] = Chef::Config[:knife][:host_key_verify] || config[:host_key_verify]
  ssh.config[:on_error] = :raise
  ssh
end

Let's take a look at the preceding code:

- ssh = Chef::Knife::Ssh.new creates a new instance of the Ssh subclass named ssh
- A series of settings in knife ssh are associated with a knife bootstrap using the ssh.config[:setting_name] syntax
- Chef::Config[:knife][:setting_name] tells Knife to check the knife.rb file for various settings
- It also raises an exception if any aspect of the SSH operation fails

User interactions

The ui object provides a set of methods that can be used to define user interactions and to help ensure a consistent user experience across all the different Knife plugins. One should make use of these methods rather than handling user interactions manually:

- ui.ask(*args, &block): The ask method calls the corresponding ask method of the HighLine library. More details about the HighLine library can be found at http://www.rubydoc.info/gems/highline/1.7.2.
- ui.ask_question(question, opts={}): This is used to ask a user a question. If :default => default_value is passed as a second argument, default_value will be used if the user does not provide any answer.
- ui.color(string, *colors): This method is used to specify a color. For example:

    server = connections.server.create(server_def)

    puts "#{ui.color("Instance ID", :cyan)}: #{server.id}"
    puts "#{ui.color("Flavor", :cyan)}: #{server.flavor_id}"
    puts "#{ui.color("Image", :cyan)}: #{server.image_id}"
    ...
    puts "#{ui.color("SSH Key", :cyan)}: #{server.key_name}"

    print "\n#{ui.color("Waiting for server", :magenta)}"

- ui.color?(): This indicates whether colored output should be used. This is only possible if the output is sent to a terminal.
- ui.confirm(question, append_instructions=true): This is used to ask (Y/N) questions. If a user responds with N, the command immediately exits with status code 3.
- ui.edit_data(data, parse_output=true): This is used to edit data. This will fire up an editor.
- ui.edit_object(class, name): This method provides a convenient way to download an object, edit it, and save it back to the Chef server. It takes two arguments, namely the class of the object to edit and the name of the object to edit.
- ui.error: This is used to present an error to a user.
- ui.fatal: This is used to present a fatal error to a user.
- ui.highline: This is used to provide direct access to the highline object used by many ui methods.
- ui.info: This is used to present information to a user.
- ui.interchange: This is used to determine whether the output is in a data interchange format such as JSON or YAML.
- ui.list(*args): This method is a way to quickly and easily lay out lists. It is actually a wrapper around the list method provided by the HighLine library. More details about the HighLine library can be found at http://www.rubydoc.info/gems/highline/1.7.2.
- ui.msg(message): This is used to present a message to a user.
- ui.output(data): This is used to present a data structure to a user. It makes use of a generic default presenter.
- ui.pretty_print(data): This is used to enable pretty_print output for JSON data.
- ui.use_presenter(presenter_class): This is used to specify a custom output presenter.
- ui.warn(message): This is used to present a warning to a user.

For example, to show a fatal error in a plugin in the same way that it would be shown in Knife, do something similar to the following:

unless name_args.size == 1
  ui.fatal "Fatal error !!!"
  show_usage
  exit 1
end

Exception handling

In most cases, the exception handling available within Knife is enough to ensure that exception handling for a plugin is consistent across all the different plugins. However, if required, one can handle exceptions in the same way as in any other Ruby program: one can make use of the begin-end block, along with rescue clauses, to tell Ruby which exceptions we want to handle. For example:

def raise_and_rescue
  begin
    puts 'Before raise'
    raise 'An error has happened.'
    puts 'After raise'
  rescue
    puts 'Rescued'
  end
  puts 'After begin block'
end

raise_and_rescue

If we were to execute this code, we'd get the following output:

$ ruby test.rb
Before raise
Rescued
After begin block

A simple Knife plugin

With this knowledge of how Knife's plugin system works, let's go about writing our very own custom Knife plugin, which can be quite useful for some users. Before we jump into the code, let's understand the purpose that this plugin is supposed to serve.

Let's say we have a setup where our infrastructure is distributed across different environments, and we've also set up a bunch of roles that are used while we bootstrap machines with Chef. So, there are two ways in which a user can identify machines:

- By environment
- By role

Actually, any valid Chef search query that returns a node list can be the criterion to identify machines. However, we are limiting ourselves to these two criteria for now. Often, there are situations where a user might want to upload a file or folder to all the machines in a particular environment, or to all the machines belonging to a particular role. This plugin will help users accomplish this task with ease.

The plugin will accept three arguments. The first one will be a key-value pair with the key being chef_environment or role, the second argument will be the path to the file or folder that is to be uploaded, and the third argument will be the path on the remote machine where the files/folders will be uploaded to. The plugin will use Chef's search functionality to find the FQDNs of the machines, and eventually make use of the net-scp library to transfer the file/folder to the machines.
Our plugin will be called knife-scp, and we would like to use it as follows:

$ knife scp chef_environment:production /path_of_file_or_folder_locally /path_on_remote_machine

Here is the code that can help us accomplish this feat:

require 'chef/knife'

module CustomPlugins
  class Scp < Chef::Knife
    banner "knife scp SEARCH_QUERY PATH_OF_LOCAL_FILE_OR_FOLDER PATH_ON_REMOTE_MACHINE"

    option :knife_config_path,
      :short => "-c PATH_OF_knife.rb",
      :long => "--config PATH_OF_knife.rb",
      :description => "Specify path of knife.rb",
      :default => "~/.chef/knife.rb"

    deps do
      require 'chef/search/query'
      require 'net/scp'
      require 'parallel'
    end

    def run
      if name_args.length != 3
        ui.msg "Missing arguments! Unable to execute the command successfully."
        show_usage
        exit 1
      end

      Chef::Config.from_file(File.expand_path("#{config[:knife_config_path]}"))
      query = name_args[0]
      local_path = name_args[1]
      remote_path = name_args[2]
      query_object = Chef::Search::Query.new
      fqdn_list = Array.new
      query_object.search('node', query) do |node|
        fqdn_list << node.name
      end
      if fqdn_list.length < 1
        ui.msg "No valid servers found to copy the files to"
      end
      unless File.exist?(local_path)
        ui.msg "#{local_path} doesn't exist on local machine"
        exit 1
      end

      Parallel.each((1..fqdn_list.length).to_a, :in_processes => fqdn_list.length) do |i|
        puts "Copying #{local_path} to #{Chef::Config[:knife][:ssh_user]}@#{fqdn_list[i-1]}:#{remote_path}"
        Net::SCP.upload!(fqdn_list[i-1], "#{Chef::Config[:knife][:ssh_user]}", "#{local_path}", "#{remote_path}", :ssh => { :keys => ["#{Chef::Config[:knife][:identity_file]}"] }, :recursive => true)
      end
    end
  end
end

This plugin uses the following additional gems:

- The parallel gem to execute statements in parallel. More information about this gem can be found at https://github.com/grosser/parallel.
- The net-scp gem to do the actual transfer. This gem is a pure Ruby implementation of the SCP protocol. More information about the gem can be found at https://github.com/net-ssh/net-scp.

Both these gems and the Chef search library are required in the deps block to define the dependencies. The plugin accepts three command-line arguments and uses knife.rb to get information about which user to connect as over SSH, as well as which SSH key file to use. All the command-line arguments are stored in the name_args array. A Chef search is then used to find the list of servers that match the query, and eventually the parallel gem is used to SCP the file from the local machine to the servers returned by the Chef query, in parallel.

As you can see, we've tried to handle a few error situations; however, there is still a possibility of this plugin throwing errors, as the Net::SCP.upload! call can error out at times.

Let's see our plugin in action.

Case 1: The file that is supposed to be uploaded doesn't exist locally.
We expect the script to error out with an appropriate message:

$ knife scp 'chef_environment:ft' /Users/mayank/test.py /tmp
/Users/mayank/test.py doesn't exist on local machine

Case 2: The /Users/mayank/test folder exists locally:

$ knife scp 'chef_environment:ft' /Users/mayank/test /tmp
Copying /Users/mayank/test to ec2-user@host02.ft.sychonet.com:/tmp
Copying /Users/mayank/test to ec2-user@host01.ft.sychonet.com:/tmp

Case 3: A config other than the default knife.rb is specified:

$ knife scp -c /Users/mayank/.chef/knife.rb 'chef_environment:ft' /Users/mayank/test /tmp
Copying /Users/mayank/test to ec2-user@host02.ft.sychonet.com:/tmp
Copying /Users/mayank/test to ec2-user@host01.ft.sychonet.com:/tmp

Distributing plugins using gems

As you must have noticed, until now we've been creating our plugins under ~/.chef/plugins/knife. Though this is sufficient for plugins that are meant to be used locally, it's just not good enough for distribution to the community. The most ideal way of distributing a Knife plugin is by packaging your plugin as a gem and distributing it via a gem repository such as rubygems.org. Even if publishing your gem to a remote gem repository sounds like a far-fetched idea, at least allow people to install your plugin by building a gem locally and installing it via gem install. This is a far better approach than people downloading your code from an SCM repository and copying it over to either ~/.chef/plugins/knife or any other folder they've configured for the purpose of searching for custom Knife plugins. By distributing your plugin as a gem, you ensure that the plugin is installed in a consistent way, and you can also ensure that all the required libraries are preinstalled before the plugin is ready to be consumed by users.

All the details required to create a gem are contained in a file known as a gemspec, which resides at the root of your project's directory and is typically named <project_name>.gemspec. The gemspec file consists of the structure, dependencies, and metadata required to build your gem. The following is an example of a .gemspec file:

Gem::Specification.new do |s|
  s.name = 'knife-scp'
  s.version = '1.0.0'
  s.date = '2014-10-23'
  s.summary = 'The knife-scp knife plugin'
  s.authors = ["maxcoder"]
  s.email = 'maxcoder@sychonet.com'
  s.files = ["lib/chef/knife/knife-scp.rb"]
  s.homepage = "https://github.com/maxc0d3r/knife-plugins"
  s.add_runtime_dependency "parallel", "~> 1.2", ">= 1.2.0"
  s.add_runtime_dependency "net-scp", "~> 1.2", ">= 1.2.0"
end

The s.files variable contains the list of files that will be deployed by the gem install command. Knife can load files from gem_path/lib/chef/knife/<file_name>.rb, and hence we've kept the knife-scp.rb script in that location. The s.add_runtime_dependency calls are used to ensure that the required gems are installed whenever a user tries to install our gem.

Once the file is there, we can just run gem build to build our gem file as follows:

$ gem build knife-scp.gemspec
WARNING: licenses is empty, but is recommended. Use a license abbreviation from:
http://opensource.org/licenses/alphabetical
WARNING: See http://guides.rubygems.org/specification-reference/ for help
Successfully built RubyGem
Name: knife-scp
Version: 1.0.0
File: knife-scp-1.0.0.gem

The gem file is created, and now we can just use gem install knife-scp-1.0.0.gem to install our gem. This will also take care of the installation of any dependencies such as the parallel and net-scp gems.
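For reference, this is the minimal project layout implied by the gemspec above (the exact tree may vary from project to project, but the plugin file must live under lib/chef/knife/ for Knife to discover it):

knife-scp/
├── knife-scp.gemspec
└── lib/
    └── chef/
        └── knife/
            └── knife-scp.rb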
You can find the source code for this plugin at https://github.com/maxc0d3r/knife-plugins.

Once the gem has been installed, the user can run it as mentioned earlier. For the purpose of distribution, the gem can either be pushed to a local gem repository, or it can be published to https://rubygems.org/. To publish it to https://rubygems.org/, create an account there. Run the following command to log in using gem:

$ gem push

This will ask for your email address and password. Next, push your gem using the following command:

$ gem push your_gem_name.gem

That's it! Now you should be able to access your gem at the following location: http://www.rubygems.org/gems/your_gem_name.

As you might have noticed, we've not written any tests so far to check the plugin. It's always a good idea to write test cases before submitting your plugin to the community. This is useful both to the developer and to consumers of the code, as both know that the plugin is going to work as expected. Gems support adding test files into the package itself so that tests can be executed when a gem is downloaded. RSpec is a popular choice of testing framework; however, it really doesn't matter which tool you use to test your code. The point is that you need to test and ship.

Some popular Knife plugins built by the community, and their uses, are as follows:

- knife-elb: This plugin allows the automation of the process of addition and deletion of nodes from Elastic Load Balancers on AWS.
- knife-inspect: This plugin allows you to see the difference between what's on a Chef server and what's in a local Chef repository.
- knife-community: This plugin helps to deploy Chef cookbooks to Chef Supermarket.
- knife-block: This plugin allows you to configure and manage multiple Knife configuration files against multiple Chef servers.
- knife-tagbulk: This plugin allows bulk tag operations (creation or deletion) using standard Chef search queries. More information about the plugin can be found at https://github.com/priestjim/knife-tagbulk.

You can find a lot of other useful community-written plugins at https://docs.chef.io/community_plugin_knife.html.

Custom Chef handlers

A Chef handler is used to identify different situations that might occur during a chef-client run, and eventually it instructs the chef-client on what it should do to handle these situations. There are three types of handlers in Chef:

- The exception handler: This is used to identify situations that have caused a chef-client run to fail. It can be used to send out alerts over email or to a dashboard.
- The report handler: This is used to report back when a chef-client run has successfully completed. It can report details about the run, such as the number of resources updated, the time taken for the chef-client run to complete, and so on.
- The start handler: This is used to run events at the beginning of a chef-client run.

Writing custom Chef handlers is nothing more than inheriting your class from Chef::Handler and overriding the report method. Let's say we want to send out an email every time a chef-client run breaks. Chef provides a failed? method to check the status of a chef-client run.
The following is a very simple piece of code that will help us accomplish this:

require 'net/smtp'

module CustomHandler
  class Emailer < Chef::Handler
    def send_email(to, opts = {})
      opts[:server]  ||= 'localhost'
      opts[:from]    ||= 'maxcoder@sychonet.com'
      opts[:subject] ||= 'Error'
      opts[:body]    ||= 'There was an error running chef-client'

      msg = <<-EOF
From: <#{opts[:from]}>
To: #{to}
Subject: #{opts[:subject]}

#{opts[:body]}
      EOF

      Net::SMTP.start(opts[:server]) do |smtp|
        smtp.send_message msg, opts[:from], to
      end
    end

    def report
      name = node.name
      subject = "Chef run failure on #{name}"
      body = [run_status.formatted_exception]
      body += ::Array(backtrace)
      body = body.join("\n")
      if failed?
        send_email(
          "ops@sychonet.com",
          :subject => subject,
          :body => body
        )
      end
    end
  end
end

If you don't have the required libraries already installed on your machine, you'll need to make use of chef_gem to install them first, before you actually make use of this code.

With your handler code ready, you can make use of the chef_handler cookbook to install this custom handler. To do so, create a new cookbook, email-handler, and copy the emailer.rb file created earlier into the cookbook's files directory. Once done, add the following recipe code:

include_recipe 'chef_handler'

handler_path = node['chef_handler']['handler_path']
handler = ::File.join handler_path, 'emailer'

cookbook_file "#{handler}.rb" do
  source "emailer.rb"
end

chef_handler "CustomHandler::Emailer" do
  source handler
  action :enable
end

Now, just include this handler in your base role, or at the start of the run_list, and during the next chef-client run, if anything breaks, an email will be sent to ops@sychonet.com.

You can configure many different kinds of handlers, such as ones that push notifications over IRC, Twitter, and so on, or you may even write them for scenarios where you don't want to leave a component of a system in an undesirable state. For example, say you were in the middle of a chef-client run that adds/deletes collections from Solr. You might not want to leave the Solr setup in a messed-up state if something were to go wrong with the provisioning process. In order to ensure that the system is in the right state, you can write your own custom handlers, which can be used to handle such situations and revert the changes made so far by the chef-client run.

Summary

In this article, we learned how custom Knife plugins can be used. We also learned how we can write our own custom Knife plugin and distribute it by packaging it as a gem. Finally, we learned about custom Chef handlers and how they can be used effectively to communicate information and statistics about a chef-client run to users/admins, or to handle any issues with a chef-client run.

Resources for Article:

Further resources on this subject:

- An Overview of Automation and Advent of Chef [article]
- Testing Your Recipes and Getting Started with ChefSpec [article]
- Chef Infrastructure [article]

Style Management in QGIS

Packt
06 Jul 2015
11 min read
In this article by Alexander Bruy and Daria Svidzinska, authors of the book QGIS By Example, you will learn how to work with styles, including saving and loading them, using different styles, and working with the Style Manager.

Working with styles

In QGIS, a style is a way of cartographic visualization that takes into account a layer's individual and thematic features. It encompasses basic characteristics of symbology, such as the color and presence of fill, outline parameters, the use of markers, scale-dependent rendering, layer transparency, and interactions with other layers. Style incorporates not only rendering appearance, but also other things, such as labeling settings. A well-chosen style greatly simplifies data perception and readability, so it is important to learn how to work with styles to be able to represent your data in the best way. Styling is an important part of data preparation, and QGIS provides many handy features that make this process much more productive and easier. Let's look at some of them!

Saving and loading styles

Creating good-looking styles can be a time-consuming task, but the good thing is that once developed, styles don't go to waste. You can save them for further use in other projects. When you have finished polishing your style, it is wise to save it. Usually, this is done from the Layer Properties dialog, which can be opened from the layer's context menu. There is a Style button at the bottom of this dialog. It provides access to almost all actions that can be performed with the layer's style, including saving, loading, making the style the default, and so on.

The style can be saved to a file on disk (this works for any layer type) or stored in a database (possible only for database-based layers). To save a style to a file, perform these steps:

1. Open the Layer Properties dialog.
2. Click on the Style button at the bottom of the Properties dialog and go to the Save Style submenu.
3. Choose one of the available formats. A standard file dialog will open. Navigate to the desired location in your filesystem and save the style.

Currently, QGIS provides support for the following formats for saving styles:

- QGIS style file: The style is saved as a .qml file, which is a native QGIS format used to store the symbology definition and other layer properties.
- Styled Layer Descriptor (SLD) file: The style is exported to a .sld file. The SLD format is widely used in web cartography, for example, by applications such as GeoServer. It is necessary to mention that currently, SLD support in QGIS is a bit limited. Also, you should remember that while you can save any style (or renderer type) in SLD, during import you will get either a rule-based or a single-symbol renderer.

If you work with a spatial database, you may want to save layer styles in the same database, together with the layers. Such a feature is very useful in corporate environments, as it allows you to assign multiple styles to a single layer and easily keep the styles in sync. Saving styles in the database currently works only for PostgreSQL and SpatiaLite. To save a style in the database, follow these steps:

1. Open the Layer Properties dialog.
2. Click on the Style button at the bottom of the Properties dialog and go to the Save Style submenu.
3. Select the Save in database (format) item, where format can be spatialite or postgres, depending on the database type. The Save style in database dialog opens.
4. Enter the style name and an (optional) description in the corresponding fields, and click on the OK button to save the style.

The saved style can be loaded and applied to the layer. To load a style from a file, use these steps:

1. Open the Layer Properties dialog from the context menu.
2. Click on the Style button at the bottom of the Properties dialog and select the Load Style item.
3. Choose the style file to load.

Loading a style from the database is a bit different:

1. Open the Layer Properties dialog from the context menu.
2. Click on the Style button at the bottom of the Properties dialog and go to Load Style | From database. The Load style from database dialog opens.
3. Select the style you want to load and click on the Load Style button.

With all of these options, we can easily save styles in the format that meets our requirements and tasks.

Copy and paste styles

Very often, you need to apply mostly the same style, with really minor differences, to multiple layers. There are several ways of doing this. First, you can save the style (as described in the previous section) in one of the supported formats, and then apply this saved style to another layer and edit it. But there is a simpler way. Starting from QGIS 1.8, you can easily copy and paste styles between layers. To copy a style from one layer to another, perform these steps:

1. In the QGIS layer tree, select the source layer from which you want to copy the style.
2. Right-click to open the context menu.
3. Go to Styles | Copy Style to copy the style of the source layer to the clipboard.
4. Now, in the QGIS layer tree, select the target layer.
5. Right-click to open its context menu.
6. Go to Styles | Paste Style to paste the previously copied style from the clipboard and apply it to the target layer.

It is important to note that QGIS allows you to copy, for example, a polygonal style and apply it to a point or line layer. This may lead to incorrect layer rendering, or the layer can even disappear from the map even though it is still present in the layer tree.

Instead of using the layer context menu to copy and paste styles, you can use the QGIS main menu. Both of these actions (Copy Style and Paste Style) can be found in the Layer menu. The copied style can also be pasted into a text editor. Just copy the style using the context menu or the QGIS main menu, open the text editor, and press Ctrl + V (or another shortcut used in your system to paste data from the clipboard) to paste the style. Now you can study it.

Also, with this feature, you can apply the same style to multiple layers at once. Copy the style as previously described. Then select, in the QGIS layer tree, all the layers that you want to style (use the Ctrl key and/or the Shift key to select multiple layers). When all the desired layers are selected, go to Layer | Paste Style. Voilà! Now the style is applied to all selected layers.

Using multiple styles per layer

Sometimes, you may need to show the same data with different styles on the map. The most common and well-known solution is to duplicate the layer in the QGIS layer tree and change the symbology. QGIS 2.8 allows us to achieve the same result in a simpler and more elegant way. Now we can define multiple styles for a single layer and easily switch between them when necessary. This functionality is available from the layer context menu and the layer Properties dialog.

By default, all layers have only one style, called default. To create an additional style, use these steps:

1. Select the layer in the layer tree.
2. Right-click to open the context menu.
3. Go to Styles | Add. The New style dialog opens.
4. Enter the name of the new style and click on OK.

A new style will be added and become active. It is worth mentioning that after adding a new style, the layer's appearance will remain the same, as the new style inherits all the properties of the active style. Adjust the symbology and other settings according to your needs. These changes will affect only the current style; previously created styles will remain unchanged. You can add as many styles as you want.

All available styles are listed in the layer context menu, at the bottom of the Styles submenu. The current (or active) style is marked with a checkbox. To switch to another style, just select its name in the menu. If necessary, you can rename the active style (go to Styles | Rename Current) or remove it (go to Styles | Remove Current). Also, the current style can be saved and copied as previously described.

Moreover, it is worth mentioning that multiple styles are supported by the QGIS server. The available layer styles are displayed via the GetCapabilities response, and the user can request them in, for example, the GetMap request. This handy feature also works in the Print Composer.

Using Style Manager

Style Manager provides extended capabilities for symbology management, allowing the user to save developed symbols; tag and merge them into thematic groups; and edit, delete, import, or export ready-to-use predefined symbology sets.

If you created a symbol and want it to be available for further use and management, you should first save it to the symbol library by following these steps:

1. In the Style section of the layer Properties window, click on the Save button underneath the symbol preview window.
2. In the Symbol name window, type a name for the new symbol and click on OK.

After that, the symbol will appear and become available from the symbol presets in the right part of the window. It will also become available to the Style Manager. The Style Manager window can be opened by:

- Clicking on the Open library button after going to Properties | Style
- Going to Settings | Style Manager

The window consists of three sections:

- In the left section, you can see a tree view of the available thematic symbology groups (which, by default, don't contain any user-specified groups).
- In the right part, there are symbols grouped on these tabs: Marker (for point symbols), Line (for line symbols), Fill (for polygon symbols), and Color ramp (for gradient symbols). If you double-click on any symbol on these tabs, the Symbol selector window will be opened, where you can change any available symbol properties (Symbol layer type, Size, Fill and Outline colors, and so on). Similarly, you can use the Edit button to change the appearance of the symbol.
- The bottom section of the window contains symbol management buttons (Add, Remove, Edit, and Share) for groups and their items.

Let's create a thematic group called Hydrology. It will include symbology for hydrological objects, whether they are linear (river, canal, and so on) or polygonal (lake, water area, and so on). For this, perform the following steps:

1. Highlight the groups item in the left section of the Style Manager window and click on the very first + button.
2. When the New group appears, type the name Hydrology.
3. Now you need to add some symbols to the newly created group. There are two approaches to doing this:
   - Right-click on any symbol (or several, by holding down the Ctrl key) you want to add, and select Apply Group | Hydrology from its context menu.
   - Alternatively, highlight the Hydrology group in the groups tree and, from the button below, select Group Symbols. As a result, checkboxes will appear beside the symbols, and you can toggle them to add the necessary symbol (or symbols) to the group.

After you have clicked on the Close button, the symbols will be added to the group. Once the group is created, you can use it for quick access to the necessary symbology by going to Properties | Style | Symbols in group.

Note that you can combine symbology for different symbol types within a single group (Marker, Line, Fill, and Color ramp), but when you load the symbols of this group for a specific layer, the symbols will be filtered according to the layer geometry type (for example, Fill for the polygon layer type).

Another available option is to create a so-called Smart Group, where you can flexibly combine various conditions to merge symbols into meaningful groups. As an example, we can create a wider Water group that includes symbols that are not only already present in Hydrology, but are also tagged as blue. Use the Share button to Export or Import selected symbols from external sources.

Summary

This article introduced the different aspects of style management in QGIS: saving and loading styles, copying and pasting styles, and using multiple styles per layer.

Resources for Article:

Further resources on this subject:

- How Vector Features are Displayed [article]
- Geocoding Address-based Data [article]
- Editing attributes [article]

AWS Global Infrastructure

Packt
06 Jul 2015
5 min read
In this article by Uchit Vyas, the author of the Mastering AWS Development book, we will see how to use AWS services in detail.

It is important to have a choice of placing applications as close as possible to your users or customers, in order to ensure the lowest possible latency and the best user experience while deploying them. AWS offers a choice of nine regions located all over the world (for example, the East Coast of the United States, the West Coast of the United States, Europe, Tokyo, Singapore, Sydney, and Brazil), 26 redundant Availability Zones, and 53 Amazon CloudFront points of presence.

It is very crucial to have the option to put applications as close as possible to your customers and end users, ensuring the lowest possible latency and the features and experience users expect, when you are creating and deploying apps for performance. For this, AWS provides regions located all over the world. To be specific, by name and location, they are as follows:

- US East (Northern Virginia) region
- US West (Oregon) region
- US West (Northern California) region
- EU (Ireland) region
- Asia Pacific (Singapore) region
- Asia Pacific (Sydney) region
- Asia Pacific (Tokyo) region
- South America (Sao Paulo) region
- US GovCloud

In addition to regions, AWS has 25 redundant Availability Zones and 51 Amazon CloudFront points of presence. Apart from these infrastructure-level highlights, AWS has plenty of managed services that can be the cream of the AWS candy bar! The managed services bucket includes the following:

Security: For every organization, security in each and every aspect is a vital element. For that, AWS has several remarkable security features that distinguish AWS from other cloud providers, as follows:

- Certifications and accreditations
- Identity and Access Management

Right now, I am just underlining the most important security features.

Global infrastructure: AWS provides a fully functional, flexible technology infrastructure platform worldwide, with managed services across the globe with certain characteristics, for example:

- Multiple global locations for deployment
- Low-latency CDN service
- Reliable, low-latency DNS service

Compute: AWS offers a huge range of cloud-based core computing services (including a variety of compute instances that can be auto scaled to meet the needs of your users and application), a managed elastic load balancing service, and fully managed desktop resources in the cloud. Some of the common characteristics of the compute services include the following:

- Broad choice of resizable compute instances
- Flexible pricing opportunities
- Great discounts for always-on compute resources
- Lower hourly rates for elastic workloads
- Wide-ranging networking configuration selections
- A widespread choice of operating systems
- Virtual desktops
- Save further as you grow with a tiered pricing model

Storage: AWS offers low cost with high durability and availability in their storage services. The pay-as-you-go pricing model with no commitment provides more flexibility and agility in services and processes for storage, within a highly secured environment. AWS provides storage solutions and services for backup, archive, disaster recovery, and so on. They also support block, file, and object kinds of storage with a highly available and flexible infrastructure.
A few major characteristics of the storage services are as follows:

- Cost-effective, high-scale storage varieties
- Data protection and data management
- Storage gateway
- Choice of instance storage options

Content delivery and networking: AWS offers a wide set of networking services that enables you to create a logically isolated network that the architect defines, and to create a private network connection to the AWS infrastructure, with a fault-tolerant, scalable, and highly available DNS service. It also provides content delivery services to your end users, with very low latency and high data transfer speed, via the AWS CDN service. A few major characteristics of content delivery and networking are as follows:

- Application and media file delivery
- Software and large file distribution
- Private content

Databases: AWS offers fully managed, distributed relational and NoSQL database services. Moreover, the database services are capable of in-memory caching, sharding, and scaling, with or without data warehouse solutions. A few major database services are as follows:

- RDS
- DynamoDB
- Redshift
- ElastiCache

Application services: AWS provides a variety of managed application services at lower cost, such as application streaming and queuing, transcoding, push notifications, searching, and so on. A few major application services are as follows:

- AppStream
- CloudSearch
- Elastic Transcoder
- SWF, SES, SNS, SQS

Deployment and management: AWS offers management of credentials to explore AWS services, such as monitoring services, application services, and updating stacks of AWS resources. They also have deployment and security services, along with tracking of AWS API activity. A few major deployment and management services are as follows:

- IAM
- CloudWatch
- Elastic Beanstalk
- CloudFormation
- Data Pipeline
- OpsWorks
- CloudHSM
- CloudTrail

Summary

There are a few more important services from AWS, such as support, integration with existing infrastructure, Big Data, and the ecosystem, which put it on top of other infrastructure providers. As a cloud architect, it is necessary to learn about cloud service offerings and their most important functionalities.

Resources for Article:

Further resources on this subject:

- Amazon DynamoDB - Modelling relationships, Error handling [article]
- Managing Microsoft Cloud [article]
- Amazon Web Services [article]

CoreOS – Overview and Installation

Packt
06 Jul 2015
8 min read
In this article by Rimantas Mocevicius, author of the book CoreOS Essentials, CoreOS is described. CoreOS is often described as Linux for massive server deployments, but it can also run easily as a single host on bare metal, on cloud servers, and as a virtual machine on your computer as well. It is designed to run application containers such as docker and rkt, and you will learn about its main features later in this article.

This article is a practical, example-driven guide to help you learn about the essentials of the CoreOS Linux operating system. We assume that you have experience with VirtualBox, Vagrant, Git, Bash shell scripting, and the command line (terminal on UNIX-like computers), and that you have already installed VirtualBox, Vagrant, and git on your Mac OS X or Linux computer. As for a cloud installation, we will use Google Cloud's Compute Engine instances.

By the end of this article, you will hopefully be familiar with setting up CoreOS on your laptop or desktop, and on the cloud. You will learn how to set up a local development machine and a cluster on a local computer and in the cloud. Also, we will cover etcd, systemd, fleet, cluster management, deployment setup, and production clusters.

In this article, you will learn how CoreOS works and how to carry out a basic CoreOS installation on your laptop or desktop with the help of VirtualBox and Vagrant. We will basically cover two topics in this article:

- An overview of CoreOS
- Installing the CoreOS virtual machine

An overview of CoreOS

CoreOS is a minimal Linux operating system built to run docker and rkt containers (application containers). By default, it is designed to build powerful and easily manageable server clusters. It provides automatic, very reliable, and stable updates to all machines, which takes away a big maintenance headache from sysadmins. And, by running everything in application containers, such a setup allows you to very easily scale servers and applications, replace faulty servers in a fraction of a second, and so on.

How CoreOS works

CoreOS has no package manager, so everything needs to be installed and used via docker containers. Moreover, it is 40 percent more efficient in RAM usage than an average Linux installation.

CoreOS utilizes an active/passive dual-partition scheme to update itself as a single unit, instead of using a package-by-package method. Its root partition is read-only and changes only when an update is applied. If the update is unsuccessful at reboot time, it rolls back to the previous boot partition. In other words, an OS update gets applied to partition B (the passive partition), and after a reboot it becomes the active partition to boot from.

The docker and rkt containers run as applications on CoreOS. Containers provide very good flexibility for application packaging and can start very quickly, in a matter of milliseconds. The CoreOS stack is simple: the bottom layer is the Linux OS, the second level is etcd and fleet along with the docker daemon, and the top level is the containers running on the server.

By default, CoreOS is designed to work in a clustered form, but it also works very well as a single host. It is very easy to control and run application containers across cluster machines with fleet, and to use etcd service discovery to connect them.

CoreOS can be deployed easily on all major cloud providers, for example, Google Cloud, Amazon Web Services, DigitalOcean, and so on.
CoreOS can be deployed easily on all major cloud providers, for example, Google Cloud, Amazon Web Services, Digital Ocean, and so on. It runs very well on bare-metal servers as well. Moreover, it can be easily installed on a laptop or desktop with Linux, Mac OS X, or Windows via Vagrant, with VirtualBox or VMware virtual machine support. This short overview should throw some light on what CoreOS is about and what it can do. Let's now move on to the real stuff and install CoreOS on our laptop or desktop machine.

Installing the CoreOS virtual machine
To use the CoreOS virtual machine, you need to have VirtualBox, Vagrant, and git installed on your computer. In the following examples, we will install CoreOS on our local computer, where it will run as a virtual machine on VirtualBox. Okay, let's get started!

Cloning the coreos-vagrant GitHub project
Let's clone this project and get it running. In your terminal (from now on, we will use just the word terminal and use $ to label the terminal prompt), type the following command:

$ git clone https://github.com/coreos/coreos-vagrant/

This will clone the GitHub repository to the coreos-vagrant folder on your computer.

Working with cloud-config
To start even a single host, we need to provide some config parameters in the cloud-config format via the user-data file. In your terminal, type this:

$ cd coreos-vagrant
$ mv user-data.sample user-data

The user-data file should have content like this (the coreos-vagrant GitHub repository is constantly changing, so you might see slightly different content when you clone the repository):

#cloud-config

coreos:
  etcd2:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    # discovery: https://discovery.etcd.io/<token>
    # multi-region and multi-cloud deployments need to use $public_ipv4
    advertise-client-urls: http://$public_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    # listen on both the official ports and the legacy ports
    # legacy ports can be omitted if your application doesn't depend on them
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
  fleet:
    public-ip: $public_ipv4
  flannel:
    interface: $public_ipv4
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        ListenStream=2375
        Service=docker.service
        BindIPv6Only=both

        [Install]
        WantedBy=sockets.target

Replace the text between the etcd2: and fleet: lines so that it looks like this:

  etcd2:
    name: core-01
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
    initial-cluster-token: core-01_etcd
    initial-cluster: core-01=http://$private_ipv4:2380
    initial-cluster-state: new
    advertise-client-urls: http://$public_ipv4:2379,http://$public_ipv4:4001
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
  fleet:

You can also download the latest user-data file from https://github.com/rimusz/coreos-essentials-book/blob/master/Chapter1/user-data. This should be enough to bootstrap a single-host CoreOS VM with etcd, fleet, and docker running there.
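As an aside (not required for this single-host setup), the commented-out discovery: line in the user-data file expects a discovery token when you bootstrap a multi-node cluster. As the comment suggests, you can generate one from the public discovery service; for example, with size set to the expected number of cluster members:

$ curl -w "\n" "https://discovery.etcd.io/new?size=3"

The size value of 3 here is only an illustration; for our single VM, the static initial-cluster settings shown above are all we need.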
Startup and SSH
It's now time to boot our CoreOS VM and log in to its console using ssh. Let's boot our first CoreOS VM host. To do so, using the terminal, type the following command:

$ vagrant up

This will trigger Vagrant to download the latest CoreOS alpha channel image (alpha is the default channel set in the config.rb file, and it can easily be changed to beta or stable) and launch the VM instance. Vagrant prints its progress in the terminal while it does this.

The CoreOS VM has now booted up, so let's open an ssh connection to our new VM using the following command:

$ vagrant ssh

It should show something like this:

CoreOS alpha (some version)
core@core-01 ~ $

Perfect! Let's verify that etcd, fleet, and docker are running there. Here are the commands required. To check the status of etcd2, type this:

$ systemctl status etcd2

To check the status of fleet, type this:

$ systemctl status fleet

To check the status of docker, type the following command:

$ docker version

Lovely! Everything looks fine. Thus, we've got our first CoreOS VM up and running in VirtualBox.

Summary
In this article, we saw what CoreOS is and how it is installed. We covered a simple CoreOS installation on a local computer with the help of Vagrant and VirtualBox, and checked whether etcd, fleet, and docker are running there.

Resources for Article:
Further resources on this subject:
Core Data iOS: Designing a Data Model and Building Data Objects [article]
Clustering [article]
Deploying a Play application on CoreOS and Docker [article]

article-image-groups-and-cohorts
Packt
06 Jul 2015
20 min read
Save for later

Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
This article by William Rice, author of the book Moodle E-Learning Course Development - Third Edition, shows you how to use groups to separate students in a course into teams. You will also learn how to use cohorts to mass enroll students into courses.

Groups versus cohorts
Groups and cohorts are both collections of students. There are several differences between them. We can sum up these differences in one sentence: cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class.

Think of a cohort as a group of students working together through the same academic curriculum, for example, a group of students all enrolled in the same course. Think of a group as a subset of students enrolled in a course. Groups are used to manage various activities within a course. A cohort is a system-wide or course category-wide set of students.

There is a small amount of overlap between what you can do with a cohort and a group. However, the differences are large enough that you would not want to substitute one for the other.

Cohorts
In this article, we'll look at how to create and use cohorts. You can perform many operations with cohorts in bulk, affecting many students at once.

Creating a cohort
To create a cohort, perform the following steps:
From the main menu, select Site administration | Users | Accounts | Cohorts.
On the Cohorts page, click on the Add button. The Add New Cohort page is displayed.
Enter a Name for the cohort. This is the name that you will see when you work with the cohort.
Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters, such as letters, numbers, and some special characters, with no spaces, in the Cohort ID option; for example, Spring_2012_Freshmen.
Enter a Description that will help you and other administrators remember the purpose of the cohort.
Click on Save changes.
Now that the cohort is created, you can begin adding users to this cohort.

Adding students to a cohort
Students can be added to a cohort manually by searching and selecting them. They can also be added in bulk by uploading a file to Moodle.

Manually adding and removing students to a cohort
If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student will be unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort:
From the main menu, select Site administration | Users | Accounts | Cohorts.
On the Cohorts page, click on the people icon for the cohort to which you want to add students. The Cohort Assign page is displayed.
The left-hand side panel displays users that are already in the cohort, if any. The right-hand side panel displays users that can be added to the cohort.
Use the Search field to search for users in each panel. You can search for text that is in the user name and e-mail address fields.
Use the Add and Remove buttons to move users from one panel to another.

Adding students to a cohort in bulk – upload
When you upload students to Moodle, you can add them to a cohort.
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. If you are going to upload students in bulk, consider putting them in a cohort. This makes it easier to manipulate them later.

Consider, as an example, a cohort with 1,204 students enrolled in it. These students were uploaded to the cohort under Administration | Site Administration | Users | Upload users. The file that was uploaded contained information about each student in the cohort. Opened in a spreadsheet, the file looks like this:

username,email,firstname,lastname,cohort1
moodler_1,bill@williamrice.net,Bill,Binky,open-enrollmentmoodlers
moodler_2,rose@williamrice.net,Rose,Krial,open-enrollmentmoodlers
moodler_3,jeff@williamrice.net,Jeff,Marco,open-enrollmentmoodlers
moodler_4,dave@williamrice.net,Dave,Gallo,open-enrollmentmoodlers

In this example, we have the minimum required information to create new students. These are as follows:
The username
The e-mail address
The first name
The last name
We also have the cohort ID (the short name of the cohort) in which we want to place a student.

During the upload process, you can see a preview of the file that you will upload. Further down on the Upload users preview page, you can choose the Settings option to handle the upload.

Usually, when we upload users to Moodle, we will create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (in Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in the cohort, it's often faster to create a text file and upload it than to search your existing users. This is because when you create a text file, you can use powerful tools, such as spreadsheets and databases, to quickly create this file. If you want to do this, you will find options to Update existing users under the Upload type field.

In most Moodle systems, a user's profile must include a city and country. When you upload a user to a system, you can specify the city and country in the upload file, or omit them from the upload file and have the system assign the city and country while the file is uploaded. This is done under Default values on the Upload users page.

Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle:
Prepare a plain text file that has, at minimum, the username, email, firstname, lastname, and cohort1 information, like the example shown earlier (there is also a sketch of such a file after this list).
Under Administration | Site Administration | Users | Upload users, select the text file that you will upload.
On this page, choose Settings to describe the text file, such as the delimiter (separator) and encoding.
Click on the Upload users button. You will see the first few rows of the text file displayed. Also, additional settings become available on this page.
In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on.
In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users.
Click on the Upload users button to begin the upload.
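As a purely illustrative sketch (the user names and addresses here are hypothetical, and the columns simply mirror the example shown earlier in this section, reusing the Spring_2012_Freshmen cohort ID from the Creating a cohort steps), the upload file for step 1 might look like this when saved as CSV:

username,email,firstname,lastname,cohort1
student_1,anna@example.com,Anna,Smith,Spring_2012_Freshmen
student_2,raj@example.com,Raj,Patel,Spring_2012_Freshmen
student_3,mei@example.com,Mei,Chen,Spring_2012_Freshmen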
Cohort sync
Using the cohort sync enrolment method, you can enroll and un-enroll large collections of students at once. Using cohort sync involves several steps:
Creating a cohort.
Enrolling students in the cohort.
Enabling the cohort sync enrollment method.
Adding the cohort sync enrollment method to a course.
You saw the first two steps: how to create a cohort and how to enroll students in the cohort. We will now cover the last two steps: enabling the cohort sync method and adding the cohort sync to a course.

Enabling the cohort sync enrollment method
To enable the cohort sync enrollment method, you will need to log in as an administrator. This cannot be done by someone who has only teacher rights:
Select Site administration | Plugins | Enrolments | Manage enrol plugins.
Click on the Enable icon located next to Cohort sync.
Then, click on the Settings button located next to Cohort sync.
On the Settings page, choose the default role for people when you enroll them in a course using Cohort sync. You can change this setting for each course.
You will also choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all his/her grades are removed from the course. The user's grades are purged from Moodle. If you were to re-add this user to the cohort, all the user's activity in this course will be blank, as if the user was never in the course. If you choose Disable course enrolment and remove roles, the user and all his/her grades are hidden. You will not see this user in the course's grade book. However, if you were to re-add this user to the cohort or to the course, this user's course records will be restored.
After enabling the cohort sync method, it's time to actually add this method to a course.

Adding the cohort sync enrollment method to a course
To do this, you will need to log in as an administrator or a teacher in the course:
Log in and enter the course to which you want to add the enrolment method.
Select Course administration | Users | Enrolment methods.
From the Add method drop-down menu, select Cohort sync.
In Custom instance name, enter a name for this enrolment method. This will enable you to recognize this method in a list of cohort syncs.
For Active, select Yes. This will enroll the users.
Select the Cohort option.
Select the role that the members of the cohort will be given.
Click on the Save changes button.
All the users in the cohort will be given the selected role in the course.

Un-enroll a cohort from a course
There are two ways to un-enroll a cohort from a course. First, you can go to the course's enrollment methods page and delete the enrollment method. Just click on the X button located next to the cohort sync field that you added to the course. However, this will not just remove users from the course, but also delete all their course records.

The second method preserves the student records. Once again, go to the course's enrollment methods page and, located next to the Cohort sync method that you added, click on the Settings icon. On the Settings page, select No for Active. This will remove the role that the cohort was given. However, the members of the cohort will still be listed as course participants. So, as the members of the cohort no longer have a role in the course, they can no longer access this course. However, their grades and activity reports are preserved.
Differences between cohort sync and enrolling a cohort
Cohort sync and enrolling a cohort are two different methods. Each has advantages and limitations.

If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment. As people are added to and removed from the cohort, they are enrolled and un-enrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot un-enroll or change the role of just one person.

Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now, suppose you want to un-enroll some users from some of these courses, but not from all of them. If you remove them from the cohort, they are removed from all the courses. This is how cohort sync works.

Cohort sync is everyone or no one
When a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced. If that's what you want, great. If not, an alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey. You cannot un-enroll them all at once; you will need to un-enroll them one at a time. If you enroll a cohort all at once, then after enrollment, the users are independent entities. You can un-enroll them and change their role (for example, from student to teacher) whenever you wish.

To enroll a cohort in a course, perform the following steps:
Enter the course as an administrator or teacher.
Select Administration | Course administration | Users | Enrolled users.
Click on the Enrol cohort button. A popup window appears. This window lists the cohorts on the site.
Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message.
Now, click on the OK button. You will be taken back to the Enrolled users page.
Note that although you can enroll all the users in a cohort at once, there is no button to un-enroll them all at once. You will need to remove them one at a time from your course.

Managing students with groups
A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate students so that each group can see only their peers in the course.

For example, you can create a new group every month for employees hired that month. Then, you can monitor and mentor them together.

After you have run a group of people through a course, you may want to reuse this course for another group. You can use the group feature to separate groups so that the current group doesn't see the work done by the previous group. This will be like a new course for the current group.

You may want an activity or resource to be open to just one group of people, and you don't want others in the class to be able to use that activity or resource.

Course versus activity
You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups.
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course, and it will segregate just this activity or resource between groups.

The three group modes
For a course or activity, there are several ways to apply groups. Here are the three group modes:
No groups: There are no groups for a course or activity. If students have been placed in groups, ignore it. Also, give everyone the same access to the course or activity.
Separate groups: If students have been placed in groups, allow them to see other students and only the work of other students from their own group. Students and work from other groups are invisible.
Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read only.

You can use the No groups setting on an activity in your course where you want every student who ever took the course to be able to interact with each other. For example, you may use the No groups setting in the news forum so that all students who have ever taken the course can see the latest news.

You can use the Separate groups setting in a course where you will run different groups at different times. For each group that runs through the course, it will be like a brand new course.

You can use the Visible groups setting in a course where students are part of a large, in-person class and you want them to collaborate in small groups online.

Also, be aware that some things will not be affected by the groups setting. For example, no matter what the group setting is, students will never see each other's assignment submissions.

Creating a group
There are three ways to create groups in a course. You can:
Manually create and populate each group
Automatically create and populate groups based on the characteristics of students
Import groups using a text file
We'll cover these methods in the following subsections.

Manually creating and populating a group
Don't be discouraged by the idea of manually populating a group with students. It takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps:
Select Course administration | Users | Groups. This takes you to the Groups page.
Click on the Create group button. The Create group page is displayed.
You must enter a Name for the group. This will be the name that teachers and administrators see when they manage a group.
The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional.
The Group description field is optional. It's good practice to use this to explain the purpose and criteria for belonging to a group.
The Enrolment key is a code that you can give to students who self-enroll in a course. When the student enrolls, he/she is prompted to enter the enrollment key. On entering this key, the student is enrolled in the course and made a member of the group.
If you add a picture to this group, then when members are listed (as in a forum), the group picture is shown next to them. You can see this, for example, next to contributors on the forums at http://www.moodle.org, where a poster's group memberships appear with their posts.
Click on the Save changes button to save the group.
On the Groups page, the group appears in the left-hand side column. Select this group. In the right-hand side column, search for and select the students that you want to add to this group. Note the Search fields. These enable you to search for students that meet a specific criterion. You can search the first name, last name, and e-mail address; other parts of the user's profile information are not available in this search box.

Automatically creating and populating a group
When you automatically create groups, Moodle creates the number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create groups, use the following steps:
Click on the Auto-create groups button. The Auto-create groups page is displayed.
In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters. If you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on.
In the Auto-create based on field, you tell the system to choose either of the following options:
Create a specific number of groups and then fill each group with as many students as needed (Number of groups)
Create as many groups as needed so that each group has a specific number of students (Members per group)
In the Group/member count field, you tell the system either of the following:
How many groups to create (if you chose the preceding Number of groups option)
How many members to put in each group (if you chose the preceding Members per group option)
Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort.
The setting for Prevent last small group is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five and then another group for the last two members. However, with Prevent last small group selected, it will distribute the remaining two members between the first two groups.
Click on the Preview button to preview the results. The preview will not show you the names of the members in the groups, but it will show you how many groups will be created and how many members will be in each group.

Importing groups
The term importing groups may give you the impression that you will import students into a group. The Import groups button does not import students into groups; it imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature to do so. This needs to be done by a site administrator.

If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to a cohort, you will add them to a course and group.
You do this by specifying the course and group fields in the upload file, as shown in the following example:

username,email,firstname,lastname,course1,group1,course2
moodler_1,bill@williamrice.net,Bill,Binky,history101,odds,science101
moodler_2,rose@williamrice.net,Rose,Krial,history101,even,science101
moodler_3,jeff@williamrice.net,Jeff,Marco,history101,odds,science101
moodler_4,dave@williamrice.net,Dave,Gallo,history101,even,science101

In this example, we have the minimum information needed to create new students. These are as follows:
The username
The e-mail address
The first name
The last name
We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky and Jeff Marco are placed in a group called odds, while Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group.

Remember that this student upload doesn't happen on the Groups page. It happens under Administration | Site Administration | Users | Upload users.

Summary
Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and un-enroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for.

Useful Links:
What's New in Moodle 2.0
Moodle for Online Communities
Understanding Web-based Applications and Other Multimedia Forms

article-image-network-development-swift
Packt
06 Jul 2015
30 min read
Save for later

Network Development with Swift

Packt
06 Jul 2015
30 min read
In this article by Jon Hoffman, author of the book Mastering Swift, you will learn how to use Apple's system configuration API to figure out what type of network connection we have. If we are developing applications for a mobile device (iPhone, iPod, or iPad), it is essential to know whether we have a network connection and what type of connection it is.
(For more resources related to this topic, see here.)

What is network development?
Network development is writing code that will allow our application to send and receive data from remote services or devices. In the early days of personal computing, applications often communicated by dialing into bulletin board services. The large majority of these bulletin board services used a single modem, which meant that only one user could connect to them at any one time. These bulletin boards would seem very strange and archaic to those who grew up with the Internet; however, back then, they were how computers shared information. At that time, being able to connect to a computer across town and upload/download files was amazing. Today, however, we communicate with services and devices all over the world without thinking twice about it.

Back when I first started writing applications, it was rare to develop an application that communicated over a standard network, and it was also hard to find developers with experience in network development. In today's world, just about every application has a requirement for some sort of network communication.

In this article, we will show you how to connect to Representational State Transfer (REST) based services, like the one Apple supplies that lets developers search the iTunes Store. Apple has documented this service very well. The documentation can be found at https://www.apple.com/itunes/affiliates/resources/documentation/itunes-store-web-service-search-api.html.

Before we look at how to connect to REST services, let's look at the classes in Apple's networking API that we will be using. These classes are part of Apple's powerful URL loading system.

An overview of the URL session classes
Apple's URL loading system is a framework of classes available to interact with URLs. Using these classes together lets us communicate with services that use standard Internet protocols. The classes that we will be using in this article to connect to and retrieve information from REST services are as follows:
NSURLSession: This is the main session object. It was written as a replacement for the older NSURLConnection API.
NSURLSessionConfiguration: This is used to configure the behavior of the NSURLSession object.
NSURLSessionTask: This is a base class to handle the data being retrieved from the URL. Apple provides three concrete subclasses of the NSURLSessionTask class.
NSURL: This is an object that represents the URL to connect to.
NSMutableURLRequest: This class contains information about the request that we are making and is used by the NSURLSessionTask service to make the request.
NSHTTPURLResponse: This class contains the response to our request.
Now, let's look at each of these classes a little more in depth so that we have a basic understanding of what each does.

NSURLSession
Prior to iOS 7 and OS X 10.9, when a developer wanted to retrieve contents from a URL, they used the NSURLConnection API. Starting with iOS 7 and OS X 10.9, the preferred API became NSURLSession. The NSURLSession API can be thought of as an improvement to the older NSURLConnection API. An NSURLSession object provides an API for interacting with various protocols such as HTTP and HTTPS.
The session object, which is an instance of NSURLSession, manages this interaction. These session objects are highly configurable, which allows us to control how our requests are made and how we handle the data that is returned.

Like most networking APIs, NSURLSession is asynchronous. This means that we have to provide a way to return the response from the service back to the code that needs it. The most popular way to return the results from a session is to pass a completion handler block (closure) to the session. This completion handler is then called when the service successfully responds or when we receive an error. All of the examples in this article use completion handlers to process the data that is returned from the services.

NSURLSessionConfiguration
The NSURLSessionConfiguration class defines the behavior and policies to use when using the NSURLSession object to connect to a URL. When using the NSURLSession object, we usually create an NSURLSessionConfiguration instance first, because an instance of this class is required when we create an instance of the NSURLSession class. The NSURLSessionConfiguration class defines three session types. These are:
Default session configuration: This configuration behaves similarly to the NSURLConnection API.
Ephemeral session configuration: This configuration behaves similarly to the default session configuration, except that it does not cache anything to disk.
Background session configuration: This session allows uploads and downloads to be performed even when the app is running in the background.
It is important to note that we should make sure that we configure the NSURLSessionConfiguration object appropriately before we use it to create an instance of the NSURLSession class. When the session object is created, it creates a copy of the configuration object that we provided it. Any changes made to the configuration object once the session object is created are ignored by the session. If we need to make changes to the configuration, we must create another instance of the NSURLSession class.

NSURLSessionTask
The NSURLSession service uses an instance of the NSURLSessionTask classes to make the call to the service that we are connecting to. The NSURLSessionTask class is a base class, and Apple has provided three concrete subclasses that we can use. They are as follows:
NSURLSessionDataTask: This returns the response, in memory, directly to the application as one or more NSData objects. This is the task that we generally use most often.
NSURLSessionDownloadTask: This writes the response directly to a temporary file.
NSURLSessionUploadTask: This is used for making requests that require a request body, such as a POST or PUT request.
It is important to note that a task will not send the request to the service until we call the resume() method.

Using the NSURL class
The NSURL object represents the URL that we are going to connect to. The NSURL class is not limited to URLs that represent remote servers; it can also be used to represent a local file on disk. In this article, we will be using the NSURL class exclusively to represent the URL of the remote service that we are connecting to.

NSMutableURLRequest
The NSMutableURLRequest class is a mutable subclass of the NSURLRequest class, which represents a URL load request. We use the NSMutableURLRequest class to encapsulate our URL and the request properties.
It is important to understand that the NSMutableURLRequest class is used to encapsulate the necessary information to make our request, but it does not make the actual request. To make the request, we use instances of the NSURLSession and NSURLSessionTask classes.

NSHTTPURLResponse
NSHTTPURLResponse is a subclass of the NSURLResponse class that encapsulates the metadata associated with the response to a URL request. The NSHTTPURLResponse class provides methods for accessing specific information associated with an HTTP response. Specifically, this class allows us to access the HTTP header fields and the response status codes.

We briefly covered a number of classes in this section, and it may not be clear how they all actually fit together; however, once you see the examples a little further on in this article, it will become much clearer. Before we go into our examples, let's take a quick look at the type of service that we will be connecting to.

REST web services
REST has become one of the most important technologies for stateless communications between devices. Due to the lightweight and stateless nature of REST-based services, its importance is likely to continue to grow as more devices are connected to the Internet.

REST is an architectural style for designing networked applications. The idea behind REST is that, instead of using complex mechanisms such as SOAP or CORBA to communicate between devices, we use simple HTTP requests for the communication. While, in theory, REST is not dependent on the Internet protocols, it is almost always implemented using them. Therefore, when we are accessing REST services, we are almost always interacting with web servers in the same way that our web browsers interact with these servers.

REST web services use the HTTP POST, GET, PUT, or DELETE methods. If we think about a standard CRUD (create/read/update/delete) application, we would use a POST request to create or update data, a GET request to read data, and a DELETE request to delete data.

When we type a URL into our browser's address bar and hit Enter, we are generally making a GET request to the server and asking it to send us the web page associated with that URL. When we fill out a web form and click the submit button, we are generally making a POST request to the server. We then include the parameters from the web form in the body of our POST request.

Now, let's look at how to make an HTTP GET request using Apple's networking API.

Making an HTTP GET request
In this example, we will make a GET request to Apple's iTunes search API to get a list of items related to the search term Jimmy Buffett. Since we are retrieving data from the service, by REST standards, we should use a GET request to retrieve the data.

While the REST standard is to use GET requests to retrieve data from a service, there is nothing stopping the developer of a web service from using a GET request to create or update a data object. It is not recommended to use a GET request in this manner, but just be aware that there are services out there that do not adhere to the REST standards.

The following code makes a request to Apple's iTunes search API and then prints the results to the console:
public typealias DataFromURLCompletionClosure = (NSURLResponse!, NSData!) -> Void

public func sendGetRequest(handler: DataFromURLCompletionClosure) {
    var queue = NSOperationQueue()
    var sessionConfiguration = NSURLSessionConfiguration.defaultSessionConfiguration()
    var urlString = "https://itunes.apple.com/search?term=jimmy+buffett"
    if let encodeString = urlString.stringByAddingPercentEscapesUsingEncoding(NSUTF8StringEncoding) {
        if let url = NSURL(string: encodeString) {
            var request = NSMutableURLRequest(URL: url)
            request.HTTPMethod = "GET"
            var urlSession = NSURLSession(configuration: sessionConfiguration, delegate: nil, delegateQueue: queue)
            var sessionTask = urlSession.dataTaskWithRequest(request) {
                (data, response, error) in
                handler(response, data)
            }
            sessionTask.resume()
        }
    }
}

We start off by creating a type alias named DataFromURLCompletionClosure. The DataFromURLCompletionClosure type will be used for both the GET and POST examples in this article. If you are not familiar with using a typealias object to define a closure type, it simply gives a reusable name to the closure's signature.

We then create a function named sendGetRequest() that will be used to make the GET request to Apple's iTunes API. This function accepts one argument named handler, which is a closure that conforms to the DataFromURLCompletionClosure type. The handler closure will be used to return the results from our request.

Within our sendGetRequest() method, we begin by creating an instance of the NSOperationQueue class. This queue will be used by our NSURLSession instance for scheduling the delegate calls and completion handlers. We then create an instance of the NSURLSessionConfiguration class using the defaultSessionConfiguration() method, which creates a default session configuration instance. If we need to, we can modify the session configuration properties after we create it, but in this example, the default configuration is what we want.

After we create our session configuration, we create our URL string. This is the URL of the service we are connecting to. With a GET request, we put our parameters in the URL itself. In this specific example, https://itunes.apple.com/search is the URL of the web service. We then follow the web service URL with a question mark (?), which indicates that the rest of the URL string consists of parameters for the web service. Parameters take the form of key/value pairs, which means that each parameter has a key and a value. The key and the value of a parameter, in a URL, are separated by an equals sign (=). In our example, the key is term and the value is jimmy+buffett.

Next, we run the URL string that we just created through the stringByAddingPercentEscapesUsingEncoding() method to make sure our URL string is encoded properly. We then use the URL string to create an NSURL instance named url. Since we are making a GET request, this NSURL instance will represent both the location of the web service and the parameters that we are sending to it.

We create an instance of the NSMutableURLRequest class using the NSURL instance that we just created. We use the NSMutableURLRequest class, instead of the NSURLRequest class, so that we can set the properties needed for our request. In this example, we set the HTTPMethod property; however, we can also set other properties, like the timeout interval, or add items to our HTTP header.
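As a small illustrative aside (these two lines are not part of the original listing, and the header value is just an example), setting those extra request properties might look like this:

    request.timeoutInterval = 30.0
    request.addValue("application/json", forHTTPHeaderField: "Accept")

Both timeoutInterval and addValue(_:forHTTPHeaderField:) are standard members of NSMutableURLRequest, so they can be set on the same request instance before the task is created.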
Now, we use the sessionConfiguration variable (an instance of NSURLSessionConfiguration) and the queue (an instance of NSOperationQueue) that we created at the beginning of the sendGetRequest() function to create an instance of the NSURLSession class. The NSURLSession class provides the API that we will use to connect to Apple's iTunes search API.

In this example, we use the dataTaskWithRequest() method of the NSURLSession instance to return an NSURLSessionDataTask instance named sessionTask. The sessionTask instance is what makes the request to the iTunes search API. When we receive the response from the service, we use the handler callback to return both the NSURLResponse object and the NSData object. The NSURLResponse contains information about the response, and the NSData instance contains the body of the response.

Finally, we call the resume() method of the NSURLSessionDataTask instance to make the request to the web service. Remember, as we mentioned earlier, an NSURLSessionTask instance will not send the request to the service until we call the resume() method.

Now, let's look at how we would call the sendGetRequest() function. The first thing we need to do is to create a closure that will be passed to the sendGetRequest() function and called when the response from the web service is received. In this example, we will simply print the response to the console. Here is the code:

var printResultsClosure: HttpConnect.DataFromURLCompletionClosure = {
    if let data = $1 {
        var sString = NSString(data: data, encoding: NSUTF8StringEncoding)
        println(sString)
    }
}

We define this closure, named printResultsClosure, to be an instance of the DataFromURLCompletionClosure type. Within the closure, we unwrap the second parameter ($1) and set the value to a constant named data. If this parameter is not nil, we convert the data constant to an instance of the NSString class, which is then printed to the console.

Now, let's call the sendGetRequest() method with the following code:

let aConnect = HttpConnect()
aConnect.sendGetRequest(printResultsClosure)

This code creates an instance of the HttpConnect class and then calls the sendGetRequest() method, passing the printResultsClosure closure as the only parameter. If we run this code while we are connected to the Internet, we will receive a JSON response that contains a list of items related to Jimmy Buffett on iTunes.
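Since the response comes back as JSON, we might want to parse it rather than just print the raw string. The following is a hedged sketch (not from the original text) of an alternative completion closure that uses Foundation's NSJSONSerialization class to convert the returned NSData instance into an NSDictionary; it assumes the same HttpConnect class and DataFromURLCompletionClosure type alias shown earlier:

var jsonResultsClosure: HttpConnect.DataFromURLCompletionClosure = {
    (response, data) in
    if let data = data {
        var parseError: NSError?
        // Convert the raw NSData instance into a Foundation object
        let json: AnyObject? = NSJSONSerialization.JSONObjectWithData(data,
            options: NSJSONReadingOptions.allZeros, error: &parseError)
        if let dictionary = json as? NSDictionary {
            if let results = dictionary["results"] as? NSArray {
                println("Received \(results.count) results from iTunes")
            }
        }
    }
}

You could pass jsonResultsClosure to sendGetRequest() in exactly the same way that we passed printResultsClosure above.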
Making an HTTP POST request
Since Apple's iTunes APIs use GET requests to retrieve data, in this section we will use the free httpbin.org service to show you how to make a POST request. The POST service that httpbin.org provides can be found at http://httpbin.org/post. This service will echo back the parameters that it receives so that we can verify our request was made properly.

When we make a POST request, we generally have some data that we want to send or post to the server. This data takes the form of key-value pairs. These pairs are separated by an ampersand (&) symbol, and each key is separated from its value by an equals sign (=). As an example, let's say that we want to submit the following data to our service:

firstname: Jon
lastname: Hoffman
age: 47 years

The body of the POST request would take the following format:

firstname=Jon&lastname=Hoffman&age=47

Once we have the data in the proper format, we will use the dataUsingEncoding() method, like we did with the GET request, to properly encode the POST data. Since the data going to the server is in the key-value format, the most appropriate way to store this data, prior to sending it to the service, is with a Dictionary object. With this in mind, we will need to create a method that takes a Dictionary object and returns a string that can be used for the POST request. The following code will do that:

func dictionaryToQueryString(dict: [String : String]) -> String {
    var parts = [String]()
    for (key, value) in dict {
        var part: String = key + "=" + value
        parts.append(part)
    }
    return join("&", parts)
}

This function loops through each key-value pair of the Dictionary object and creates a String object that contains the key and the value separated by the equals sign (=). We then use the join() function to join each item in the array, separated by the specified string. In our case, we want to separate each string with the ampersand symbol (&). We then return this newly created string to the code that called it.

Now, let's create the sendPostRequest() function that will send the POST request to the httpbin.org POST service. We will see a lot of similarities between this sendPostRequest() function and the sendGetRequest() function that we showed you in the Making an HTTP GET request section of this article. Let's take a look at the following code:

public func sendPostRequest(handler: DataFromURLCompletionClosure) {
    var queue = NSOperationQueue()
    var sessionConfiguration = NSURLSessionConfiguration.defaultSessionConfiguration()
    var urlString = "http://httpbin.org/post"
    if let encodeString = urlString.stringByAddingPercentEscapesUsingEncoding(NSUTF8StringEncoding) {
        if let url: NSURL = NSURL(string: encodeString) {
            var request = NSMutableURLRequest(URL: url)
            request.HTTPMethod = "POST"
            var params = dictionaryToQueryString(["One" : "1 and 1", "Two" : "2 and 2"])
            request.HTTPBody = params.dataUsingEncoding(NSUTF8StringEncoding, allowLossyConversion: true)
            var urlSession = NSURLSession(configuration: sessionConfiguration, delegate: nil, delegateQueue: queue)
            var sessionTask = urlSession.dataTaskWithRequest(request) {
                (data, response, error) in
                handler(response, data)
            }
            sessionTask.resume()
        }
    }
}

Now, let's walk through this code. Notice that we are using the same type alias, named DataFromURLCompletionClosure, that we used with the sendGetRequest() function. The sendPostRequest() function accepts one argument named handler, which is a closure that conforms to the DataFromURLCompletionClosure type. The handler closure will be used to process the data from the httpbin.org service once the service responds to our request.

Within our sendPostRequest() method, we start off by creating an instance of NSOperationQueue named queue. This queue will be used by our NSURLSession instance to schedule the delegate calls and completion handlers. We then create an instance of the NSURLSessionConfiguration class using the defaultSessionConfiguration() method, which creates a default session configuration instance. We are able to modify the session configuration properties after we create it but, in this example, the default configuration is what we want.

After we create our session configuration, we create our URL string. This is the URL of the service we are connecting to. In this example, the URL is http://httpbin.org/post. Next, we run the URL string through the stringByAddingPercentEscapesUsingEncoding() method to make sure that our URL string is encoded properly. We then use the URL string that we just built to create an instance of the NSURL class named url. Since this is a POST request, this NSURL instance will represent the location of the web service that we are connecting to.

We now create an instance of the NSMutableURLRequest class using the NSURL instance that we just created. We use the NSMutableURLRequest class, instead of the NSURLRequest class, so that we can set the properties needed for our request. In this example, we set the HTTPMethod property; however, we can also set other properties, like the timeout interval, or add items to our HTTP header.

Now, we use our dictionaryToQueryString() function, which we showed you at the beginning of this section, to build the data that we are going to post to the server. We then use the dataUsingEncoding() function to make sure that our data is properly encoded prior to sending it to the server, and finally, the data is added to the HTTPBody property of the NSMutableURLRequest instance.

Next, we use the sessionConfiguration variable (an instance of NSURLSessionConfiguration) and the queue (an instance of NSOperationQueue) that we created at the beginning of the function to create an instance of the NSURLSession class. The NSURLSession class provides the API that we will use to connect to httpbin.org's POST service.

In this example, we use the dataTaskWithRequest() method of the NSURLSession instance to return an NSURLSessionDataTask instance named sessionTask. The sessionTask instance is what makes the request to httpbin.org's POST service. When we receive the response from the service, we use the handler callback to return both the NSURLResponse object and the NSData object. The NSURLResponse contains information about the response, and the NSData instance contains the body of the response.

Finally, we call the resume() method of the NSURLSessionDataTask instance to make the request to the web service. Remember, as we mentioned earlier, an NSURLSessionTask instance will not send the request to the service until we call the resume() method.

We can then call the sendPostRequest() method in exactly the same way that we called the sendGetRequest() method.
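For instance (this snippet is not in the original text, but it simply mirrors the earlier GET example and reuses the same printResultsClosure), calling the POST function might look like this:

    let aConnect = HttpConnect()
    aConnect.sendPostRequest(printResultsClosure)

Since httpbin.org echoes back the parameters that it receives, the printed response should contain the One and Two values that we placed in the request body.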
When developing applications that communicate with other devices and services over the Internet, it is good practice to verify that we have a network connection. When developing mobile applications, it is also good practice to verify that we are not using a mobile connection (3G, 4G, and so on) to transfer large amounts of data. Later in this article, we will look at how to verify that we have a network connection and what type of connection it is.

Encoding a URL
In both the sendGetRequest and sendPostRequest examples, we used the stringByAddingPercentEscapesUsingEncoding() function to make sure that we had a valid URL. While this function does make sure that we have a valid URL, it does not always return the URL that we expect.
In Apple's documentation of this function, it notes that it may be difficult to use this function to clean up unescaped or partially escaped URL strings where sequences are unpredictable. With this in mind, the following function is a much better way to convert a standard string to a valid URL:

private func urlEncode(s: String) -> String {
    return CFURLCreateStringByAddingPercentEscapes(nil, s, nil,
        "!*'\"();:@&=+$,/?%#[]",
        CFStringBuiltInEncodings.UTF8.rawValue) as! String
}

Within this function, we use the CFURLCreateStringByAddingPercentEscapes() function to replace the characters listed with the equivalent percent escape sequences as defined by the encoding. This is a much better function for converting a string instance to a valid URL than the stringByAddingPercentEscapesUsingEncoding() function.

Checking network connection
As we create applications that communicate with other devices and services over the Internet, eventually we will want to verify that we have a network connection prior to making the network calls. Another thing to consider when we are writing mobile applications is the type of network connection that the user has. As mobile application developers, we need to keep in mind that our users probably have a mobile data plan that limits the amount of data they can send/receive in a month. If they exceed that limit, they may have to pay an extra fee. If our application sends large amounts of data, it might be appropriate to warn our user prior to sending this data.

This next example will show how we can verify that we have a network connection and also tell us what type of connection we have. We will begin by importing the system configuration API and also defining an enum that contains the different connection types. We will import the system configuration API like this:

import SystemConfiguration

The ConnectionType enum is then defined like this:

public enum ConnectionType {
    case NONETWORK
    case MOBILE3GNETWORK
    case WIFINETWORK
}

Now, let's look at the code to check the network connection type:

public func networkConnectionType(hostname: NSString) -> ConnectionType {
    var reachabilityRef = SCNetworkReachabilityCreateWithName(nil, hostname.UTF8String)
    var reachability = reachabilityRef.takeUnretainedValue()
    var flags: SCNetworkReachabilityFlags = 0
    SCNetworkReachabilityGetFlags(reachabilityRef.takeUnretainedValue(), &flags)
    var reachable: Bool = (flags & UInt32(kSCNetworkReachabilityFlagsReachable) != 0)
    var needsConnection: Bool = (flags & UInt32(kSCNetworkReachabilityFlagsConnectionRequired) != 0)
    if reachable && !needsConnection {
        // What type of connection is available?
        var isCellularConnection = (flags & UInt32(kSCNetworkReachabilityFlagsIsWWAN) != 0)
        if isCellularConnection {
            // Cellular connection available
            return ConnectionType.MOBILE3GNETWORK
        } else {
            // Wi-Fi connection available
            return ConnectionType.WIFINETWORK
        }
    }
    // No or unknown connection
    return ConnectionType.NONETWORK
}

The networkConnectionType() function begins by creating an SCNetworkReachability reference. To create the SCNetworkReachabilityRef reference, we use the SCNetworkReachabilityCreateWithName() function, which creates a reachability reference to the host provided. In Swift, when we receive an unmanaged object, we should immediately convert it to a memory-managed object before we work with it.
This allows Swift to manage the memory for us. For this, we use the takeUnretainedValue() function.

After we get our SCNetworkReachabilityRef reference, we need to retrieve the SCNetworkReachabilityFlags enum from the reference. This is done with the SCNetworkReachabilityGetFlags() function. Once we have the network reachability flags, we can begin testing our connection.

We use the bitwise AND (&) operator to see whether the host is reachable and whether we need to establish a connection before we can connect to the host (needsConnection). If the reachable flag is false (we cannot currently connect to the host), or if needsConnection is true (we need to establish a connection before we can connect), we return NONETWORK, which means the host is currently not reachable. If we are able to connect to the host, we then check whether we have a cellular connection by checking the network reachability flags again. If we have a cellular connection, we return MOBILE3GNETWORK; otherwise, we assume we have a Wi-Fi connection and return WIFINETWORK.

If you are writing applications that connect to other devices or services over the Internet, I would recommend putting this function in a standard library, because you will want to check for network connectivity, and also the type of connection that you have, pretty regularly.
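As a quick, illustrative usage sketch (not from the original text), assuming the networkConnectionType() function shown above is available in scope (if it lives on a utility class in your project, call it on an instance instead), we could branch on the connection type like this:

let connectionType = networkConnectionType("www.apple.com")
switch connectionType {
case .NONETWORK:
    println("No network connection; skip the request")
case .MOBILE3GNETWORK:
    println("Cellular connection; consider warning the user before a large transfer")
case .WIFINETWORK:
    println("Wi-Fi connection; safe to transfer larger amounts of data")
}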
This is the main function and is used by the other three functions to retrieve an NSData object prior to converting it to the requested format.
stringFromURL(url: NSURL, completionHandler handler: RSNetworking.stringFromURLCompletionClosure): This retrieves an NSString object from a URL. This function uses the dataFromURL() function to retrieve an NSData object and then converts it to an NSString object.
dictionaryFromJsonURL(url: NSURL, completionHandler handler: RSNetworking.dictionaryFromURLCompletionClosure): This retrieves an NSDictionary object from a URL. This function uses the dataFromURL() function to retrieve an NSData object and then converts it to an NSDictionary object. The data returned from the URL should be in the JSON format for this function to work properly.
imageFromURL(url: NSURL, completionHandler handler: RSNetworking.imageFromURLCompletionClosure): This retrieves a UIImage object from a URL. This function uses the dataFromURL() function to retrieve an NSData object and then converts it to a UIImage object.

Now, let's look at an example of how to use the RSURLRequest API. In this example, we will make a request to Apple's iTunes search API, as we did in the Making an HTTP GET request section of this article:

func rsURLRequestExample() {
  var client = RSURLRequest()
  if let testURL = NSURL(string: "https://itunes.apple.com/search?term=jimmy+buffett&media=music") {
    client.dictionaryFromJsonURL(testURL, completionHandler: resultsHandler)
  }
}

Let's walk through this code. We begin by creating an instance of the RSURLRequest class and an instance of the NSURL class. The NSURL instance represents the URL of the service that we wish to connect to and, since we are making a GET request, it also contains the parameters that we are sending to the service. If we recall from the previous Making an HTTP GET request section, when we make an HTTP GET request, the parameters that we are sending to the service are contained within the URL itself. Apple's iTunes search API returns the results of the search in the JSON format. We can see that in the API documentation and also by printing out the results of the search to the console; therefore, we will use the dictionaryFromJsonURL() method of the RSURLRequest class to make our request to the service. We could also use the dataFromURL() or stringFromURL() methods to retrieve the data if we wanted to, but this method is specifically written to handle JSON data that is returned from a REST-based web service. The dictionaryFromJsonURL() method will take the data that is returned from the NSURLSession request and convert it to an NSDictionary object. We use the NSDictionary object here rather than Swift's Dictionary object because the web service could return multiple types (Strings, Arrays, Numbers, and so on), and if we recall, a Swift Dictionary object can have only a single type for the key and a single type for the value. When we call the dictionaryFromJsonURL() method, we pass the URL that we want to connect to and also a completion handler that will be called once the information from the service is returned and converted to an NSDictionary object. Now, let's look at our completion handler:

var resultsHandler: RSURLRequest.dictionaryFromURLCompletionClosure = {
  var response = $0
  var responseDictionary = $1
  var error = $2
  if error == nil {
    var res = "results"
    if let results = responseDictionary[res] as? NSArray {
      println(results[0])
    } else {
      println("Problem")
    }
  } else {
    // If there was an error, log it
    println("Error : \(error)")
  }
}

Our completion handler is of the RSURLRequest.dictionaryFromURLCompletionClosure type. This type is defined in the same way as the RSTransactionRequest.dictionaryFromRSTransactionCompletionClosure type, which allows us to use this same closure for RSURLRequest and RSTransactionRequest requests. We begin the completion handler by retrieving the three parameters that were passed and assigning them to the response, responseDictionary, and error variables. We then check the error variable to see if it is nil. If it is nil, then we received a valid response and can retrieve values from the NSDictionary object. In this example, we retrieve the NSArray value that is associated with the results key in the NSDictionary object that was returned from the service. This NSArray value will contain a list of items in the iTunes store that are associated with our search term. Once we have the NSArray value, we print out the first element of the array to the console. The RSURLRequest API is very good for making single GET requests to a service. Now, let's look at the RSTransaction and RSTransactionRequest API, which can be used for both POST and GET requests and should be used when we need to make multiple requests to the same service.

Summary

In today's world, it is essential that a developer has a good working knowledge of network development. In this article, we showed you how to use Apple's NSURLSession API, with other classes, to connect to REST-based web services. The NSURLSession API was written as a replacement for the older NSURLConnection API and is now the recommended API to use when making network requests. We ended the discussion with RSNetworking, which is an open source network library, written entirely in Swift, that I maintain. RSNetworking allows us to very quickly and easily add network functionality to our applications.

Resources for Article:

Further resources on this subject: Flappy Swift [article] Installing OpenStack Swift [article] Dragging a CCNode in Cocos2D-Swift [article]

Developing OpenCart Themes

Packt
06 Jul 2015
13 min read
In this article by Rupak Nepali, author of the book OpenCart Theme and Module Development, we learn about the basic features of OpenCart and its functionalities. Similarly, you will learn about global methods used in OpenCart. (For more resources related to this topic, see here.) The features of OpenCart The latest version of OpenCart at the time of writing this article is 2.0.1.1, which boasts of a multitude of great features: Modern and fully responsive design, OCMod (virtual file modification) A redesigned admin area and frontend More payment gateways included in the standard download Event notification system Custom form fields An unlimited module instance system to increase functionality Its pre-existing features include the following: Open source nature Templatable for changing the presentation section It also supports: Downloadable products Unlimited categories, products, manufacturers Multilanguage Multicurrency Product reviews and ratings PCI-compliant Automatic image resizing Multiple tax rates related products Unlimited information pages Shipping weight calculation Discount coupon system It is search engine optimized and has backup and restoration tools. It provides features such as printable invoices, sales reports, error logging, multistore features, multiple marketplace integration tools such as OpenBay Pro, and many more. Now, let's start with some basic general setting that will be helpful in creating our theme and module. Advantages of using Bootstrap in OpenCart themes The following can be the advantages of using Bootstrap in an OpenCart 2 theme: Speeds up development and saves time: There are many ready-made components, such as those available at http://getbootstrap.com/components/, which can be used directly in the template like we can use buttons, alert messages, many typography tables, forms, and many JavaScript functionalities. These are made responsive by default. So, there is no need to spend much time checking each device, which ultimately helps decrease development time and save time. Responsiveness: Bootstrap is made for devices of all shapes. So, using the conventions provided by bootstrap, it is easy to gain responsiveness in the site and design. Can upgrade easily: If we create our OpenCart theme with bootstrap, we can easily upgrade Bootstrap with little effort. There is no need to invest lots of time searching for upgrades of CSS and devices. Things to remember while making an OpenCart theme Here are a few things to take care of before stepping into HTML and CSS to create an OpenCart theme: In the header section, you can include a logo section, search section, currency section, language section, category menu section as well as mini-cart section. You can also include links to the home page, wish list page, account pages, shopping cart page, and checkout page. You can even show telephone numbers. These are provided by default. In the footer section, you can include links to information pages, customer service pages (such as a contact us page), a return page, a site map page, extra links (such as brands, gift vouchers, affiliates, and specials), and links to pages such as my account page, the order history, wish list, and newsletter. Include CSS modules in the style sheet, such as .box, .box .box-heading, .box .box-content, and so on, as clients can add many extra modules to fulfill their requirements. So, if we do not include these, then the design of the extra module may be hampered. 
Include CSS that supports three-column structure as well as right-column-activated-two-column structure, left-column-activated-two-column structure, and one-column structure in such a way that the following happens: if the left columns are deactivated, then the right-column structure is activated. If the right columns are deactivated, then the left-column structure is activated. Finally, if both the columns are deactivated, then one column structure is activated. The following diagram shows the four styles of a theme:

Include only the modified files and folders. If the CSS does not find the referenced files, then it takes them from the default folder. Try to create a folder structure like what is shown in this screenshot:

Prepare CSS for the buttons, checkout steps, cart pages, table, heading, and carousel.

Steps to create new theme based on default theme

The following steps help create a new theme based on the default theme:

Navigate to the catalog/view/theme folder and create a new folder. Let's name it packttheme. Now navigate to the catalog/view/theme/default folder, copy the image and stylesheet folder, go to the catalog/view/theme/packttheme folder, and paste it here. Go to the catalog/view/theme/packttheme folder and create a new folder, named template. Next, navigate to the catalog/view/theme/default/template folder, copy the common folder, go to the catalog/view/theme/packttheme/template folder, and paste the common folder there. Now the folder structure looks like this:

Open catalog/view/theme/packttheme/header.tpl in your favorite text editor, and find and replace the word default with packttheme. After performing the replacement, save header.tpl, and your default base theme is ready for use. Now log in to the admin section and go to Administrator | System | Setting. Edit the store for which you wish to change the theme, and click on the Store tab. Then choose packttheme in the template select box. Next, click on save. Now refresh the frontend, and packttheme will be activated. Now we can make changes to the CSS or the template files, and see the changes. Sometimes, we need theme-specific JavaScript. In such cases, we can create a javascript folder, or another similar folder as per the requirement.

Global library methods

OpenCart has many predefined methods that can be called anywhere, for example, in controller, model, as well as view template files. You can find system-level library files at system/library/. We will show you how methods can be written and what their functions are.

Affiliate (affiliate.php)

You can find most of the affiliate code written in the affiliate section. You can check out the files at catalog/controller/affiliate/ and catalog/model/affiliate/. Here is a list of methods we can use for the affiliate library:

When an e-mail and password are passed to this method, it logs in to the affiliate section if the username (e-mail) and password match among the affiliates. You can find this code at catalog/controller/affiliate/login.php on validate method:

$this->affiliate->login($email, $password);

The affiliate gets logged out. This means the affiliate ID is cleared and its session is destroyed. Also, the affiliate's first name, last name, e-mail, telephone, and fax are given an empty value:

$this->affiliate->logout();

Check whether the affiliate is logged in.
If you would like to show a message to the logged-in affiliate only, then you can use this code:

if ($this->affiliate->isLogged()) {
    echo "Welcome to the Affiliate Section";
} else {
    echo "You are not at Affiliate Section";
}

When we echo the following line, it will show the ID of the active affiliate:

$this->affiliate->getId();

Changes made in the catalog folder

The following changes need to be made in the catalog folder:

Go to catalog/model/shipping and copy weight.php. Paste it in the same folder and rename it to totalcost.php. Open it and find the following line:

class ModelShippingWeight extends Model {

Change the class name to this:

class ModelShippingTotalcost extends Model {

Now, find weight and replace all its occurrences with totalcost. After performing the replacement, find the following line of code:

$totalcost = $this->cart->gettotalcost();

Make the change as shown here:

$totalcost = $this->cart->getSubTotal();

Our requirement is to show the shipping cost as per the total cost purchased, so we have made the change you just saw. Now, find these lines of code:

if ((string)$cost != '') {
  $quote_data['totalcost_' . $result['geo_zone_id']] = array(
    'code' => 'totalcost.totalcost_' . $result['geo_zone_id'],
    'title' => $result['name'] . '(' . $this->language->get('text_totalcost') . ' ' . $this->totalcost->format($totalcost, $this->config->get('config_totalcost_class_id')) . ')',
    'cost' => $cost,
    'tax_class_id' => $this->config->get('totalcost_tax_class_id'),
    'text' => $this->currency->format($this->tax->calculate($cost, $this->config->get('totalcost_tax_class_id'), $this->config->get('config_tax')))
  );
}

In them, consider the following line:

'title' => $result['name'] . '(' . $this->language->get('text_totalcost') . ' ' . $this->totalcost->format($totalcost, $this->config->get('config_totalcost_class_id')) . ')',

Make this change, as we only need the name:

'title' => $result['name'],

Weight has different classes, such as kilogram, gram, pound, and so on, but our total cost purchased does not have any class specified, so we removed it. Save the file. Go to catalog/language/english/shipping and copy weight.php. Paste it in the same folder and rename it to totalcost.php. Open it, find Weight, and replace it with Total Cost. With these changes, the module is ready to install. Go to Admin | Extensions | Shipping, and then find Total Cost Based Shipping. Click on Install and grant the user permission to access and modify it. Then, edit it to configure it. In the General tab, change the status to Enabled. Other tabs are loaded as per the geo zones setting. The default geo zones for OpenCart are set as UK Shipping and UK VAT. Now, insert the value for Total Cost versus Rates. If the subtotal reaches 25, then the shipping cost is 10; if it reaches 50, then the shipping cost is 12; and if it reaches 100, then the shipping cost is 15. So, we have inserted 25:10, 50:12, 100:15. If the customer tries to order more than the inserted total cost, then no shipping is activated. In this way, you can now clone the shipping modules and make changes to the logic as per your requirement.

Database tables for feedback

Let's begin by creating tables in the database. We all know that OpenCart has multistore abilities, supports multiple languages, and can be displayed in multiple layouts, so when creating a database we should take this into consideration. Four tables are created: feedback, feedback_description, feedback_to_layout, and feedback_to_store.
A feedback table is created for saving feedback-related data, and feedback_description is created for storing multiple-language data for the feedback. A feedback_to_layout table is created for saving the association of layout to feedback, and a feedback_to_store table is created for saving the association of store to feedback. When we install OpenCart, we use oc_ as a database prefix as shown in the following image and query. A database prefix is defined in config.php where mostly you find something such as define('DB_PREFIX', 'oc_'); , or what you entered when installing OpenCart. We create the oc_feedback table. It saves the status, sort order, date added, and feedback ID. Then we create the oc_feedback_description table, where we will save the feedback writer's name, feedback given, and language ID, for multiple languages. Then we create the oc_feedback_to_store table to save the store ID and feedback ID and keep the relationship between feedback and whichever store's feedback is to be shown. Finally, we create the oc_feedback_to_layout table to save the feedback_id and layout_id to show the feedback for the layout you want. This diagram shows the database schema: Creating the template file for the frontend Go to catalog/view/theme/default/template and create a feedback folder. Then create a feedback.tpl file and insert this code: <?php echo $header; ?><div class="container"><ul class="breadcrumb"><?php foreach ($breadcrumbs as $breadcrumb) { ?><li><a href="<?php echo $breadcrumb['href']; ?>"><?php echo$breadcrumb['text']; ?></a></li><?php } ?></ul> The preceding code shows the breadcrumbs array. The following code shows the list of feedback: <div class="row"><?php echo $column_left; ?><?php if ($column_left && $column_right) { ?><?php $class = 'col-sm-6'; ?><?php } elseif ($column_left || $column_right) { ?><?php $class = 'col-sm-9'; ?><?php } else { ?><?php $class = 'col-sm-12'; ?><?php } ?><div id="content" class="<?php echo $class; ?>"><?php echo $content_top; ?><h1><?php echo $heading_title; ?></h1><?php foreach ($feedbacks as $feedback) { ?><div class="col-xs-12"><div class="row"><h4><?php echo $feedback['author']; ?></h4><p><?php echo $feedback['description']; ?></p><hr /></div></div><?php } ?> This shows the list of feedback authors and descriptions, as shown in the following screenshot: To show the pagination in the template file, we have to insert the following lines of code into whichever part we want to show the pagination in: <div class="row"><div class="col-sm-6 text-left"><?php echo $pagination; ?></div><div class="col-sm-6 text-right"><?php echo $results; ?></div></div> It shows the pagination in the template file, and we mostly show the pagination at the bottom, so paste it at the end of the feedback.tpl file: <?php if (!$feedbacks) { ?><p><?php echo $text_empty; ?></p><div class="buttons"><div class="pull-right"><a href="<?php echo $continue; ?>"class="btn btn-primary"><?php echo $button_continue; ?></a></div></div><?php } ?> If there are no feedbacks, then a message similar to There are no feedbacks to showis shown, as per the language file: <?php echo $content_bottom; ?></div><?php echo $column_right; ?></div></div><?php echo $footer; ?> In this way, the template file is completed and so is our feedback management. You can create a module and show it as a module as well. 
To view the list of feedback at the frontend, we have to use a link similar to http://www.example.com/index.php?route=feedback/feedback, and insert the link somewhere in the templates so that visitors will be able to see the feedback list. Like this, you can extend to show the feedback as a module, and create a form at the frontend from which visitors can submit feedback. You can find these at demo code. Try this first and check out the code if you need any help. We have made the code files as descriptive as possible. Summary Using OpenCart themes, you can customize the presentation layer of OpenCart. Likewise, if you can code OpenCart's extensions or modules, then you can also customize the functionality of the OpenCart e-commerce framework and make an e-commerce site easier to administer and look better Resources for Article: Further resources on this subject: OpenCart Themes: Using the jCarousel Plugin [Article] Implementing OpenCart Modules [Article] OpenCart Themes: Styling Effects of jQuery Plugins [Article]

Building the Untangle Game with Canvas and the Drawing API

Packt
06 Jul 2015
25 min read
In this article by Makzan, the author of HTML5 Game Development by Example: Beginner's Guide - Second Edition has discussed the new highlighted feature in HTML5—the canvas element. We can treat it as a dynamic area where we can draw graphics and shapes with scripts. (For more resources related to this topic, see here.) Images in websites have been static for years. There are animated GIFs, but they cannot interact with visitors. Canvas is dynamic. We draw and modify the context in the Canvas, dynamically through the JavaScript drawing API. We can also add interaction to the Canvas and thus make games. In this article, we will focus on using new HTML5 features to create games. Also, we will take a look at a core feature, Canvas, and some basic drawing techniques. We will cover the following topics: Introducing the HTML5 canvas element Drawing a circle in Canvas Drawing lines in the canvas element Interacting with drawn objects in Canvas with mouse events The Untangle puzzle game is a game where players are given circles with some lines connecting them. The lines may intersect the others and the players need to drag the circles so that no line intersects anymore. The following screenshot previews the game that we are going to achieve through this article: You can also try the game at the following URL: http://makzan.net/html5-games/untangle-wip-dragging/ So let's start making our Canvas game from scratch. Drawing a circle in the Canvas Let's start our drawing in the Canvas from the basic shape—circle. Time for action – drawing color circles in the Canvas First, let's set up the new environment for the example. That is, an HTML file that will contain the canvas element, a jQuery library to help us in JavaScript, a JavaScript file containing the actual drawing logic, and a style sheet: index.html js/ js/jquery-2.1.3.js js/untangle.js js/untangle.drawing.js js/untangle.data.js js/untangle.input.js css/ css/untangle.css images/ Put the following HTML code into the index.html file. It is a basic HTML document containing the canvas element: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Drawing Circles in Canvas</title> <link rel="stylesheet" href="css/untangle.css"> </head> <body> <header>    <h1>Drawing in Canvas</h1> </header> <canvas id="game" width="768" height="400"> This is an interactive game with circles and lines connecting them. </canvas> <script src="js/jquery-2.1.3.min.js"></script> <script src="js/untangle.data.js"></script> <script src="js/untangle.drawing.js"></script> <script src="js/untangle.input.js"></script> <script src="js/untangle.js"></script> </body> </html> Use CSS to set the background color of the Canvas inside untangle.css: canvas { background: grey; } In the untangle.js JavaScript file, we put a jQuery document ready function and draw a color circle inside it: $(document).ready(function(){ var canvas = document.getElementById("game"); var ctx = canvas.getContext("2d"); ctx.fillStyle = "GOLD"; ctx.beginPath(); ctx.arc(100, 100, 50, 0, Math.PI*2, true); ctx.closePath(); ctx.fill(); }); Open the index.html file in a web browser and we will get the following screenshot: What just happened? We have just created a simple Canvas context with circles on it. There are not many settings for the canvas element itself. We set the width and height of the Canvas, the same as we have fixed the dimensions of real drawing paper. 
Also, we assign an ID attribute to the Canvas for an easier reference in JavaScript: <canvas id="game" width="768" height="400"> This is an interactive game with circles and lines connecting them. </canvas> Putting in fallback content when the web browser does not support the Canvas Not every web browser supports the canvas element. The canvas element provides an easy way to provide fallback content if the canvas element is not supported. The content also provides meaningful information for any screen reader too. Anything inside the open and close tags of the canvas element is the fallback content. This content is hidden if the web browser supports the element. Browsers that don't support canvas will instead display that fallback content. It is good practice to provide useful information in the fallback content. For instance, if the canvas tag's purpose is a dynamic picture, we may consider placing an <img> alternative there. Or we may also provide some links to modern web browsers for the visitor to upgrade their browser easily. The Canvas context When we draw in the Canvas, we actually call the drawing API of the canvas rendering context. You can think of the relationship of the Canvas and context as Canvas being the frame and context the real drawing surface. Currently, we have 2d, webgl, and webgl2 as the context options. In our example, we'll use the 2D drawing API by calling getContext("2d"). var canvas = document.getElementById("game"); var ctx = canvas.getContext("2d"); Drawing circles and shapes with the Canvas arc function There is no circle function to draw a circle. The Canvas drawing API provides a function to draw different arcs, including the circle. The arc function accepts the following arguments: Arguments Discussion X The center point of the arc in the x axis. Y The center point of the arc in the y axis. radius The radius is the distance between the center point and the arc's perimeter. When drawing a circle, a larger radius means a larger circle. startAngle The starting point is an angle in radians. It defines where to start drawing the arc on the perimeter. endAngle The ending point is an angle in radians. The arc is drawn from the position of the starting angle, to this end angle. counter-clockwise This is a Boolean indicating the arc from startingAngle to endingAngle drawn in a clockwise or counter-clockwise direction. This is an optional argument with the default value false. Converting degrees to radians The angle arguments used in the arc function are in radians instead of degrees. If you are familiar with the degrees angle, you may need to convert the degrees into radians before putting the value into the arc function. We can convert the angle unit using the following formula: radians = p/180 x degrees Executing the path drawing in the Canvas When we are calling the arc function or other path drawing functions, we are not drawing the path immediately in the Canvas. Instead, we are adding it into a list of the paths. These paths will not be drawn until we execute the drawing command. There are two drawing executing commands: one command to fill the paths and the other to draw the stroke. We fill the paths by calling the fill function and draw the stroke of the paths by calling the stroke function, which we will use later when drawing lines: ctx.fill(); Beginning a path for each style The fill and stroke functions fill and draw the paths in the Canvas but do not clear the list of paths. Take the following code snippet as an example. 
After filling our circle with the color red, we add other circles and fill them with green. What happens to the code is both the circles are filled with green, instead of only the new circle being filled by green: var canvas = document.getElementById('game'); var ctx = canvas.getContext('2d'); ctx.fillStyle = "red"; ctx.arc(100, 100, 50, 0, Math.PI*2, true); ctx.fill();   ctx.arc(210, 100, 50, 0, Math.PI*2, true); ctx.fillStyle = "green"; ctx.fill(); This is because, when calling the second fill command, the list of paths in the Canvas contains both circles. Therefore, the fill command fills both circles with green and overrides the red color circle. In order to fix this issue, we want to ensure we call beginPath before drawing a new shape every time. The beginPath function empties the list of paths, so the next time we call the fill and stroke commands, they will only apply to all paths after the last beginPath. Have a go hero We have just discussed a code snippet where we intended to draw two circles: one in red and the other in green. The code ends up drawing both circles in green. How can we add a beginPath command to the code so that it draws one red circle and one green circle correctly? Closing a path The closePath function will draw a straight line from the last point of the latest path to the first point of the path. This is called closing the path. If we are only going to fill the path and are not going to draw the stroke outline, the closePath function does not affect the result. The following screenshot compares the results on a half circle with one calling closePath and the other not calling closePath: Pop quiz Q1. Do we need to use the closePath function on the shape we are drawing if we just want to fill the color and not draw the outline stroke? Yes, we need to use the closePath function. No, it does not matter whether we use the closePath function. Wrapping the circle drawing in a function Drawing a circle is a common function that we will use a lot. It is better to create a function to draw a circle now instead of entering several code lines. Time for action – putting the circle drawing code into a function Let's make a function to draw the circle and then draw some circles in the Canvas. We are going to put code in different files to make the code simpler: Open the untangle.drawing.js file in our code editor and put in the following code: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.drawCircle = function(x, y, radius) { var ctx = untangleGame.ctx; ctx.fillStyle = "GOLD"; ctx.beginPath(); ctx.arc(x, y, radius, 0, Math.PI*2, true); ctx.closePath(); ctx.fill(); }; Open the untangle.data.js file and put the following code into it: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.createRandomCircles = function(width, height) { // randomly draw 5 circles var circlesCount = 5; var circleRadius = 10; for (var i=0;i<circlesCount;i++) {    var x = Math.random()*width;    var y = Math.random()*height;    untangleGame.drawCircle(x, y, circleRadius); } }; Then open the untangle.js file. 
Replace the original code in the JavaScript file with the following code: if (untangleGame === undefined) { var untangleGame = {}; }   // Entry point $(document).ready(function(){ var canvas = document.getElementById("game"); untangleGame.ctx = canvas.getContext("2d");   var width = canvas.width; var height = canvas.height;   untangleGame.createRandomCircles(width, height);   }); Open the HTML file in the web browser to see the result: What just happened? The code of drawing circles is executed after the page is loaded and ready. We used a loop to draw several circles in random places in the Canvas. Dividing code into files We are putting the code into different files. Currently, there are the untangle.js, untangle.drawing.js, and untangle.data.js files. The untangle.js is the entry point of the game. Then we put logic that is related to the context drawing into untangle.drawing.js and logic that's related to data manipulation into the untangle.data.js file. We use the untangleGame object as the global object that's being accessed across all the files. At the beginning of each JavaScript file, we have the following code to create this object if it does not exist: if (untangleGame === undefined) { var untangleGame = {}; } Generating random numbers in JavaScript In game development, we often use random functions. We may want to randomly summon a monster for the player to fight, we may want to randomly drop a reward when the player makes progress, and we may want a random number to be the result of rolling a dice. In this code, we place the circles randomly in the Canvas. To generate a random number in JavaScript, we use the Math.random() function. There is no argument in the random function. It always returns a floating number between 0 and 1. The number is equal or bigger than 0 and smaller than 1. There are two common ways to use the random function. One way is to generate random numbers within a given range. The other way is generating a true or false value. Usage Code Discussion Getting a random integer between A and B Math.floor(Math.random()*B)+A Math.floor() function cuts the decimal point of the given number. Take Math.floor(Math.random()*10)+5 as an example. Math.random() returns a decimal number between 0 to 0.9999…. Math.random()*10 is a decimal number between 0 to 9.9999…. Math.floor(Math.random()*10) is an integer between 0 to 9. Finally, Math.floor(Math.random()*10) + 5 is an integer between 5 to 14. Getting a random Boolean (Math.random() > 0.495) (Math.random() > 0.495) means 50 percent false and 50 percent true. We can further adjust the true/false ratio. (Math.random() > 0.7) means almost 70 percent false and 30 percent true. Saving the circle position When we are developing a DOM-based game, we often put the game objects into DIV elements and accessed them later in code logic. It is a different story in the Canvas-based game development. In order to access our game objects after they are drawn in the Canvas, we need to remember their states ourselves. Let's say now we want to know how many circles are drawn and where they are, and we will need an array to store their position. Time for action – saving the circle position Open the untangle.data.js file in the text editor. Add the following circle object definition code in the JavaScript file: untangleGame.Circle = function(x,y,radius){ this.x = x; this.y = y; this.radius = radius; } Now we need an array to store the circles' positions. 
Add a new array to the untangleGame object: untangleGame.circles = []; While drawing every circle in the Canvas, we save the position of the circle in the circles array. Add the following line before calling the drawCircle function, inside the createRandomCircles function: untangleGame.circles.push(new untangleGame.Circle(x,y,circleRadius)); After the steps, we should have the following code in the untangle.data.js file: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.circles = [];   untangleGame.Circle = function(x,y,radius){ this.x = x; this.y = y; this.radius = radius; };   untangleGame.createRandomCircles = function(width, height) { // randomly draw 5 circles var circlesCount = 5; var circleRadius = 10; for (var i=0;i<circlesCount;i++) {    var x = Math.random()*width;    var y = Math.random()*height;    untangleGame.circles.push(new      untangleGame.Circle(x,y,circleRadius));    untangleGame.drawCircle(x, y, circleRadius); } }; Now we can test the code in the web browser. There is no visual difference between this code and the last example when drawing random circles in the Canvas. This is because we are saving the circles but have not changed any code that affects the appearance. We just make sure it looks the same and there are no new errors. What just happened? We saved the position and radius of each circle. This is because Canvas drawing is an immediate mode. We cannot directly access the object drawn in the Canvas because there is no such information. All lines and shapes are drawn on the Canvas as pixels and we cannot access the lines or shapes as individual objects. Imagine that we are drawing on a real canvas. We cannot just move a house in an oil painting, and in the same way we cannot directly manipulate any drawn items in the canvas element. Defining a basic class definition in JavaScript We can use object-oriented programming in JavaScript. We can define some object structures for our use. The Circle object provides a data structure for us to easily store a collection of x and y positions and the radii. After defining the Circle object, we can create a new Circle instance with an x, y, and radius value using the following code: var circle1 = new Circle(100, 200, 10); For more detailed usage on object-oriented programming in JavaScript, please check out the Mozilla Developer Center at the following link: https://developer.mozilla.org/en/Introduction_to_Object-Oriented_JavaScript Have a go hero We have drawn several circles randomly on the Canvas. They are in the same style and of the same size. How about we randomly draw the size of the circles? And fill the circles with different colors? Try modifying the code and then play with the drawing API. Drawing lines in the Canvas Now we have several circles here, so how about connecting them with lines? Let's draw a straight line between each circle. Time for action – drawing straight lines between each circle Open the index.html file we just used in the circle-drawing example. Change the wording in h1 from drawing circles in Canvas to drawing lines in Canvas. Open the untangle.data.js JavaScript file. We define a Line class to store the information that we need for each line: untangleGame.Line = function(startPoint, endPoint, thickness) { this.startPoint = startPoint; this.endPoint = endPoint; this.thickness = thickness; } Save the file and switch to the untangle.drawing.js file. We need two more variables. 
Add the following lines into the JavaScript file: untangleGame.thinLineThickness = 1; untangleGame.lines = []; We add the following drawLine function into our code, after the existing drawCircle function in the untangle.drawing.js file. untangleGame.drawLine = function(ctx, x1, y1, x2, y2, thickness) { ctx.beginPath(); ctx.moveTo(x1,y1); ctx.lineTo(x2,y2); ctx.lineWidth = thickness; ctx.strokeStyle = "#cfc"; ctx.stroke(); } Then we define a new function that iterates the circle list and draws a line between each pair of circles. Append the following code in the JavaScript file: untangleGame.connectCircles = function() { // connect the circles to each other with lines untangleGame.lines.length = 0; for (var i=0;i< untangleGame.circles.length;i++) {    var startPoint = untangleGame.circles[i];    for(var j=0;j<i;j++) {      var endPoint = untangleGame.circles[j];      untangleGame.drawLine(startPoint.x, startPoint.y,        endPoint.x,      endPoint.y, 1);      untangleGame.lines.push(new untangleGame.Line(startPoint,        endPoint,      untangleGame.thinLineThickness));    } } }; Finally, we open the untangle.js file, and add the following code before the end of the jQuery document ready function, after we have called the untangleGame.createRandomCircles function: untangleGame.connectCircles(); Test the code in the web browser. We should see there are lines connected to each randomly placed circle: What just happened? We have enhanced our code with lines connecting each generated circle. You may find a working example at the following URL: http://makzan.net/html5-games/untangle-wip-connect-lines/ Similar to the way we saved the circle position, we have an array to save every line segment we draw. We declare a line class definition to store some essential information of a line segment. That is, we save the start and end point and the thickness of the line. Introducing the line drawing API There are some drawing APIs for us to draw and style the line stroke: Line drawing functions Discussion moveTo The moveTo function is like holding a pen in our hand and moving it on top of the paper without touching it with the pen. lineTo This function is like putting the pen down on the paper and drawing a straight line to the destination point. lineWidth The lineWidth function sets the thickness of the strokes we draw afterwards. stroke The stroke function is used to execute the drawing. We set up a collection of moveTo, lineTo, or styling functions and finally call the stroke function to execute it on the Canvas. We usually draw lines by using the moveTo and lineTo pairs. Just like in the real world, we move our pen on top of the paper to the starting point of a line and put down the pen to draw a line. Then, keep on drawing another line or move to the other position before drawing. This is exactly the flow in which we draw lines on the Canvas. We just demonstrated how to draw a simple line. We can set different line styles to lines in the Canvas. For more details on line styling, please read the styling guide in W3C at http://www.w3.org/TR/2dcontext/#line-styles and the Mozilla Developer Center at https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors. Using mouse events to interact with objects drawn in the Canvas So far, we have shown that we can draw shapes in the Canvas dynamically based on our logic. There is one part missing in the game development, that is, the input. 
Now, imagine that we can drag the circles around on the Canvas, and the connected lines will follow the circles. In this section, we will add mouse events to the canvas to make our circles draggable. Time for action – dragging the circles in the Canvas Let's continue with our previous code. Open the html5games.untangle.js file. We need a function to clear all the drawings in the Canvas. Add the following function to the end of the untangle.drawing.js file: untangleGame.clear = function() { var ctx = untangleGame.ctx; ctx.clearRect(0,0,ctx.canvas.width,ctx.canvas.height); }; We also need two more functions that draw all known circles and lines. Append the following code to the untangle.drawing.js file: untangleGame.drawAllLines = function(){ // draw all remembered lines for(var i=0;i<untangleGame.lines.length;i++) {    var line = untangleGame.lines[i];    var startPoint = line.startPoint;    var endPoint = line.endPoint;    var thickness = line.thickness;    untangleGame.drawLine(startPoint.x, startPoint.y,      endPoint.x,    endPoint.y, thickness); } };   untangleGame.drawAllCircles = function() { // draw all remembered circles for(var i=0;i<untangleGame.circles.length;i++) {    var circle = untangleGame.circles[i];    untangleGame.drawCircle(circle.x, circle.y, circle.radius); } }; We are done with the untangle.drawing.js file. Let's switch to the untangle.js file. Inside the jQuery document-ready function, before the ending of the function, we add the following code, which creates a game loop to keep drawing the circles and lines: // set up an interval to loop the game loop setInterval(gameloop, 30);   function gameloop() { // clear the Canvas before re-drawing. untangleGame.clear(); untangleGame.drawAllLines(); untangleGame.drawAllCircles(); } Before moving on to the input handling code implementation, let's add the following code to the jQuery document ready function in the untangle.js file, which calls the handleInput function that we will define: untangleGame.handleInput(); It's time to implement our input handling logic. Switch to the untangle.input.js file and add the following code to the file: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.handleInput = function(){ // Add Mouse Event Listener to canvas // we find if the mouse down position is on any circle // and set that circle as target dragging circle. 
$("#game").bind("mousedown", function(e) {    var canvasPosition = $(this).offset();    var mouseX = e.pageX - canvasPosition.left;    var mouseY = e.pageY - canvasPosition.top;      for(var i=0;i<untangleGame.circles.length;i++) {      var circleX = untangleGame.circles[i].x;      var circleY = untangleGame.circles[i].y;      var radius = untangleGame.circles[i].radius;      if (Math.pow(mouseX-circleX,2) + Math.pow(        mouseY-circleY,2) < Math.pow(radius,2)) {        untangleGame.targetCircleIndex = i;        break;      }    } });   // we move the target dragging circle // when the mouse is moving $("#game").bind("mousemove", function(e) {    if (untangleGame.targetCircleIndex !== undefined) {      var canvasPosition = $(this).offset();      var mouseX = e.pageX - canvasPosition.left;      var mouseY = e.pageY - canvasPosition.top;      var circle = untangleGame.circles[        untangleGame.targetCircleIndex];      circle.x = mouseX;      circle.y = mouseY;    }    untangleGame.connectCircles(); });   // We clear the dragging circle data when mouse is up $("#game").bind("mouseup", function(e) {    untangleGame.targetCircleIndex = undefined; }); }; Open index.html in a web browser. There should be five circles with lines connecting them. Try dragging the circles. The dragged circle will follow the mouse cursor and the connected lines will follow too. What just happened? We have set up three mouse event listeners. They are the mouse down, move, and up events. We also created the game loop, which updates the Canvas drawing based on the new position of the circles. You can view the example's current progress at: http://makzan.net/html5-games/untangle-wip-dragging-basic/. Detecting mouse events in circles in the Canvas After discussing the difference between DOM-based development and Canvas-based development, we cannot directly listen to the mouse events of any shapes drawn in the Canvas. There is no such thing. We cannot monitor the event in any shapes drawn in the Canvas. We can only get the mouse event of the canvas element and calculate the relative position of the Canvas. Then we change the states of the game objects according to the mouse's position and finally redraw it on the Canvas. How do we know we are clicking on a circle? We can use the point-in-circle formula. This is to check the distance between the center point of the circle and the mouse position. The mouse clicks on the circle when the distance is less than the circle's radius. We use this formula to get the distance between two points: Distance = (x2-x1)2 + (y2-y1)2. The following graph shows that when the distance between the center point and the mouse cursor is smaller than the radius, the cursor is in the circle: The following code we used explains how we can apply distance checking to know whether the mouse cursor is inside the circle in the mouse down event handler: if (Math.pow(mouseX-circleX,2) + Math.pow(mouseY-circleY,2) < Math.pow(radius,2)) { untangleGame.targetCircleIndex = i; break; } Please note that Math.pow is an expensive function that may hurt performance in some scenarios. If performance is a concern, we may use the bounding box collision checking. When we know that the mouse cursor is pressing the circle in the Canvas, we mark it as the targeted circle to be dragged on the mouse move event. During the mouse move event handler, we update the target dragged circle's position to the latest cursor position. When the mouse is up, we clear the target circle's reference. Pop quiz Q1. 
Can we directly access an already drawn shape in the Canvas? Yes No Q2. Which method can we use to check whether a point is inside a circle? The coordinate of the point is smaller than the coordinate of the center of the circle. The distance between the point and the center of the circle is smaller than the circle's radius. The x coordinate of the point is smaller than the circle's radius. The distance between the point and the center of the circle is bigger than the circle's radius. Game loop The game loop is used to redraw the Canvas to present the later game states. If we do not redraw the Canvas after changing the states, say the position of the circles, we will not see it. Clearing the Canvas When we drag the circle, we redraw the Canvas. The problem is the already drawn shapes on the Canvas won't disappear automatically. We will keep adding new paths to the Canvas and finally mess up everything in the Canvas. The following screenshot is what will happen if we keep dragging the circles without clearing the Canvas on every redraw: Since we have saved all game statuses in JavaScript, we can safely clear the entire Canvas and draw the updated lines and circles with the latest game status. To clear the Canvas, we use the clearRect function provided by Canvas drawing API. The clearRect function clears a rectangle area by providing a rectangle clipping region. It accepts the following arguments as the clipping region: context.clearRect(x, y, width, height) Argument Definition x The top left point of the rectangular clipping region, on the x axis. y The top left point of the rectangular clipping region, on the y axis. width The width of the rectangular region. height The height of the rectangular region. The x and y values set the top left position of the region to be cleared. The width and height values define how much area is to be cleared. To clear the entire Canvas, we can provide (0,0) as the top left position and the width and height of the Canvas to the clearRect function. The following code clears all things drawn on the entire Canvas: ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); Pop quiz Q1. Can we clear a portion of the Canvas by using the clearRect function? Yes No Q2. Does the following code clear things on the drawn Canvas? ctx.clearRect(0, 0, ctx.canvas.width, 0); Yes No Summary You learned a lot in this article about drawing shapes and creating interaction with the new HTML5 canvas element and the drawing API. Specifically, you learned to draw circles and lines in the Canvas. We added mouse events and touch dragging interaction with the paths drawn in the Canvas. Finally, we succeeded in developing the Untangle puzzle game. Resources for Article: Further resources on this subject: Improving the Snake Game [article] Playing with Particles [article] Making Money with Your Game [article]

Getting the current weather forecast

Packt
06 Jul 2015
4 min read
In this article by Marco Schwartz, author of the book Intel Galileo Blueprints, we will configure Forecast.io. Forecast.io is a web API that returns weather forecasts of your exact location, and it updates them by the minute. The API has stunning maps, weather animations, temperature unit options, forecast lines, and more. (For more resources related to this topic, see here.) We will use this API to integrate global weather measurements with our local measurements. To do so, first go to the following link: http://forecast.io/ Then, look for the Forecast API link at the bottom of the page. It should be under the Developers section of the footer: You need to create your own Forecast.io account. You can do so by clicking on the Register button on the top right-hand side portion of the page. It will take you to the registration page where you need to provide an e-mail address and a strong password. Then, you will be required to get an API key, which will be displayed in the Forecast.io interface: Write this down somewhere, as you will need it soon. We will then use a Node.js module to get the forecast. The steps are described in more detail at the following link: https://github.com/mateodelnorte/forecast.io Next, you need to determine the latitude and longitude that you are currently in. Head on to the following link to generate it automatically: http://www.ip2location.com/ Then, we modify the main.js file: var Forecast = require('forecast.io'); var util = require('util'); This will set our API key with the one used/returned before: var options = { APIKey: 'your_api_key' }, forecast = new Forecast(options); Next, we'll define a new API route for the forecast. This is also where you need to put your longitude and latitude: app.get('/api/forecast', function(req, res) { forecast.get('latitude', 'longitude', function (err, result, data) {    if (err) throw err;    console.log('data: ' + util.inspect(data));    res.json(data); }); }); We will also modify the Interface.jade file with a new container: h3.row .col-md-4    div Forecast .col-md-4    div#summary Summary In the JavaScript file, we will refresh the field in the interface. We simply get the summary of the current weather conditions: $.get('/api/forecast', function(json_data) {      $('#summary').html('Summary: ' + json_data.currently.summary);    }); Again, the complete code can be found at the following link: https://github.com/marcoschwartz/galileo-blueprints-book After downloading, you can now build and upload the application to the board. Next comes the fun part, as we will test our creation. Go to the IP address of your board with port 3000: http://192.168.1.103:3000/ You will be able to see the interface as follows: Congratulations! You have been able to retrieve the data from Forecast.io and display it in your browser. If you're not getting the expected result, don't worry. You can go back and check everything. Ensure that you have downloaded and installed the correct software on your board. Also ensure that you correctly entered your API key in the application. Of course, you can modify your own interface as you wish. You can also add more fields from the answer response of Forecast.io. For instance, you can add a Fahrenheit measurement counterpart. Alternately, you can even add forecasts for the next hour and for the next 24 hours. Just ensure that you check the Forecast.io documentation for all the fields that you wish to use. 
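For instance, here is a minimal sketch of how the client-side refresh could surface the current temperature alongside the summary. It assumes the Forecast.io response exposes a currently.temperature field (returned in Fahrenheit by default) and that the Interface.jade template has been given an extra div#temperature container; both are assumptions to verify against the API documentation and your own interface:

$.get('/api/forecast', function(json_data) {
  // Existing summary field
  $('#summary').html('Summary: ' + json_data.currently.summary);
  // Assumed field: current temperature, Fahrenheit by default in Forecast.io responses
  $('#temperature').html('Temperature: ' + json_data.currently.temperature + ' °F');
});

The same pattern extends to hourly or daily forecasts by reading json_data.hourly or json_data.daily, again subject to the field names documented by Forecast.io.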
Summary In this article, we configured Forecast.io that is a web API that returns weather forecasts of your exact location and it updates the same every minute. Resources for Article: Further resources on this subject: Controlling DC motors using a shield [article] Getting Started with Intel Galileo [article] Dealing with Interrupts [article]

Building Your First BPM Application

Packt
06 Jul 2015
16 min read
In this article by Simone Fiorini and Arun V Gopalakrishnan, authors of the book Mastering jBPM6 , we will build our first BPM application by using the jBPM tool stack. This article will guide you through the following topics: Installing the jBPM tool stack Hacking the default installation configurations Modeling and deploying a jBPM project Embedding jBPM inside a standalone Java project This article gives you the hands-on flexibility of the jBPM tool stack and provides information on hacking the configuration and playing around. (For more resources related to this topic, see here.) Installing the jBPM tool stack A jBPM release comes with an installation zip file, which contains the essentials for the jBPM environment and tools for building a demo runtime for easy hands-on management of the jBPM runtime environment. For downloading jBPM: Go to http://jboss.org/jbpm | Download | Download jBPM 6.2.0.Final | jbpm-6.2.0.Final-installer-full.zip.Use the latest stable version. The content of the book follows the 6.2.0 release. Unzip and extract the installer content and you will find an install.html file that contains the helper documentation for installing a demo jBPM runtime with inbuilt projects. jBPM installation needs JDK 1.6+ to be installed and set as JAVA_HOME and the tooling for installation is done using ANT scripts (ANT version 1.7+). The tooling for installation is basically an ANT script, which is a straightforward method for installation and can be customized easily. To operate the tooling, the ANT script consists of the ANT targets that act as the commands for the tooling. The following figure will make it easy for you to understand the relevant ANT targets available in the script. Each box represents an ANT target and helps you to manage the environment. The basic targets available are for installing, starting, stopping, and cleaning the environment. To run the ANT target, install ANT 1.7+, navigate to the installer folder (by using the shell or the command line tool available in your OS), and run the target by using the following command: ant <targetname> The jBPM installer comes with a default demo environment, which uses a basic H2 database as its persistence storage. The persistence of jBPM is done using Hibernate; this makes it possible for jBPM to support an array of popular databases including the databases in the following list: Hibernate or Hibernate ORM is an object relational mapping framework and is used by jBPM to persist data to relation databases. For more details, see http://hibernate.org/. Databases Supported Details DB2 http://www-01.ibm.com/software/in/data/db2/ Apache Derby https://db.apache.org/derby/ H2 http://www.h2database.com/html/main.html HSQL Database Engine http://hsqldb.org/ MySQL https://www.mysql.com/ Oracle https://www.oracle.com/database/ PostgreSQL http://www.postgresql.org/ Microsoft SQL Server Database http://www.microsoft.com/en-in/server-cloud/products/sql-server/ For installing the demo, use the following command: ant install.demo The install command would install the web tooling and the Eclipse tooling, required for modeling and operating jBPM. ant start.demo This command will start the application server (JBoss) with the web tooling (the Kie workbench and dashboard) deployed in it and the eclipse tooling with all the plugins installed. That's it for the installation! Now, the JBoss application server should be running with the Kie workbench and dashboard builder deployed. 
You can now access the Kie workbench demo environment at http://localhost:8080/jbpm-console and log in by using the demo admin user called admin and the password admin.

Customizing the installation

The demo installation is a sandbox environment, which allows for an easy installation and reduces the time between getting the release and being able to play around with the stack. Even though it is very useful, once you have the initial stuff done and get serious about jBPM, you may want to install a jBPM environment that is closer to a production environment. We can customize the installer for this purpose. The following sections will guide you through the options available for customization.

Changing the database vendor

The jBPM demo sandbox environment uses an embedded H2 database as the persistence storage. jBPM provides out-of-the-box support for more widely used databases such as MySQL, PostgreSQL, and so on. Follow these steps to achieve a jBPM installation with these databases:

1. Update the build.properties file available in the root folder of the installation to choose the required database instead of H2. By default, configurations for MySQL and PostgreSQL are available. For the support of other databases, check the Hibernate documentation before configuring.
2. Update db/jbpm-persistence-JPA2.xml, setting the hibernate.dialect property to an appropriate Hibernate dialect for our database vendor.
3. Install the corresponding JDBC driver in the application server where we intend to deploy the jBPM web tooling.

Manually installing the database schema

By default, the database schema is created automatically by using the Hibernate autogeneration capabilities. However, if we want to install the database schemas manually, the corresponding DDL scripts are available in db/ddl-scripts for all major database vendors.

Creating your first jBPM project

jBPM provides a very structured way of creating a project. The structure considers application creation and maintenance for large organizations with multiple departments. This structure is recommended for use as it is a clean and secure way of managing the business process artifacts. The following image details the organization of a project in jBPM web tooling (or the Kie workbench).

The jBPM workbench comes with an assumption of one business process management suite for an organization. An organization can have multiple organization units, which internally contain multiple projects and form the root of the project structure; as the name implies, an organization unit represents a fraction of an organization. This categorization can be visualized in any business organization and is sometimes referred to as departments. In an ideal categorization, these organization units will be functionally different and thus will contain different business processes. Using the workbench, we can create multiple organization units.

The next categorization is the repository. A repository is a storage of business model artifacts such as business processes, business rules, and data models. A repository can be mapped to a functional classification within an organization, and multiple repositories can be set up when they hold projects whose artifacts have to be kept secluded from each other (for example, for security). Within a repository, we can create a project, and within a project, we can define and model business process artifacts.
This structure and abstraction will be very useful to manage and maintain BPM-based applications. Let us go through the steps in detail now. After installation, you need to log into the Kie workbench. Now, as explained previously, we can create a project. Therefore, the first step is to create an organizational unit: Click through the menu bars, and go to Authoring | Administration | Organizational Units | Manage Organizational Units.This takes you to the Organizational Unit Manager screen; here, we can see a list of organizational units and repositories already present and their associations. Click Add to create an organizational unit, and give the name of the organization unit and the user who is in charge of administering the projects in the organization unit. Now, we can add a repository, navigate through the menus, and go to Authoring | Administration | Repositories | New Repository. Now, provide a name for the repository, choose the organization unit, and create the repository. Creating the repository results in (internally) creating a Git repository. The default location of the Git repository in the workbench is $WORKING_DIRECTORY/.niogit and can be modified by using the following system property: -Dorg.uberfire.nio.git.dir. Now, we can create a project for the organization unit. Go to Authoring | Project Authoring | Project Explorer. Now, choose your organization unit (here, Mastering-jBPM) from the bread crumb of project categorization. Click New Item and choose Project. Now, we can create a project by entering a relevant project name. This gives details like project name and a brief summary of the project, and more importantly, gives the group ID, artifact ID, and version ID for the project. Further, Finish the creation of new project. Those of you who know Maven and its artifact structure, will now have got an insight on how a project is built. Yes! The project created is a Maven module and is deployed as one. Business Process Modeling Therefore, we are ready to create our first business process model by using jBPM. Go to New Item | Business Process: Provide the name for the business process; here, we are trying to create a very primitive process as an example. Now, the workbench will show you the process modeler for modeling the business process. Click the zoom button in the toolbar, if you think you need more real estate for modeling Basically, the workbench can be divided into five parts: Toolbar (on the top): It gives you a large set of tools for visual modeling and saving the model. Object library (on the left side of the canvas): It gives you all the standard BPMN construct stencils, which you can drag and drop to create a model. Workspace (on the center): You get a workspace or canvas on which you can draw the process models. The canvas is very intuitive; if you click on an object, it shows a tool set surrounding it to draw the next one or guide to the next object. Properties (on the right side of the canvas): It gives the property values for all the attributes associated with the business process and each of its constructs. Problems (on the bottom): It gives you the errors on the business process that you are currently modeling. The validations are done on save, and we have provisions to have autosave options. Therefore, we can start modeling out first process. I assume the role of a business analyst who wants to model a simple process of content writing. This is a very simple process with just two tasks, one human task for writing and the other for reviewing. 
We can attach the actor associated with the task by going to the Properties panel and setting the actor. In this example, I have set it as admin, the default user, for the sake of simplicity. Now, we can save the project by using the Save button; it asks for a check-in comment, which provides the comment for this version of the process that we have just saved. Process modeling is a continuous process, and if properly used, the check-in comment can helps us to keep track on the objectives of process updates. Building and deploying the project Even though the project created is minuscular with just a sample project, this is fully functional! Yes, we have completed a business process, which will be very limited in functionality, but with its limited set of functionalities (if any), it can be deployed and operated. Go to Tools | Project Editor, and click Build & Deploy, as shown in the following screenshot: To see the deployment listed, go to Deploy | Deployments to see Deployment Units This shows the effectiveness of jBPM as a rapid application builder using a business process. We can create, model, and deploy a project within a span of minutes. Running your first process Here, we start the operation management using jBPM. Now, we assume the role of an operational employee. We have deployed a process and have to create a process instance and run it. Go to Process Management | Process Definitions. Click New Instance and start the process. This will start a process instance. Go to Process Management | Process Instances to view the process instance details and perform life cycle actions on process instances.The example writing process consists of two human tasks. Upon the start of the process instance, the Write task is assigned to the admin. The assigned task can be managed by going to the task management functionality. Go to Tasks | Tasks List: In Tasks List, we can view the details of the human tasks and perform human task life cycle operations such as assigning, delegating, completing, and aborting a task. Embedding jBPM in a standalone Java application The core engine of jBPM is a set of lightweight libraries, which can be embedded in any Java standalone application. This gives the enterprise architects the flexibility to include jBPM inside their existing application and leverage the functionalities of BPM. Modeling the business process using Eclipse tooling Upon running the installation script, jBPM installs the web tooling as well as the Eclipse tooling. The Eclipse tooling basically consists of the following: jBPM project wizard: It helps you to create a jBPM project easily jBPM runtime: An easy way of choosing the jBPM runtime version; this associates a set of libraries for the particular version of jBPM to the project BPMN Modeler: It is used to model the BPMN process Drools plugin: It gives you the debugging and operation management capabilities within Eclipse Creating a jBPM project using Eclipse The Eclipse web tooling is available in the installer root folder. Start Eclipse and create a new jBPM Maven project: Go to File | New Project | jBPM Project (Maven). Provide the project name and location details; now, the jBPM project wizard will do the following:     Create a default jBPM project for you with the entire initial configuration setup     Attach all runtime libraries     Create a sample project     Set up a unit testing environment for the business process The Eclipse workbench is considerably similar to the web tooling workbench. 
Similar to web tooling, it contains the toolbox, workspace, palette showing the BPMN construct stencils, and the property explorer. We can create a new BPMN process by going to the New Project Wizard and selecting jBPM | BPMN2 Process. Give the process file a name and click Finish; this will create a default BPMN2 template file. The BPMN2 modeler helps to visually model the process by dragging and dropping BPMN constructs from the palette and connecting them using the tool set.

Deploying the process programmatically

For deploying and running the business process programmatically, you have to follow these steps. KIE is the abbreviation for Knowledge Is Everything.

Creating the knowledge base: Create the Kie Services, which is a hub giving access to the services provided by Kie:

KieServices ks = KieServices.Factory.get();

Using the Kie Services, create the Kie Container, which is the container for the knowledge base:

KieContainer kContainer = ks.getKieClasspathContainer();

Create and return the knowledge base with the input name:

KieBase kbase = kContainer.getKieBase("kbase");

Creating a runtime manager: The runtime manager manages the runtime, built with knowledge sessions and the Task Service, to create an executable environment for processes and user tasks. Create the JPA entity manager factory used for creating the persistence service, which communicates with the storage layer:

EntityManagerFactory emf = Persistence.createEntityManagerFactory(
    "org.jbpm.persistence.jpa");

Create the runtime builder, which is the DSL-style helper to create the runtime environment:

RuntimeEnvironmentBuilder builder = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultBuilder().entityManagerFactory(emf)
    .knowledgeBase(kbase);

Using the runtime environment, create the runtime manager:

RuntimeManager manager = RuntimeManagerFactory.Factory.get()
    .newSingletonRuntimeManager(builder.get(), "com.packt:introductory-sample:1.0");

Creating the runtime engine: Using the runtime manager, create the runtime engine that is fully initialized and ready for operation:

RuntimeEngine engine = manager.getRuntimeEngine(null);

Starting the process: Using the runtime engine, get a knowledge session and start the process:

KieSession ksession = engine.getKieSession();
ksession.startProcess("com.sample.bpmn.hello");

This creates and starts a process instance. From the runtime manager, we can also access the human task service and interact with its API. Go to Window | Show View | Others | Drools | Process Instances to view the created process instances.

Writing automated test cases

The jBPM runtime comes with a test utility, which serves as the unit testing framework for automated test cases. The unit testing framework uses and extends the capabilities of the JUnit testing framework; it provides the JUnit life cycle methods and the jBPM runtime environment for testing, and tears down the runtime manager after test execution. Helper methods manage the knowledge base and the knowledge session, get work item handlers, and provide assertions to check process instances at various stages. For creating a JUnit test case, create a class extending org.jbpm.test.JbpmJUnitBaseTestCase. We can initialize the jBPM runtime by using the previous steps and assert by using the helper methods provided by org.jbpm.test.JbpmJUnitBaseTestCase.
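To make this concrete, the following is a minimal sketch of such a test case for the sample hello process started in the previous section. The class name and the BPMN2 file path (com/sample/hello.bpmn2) are assumptions that you would adjust to your own project, and the sketch assumes the process has no wait states, so it completes as soon as it is started:

import org.jbpm.test.JbpmJUnitBaseTestCase;
import org.junit.Test;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.process.ProcessInstance;

public class HelloProcessTest extends JbpmJUnitBaseTestCase {

    public HelloProcessTest() {
        // set up a data source and enable session persistence for the test run
        super(true, true);
    }

    @Test
    public void processRunsToCompletion() {
        // build a runtime manager from the BPMN2 resource on the test classpath
        RuntimeManager manager = createRuntimeManager("com/sample/hello.bpmn2");
        RuntimeEngine engine = getRuntimeEngine();
        KieSession ksession = engine.getKieSession();

        // start the process and verify that it completes
        ProcessInstance processInstance =
                ksession.startProcess("com.sample.bpmn.hello");
        assertProcessInstanceCompleted(processInstance.getId(), ksession);

        // clean up the runtime engine created for this test
        manager.disposeRuntimeEngine(engine);
    }
}

Here, super(true, true) asks the base class to set up a data source and session persistence, while the createRuntimeManager and getRuntimeEngine helpers take care of the wiring that we did by hand with RuntimeEnvironmentBuilder in the previous section.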
The key helper here is the assertion that the process instance ran to completion:

assertProcessInstanceCompleted(processInstance.getId(), ksession);

Change management – updating deployed process definitions

We have modeled a business process and deployed it; the application end users will create process instances and fulfill their goals by using the business process. Now, as the organization evolves, we need a change in the process; for example, the organization has decided to add one more department. Therefore, we have to update the associated business processes. Technically, in jBPM, we cannot update an already deployed process definition; we need a workaround. jBPM suggests three strategies for process migration:

Proceed: We introduce the new process definition and retire the old definition. Retiring should be taken care of by the application so that all process instance calls for the process are redirected to the new process definition.
Abort: The existing process is aborted, and we can restart the process instance with the updated process definition. We have to be very careful with this approach if the changes are not compatible with the state of the process instances; it can show abrupt behaviors depending on how complex your process definition is.
Transfer: The process instance is migrated to the new process definition; that is, the states of the process instance and its activity instances are mapped. jBPM provides a generic process upgrade API out of the box, which can be used as an example.

Summary

This article has given you "Hello world" hands-on experience with jBPM. With your jBPM installation ready, we can now dive deep into the details of the functional components of jBPM.

Resources for Article:

Further resources on this subject:
BPMS Components [article]
Installing Activiti [article]
Participating in a business process (Intermediate) [article]

Subtitles – tracking the video progression

Packt
06 Jul 2015
10 min read
In this article by Roberto Ulloa, author of the book Kivy – Interactive Applications and Games in Python Second Edition, we will learn how to use the progression of a video to display subtitles at the right moment. (For more resources related to this topic, see here.)

Let's add subtitles to our application. We will do this in four simple steps:

Create a Subtitle widget (subtitle.kv) derived from the Label class that will display the subtitles
Place a Subtitle instance (video.kv) on top of the video widget
Create a Subtitles class (subtitles.py) that will read and parse a subtitle file
Track the Video progression (video.py) to display the corresponding subtitle

Step 1 involves the creation of a new widget in the subtitle.kv file:

1. # File name: subtitle.kv
2. <Subtitle@Label>:
3.     halign: 'center'
4.     font_size: '20px'
5.     size: self.texture_size[0] + 20, self.texture_size[1] + 20
6.     y: 50
7.     bcolor: .1, .1, .1, 0
8.     canvas.before:
9.         Color:
10.            rgba: self.bcolor
11.         Rectangle:
12.             pos: self.pos
13.             size: self.size

There are two interesting elements in this code. The first one is the definition of the size property (line 5). We define it as 20 pixels bigger than the texture_size width and height. The texture_size property indicates the size of the text determined by the font size and text, and we use it to adjust the Subtitle widget size to its content. The texture_size is a read-only property because its value is calculated from other parameters, such as the font size and the height available for text display. This means that we will read from this property but not write to it.

The second element is the creation of the bcolor property (line 7) to store a background color, and how the rgba color of the rectangle has been bound to it (line 10). The Label widget (like many other widgets) doesn't have a background color, and creating a rectangle is the usual way to create such features. We add the bcolor property in order to change the color of the rectangle from outside the instance. We cannot directly modify parameters of the vertex instructions; however, we can create properties that control parameters inside the vertex instructions.

Let's move on to Step 2 mentioned earlier. We need to add a Subtitle instance to our current Video widget in the video.kv file:

14. # File name: video.kv
15. ...
16. #:set _default_surl      "http://www.ted.com/talks/subtitles/id/97/lang/en"
18. <Video>:
19.     surl: _default_surl
20.     slabel: _slabel
21.     ...
23.     Subtitle:
24.         id: _slabel
25.         x: (root.width - self.width)/2

We added another constant variable called _default_surl (line 16), which contains the URL of the subtitle file for the corresponding TED video. We set this value to the surl property (line 19), which we just created to store the subtitles' URL. We added the slabel property (line 20), which references the Subtitle instance through its ID (line 24). Then we made sure that the subtitle is centered (line 25).

In order to start Step 3 (parsing the subtitle file), we need to take a look at the format of the TED subtitles:

26. {
27.     "captions": [{
28.         "duration":1976,
29.         "content": "When you have 21 minutes to speak,",
30.         "startOfParagraph":true,
31.         "startTime":0,
32.     }, ...

TED uses a very simple JSON format (https://en.wikipedia.org/wiki/JSON) with a list of captions. Each caption contains four keys, but we will only use duration, content, and startTime.
We need to parse this file, and luckily Kivy provides a UrlRequest class (line 34) that will do most of the work for us. Here is the code for subtitles.py that creates the Subtitles class:

33. # File name: subtitles.py
34. from kivy.network.urlrequest import UrlRequest
36. class Subtitles:
38.     def __init__(self, url):
39.         self.subtitles = []
40.         req = UrlRequest(url, self.got_subtitles)
42.     def got_subtitles(self, req, results):
43.         self.subtitles = results['captions']
45.     def next(self, secs):
46.         for sub in self.subtitles:
47.             ms = secs*1000 - 12000
48.             st = 'startTime'
49.             d = 'duration'
50.             if ms >= sub[st] and ms <= sub[st] + sub[d]:
51.                 return sub
52.         return None

The constructor of the Subtitles class receives a URL (line 38) as a parameter. Then, it makes the request by instantiating the UrlRequest class (line 40). The first parameter of the class instantiation is the URL of the request, and the second is the method that is called when the result of the request is returned (downloaded). Once the request returns the result, the got_subtitles method is called (line 42). The UrlRequest extracts the JSON and places it in the second parameter of got_subtitles. All we have to do is put the captions in a class attribute, which we called subtitles (line 43).

The next method (line 45) receives the seconds (secs) as a parameter and traverses the loaded JSON dictionary in order to search for the subtitle that corresponds to that time. As soon as it finds one, the method returns it. We subtract 12,000 milliseconds (line 47, ms = secs*1000 - 12000) because the TED videos have an introduction of approximately 12 seconds before the talk starts.

Everything is ready for Step 4, in which we put the pieces together in order to see the subtitles working. Here are the modifications to the header of the video.py file:

53. # File name: video.py
54. ...
55. from kivy.properties import StringProperty
56. ...
57. from kivy.lang import Builder
59. Builder.load_file('subtitle.kv')
61. class Video(KivyVideo):
62.     image = ObjectProperty(None)
63.     surl = StringProperty(None)

We imported StringProperty and added the corresponding surl property (lines 55 and 63). We will use this property by the end of this chapter, when we can switch TED talks from the GUI. For now, we will just use _default_surl defined in video.kv. We also loaded the subtitle.kv file (line 59). Now, let's analyze the rest of the changes to the video.py file:

64.     ...
65.     def on_source(self, instance, value):
66.         self.color = (0,0,0,0)
67.         self.subs = Subtitles(self.surl)
68.         self.sub = None
70.     def on_position(self, instance, value):
71.         next = self.subs.next(value)
72.         if next is None:
73.             self.clear_subtitle()
74.         else:
75.             sub = self.sub
76.             st = 'startTime'
77.             if sub is None or sub[st] != next[st]:
78.                 self.display_subtitle(next)
80.     def clear_subtitle(self):
81.         if self.slabel.text != "":
82.             self.sub = None
83.             self.slabel.text = ""
84.             self.slabel.bcolor = (0.1, 0.1, 0.1, 0)
86.     def display_subtitle(self, sub):
87.         self.sub = sub
88.         self.slabel.text = sub['content']
89.         self.slabel.bcolor = (0.1, 0.1, 0.1, .8)
90. (...)
We introduced a few code lines to the on_source method in order to initialize the subs attribute with a Subtitles instance (line 67), using the surl property, and to initialize the sub attribute that contains the currently displayed subtitle (line 68), if any.

Now, let's study how we keep track of the progression to display the corresponding subtitle. When the video plays inside the Video widget, the on_position event is triggered every second. Therefore, we implemented the logic to display the subtitles in the on_position method (lines 70 to 78). Each time the on_position method is called (each second), we ask the Subtitles instance for the next subtitle (line 71). If nothing is returned, we clear the subtitle with the clear_subtitle method (line 73). If there is a subtitle for the current second (line 74), then we make sure that there is no subtitle being displayed, or that the returned subtitle is not the one that we already display (line 77). If the conditions are met, we display the subtitle using the display_subtitle method (line 78).

Notice that the clear_subtitle (lines 80 to 84) and display_subtitle (lines 86 to 89) methods use the bcolor property in order to hide the subtitle. This is another trick to make a widget invisible without removing it from its parent. Let's take a look at the current result of our videos and subtitles in the following screenshot:

Summary

In this article, we discussed how to control a video and how to associate the subtitles element of the screen with it. We also saw how the Video widget synchronizes the subtitles, which we receive in a JSON format file, with the progression of the video, and how it offers a responsive control bar. We learned how to control the video's progression and add subtitles to it.

Resources for Article:

Further resources on this subject:
Moving Further with NumPy Modules [article]
Learning Selenium Testing Tools with Python [article]
Python functions – Avoid repeating code [article]

Creating Mobile Dashboards

Packt
06 Jul 2015
13 min read
In this article by Taha M. Mahmoud, author of the book Learning SAP BusinessObjects Dashboards, we will learn how to deliver dashboards to mobile devices. In the last few decades, the usage of smart devices, such as mobile phones, tablets, and smart watches, has increased dramatically. Now, the hardware of smart devices is powerful enough to handle all that we need it to. Indeed, we are now carrying smart devices with us all the time, and they act like small, but powerful, computers. Here comes the idea of Mobile BI. We need to select the most important information for the BI end user to track on the go, using their smart devices. According to the International Data Corporation (IDC: http://www.idc.com/), by 2017, 87 percent of all connected devices sold will be tablets and smartphones. (For more resources related to this topic, see here.)

SAP BusinessObjects has different BI tools that handle different user requirements. They have the following BI reporting tools:

Web Intelligence (Webi) reports: This tool can be used to help business users execute their day-to-day reports. The main advantage of Webi reports is that you can schedule them to run and be sent directly to users by e-mail. This is a very powerful tool, because it is very similar to MS Excel, so business users can start using it directly, without a lot of effort and time.
Crystal reports: This is one of the most famous report types. We call it a pixel-perfect report because we can control and adjust our report design, up to the pixel level. Normally, we use these reports to create the annual and quarterly reports used and published by an organization's end users.
Dashboards: This is the tool that you are learning in this book. Dashboards are a powerful way to deliver information to the top management and executives.
Lumira: This is a data discovery tool that can help data and business analysts explore data. It is very powerful and can connect to any data source. In minutes, you can come up with neat and wonderful dashboards and charts with this tool. The idea behind this tool is that you don't have initial requirements to implement, but you have data to explore instead.
Explorer: This is another powerful data discovery tool, but its main focus is on analyzing data rather than presenting it. You can use it to explore the information you have.
Design studio: This is a new dashboard tool. It was released at the end of 2013 for designing and implementing dashboards. It needs coding experience, as it is heavily dependent on JavaScript. Someone with technical skills should use this tool to produce dashboards, and then make them available to the end user. A lay user will not be able to create their own dashboards using this tool, at least at the current stage. SAP is focusing on upgrading this tool to be their main dashboard tool.

The following matrix shows the supported content on each device (as per Q4-2014):

BI document      iPad    iPhone    Android tablet    Android phone
Webi             Yes     Yes       Yes               Yes
Crystal          Yes     Yes       No                No
Dashboards       Yes     No        Yes               No
Design studio    Yes     Yes       Yes               Yes
Explorer         Yes     Yes       No                No
Lumira           Yes     No        No                No

SAP BO Dashboards can be viewed only on tablets, and not on cell phones (as per the currently available release, SAP BO BI platform 4.1 SP5). In this article, we will focus on the following topics:

Creating dashboards for mobile devices
Developing mobile dashboards
Publishing mobile dashboards
Accessing mobile dashboards

Creating dashboards for smart devices

Mobility is one of the main enterprise goals for all market leaders.
Mobility is a term that refers to providing company services for the customer through smart devices (including mobile devices). So, having a mobile application is one of the factors of an organization's success. Facebook, Google, Twitter, and many other enterprises across the globe are competing with each other to reach people everywhere. You don’t need to use a computer to buy something from Amazon.com. All that you need now is to install the Amazon application on your device and buy anything, at anytime. The fact that we are carrying our smart devices all the time, and using them regularly, makes them a golden place for reaching people. Business Intelligence also found that smart devices are perfect for delivering BI content to the end users, and to achieve the concept of giving the right information to the right person at the right time. We can use SAP BO dashboards to create one of the following dashboard types: Desktop Mobile Desktop and mobile If we are targeting desktop users only, we don’t need to worry about the compatibility of dashboard components, as all components are compatible with desktops; whereas, we need to take care if we are targeting mobile users. We need to avoid unsupported components and find alternatives and workarounds, as we will discuss in this article. Here, we must mention one big difference between desktop and mobile dashboards, which is the technology used in each. The technology used in desktop dashboards is Macromedia Flash, while the technology used in mobile dashboards is HTML5. This is the main reason that all the desktop dashboard components that we discussed throughout this book are not supported in the mobile version. You will learn how to find unsupported components and how to avoid them in the first place, if you are targeting mobile users. The second thing that we need to be aware of is the hardware limitation on mobile devices in comparison with powerful desktop and client tools. We need to consider using lightweight dashboard components and presenting the most important and critical information, which the end user really wants to track and monitor on the go. These types of KPIs need immediate action and can't wait until the user returns to their office. Here are the main steps for creating mobile dashboards: Design phase: We need to consider which KPIs should be displayed in the mobile dashboard version and how they should be displayed Development phase: We need to use supported dashboard components and connections only Publishing phase: We need to publish our dashboard and make it available for end users Accessing dashboard: We need to install the SAP BI application and configure it to connect to our SAP BO system Next, we will discuss each phase. Developing a mobile SAP BO Dashboard We can use SAP BO Dashboards to develop dashboards for desktops as well as mobiles. We just need to consider the following if we are developing for mobile dashboards: Using only supported mobile components Using the recommended canvas size for iPads Using only supported mobile connections Now, we will discuss each of these topics. Using supported mobile components To make sure that we are using only supported mobile dashboard components, we can use the Mobile Only filter from the Components panel. You can see this filter in the following screenshot: You can see a list of all supported Mobile dashboard components and connections in Appendix 3, Supported Mobile Components and Connections. 
We can also use the Mobile Compatibility panel to highlight unsupported dashboard components. This panel is very useful because it is also used to highlight unsupported functions, such as Entry Effect. Unsupported features will simply be lost when you view the dashboard on a mobile phone. You can see the Mobile Compatibility panel in the following screenshot: Using the recommended canvas size for iPads We need also to take care of the canvas size, as the recommended canvas size is 1024 x 768 pixels if we are developing a dashboard for mobiles. We can change the canvas size from the following places: Document properties Preferences Changing the canvas size from the preferences will make it the default canvas size for all new dashboards, whereas changing it from document properties will change the canvas size for the current dashboard only. If we have selected any canvas size other than the recommended one, we will get the following warning in the Mobile Compatibility panel: Using a supported mobile data connection The next thing we need to take care of is the external data connection type, as only a few of them are supported by mobile dashboards. You can see the Data Manager window, selected via data connections, in the following screenshot: Next, we will see how to preview and publish our mobile dashboard. Publishing mobile dashboards Publishing a dashboard will make it available for end users. After developing our mobile dashboard, we will need to do the following: Preview our dashboard to see what it will look like on a tablet Publish our dashboard on the SAP BO server as a mobile dashboard Add our dashboard to the mobile category to make it available for mobile users Previewing our mobile dashboard There are two modes for previewing mobile dashboards: Mobile (Fit to Screen) Mobile (Original Size) You can see the three available options in the following screenshot. We have already explained how to use the first one. The main difference between the other two options is that Mobile (Fit to Screen) will fit the dashboard to the screen size, and the other will display the original size. We need to note that the previewing option will affect only the preview mode. It will not affect the mobile dashboard after it is published. A mobile preview exactly simulates what we will see on the iPad. You can see a preview of a mobile dashboard in the following screenshot: You may notice that some components, such as a pie chart for example, will create a different user experience on the mobile preview compared to the desktop preview. This is because a desktop preview generates a flash file, whereas a mobile preview generates an HTML5 file. Publishing our mobile dashboard The next step is to publish our dashboard on the SAP BO server. We have the following options: Save to Platform | Mobile Only Save to Platform | Desktop Only Save to Platform | Desktop and Mobile We can access the Save to Platform menu from the File menu and see these options: The options are self-explanatory. The Mobile Only option will publish the dashboard as an HTML5 object only and can be accessed only from mobile devices. The Desktop Only option will generate a flash file and can be accessed only by desktop clients. Finally, the Desktop and Mobile option will generate both HTML5 and desktop, and can be accessed by both clients. Adding our dashboard to the mobile category After publishing our mobile dashboard, we need to make it available for mobile users. 
By default, any dashboard or report under the Mobile category will be displayed for mobile users. To do this, we should follow the steps: Access the BI Launch Pad (the URL will be <SAP_BO_SERVER_NAME:8080>/BOE/BI). Navigate to the folder that we used to publish our mobile dashboard. Right-click on that dashboard and add it to the Mobile category. You can see these steps in the following screenshot: You can see the last step here: You may need to refer to the SAP BusinessObjects administrator guide to get more information on how to set up and configure a mobile server on the SAP BO server. We used the default configuration settings here. Next, you will learn how to access it from an iPad or an Android tablet. Accessing and using mobile dashboards The first thing we need to do before accessing our mobile dashboard is to download the SAP BusinessObjects BI mobile application from the following links: Android: https://play.google.com/store/apps/details?id=com.sap.mobi&hl=en Mac OS: https://itunes.apple.com/en/app/sap-businessobjects-mobile/id441208302?mt=8 The most strongly recommended mobile device for displaying SAP BO dashboards is the iPad. Starting from SAP BO BI platform 4.1 SP1, we can also view SAP BO dashboards on Android tablets. Then, we need to configure SAP BO mobile application to connect to our server by following these steps: You may need to create a VPN, if you want to access your mobile dashboards from outside your organization. Open the SAP BO Mobile application (SAP BI). Tap on Connect and select Create New Connection. Enter BO Server in the connection name. Enter Mobile Server URL and CMC in the connection details (this information will depend on your SAP BO server information). Fill in Authentication Details (username and password). Establish the connection that you've already created. You should be able to see our eFashion dashboard. Tap it to display it on your tablet. Introducing the main features of the SAP BI application We can use the SAP BI application as a master place to view SAP BI content produced by different SAP BI tools, such as Web Intelligence, dashboards, Lumira, and so on. We can perform the following actions: Viewing document information and adding it to favorites Annotation E-mail For a complete user guide to SAP BI applications, refer to the following links: Mac OS: http://help.sap.com/bomobileios Android: https://help.sap.com/bomobileandroid Viewing document information and adding a document to favorites We can click on the three dots beside any BI document (report or dashboard) to view the document information (metadata), such as who the author is, and what the type of this document is. A small yellow star will appear on top of the document when it's added to favorites. You can see this menu in the following screenshot: Using the Annotation feature We can use this feature to take a screenshot of the current dashboard view and start annotating and adding our comments to it. Then, we can send it to the corresponding person. You can even add voice comments, which make it ideal to communicate results with others. This feature is shown here: E-mailing dashboards We can use this feature to e-mail the BI document to a specific person. It is the same as what we did in the annotation feature, except that it will send a plain image of the current view. Summary In this article, you learned how to create a mobile dashboard using SAP BO Dashboards. Then, we discussed how to find unsupported mobile dashboard components using the mobile compatibility panel. 
As a best practice, we should use the Mobile Only filter from the Components panel if we are targeting mobile devices for our dashboard. Next, you learned how to preview and publish your dashboard, so that it can be used and accessed by mobile devices. After that, we had an overview of the main features of a SAP BI mobile application, such as annotation and sharing via e-mails. Throughout this article, you learned how to create a dashboard step by step, starting from the analysis phase, right up to design and development. The main challenge that you will face later is how to present your information in a meaningful way, and how to get the maximum value of your information. I hope that you enjoyed reading this article, and I am looking forward to your input and comments.

Resources for Article:

Further resources on this subject:
Authorizations in SAP HANA [article]
SAP HANA integration with Microsoft Excel [article]
SAP HANA Architecture [article]

JIRA Agile for Scrum

Packt
03 Jul 2015
24 min read
In this article, by Patrick Li, author of the book, JIRA Agile Essentials, we will learn that Scrum is one of the agile methodologies supported by JIRA Agile. Unlike the old days, when a project manager would use either a spreadsheet or Microsoft project to keep track of the project progress, with JIRA Agile and Scrum, team participation is encouraged, to improve collaboration between different project stakeholders. (For more resources related to this topic, see here.) Roles in Scrum In any Scrum team, there are three primary roles. Although each role has its own specific functions and responsibilities, you need all three to work together as a cohesive team in order to be successful at Scrum. Product owner The product owner is usually the product or project manager, who is responsible for owning the overall vision and the direction of the product that the team is working on. As the product owner, they are in charge of the features that will be added to the backlog list, the priority of each feature, and planning the delivery of these features through sprints. Essentially, the product owner is the person who makes sure that the team is delivering the most value for the stakeholders in each sprint. The Scrum master The Scrum master's job is to make sure that the team is running and using Scrum effectively and efficiently; so, they should be very knowledgeable and experienced with using Scrum. The Scrum master has the following two primary responsibilities: To coach and help everyone on the team to understand Scrum; this includes the product owner, delivery team, as well as external people that the project team interacts with. In the role of a coach, the Scrum master may help the product owner to understand and better manage the backlog and plan for sprints as well as explain the process with the delivery team. To improve the team's Scrum process by removing any obstacles in the way. Obstacles, also known as impediments, are anything that may block or negatively affect the team's adoption of Scrum. These can include things such as poorly-organized product backlog or the lack of support from other teams/management. It is the responsibility of the Scrum master to either directly remove these impediments or work with the team to find a solution. Overall, the Scrum master is the advocate for Scrum, responsible for educating, facilitating, and helping people adopt and realize the advantages of using it. The delivery team The delivery team is primarily responsible for executing and delivering the final product. However, the team is also responsible for providing estimates on tasks and assisting the product owner to better plan sprints and delivery. Ideally, the team should consist of cross-functional members required for the project, such as developers, testers, and business analysts. Since each sprint can be viewed as a mini project by itself, it is critical to have all the necessary resources available at all times, as tasks are being worked on and passed along the workflow. Last but not least, the team is also responsible for retrospectively reviewing their performance at the end of each sprint, along with the product owner and Scrum master. This helps the team review what they have done and reveals how they can improve for the upcoming sprints. Understanding the Scrum process Now, we will give you a brief introduction to Scrum and an overview of the various roles that Scrum prescribes. Let's take a look at how a typical project is run with Scrum and some of the key activities. 
First, we have the backlog, which is a one-dimensional list of the features and requirements that need to be implemented by the team. The item's backlogs are listed from top to bottom by priority. While the product owner is the person in charge of the backlog, defining the priority based on his vision, everyone in the team can contribute by adding new items to the backlog, discussing priorities, and estimating efforts required for implementation. The team will then start planning their next immediate sprint. During this sprint planning meeting, the team will decide on the scope of the sprint. Usually, top priority items from the backlog will be included. The key here is that by the end of the sprint, the team should have produced a fully tested, potentially shippable product containing all the committed features. During the sprint, the team will have daily Scrum meetings, usually at the start of each day, where every member of the team will give a quick overview of what they have done, plan to do, and any impediments. The goal is to make sure that everyone is on the same page, so meetings should be short and sweet. At the end of the sprint, the team will have a sprint review meeting, where the team will present what they have produced to the stakeholder. During this meeting, new changes will often emerge as the product starts to take shape, and these changes will be added to the backlog, which the team will reprioritize before the next sprint commences. Another meeting called the sprint retrospective meeting will also take place at the end of the sprint, where the team will come together to discuss what they have done right, what they have done wrong, and how they can improve. Throughout this process, the Scrum master will act as the referee, where they will make sure all these activities are done correctly. For example, the Scrum master will guide the product owner and the team during the backlog and sprint planning meetings to make sure the items they have are scoped and described correctly. The Scrum master will also ensure that the meetings stay focused, productive, do not run overtime, and that the team members remain respectful without trying to talk over each other. So, now you have seen some of the advantages of using Scrum, the different roles, as well as a simple Scrum process; let's see how we can use JIRA Agile to run projects with Scrum. Creating a new Scrum board The first step to start using JIRA Agile for Scrum is to create a Scrum board for your project. If you created your project by using the Agile Scrum project template, a Scrum board is automatically created for you along with the project. However, if you want to create a board for existing projects, or if you want your board to span across multiple projects, you will need to create it separately. To create a new board, perform the following steps: Click on the Agile menu item from the top navigation bar and select the Manage Boards option. Click on the Create board button. This will bring up the Create an Agile board dialog. Select the Create a Scrum board option, as shown in the following screenshot: Select the way you want to create your board and click on the Next button. There are three options to choose from, as follows: New project and a new board: This is the same as creating a project using the Scrum Agile project template. A new project will be created along with a new Scrum board dedicated to the project. 
Board from an existing project: This option allows you to create a new board from your existing projects. The board will be dedicated to only one project. Board from an existing Saved Filter: This option allows you to create a board that can span across multiple projects with the use of a filter. So, in order to use this option, you will first have to create a filter that includes the projects and issues you need. If you have many issues in your project, you can also use filters to limit the number of issues to be included. Fill in the required information for the board. Depending on the option you have selected, you will either need to provide the project details or select a filter to use. The following screenshot shows an example of how to create a board with a filter. Click on the Create board button to finish: Understanding the Scrum board The Scrum board is what you and your team will be using to plan and run your project. It is your backlog as well as your sprint activity board. A Scrum board has the following three major modes: Backlog: The Backlog mode is where you will plan your sprints, organize your backlog, and create issues Active sprints: The Active sprints mode is where your team will be working in a sprint Reports: The Reports mode is where you can track the progress of your sprint The following screenshot shows a typical Scrum board in the Backlog mode. In the center of the page, you have the backlog, listing all the issues. You can drag them up and down to reorder their priorities. On the right-hand side, you have the issue details panel, which will be displayed when you click on an issue in the backlog: During the backlog planning meetings, the product owner and the team will use this Backlog mode to add new items to the backlog as well as decide on their priorities. Creating new issues When a Scrum board is first created, all the issues, if any (called user stories or stories for short), are placed in the backlog. During your sprint planning meetings, you can create more issues and add them to the backlog as you translate requirements into user stories. To create a new issue, perform the following steps: Browse to your Scrum board. Click on the Create button from the navigation bar at the top or press C on your keyboard. This will bring up the Create Issue dialog. Select the type of issue (for example, Story) you want to create from the Issue Type field. Provide additional information for the issue, such as Summary and Description. Click on the Create button to create the issue, as shown in the following screenshot: Once you have created the issue, it will be added to the backlog. You can then assign it to epics or version, and schedule it to be completed by adding it to sprints. When creating and refining your user stories, you will want to break them down as much as possible, so that when it comes to deciding on the scope of a sprint, it will be much easier for the team to provide an estimate. One approach is by using the INVEST characteristics defined by Bill Wake: Independent: It is preferable if each story can be done independently. While this is not always possible, independent tasks make implementation easier. Negotiable: The developers and product owners need to work together so that both parties are fully aware of what the story entails. Valuable: The story needs to provide value to the customer. Estimable: If a story is too big or complicated for the development team to provide an estimate, then it needs to be broken down further. 
Small: Each story needs to be small, often addressing a single feature that will fit into a single sprint (roughly 2 weeks). Testable: The story needs to describe the expected end result so that after it is implemented, it can be verified. Creating new epics Epics are big user stories that describe major application features. They are then broken down into smaller, more manageable user stories. In JIRA Agile, epics are a convenient way to group similar user stories together. To create a new epic from your Scrum board, perform the following steps: Expand the Epics panel if it is hidden, by clicking on EPICS from the left-hand side panel. Click on the Create Epic link from the Epics panel. The link will appear when you hover your mouse over the panel. This will bring up the Create Epic dialog, with the Project and Issue Type fields already preselected for you: You can also open the Create issue dialog, and select Issue Type as Epic. Provide a name for the epic in the Epic Name field. Provide a quick summary in the Summary field. Click on the Create button. Once you have created the epic, it will be added to the Epics panel. Epics do not show up as cards in sprints or in the backlog. After you have created your epic, you can start adding issues under it. Doing this helps you organize issues that are related to the same functionality or feature. There are two ways in which you can add issues to an epic: By creating new issues directly in the epic, expanding the epic you want, and clicking on the Create issue in epic link By dragging existing issues into the epic, as shown in the following screenshot: Estimating your work Estimation is an art and is a big part of Scrum. Being able to estimate well as a team will directly impact how successful your sprints will be. When it comes to Scrum, estimation means velocity. In other words, it means how much work your team can deliver in a sprint. This is different from the traditional idea of measuring and estimating by man hours. The concept of measuring velocity is to decouple estimation from time tracking. So, instead of estimating the work based on how many hours it will take to complete a story, which will inadvertently make people work long hours trying to keep the estimates accurate, it can be easily done by using an arbitrary number for measurement, which will help us avoid this pitfall. A common approach is to use what are known as story points. Story points are used to measure the complexity or level of effort required to complete a story, not how long it will take to complete it. For example, a complex story may have eight story points, while a simpler story will have only two. This does not mean that the complex story will take 8 hours to complete. It is simply a way to measure its complexity in relation to others. After you have estimated all your issues with story points, you need to figure out how many story points your team can deliver in a sprint. Of course, you will not know this for your first sprint, so you will have to estimate this again. Let's say your team is able to deliver 10 story points worth of work in a one-week sprint, then you can create sprints with any number of issues that add up to 10 story points. As your team starts working on the sprint, you will likely find that the estimate of 10 story points is too much or not enough, so you will need to adjust this for your second sprint. 
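The arithmetic behind velocity is simple enough to sketch in a few lines of Java. This is only an illustration of the idea and is not part of JIRA Agile itself; the sprint numbers are made up. The sketch averages the story points your team actually completed in past sprints and treats that average as the capacity for the next sprint:

public class VelocityEstimator {

    // Average the story points completed in past sprints to forecast capacity.
    static double averageVelocity(int[] completedPointsPerSprint) {
        int total = 0;
        for (int points : completedPointsPerSprint) {
            total += points;
        }
        return (double) total / completedPointsPerSprint.length;
    }

    public static void main(String[] args) {
        // Story points actually completed in the last three sprints (sample data)
        int[] history = {8, 10, 9};
        double velocity = averageVelocity(history);

        int plannedPoints = 12;
        System.out.printf("Average velocity: %.1f points per sprint%n", velocity);
        if (plannedPoints > velocity) {
            System.out.println("The planned sprint is overcommitted; move some stories back to the backlog.");
        }
    }
}

In practice, JIRA Agile's reports do this bookkeeping for you; the point of the sketch is simply that velocity is a rolling average of completed work, not a measure of hours spent.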
Remember that the goal here is not to get it right the first time but to continuously improve your estimates to a point where the team can consistently deliver the same amount of story points' worth of work, that is, your team's velocity. Once you accurately start predicting your team's velocity, it will become easier to manage the workload for each sprint. Now that you know how estimates work in Scrum, let's look at how JIRA Agile lets you estimate work. JIRA Agile provides several ways for you to estimate issues, and the most common approach is to use story points. Each story in your backlog has a field called Estimate, as shown in the following screenshot. To provide an estimate for the story, you just need to hover over the field, click on it, and enter the story point value: You cannot set estimates once the issue is in active development, that is, the sprint that the issue belongs to is active. Remember that the estimate value you provide here is arbitrary, as long as it can reflect the issues' complexity in relation to each other. Here are a few more points for estimation: Be consistent on how you estimate issues. Involve the team during estimation. If the estimates turn out to be incorrect, it is fine. The goal here is to improve and adjust. Ranking and prioritizing your issues During the planning session, it is important to rank your issues so that the list reflects their importance relative to each other. For those who are familiar with JIRA, there is a priority field, but since it allows you to have more than one issue sharing the same priority value, it becomes confusing when you have two issues both marked as critical. JIRA Agile addresses this issue by letting you simply drag an issue up and down the list according to its importance, with the more important issues at the top and the less important issues at the bottom. This way, you end up with an easy-to-understand list. Creating new versions In a software development team, you will likely be using versions to plan your releases. Using versions allows you to plan and organize issues in your backlog and schedule when they will be completed. You can create multiple versions and plan your roadmap accordingly. To create a new version, follow these steps: Expand the Versions panel if it is hidden, by clicking on VERSIONS from the left-hand side panel. Click on the Create Version link from the Versions panel. The link will appear when you hover your mouse over the panel. This will bring up the Create Version dialog with the Project field preselected for you, as shown in the following screenshot: Provide a name for the version in the Name field. You can also specify the start and release dates for the version. These fields are optional, and you can change them later. Click on the Create button. Once the version is created, it will be added to the Versions panel. Just like epics, you can add issues to a version by dragging and dropping the issue over onto the target version. In Scrum, a version can span across many sprints. Clicking on a version will display the issues that are part of the version. As shown in the following screenshot, Version 2.0 spans across three sprints: Planning sprints The sprint planning meeting is where the project team comes together at the start of each sprint and decides what they should focus and work on next. With JIRA Agile, you will be using the Backlog mode of your board to create and plan the new sprint's scope. 
Let's look at some of the key components you will use during sprint planning:

- Backlog: This includes all the issues that are not yet in any sprint; in other words, issues that are not yet scheduled for completion. For a new board, all existing issues will be placed in the backlog.
- Sprints: These are displayed above the backlog. You can have multiple sprints and plan ahead.
- Issue details: This is the panel on your right-hand side. It displays the details of the issue you click on.
- Epics: This is one of the panels on your left-hand side. It displays all the epics you have.
- Versions: This is the other panel on your left-hand side. It displays all the versions you have.

The highlighted area in the following screenshot is the new sprint, and the issues inside the sprint are what the team has committed to deliver by the end of the sprint:

Starting a sprint

Once all the epics and issues have been created, it is time to prepare a sprint. The first step is to create a new sprint by clicking on the Create Sprint button. There are two ways to add issues to a sprint:

- By dragging the issues you want from the backlog and dropping them into the sprint
- By dragging the sprint footer down to include all the issues you want to be part of the sprint

You can create multiple sprints and plan beyond the current one by filling each sprint with issues from your backlog. Once you have all the issues you want in the sprint, click on the Start Sprint link. As shown in the following screenshot, you will be asked to set the start and end dates of the sprint. By default, JIRA Agile automatically sets the start date to the current date and the end date to one week after that. You can, of course, change these dates. The general best practices include the following:

- Keep your sprints short, usually one or two weeks long.
- Keep the length of your sprints consistent; this way, you will be able to accurately predict your team's velocity:

Once you have started your sprint, you will be taken to the Active sprints mode of the board. Note that to start a sprint, the following conditions must be met:

- There must be no sprint already active. You can only have one active sprint per board at any time.
- You must have the Administer Projects permission for all projects included in the board.

Working on a sprint

Once you have started a sprint, you will enter the Active sprints mode, where all the issues that are part of the sprint are displayed. In this mode, the board is divided into two major sections. The left section contains all the issues in the current sprint. You will notice that it is divided into several columns. These columns represent the various states or statuses an issue can be in, and they should reflect your team's workflow. By default, there are three columns:

- To Do: The issue is waiting to be started
- In Progress: The issue is currently being worked on
- Done: The issue has been completed

If you are using epics to organize your issues, this section will also be divided into several horizontal swimlanes. Swimlanes help you group similar issues together on the board, by criteria such as assignee, story, or epic. By default, swimlanes are grouped by stories, so subtasks belonging to the same story are all placed in one swimlane. In short, columns group issues by status, while swimlanes group issues by similarity.
As shown in the following screenshot, we have three columns and two swimlanes. The section on the right-hand side displays the currently selected issue's details, such as its summary and description, comments, and attachments.

In a typical scenario, at the start of a sprint, all the issues will be in the left-most To Do column. During the daily Scrum meetings, team members will review the current status of the board and decide what to focus on for the day. For example, each member of the team may take on an issue and move it to the In Progress column by simply dragging and dropping the issue card into the column. Once they have finished working on the issues, they can drag them into the Done column. The team will continue this cycle throughout the sprint until all the issues are completed:

During the sprint, the Scrum master as well as the product owner will need to make sure not to interrupt the team unless it is urgent. The Scrum master should also assist with removing impediments that are preventing team members from completing their assigned tasks. The product owner should ensure that no additional stories are added to the sprint, and that any new feature requests are added to the backlog for future sprints instead. JIRA Agile will alert you if you try to add a new issue to the currently active sprint.

Completing a sprint

On the day the sprint ends, you will need to complete the sprint by performing the following steps:

1. Go to your Scrum board and click on Active sprints.
2. Click on the Complete Sprint link. This will bring up the Complete Sprint dialog, summarizing the current status of the sprint. As shown in the following screenshot, we have a total of six issues in this sprint. Three issues are completed and three are not:
3. Click on the Complete button to complete the sprint.

When you complete a sprint, any unfinished issues will be automatically moved back to the top of the backlog. Sometimes, it might be tempting to extend your sprint if you only have one or two issues outstanding, but you should not do this. Remember that the goal here is not to make your estimates appear accurate by extending sprints, or to force your team to work harder in order to complete everything. You want to get to a point where the team is consistently completing the same amount of work in each sprint. If you have leftovers from a sprint, it means that your team's velocity should be lowered, so for the next sprint you should plan to include less work.

Reporting a sprint's progress

As your team busily works through the issues in the sprint, you need a way to track the progress. JIRA Agile provides a number of useful reports via the Report mode. You can access the Report mode anytime during the sprint. These reports are also very useful during sprint retrospective meetings, as they provide detailed insights on how the sprint progressed.

The sprint report

The sprint report gives you a quick snapshot of how the sprint is tracking. It includes a burndown chart and a summary table that lists all the issues in the sprint and their statuses, as shown here:

As shown in the preceding sprint report, we have completed four issues in the sprint. One issue was not completed and was placed back in the backlog.

The burndown chart

The burndown chart shows you a graphical representation of the estimated or ideal work left versus actual progress. The gray line acts as a guideline of the projected progress of the project, and the red line is the actual progress.
In an ideal world, both lines should stay as close to each other as possible as the sprint progresses each day:

The velocity chart

The velocity chart shows you the amount of work originally committed to the sprint (the gray bar) versus the actual amount of work completed (the green bar), based on how you have chosen to estimate, such as story points. The chart includes past sprints, so you can see the trend and predict the team's velocity. As shown in the following screenshot, from sprint 1 to 3 we over-committed on the amount of work, and in sprint 4 we completed all of our committed work. So, one way to work out your team's velocity is to calculate the average of the Completed column, which should give you an indication of your team's true velocity. Of course, this requires that:

- Your sprints stay consistent in duration
- Your team members stay consistent
- Your estimation stays consistent

As your team starts using Scrum, you can expect to see improvements in the team's velocity as you continuously refine your process. Over time, you will get to a point where the team's velocity becomes consistent and can be used as a reliable indicator for work estimation. This will allow you to avoid over- and under-committing on work delivery, as shown in the following velocity chart:

Summary

In this article, we looked at how to use JIRA Agile for Scrum. We looked at the Scrum board and how you can use it to organize your issue backlog, plan and run your sprints, and review and track their progress with reports and charts. Remember that the keys to successfully running sprints are consistency, review, and continuous improvement. It is fine if your estimates turn out to be incorrect, especially for the first few sprints; just make sure that you review, adjust, and improve.

Resources for Article:

Further resources on this subject:
- Advanced JIRA 5.2 Features [article]
- JIRA – an Overview [article]
- Gadgets in JIRA [article]
Using a REST API with Unity, Part 2

Travis and
01 Jul 2015
5 min read
One of the main challenges we encountered when first trying to use JSON with C# was that it wasn't the simple world of JavaScript we had grown accustomed to. For anyone who is unsure of what JSON is: it is JavaScript Object Notation, a lightweight data-interchange format that is easy for humans to read and write, and easy for machines to parse and generate. If you are not familiar with JSON, hopefully you have some background in XML; JSON serves a similar purpose, but uses JavaScript notation rather than a markup language as XML does. This article is not a comparison of these data formats; instead, we will focus primarily on JSON, as it is the general standard for REST APIs.

Extracting JSON

As I mentioned in Part 1 of this series, the main tool we use to extract JSON is a library for C# called SimpleJSON. Before we can start extracting data, let's create a fake JSON object as it might be returned from a REST endpoint:

{
  "employees": [
    {
      "name": "Jim",
      "age": "41",
      "salary": "60000"
    },
    {
      "name": "Sally",
      "age": "33",
      "salary": "60000"
    }
  ]
}

Perfect. This is just some arbitrary data for our examples. Assuming we have queried the endpoint and collected this data into a result variable, let's parse it with SimpleJSON. We think the best course of action is to show the code as a whole and then discuss what is going on in each step. Some of this code could be trimmed a bit shorter, but we're going to write it out in longer form to help demonstrate what is going on. It's not long code anyway.

using SimpleJSON;
...
private string result;
private List<Employee> employees;
...
var jsonData = JSON.Parse(result);
foreach (JSONClass employee in jsonData["employees"].AsArray)
{
    string name = employee.AsObject["name"].Value;
    string age = employee.AsObject["age"].Value;
    string salary = employee.AsObject["salary"].Value;
    employees.Add(new Employee(name, age, salary));
}

Now let's step through what we've done here to demonstrate what each piece of code is doing. First, we must import the SimpleJSON library. To get this package, see this link to download. You can import the package into Unity using the file menu, Assets > Import Package > Custom Package. Once you have imported this package, we need to include:

using SimpleJSON;

This belongs at the top of our script. Assuming we have completed the GET request earlier and now have the data in a variable called result, we can move to the next step:

var jsonData = JSON.Parse(result);

As we talked about earlier, JSON is an object made up of JavaScript Object Notation. If you come from a background in JavaScript, these sorts of objects are just part of your daily norm. However, objects written like this don't exist in C# (they do, of course, but are not written like this and appear more abstract). So if these objects are not native to C#, or most languages for that matter, how do we import the data? Fear not: a JSON is imported from a REST endpoint as a string. This allows each system to import it as it likes and come up with its own solution for reading it. In our case, SimpleJSON takes the imported string and builds a JSONClass object out of it. That is what resides in jsonData.

Navigating a JSON with SimpleJSON

Now that we have the JSON parsed, our next step is to move one level inside the returned JSON and extract all the employees. The "employees" value is an array of employees. Knowing that this data is an array, we can use it in a foreach loop, casting each employee as we go. Let's look at the loop first.
foreach (JSONClass employee in jsonData["employees"].AsArray)
{
    ...
}

So we extract each employee from the employees array. Now, each employee is a JSONClass, but we have not told the system it's an object, so we need to do that when we start digging deeper into the JSON, like so:

string name = employee.AsObject["name"].Value;
string age = employee.AsObject["age"].Value;
string salary = employee.AsObject["salary"].Value;

Once we are inside the foreach loop, we take the JSONClass employee, cast it correctly to an object, and pull out the strings we need from it. The trick is that SimpleJSON still doesn't know what type of object is on the other end, so we need to tell it that we want the value from this node. Since we know the structure of the JSON, we can construct our code to handle it.

Frequently, you will find yourself iterating through a list of data and creating objects out of each piece of it. To handle that, we recommend you create an object and add it to a list; it's a simple way to store the data (a minimal sketch of such a class follows after the conclusion):

employees.Add(new Employee(name, age, salary));

Conclusion

We hope this walkthrough of SimpleJSON gave you an idea of how to use this library. It's a very simple tool to use. The only frustrating part is working with AsObject and AsArray, as you can sometimes easily mistake which one you need at a certain time.
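The listing above assumes an Employee class and an employees list that the article never shows. As a rough sketch only (the field names and types are assumptions based on the JSON we parsed, not code from the original project), it might look something like this:

using System.Collections.Generic;

public class Employee
{
    // Stored as strings because SimpleJSON's .Value returns the raw string.
    public string Name { get; private set; }
    public string Age { get; private set; }
    public string Salary { get; private set; }

    public Employee(string name, string age, string salary)
    {
        Name = name;
        Age = age;
        Salary = salary;
    }
}

// Somewhere in your script, before parsing:
// private List<Employee> employees = new List<Employee>();

Remember to initialize the employees list (for example, in Awake or Start) before calling employees.Add, or you will get a NullReferenceException.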


How to Build 12 Factor Microservices on Docker - Part 2

Cody A.
29 Jun 2015
14 min read
Welcome back to our how-to on building and running 12 Factor Microservices on Docker. In Part 1, we introduced a very simple Python Flask application which displayed a list of users from a relational database. Then we walked through the first four of these factors, reworking the example application to follow these guidelines. In Part 2, we'll be introducing a multi-container Docker setup as the execution environment for our application. We'll continue from where we left off with the next factor, number five.

Build, Release, Run

A 12-factor app strictly separates the process for transforming a codebase into a deploy into distinct build, release, and run stages. The build stage creates an executable bundle from a code repo, including vendoring dependencies and compiling binaries and asset packages. The release stage combines the executable bundle created in the build with the deploy's current config. Releases are immutable and form an append-only ledger; consequently, each release must have a unique release ID. The run stage runs the app in the execution environment by launching the app's processes against the release.

This is where your operations meet your development and where a PaaS can really shine. For now, we're assuming that we'll be using a Docker-based containerized deploy strategy. We'll start by writing a simple Dockerfile.

The Dockerfile starts with an Ubuntu base image, and then I add myself as the maintainer of this app:

FROM ubuntu:14.04.2
MAINTAINER codyaray

Before installing anything, let's make sure that apt has the latest versions of all the packages:

RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
RUN apt-get update

Install some basic tools and the requirements for running a Python web app:

RUN apt-get install -y tar curl wget dialog net-tools build-essential
RUN apt-get install -y python python-dev python-distribute python-pip
RUN apt-get install -y libmysqlclient-dev

Copy the application over to the container:

ADD /. /src

Install the dependencies:

RUN pip install -r /src/requirements.txt

Finally, set the current working directory, expose the port, and set the default command:

EXPOSE 5000
WORKDIR /src
CMD python app.py

Now, the build phase consists of building a Docker image. You can build and store it locally with:

docker build -t codyaray/12factor:0.1.0 .

If you look at your local repository, you should see the new image present:

$ docker images
REPOSITORY          TAG     IMAGE ID      CREATED     VIRTUAL SIZE
codyaray/12factor   0.1.0   bfb61d2bbb17  1 hour ago  454.8 MB

The release phase really depends on the details of the execution environment. You'll notice that none of the configuration is stored in the image produced by the build stage; however, we need a way to build a versioned release with the full configuration as well. Ideally, the execution environment would be responsible for creating releases from the source code and the configuration specific to that environment. However, if we're working from first principles with Docker rather than a full-featured PaaS, one possibility is to build a new Docker image using the one we just built as a base. Each environment would have its own set of configuration parameters and thus its own Dockerfile.
It could be something as simple as:

FROM codyaray/12factor:0.1.0
MAINTAINER codyaray
ENV DATABASE_URL mysql://sa:mypwd@mydbinstance.abcdefghijkl.us-west-2.rds.amazonaws.com/mydb

This is simple enough to be programmatically generated, given the environment-specific configuration and the new container version to be deployed. For demonstration purposes, though, we'll call the above file Dockerfile-release so it doesn't conflict with the main application's Dockerfile. We can then build it with:

docker build -f Dockerfile-release -t codyaray/12factor-release:0.1.0.0 .

The resulting image could be stored in the environment's registry as codyaray/12factor-release:0.1.0.0. The images in this registry would serve as the immutable ledger of releases. Notice that the version has been extended to include a fourth level which, in this instance, could represent configuration version "0" applied to source version "0.1.0".

The key here is that these configuration parameters aren't collated into named groups (sometimes called "environments"). For example, these aren't static files named like Dockerfile.staging or Dockerfile.dev in a centralized repo. Rather, the set of parameters is distributed so that each environment maintains its own environment mapping in some fashion. The deployment system would be set up such that a new release to the environment automatically applies the environment variables it has stored to create a new Docker image.

As always, the final deploy stage depends on whether you're using a cluster manager, scheduler, and so on. If you're using standalone Docker, it boils down to:

docker run -P -t codyaray/12factor-release:0.1.0.0

Processes

A 12-factor app is executed as one or more stateless processes which share nothing and are horizontally partitionable. All data which needs to be stored must use a stateful backing service, usually a database. This means no sticky sessions and no in-memory or local disk-based caches. These processes should never daemonize or write their own PID files; rather, they should rely on the execution environment's process manager (such as Upstart).

This factor must be considered up front, in line with the discussions on antifragility, horizontal scaling, and overall application design. As the example app delegates all stateful persistence to a database, we've already succeeded on this point. However, it is good to note that a number of issues have been found with the standard Ubuntu base image for Docker, one of which is its process management (or lack thereof). If you would like to use a process manager to automatically restart crashed daemons, or to notify a service registry or operations team, check out baseimage-docker. This image adds runit for process supervision and management, amongst other improvements to base Ubuntu for use in Docker, such as obsoleting the need for PID files.

To use this new image, we have to update the Dockerfile to set the new base image and use its init system instead of running our application as the root process in the container:

FROM phusion/baseimage:0.9.16
MAINTAINER codyaray
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python python-dev python-distribute python-pip
RUN apt-get install -y libmysqlclient-dev
ADD /. /src
RUN pip install -r /src/requirements.txt
EXPOSE 5000
WORKDIR /src
RUN mkdir /etc/service/12factor
ADD 12factor.sh /etc/service/12factor/run
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]

Notice the file 12factor.sh that we're now adding to /etc/service. This is how we instruct runit to run our application as a service. Let's add the new 12factor.sh file:

#!/bin/sh
python /src/app.py

Now the new containers we deploy will attempt to be a little more fault-tolerant by using an OS-level process manager.

Port Binding

A 12-factor app must be self-contained and bind to a port specified as an environment variable. It can't rely on the injection of a web container such as Tomcat or Unicorn; instead, it must embed a server such as Jetty or Thin. The execution environment is responsible for routing requests from a public-facing hostname to the port-bound web process.

This is trivial with most embedded web servers. If you're currently using an external web server, this may require more effort to support an embedded server within your application. For the example Python app (which uses the built-in Flask web server), it boils down to:

port = int(os.environ.get("PORT", 5000))
app.run(host='0.0.0.0', port=port)

Now the execution environment is free to instruct the application to listen on whatever port is available. This obviates the need for the application to tell the environment what ports must be exposed, as we've been required to do with Docker.

Concurrency

Because a 12-factor app exclusively uses stateless processes, it can scale out by adding processes. A 12-factor app can have multiple process types, such as web processes, background worker processes, or clock processes (for cron-like scheduled jobs). As each process type is scaled independently, each logical process becomes its own Docker container as well. We've already seen how to build a web process; other processes are very similar. In most cases, scaling out simply means launching more instances of the container. (It's usually not desirable to scale out the clock processes, though, as they often generate events that you want to be scheduled singletons within your infrastructure.)

Disposability

A 12-factor app's processes can be started or stopped (with a SIGTERM) at any time. Thus, minimizing startup time and shutting down gracefully are very important. For example, when a web service receives a SIGTERM, it should stop listening on the HTTP port, allow in-flight requests to finish, and then exit. Similarly, processes should be robust against sudden death; for example, worker processes should use a robust queuing backend.

You want to ensure that the web server you select can shut down gracefully. This is one of the trickier parts of selecting a web server, at least for many of the common Python HTTP servers that I've tried. In theory, shutting down on receiving a SIGTERM should be as simple as the following:

import signal
signal.signal(signal.SIGTERM, lambda *args: server.stop(timeout=60))

But oftentimes you'll find that this immediately kills the in-flight requests as well as closing the listening socket. You'll want to test this thoroughly if dependable graceful shutdown is critical to your application.

Dev/Prod Parity

A 12-factor app is designed to keep the gap between development and production small. Continuous deployment shrinks the amount of time that code lives in development but not production.
A self-serve platform allows developers to deploy their own code in production, just like they do in their local development environments. Using the same backing services (databases, caches, queues, and so on) in development as in production reduces the number of subtle bugs that arise from inconsistencies between technologies or integrations.

As we're deploying this solution using fully Dockerized containers and third-party backing services, we've effectively achieved dev/prod parity. For local development, I use boot2docker on my Mac, which provides a Docker-compatible VM to host my containers. Using boot2docker, you can start the VM and set up all the environment variables automatically with:

boot2docker up
$(boot2docker shellinit)

Once you've initialized this VM and set the DOCKER_HOST variable to its IP address with shellinit, the docker commands given above work exactly the same for development as they do for production.

Logs

Consider logs as a stream of time-ordered events collected from all running processes and backing services. A 12-factor app doesn't concern itself with how its output is handled. Instead, it just writes its output to its stdout stream. The execution environment is responsible for collecting, collating, and routing this output to its final destination(s).

Most logging frameworks either support logging to stderr/stdout by default or make it easy to switch from file-based logging to one of these streams. In a 12-factor app, the execution environment is expected to capture these streams and handle them however the platform dictates. Because our app doesn't have specific logging yet, and the only logs are from Flask and already go to stderr, we don't have any application changes to make.

However, we can show an execution environment which could be used to handle the logs. We'll set up a Docker container which collects the logs from all the other Docker containers on the same host. Ideally, this would then forward the logs to a centralized service such as Elasticsearch. Here we'll demo using Fluentd to capture and collect the logs inside the log collection container; a simple configuration change would allow us to send the logs from Fluentd to a local Elasticsearch cluster instead of writing them to disk as we demo here.

We'll create a Dockerfile for our new logcollector container type. For more detail, you can find a Docker Fluentd tutorial here. We can call this file Dockerfile-logcollector:

FROM kiyoto/fluentd:0.10.56-2.1.1
MAINTAINER kiyoto@treasure-data.com
RUN mkdir /etc/fluent
ADD fluent.conf /etc/fluent/
CMD "/usr/local/bin/fluentd -c /etc/fluent/fluent.conf"

We use an existing Fluentd base image with a specific Fluentd configuration. Notably, this tails all the log files in /var/lib/docker/containers/<container-id>/<container-id>-json.log, adds the container ID to the log message, and then writes JSON-formatted files inside /var/log/docker.

<source>
  type tail
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/log/fluentd-docker.pos
  time_format %Y-%m-%dT%H:%M:%S
  tag docker.*
  format json
</source>
<match docker.var.lib.docker.containers.*.*.log>
  type record_reformer
  container_id ${tag_parts[5]}
  tag docker.all
</match>
<match docker.all>
  type file
  path /var/log/docker/*.log
  format json
  include_time_key true
</match>

As usual, we create a Docker image. Don't forget to specify the logcollector Dockerfile:

docker build -f Dockerfile-logcollector -t codyaray/docker-fluentd .

We'll need to mount two directories from the Docker host into this container when we launch it: the directory containing the logs from all the other containers, and the directory to which we'll be writing the consolidated JSON logs:

docker run -d -v /var/lib/docker/containers:/var/lib/docker/containers -v /var/log/docker:/var/log/docker codyaray/docker-fluentd

Now if you check the /var/log/docker directory, you'll see the collated JSON log files. Note that this is on the Docker host rather than in any container; if you're using boot2docker, you can ssh into the Docker host with boot2docker ssh and then check /var/log/docker.

Admin Processes

Any admin or management tasks for a 12-factor app should be run as one-off processes within a deploy's execution environment. Such a process runs against a release, using the same codebase and configs as any process in that release, and uses the same dependency isolation techniques as the long-running processes. This is really a feature of your app's execution environment. If you're running a Docker-like containerized solution, this may be pretty trivial:

docker run -i -t --entrypoint /bin/bash codyaray/12factor-release:0.1.0.0

The -i flag instructs Docker to provide an interactive session, that is, to keep the input and output ttys attached. Then we instruct Docker to run /bin/bash instead of another 12factor app instance. This creates a new container based on the same Docker image, which means we have access to all the code and configs for this release, and it drops us into a bash terminal to do whatever we want. But let's say we want to add a new "friends" table to our database, so we wrote a migration script, add_friends_table.py. We could run it as follows:

docker run -i -t --entrypoint python codyaray/12factor-release:0.1.0.0 /src/add_friends_table.py
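The contents of add_friends_table.py aren't shown in the article, so purely as an illustration, here is a rough sketch of what such a one-off migration might look like. It assumes SQLAlchemy (and a MySQL driver) are among the dependencies in requirements.txt, which the article doesn't confirm, and the table columns are invented for the example; the only grounded part is reading DATABASE_URL from the environment, exactly as the web process does.

# add_friends_table.py -- hypothetical one-off admin process (illustrative only)
import os

from sqlalchemy import create_engine, text

# Factor III (Config): read the connection string from the environment,
# the same DATABASE_URL baked into the release image.
engine = create_engine(os.environ["DATABASE_URL"])

with engine.begin() as conn:
    # Create the new table if it doesn't already exist.
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS friends (
            id INT AUTO_INCREMENT PRIMARY KEY,
            user_id INT NOT NULL,
            friend_id INT NOT NULL
        )
    """))
    print("friends table created")

Because the script runs inside the release image, it sees the same config and dependencies as the long-running web process, which is exactly what the Admin Processes factor asks for.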
Specifically, we’ll mount the directory containing the logs from all the other containers as well as the directory to which we’ll be writing the consolidated JSON logs. docker run -d -v /var/lib/docker/containers:/var/lib/docker/containers -v /var/log/docker:/var/log/docker codyaray/docker-fluentd Now if you check in the /var/log/docker directory, you’ll see the collated JSON log files. Note that this is on the docker host rather than in any container; if you’re using boot2docker, you can ssh into the docker host with boot2docker ssh and then check /var/log/docker. Admin Processes. Any admin or management tasks for a 12-factor app should be run as one-off processes within a deploy’s execution environment. This process runs against a release using the same codebase and configs as any process in that release and uses the same dependency isolation techniques as the long-running processes. This is really a feature of your app's execution environment. If you’re running a Docker-like containerized solution, this may be pretty trivial. docker run -i -t --entrypoint /bin/bash codyaray/12factor-release:0.1.0.0 The -i flag instructs docker to provide interactive session, that is, to keep the input and output ttys attached. Then we instruct docker to run the /bin/bash command instead of another 12factor app instance. This creates a new container based on the same docker image, which means we have access to all the code and configs for this release. This will drop us into a bash terminal to do whatever we want. But let’s say we want to add a new “friends” table to our database, so we wrote a migration script add_friends_table.py. We could run it as follows: docker run -i -t --entrypoint python codyaray/12factor-release:0.1.0.0 /src/add_friends_table.py As you can see, following the few simple rules specified in the 12 Factor manifesto really allows your execution environment to manage and scale your application. While this may not be the most feature-rich integration within a PaaS, it is certainly very portable with a clean separation of responsibilities between your app and its environment. Much of the tools and integration demonstrated here were a do-it-yourself container approach to the environment, which would be subsumed by an external vertically integrated PaaS such as Deis. If you’re not familiar with Deis, its one of several competitors in the open source platform-as-a-service space which allows you to run your own PaaS on a public or private cloud. Like many, Deis is inspired by Heroku. So instead of Dockerfiles, Deis uses a buildpack to transform a code repository into an executable image and a Procfile to specify an app’s processes. Finally, by default you can use a specialized git receiver to complete a deploy. Instead of having to manage separate build, release, and deploy stages yourself like we described above, deploying an app to Deis could be a simple as git push deis-prod While it can’t get much easier than this, you’re certainly trading control for simplicity. It's up to you to determine which works best for your business. Find more Docker tutorials alongside our latest releases on our dedicated Docker page. About the Author Cody A. Ray is an inquisitive, tech-savvy, entrepreneurially-spirited dude. Currently, he is a software engineer at Signal, an amazing startup in downtown Chicago, where he gets to work with a dream team that’s changing the service model underlying the Internet.