How-To Tutorials


Making the Most of Your Hadoop Data Lake, Part 1: Data Compression

Kristen Hardwick
30 Jun 2014
6 min read
In the world of big data, the Data Lake concept reigns supreme. Hadoop users are encouraged to keep all data in order to prepare for future use cases and as-yet-unknown data integration points. This concept is part of what makes Hadoop and HDFS so appealing, so it is important to make sure that the data is stored in a way that supports that behavior. In the first part of this two-part series, "Making the Most of Your Hadoop Data Lake", we will address one important factor in improving manageability: data compression.

Data compression is an area that is often overlooked in the context of Hadoop. In many cluster environments, compression is disabled by default, putting the burden on the user. In this post, we will discuss the tradeoffs involved in deciding how to take advantage of compression techniques, as well as the advantages and disadvantages of specific compression codec options with respect to Hadoop.

To compress or not to compress

Whenever data is converted to something other than its raw format, some overhead is involved in completing the conversion process. When data compression is being discussed, it is important to weigh that overhead against the benefits of reducing the data footprint.

One obvious benefit is that compressed data reduces the amount of disk space required to store a particular dataset. In a big data environment, this benefit is especially significant: either the Hadoop cluster will be able to keep data for a larger time range, or storing data for the same time range will require fewer nodes, or the disk usage ratios will simply remain lower for longer. In addition, the smaller file sizes mean lower data transfer times, either internally for MapReduce jobs or when exporting data results.

The cost of these benefits, however, is that the data must be decompressed at every point where it needs to be read, and compressed before being inserted into HDFS. With respect to MapReduce jobs, this processing overhead at both the map phase and the reduce phase will increase the CPU processing time. Fortunately, by making informed choices about the specific compression codecs used at any given phase in the data transformation process, the cluster administrator or user can ensure that the advantages of compression outweigh the disadvantages.

Choosing the right codec for each phase

Hadoop gives the user some flexibility over which compression codec is used at each step of the data transformation process. It is important to realize that certain codecs are optimal for some stages and non-optimal for others. The next sections cover some important notes for each choice.

zlib

The major benefit of using this codec is that it is the easiest way to get the benefits of data compression from a cluster and job configuration standpoint: the zlib codec is the default compression option. From the data transformation perspective, this codec will decrease the data footprint on disk, but will not provide much of a benefit in terms of job performance.

gzip

The gzip codec available in Hadoop is the same one that is used outside of the Hadoop ecosystem. It is common practice to use it as the codec for compressing the final output from a job, simply for the benefit of being able to share the compressed result with others (possibly outside of Hadoop) using a standard file format.

bzip2

There are two important benefits to the bzip2 codec. First, if reducing the data footprint is a high priority, this algorithm will compress the data more than the default zlib option. Second, this is the only supported codec that produces "splittable" compressed data. A major characteristic of Hadoop is the idea of splitting the data so that it can be handled on each node independently. With the other compression codecs, there is an initial requirement to gather all parts of the compressed file in order to have all of the information necessary to decompress the data. With this format, the data can instead be decompressed in parallel. This splittable quality makes the format ideal for compressing data that will be used as input to a map function, either in a single step or as part of a series of chained jobs.

LZO, LZ4, Snappy

These three codecs are ideal for compressing intermediate data: the data output from the mappers that will be immediately read in by the reducers. All three codecs heavily favor compression speed over compression ratio, but the detailed specifications for each algorithm should be examined based on the specific licensing, cluster, and job requirements.

Enabling compression

Once the appropriate compression codec for any given transformation phase has been selected, a few configuration properties need to be adjusted in order for the changes to take effect in the cluster.

Intermediate data to the reducer:

    mapreduce.map.output.compress = true
    (Optional) mapreduce.map.output.compress.codec = org.apache.hadoop.io.compress.SnappyCodec

Final output from a job:

    mapreduce.output.fileoutputformat.compress = true
    (Optional) mapreduce.output.fileoutputformat.compress.codec = org.apache.hadoop.io.compress.BZip2Codec

These compression codecs are also available within some of the ecosystem tools like Hive and Pig. In most cases, the tools will default to the Hadoop-configured values for particular codecs, but they also provide the option to compress the data generated between steps.

Pig:

    pig.tmpfilecompression = true
    (Optional) pig.tmpfilecompression.codec = snappy

Hive:

    hive.exec.compress.intermediate = true
    hive.exec.compress.output = true

Conclusion

This post detailed the benefits and disadvantages of data compression, along with some helpful guidelines on how to choose a codec and enable it at various stages of the data transformation workflow. In the next post, we will go through some additional techniques that can be used to ensure that users make the most of the Hadoop Data Lake. For more Big Data and Hadoop tutorials and insight, visit our dedicated Hadoop page.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry, where her focus is on designing and developing big data analytics for the Hadoop ecosystem.
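To connect these properties to an actual job definition, the sketch below shows one way to request compressed map output and compressed final output programmatically from a MapReduce driver instead of through the cluster's *-site.xml files. It is only a minimal illustration: the class name CompressedJobDriver, the job name, and the input/output paths are placeholders, the mapper and reducer are left unset, and the codec choices (Snappy for intermediate data, bzip2 for final output) simply mirror the examples above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.BZip2Codec;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressedJobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Compress the intermediate map output with Snappy.
            conf.setBoolean("mapreduce.map.output.compress", true);
            conf.setClass("mapreduce.map.output.compress.codec",
                    SnappyCodec.class, CompressionCodec.class);

            Job job = Job.getInstance(conf, "compressed-example");
            job.setJarByClass(CompressedJobDriver.class);
            // job.setMapperClass(...);  // placeholder: set your mapper here
            // job.setReducerClass(...); // placeholder: set your reducer here

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));

            // Compress the final output with bzip2 so the result stays splittable.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Setting the properties on the Configuration object has the same effect as the key/value pairs listed above; which approach to use is mostly a question of whether compression should be a per-job or a cluster-wide decision.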

Part 1: Managing Multiple Apps and Environments with Capistrano 3 and Chef Solo

Rodrigo Rosenfeld
30 Jun 2014
8 min read
In my previous two posts, I explored how to use Capistrano to deploy multiple applications in different environments and servers. This, however, is only one part of our deployment procedures. It takes care of the applications themselves, but we still rely on the server being properly set up so that our Capistrano recipes work. In these two posts, I'll explain how to use Chef to manage servers, and how to integrate it with Capistrano so that you can perform all of your deployment procedures from a single project.

Introducing the sample deployment project

After I wrote the previous two posts, I realized I was not fully happy with a few issues in our company's deployment strategy:

Duplicate settings: This was the main issue that was puzzling me. I didn't like the fact that we had to duplicate some settings, like the application's binding port, in both the Chef and Capistrano projects.

Too many required files (45 to support 3 servers, 5 environments, and 3 applications): While the files were really small, I felt that this situation could be improved by the use of some conventions.

So, I decided to work on a proof-of-concept project that would integrate both Chef and Capistrano and fix these issues. After a weekend working (almost) full time on it, I came up with a sample project that you can fork and adapt to your deployment scenario.

The main goal of this project hasn't changed from my previous article: we want to be able to support new environments and servers very quickly by simply adding some settings to the project. Go ahead and clone it. Follow the instructions in the README and it should deploy the Rails Devise sample application into a VirtualBox Virtual Machine (VM) using Vagrant. The following sections will explain how it works and the reasons behind its design.

The overall idea

While it's possible to accomplish all of your deployment tasks with either Chef or Capistrano alone, I feel that they are more suitable for different tasks. There are many existing recipes that you can take advantage of for both projects, but they usually don't overlap much. There are Chef community cookbooks available to help you install nginx, apache2, java, databases, and much more. You probably want to use Chef to perform administrative tasks like managing services, server backup, installing software, and so on.

Capistrano, on the other hand, will help you deploy the applications themselves after the server is ready to go and your Chef recipes have run. This includes creating releases of your application, which allow you to easily roll back to a previous working version, for example. You'll find existing Capistrano recipes to help you with several application-related tasks like running Bundler, switching between Ruby versions with either rbenv, rvm, or chruby, running Rails migrations and assets precompilation, and so on.

Capistrano recipes are well integrated with the Capistrano deploy flow. For instance, the capistrano-puma recipe will automatically generate a settings file if it is missing and start puma after the remaining deployment tasks have finished by including this in its recipes:

    after 'deploy:check', 'puma:check'
    after 'deploy:finished', 'puma:smart_restart'

Another difference between sysadmin and deployment tasks is that the former will usually require superuser privileges, while the latter is best accomplished by a regular user. This way, you can feel safer when deploying Capistrano recipes, since you know they won't affect the server itself, except for the applications managed by that user account. And deploying an application is far more common than installing and configuring programs or changing the proxy's settings.

Some of the settings required by Chef and Capistrano recipes overlap. One example is a Chef recipe that generates an nginx settings file that proxies requests to a Rails application listening on a local port. In this scenario, the binding address used by the Capistrano puma recipe needs to coincide with the port declared in the proxy settings of the nginx configuration file.

Managing deployment settings

Capistrano and Chef provide different built-in ways of managing their settings. Capistrano uses a Domain Specific Language (DSL) with set/fetch, while Chef reads attributes following a well-described precedence. I strongly advise you to keep to those approaches for settings that are specific to each project. To remove the duplication caused by overlapping deployment settings, I introduced another configuration declaration framework for the shared settings using the configatron gem, taking advantage of the fact that both Chef and Capistrano are written in Ruby.

Take a look at the settings directory in the sample project:

    settings/
    ├── applications
    │   └── rails-devise.rb
    ├── common.rb
    ├── environments
    │   ├── development.rb
    │   └── production.rb
    └── servers
        └── vagrant.rb

The settings are split into common settings and those specific to each application, environment, and server. As you would expect, the Rails Devise application deployed to the production environment on the vagrant server will read the settings from common.rb, servers/vagrant.rb, environments/production.rb, and applications/rails-devise.rb. If some of your settings apply to the Rails Devise application running on a given server or environment (or both), it's possible to override the specific settings in other files like rails-devise_production.rb, vagrant_production.rb, or vagrant_production_rails-devise.rb.

Here's the definition of load_app_settings in common_helpers/settings_loader.rb:

    def load_app_settings(app_name, app_server, app_env)
      cfg.app_name = app_name
      cfg.app_server = app_server
      cfg.app_env = app_env
      [
        'common',
        "servers/#{app_server}",
        "environments/#{app_env}",
        "applications/#{app_name}",
        "#{app_server}_#{app_env}",
        "#{app_server}_#{app_name}",
        "#{app_name}_#{app_env}",
        "#{app_server}_#{app_env}_#{app_name}",
      ].each{|s| load_settings s }
      cfg.lock!
    end

Feel free to change the load path order. Later settings take precedence over earlier ones, so if the binding port is usually 3000 for production but 4000 for your ec2 server, you can add cfg.my_app.binding_port = 3000 to environments/production.rb and override it in ec2_production.rb. Once those settings are loaded, they are locked and can't be changed by the deployment recipes.

As a final note, the settings can also be set using a hash notation, which can be useful if you're using a dynamic setting attribute. Here's an example: cfg[:my_app]["binding_#{'port'}"] = 3000. This is not really useful in this case, but it illustrates the setting capabilities.

Calculated settings

Two types of calculated settings are supported in this project: delayed and dynamic. Delayed settings are lazily evaluated the first time they are requested, while dynamic settings are always evaluated. They are useful for providing default values for settings that could be overridden by other settings files. I prefer to use delayed attributes for those that are meant to be overridden and dynamic ones for those that are meant to be calculated, even though delayed attributes would be suitable for both cases. Here's the common.rb from the sample project to illustrate the idea:

    require 'set'
    cfg.chef_runlist = Set.new
    cfg.deploy_user = 'deploy'
    cfg.deployment_repo_url = 'git@github.com:rosenfeld/capistrano-chef-deployment.git'
    cfg.deployment_repo_host = 'github.com'
    cfg.deployment_repo_symlink = false
    cfg.nginx.default = false

    # Delayed attributes: they are set to the block values unless explicitly set to another value
    cfg.database_name = delayed_attr{ "app1_#{cfg.app_env}" }
    cfg.nginx.subdomain = delayed_attr{ cfg.app_env }

    # Dynamic/calculated attributes: those are always evaluated by the block
    # Those attributes are not meant to be overridable
    cfg.nginx.host = dyn_attr{ "#{cfg.nginx.subdomain}.mydomain.com" }

cfg.nginx.host, in this instance, is not meant to be overridden by any other settings file and follows the company's policy. But it would be okay to override the production database name to app1 instead of using the default app1_production. This is just a guideline, but it should give you a good idea of some ways that Chef and Capistrano can be used together.

Conclusion

I hope you found this post as useful as I did. Being able to fully deploy the whole application stack from a single repository saves us a lot of time and simplifies our deployment considerably, and in the next post, Part 2, I will walk you through that deployment.

About The Author

Rodrigo Rosenfeld Rosas lives in Vitória-ES, Brazil, with his lovely wife and daughter. He graduated in Electrical Engineering with a Master's degree in Robotics and Real-time Systems. For the past 5 years, Rodrigo has focused on building and maintaining single page web applications. He is the author of some gems, including active_record_migrations, rails-web-console, the JS specs runner oojspec, sequel-devise, and the Linux X11 utility ktrayshortcut. Rodrigo was hired by e-Core (Porto Alegre - RS, Brazil) to work from home, building and maintaining software for Matterhorn Transactions Inc. with a team of great developers. Matterhorn's main product, the Market Tracker, is used by LexisNexis clients.

Various subsystem configurations

Packt
25 Jun 2014
8 min read
In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as a gatekeeper, hindering the underlying system from being overloaded. This is done by preventing client calls from reaching their target once a limit has been reached. In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools.

The thread pool executor subsystem

The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems. The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use this subsystem, setting up four different pools:

    <subsystem>
      <thread-factory name="infinispan-factory" priority="1"/>
      <bounded-queue-thread-pool name="infinispan-transport">
        <core-threads count="1"/>
        <queue-length count="100000"/>
        <max-threads count="25"/>
        <thread-factory name="infinispan-factory"/>
      </bounded-queue-thread-pool>
      <bounded-queue-thread-pool name="infinispan-listener">
        <core-threads count="1"/>
        <queue-length count="100000"/>
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
      </bounded-queue-thread-pool>
      <scheduled-thread-pool name="infinispan-eviction">
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
      </scheduled-thread-pool>
      <scheduled-thread-pool name="infinispan-repl-queue">
        <max-threads count="1"/>
        <thread-factory name="infinispan-factory"/>
      </scheduled-thread-pool>
    </subsystem>
    ...
    <cache-container name="web" default-cache="repl"
        listener-executor="infinispan-listener"
        eviction-executor="infinispan-eviction"
        replication-queue-executor="infinispan-repl-queue">
      <transport executor="infinispan-transport"/>
      <replicated-cache name="repl" mode="ASYNC" batching="true">
        <locking isolation="REPEATABLE_READ"/>
        <file-store/>
      </replicated-cache>
    </cache-container>

The following thread pools are available:

unbounded-queue-thread-pool
bounded-queue-thread-pool
blocking-bounded-queue-thread-pool
queueless-thread-pool
blocking-queueless-thread-pool
scheduled-thread-pool

The details of these thread pools are described in the following sections.

unbounded-queue-thread-pool

The unbounded-queue-thread-pool thread pool executor has a maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow infinitely. The configuration properties are shown in the following table:

max-threads: The maximum number of threads allowed to run simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: The thread factory to use to create worker threads.
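For readers who think of thread pools in plain Java terms, the behaviour described in that table is roughly what a java.util.concurrent.ThreadPoolExecutor does when it is given an unbounded work queue. The sketch below is only an analogy to clarify the semantics of max-threads, keepalive-time, and thread-factory; it is not how WildFly implements its pools, and the sizes and class name are made up for the example.

    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadFactory;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class UnboundedQueuePoolSketch {
        public static void main(String[] args) {
            int maxThreads = 25;                                      // max-threads
            long keepAliveSeconds = 60;                               // keepalive-time
            ThreadFactory factory = Executors.defaultThreadFactory(); // thread-factory

            // Up to maxThreads workers are created; any further tasks wait in an unlimited queue.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    maxThreads, maxThreads,
                    keepAliveSeconds, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>(),              // unbounded queue
                    factory);
            pool.allowCoreThreadTimeOut(true);                        // let idle threads expire

            for (int i = 0; i < 100; i++) {
                final int taskId = i;
                pool.submit(() -> System.out.println(
                        "task " + taskId + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown();
        }
    }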
bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue's maximum size has been reached but the maximum number of threads hasn't, a new thread is also created. If max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

core-threads: Optional; should be less than max-threads.
queue-length: The maximum size of the queue.
max-threads: The maximum number of threads allowed to run simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: An executor to which tasks will be delegated in the event that a task cannot be accepted.
allow-core-timeout: Whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: The thread factory to use to create worker threads.

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue's maximum size has been reached but max-threads hasn't, a new thread is created; if max-threads has also been reached, the call is blocked. The configuration properties are shown in the following table:

core-threads: Optional; should be less than max-threads.
queue-length: The maximum size of the queue.
max-threads: The maximum number of threads allowed to run simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
allow-core-timeout: Whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: The thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

max-threads: The maximum number of threads allowed to run simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: An executor to which tasks will be delegated in the event that a task cannot be accepted.
thread-factory: The thread factory to use to create worker threads.

blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked. The configuration properties are shown in the following table:

max-threads: The maximum number of threads allowed to run simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: The thread factory to use to create worker threads.

scheduled-thread-pool

The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time. The configuration properties are shown in the following table:

max-threads: The maximum number of threads allowed to run simultaneously.
keepalive-time: The amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: The thread factory to use to create worker threads.

Monitoring

All of the pools just mentioned can be administered and monitored using both the CLI and JMX (the Admin Console can be used to administer the pools, but not to see any live data). The following example shows access to an unbounded-queue-thread-pool called test.

Using the CLI, run the following command:

    /subsystem=threads/unbounded-queue-thread-pool=test:read-resource(include-runtime=true)

The response to the preceding command is as follows:

    {
        "outcome" => "success",
        "result" => {
            "active-count" => 0,
            "completed-task-count" => 0L,
            "current-thread-count" => 0,
            "keepalive-time" => undefined,
            "largest-thread-count" => 0,
            "max-threads" => 100,
            "name" => "test",
            "queue-size" => 0,
            "rejected-count" => 0,
            "task-count" => 0L,
            "thread-factory" => undefined
        }
    }

Using JMX, query the following object name (for example, in the JConsole UI):

    jboss.as:subsystem=threads,unbounded-queue-thread-pool=test

The original article shows two screenshots at this point, "An example thread pool by JMX" and "Example thread pool - Admin Console", presenting the same information in JConsole and in the Admin Console.

The future of the thread subsystem

According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain whether all subprojects will adhere to this. The actual configuration will then be moved out to the subsystems themselves. This seems to be the way the general architecture of WildFly is moving in terms of pools: away from generic pools and toward subsystem-specific ones. The different types of pools described here are still valid, though.

Note that, contrary to previous releases, stateless EJBs are no longer pooled by default. More information on this is available in the JIRA case WFLY-1383, which can be found at https://issues.jboss.org/browse/WFLY-1383.
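The handoff behaviour of the bounded-queue pools described above also has a close analogue in java.util.concurrent: a bounded ArrayBlockingQueue plays the role of queue-length, and a RejectedExecutionHandler stands in for the handoff-executor that receives tasks once both the queue and max-threads are exhausted. As before, this is only an illustrative sketch of the semantics, not WildFly's implementation; the pool sizes and class name are arbitrary.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.RejectedExecutionHandler;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedQueuePoolSketch {
        public static void main(String[] args) {
            ExecutorService handoffExecutor = Executors.newSingleThreadExecutor();

            // Tasks that neither a worker thread nor the bounded queue can accept are
            // delegated to the handoff executor instead of being discarded.
            RejectedExecutionHandler handoff =
                    (task, executor) -> handoffExecutor.execute(task);

            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2,                                    // core-threads
                    4,                                    // max-threads
                    30, TimeUnit.SECONDS,                 // keepalive-time
                    new ArrayBlockingQueue<Runnable>(10), // queue-length
                    handoff);                             // handoff-executor

            for (int i = 0; i < 50; i++) {
                final int taskId = i;
                pool.execute(() -> System.out.println(
                        "task " + taskId + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown();
            handoffExecutor.shutdown();
        }
    }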

The Hunt for Data

Packt
25 Jun 2014
10 min read
Examining a JSON file with the aeson package

JavaScript Object Notation (JSON) is a way to represent key-value pairs in plain text. The format is described extensively in RFC 4627 (http://www.ietf.org/rfc/rfc4627). In this recipe, we will parse a JSON description of a person. We often encounter JSON in APIs from web applications.

Getting ready

Install the aeson library from Hackage using Cabal. Prepare an input.json file representing data about a mathematician, such as the one in the following code snippet:

    $ cat input.json
    {"name":"Gauss", "nationality":"German", "born":1777, "died":1855}

We will be parsing this JSON and representing it as a usable data type in Haskell.

How to do it...

Use the OverloadedStrings language extension to represent strings as ByteString, as shown in the following line of code:

    {-# LANGUAGE OverloadedStrings #-}

Import aeson as well as some helper functions as follows:

    import Data.Aeson
    import Control.Applicative
    import qualified Data.ByteString.Lazy as B

Create the data type corresponding to the JSON structure, as shown in the following code:

    data Mathematician = Mathematician
                         { name :: String
                         , nationality :: String
                         , born :: Int
                         , died :: Maybe Int
                         }

Provide an instance for the parseJSON function, as shown in the following code snippet:

    instance FromJSON Mathematician where
      parseJSON (Object v) = Mathematician
                             <$> (v .: "name")
                             <*> (v .: "nationality")
                             <*> (v .: "born")
                             <*> (v .:? "died")

Define and implement main as follows:

    main :: IO ()
    main = do

Read the input and decode the JSON, as shown in the following code snippet:

      input <- B.readFile "input.json"
      let mm = decode input :: Maybe Mathematician
      case mm of
        Nothing -> print "error parsing JSON"
        Just m  -> (putStrLn.greet) m

Now we will do something interesting with the data as follows:

    greet m = (show.name) m ++ " was born in the year " ++ (show.born) m

We can run the code to see the following output:

    $ runhaskell Main.hs
    "Gauss" was born in the year 1777

How it works...

Aeson takes care of the complications in representing JSON. It creates native, usable data out of structured text. In this recipe, we use the .: and .:? functions provided by the Data.Aeson module.

As the Aeson package uses ByteStrings instead of Strings, it is very helpful to tell the compiler that characters between quotation marks should be treated as the proper data type. This is done in the first line of the code, which invokes the OverloadedStrings language extension.

We use the decode function provided by Aeson to transform a string into a data type. It has the type FromJSON a => B.ByteString -> Maybe a. Our Mathematician data type must implement an instance of the FromJSON typeclass to properly use this function. Fortunately, the only required function for implementing FromJSON is parseJSON. The syntax used in this recipe for implementing parseJSON is a little strange, but this is because we're leveraging applicative functions and lenses, which are more advanced Haskell topics.

The .: function has two arguments, Object and Text, and returns a Parser a data type. As per the documentation, it retrieves the value associated with the given key of an object. This function is used if the key and the value exist in the JSON document. The .:? function also retrieves the associated value from the given key of an object, but the existence of the key and value is not mandatory. So, we use .:? for optional key-value pairs in a JSON document.

There's more...

If the implementation of the FromJSON typeclass is too involved, we can easily let GHC automatically fill it out using the DeriveGeneric language extension. The following is a simpler rewrite of the code:

    {-# LANGUAGE OverloadedStrings #-}
    {-# LANGUAGE DeriveGeneric #-}

    import Data.Aeson
    import qualified Data.ByteString.Lazy as B
    import GHC.Generics

    data Mathematician = Mathematician
                         { name :: String
                         , nationality :: String
                         , born :: Int
                         , died :: Maybe Int
                         } deriving Generic

    instance FromJSON Mathematician

    main = do
      input <- B.readFile "input.json"
      let mm = decode input :: Maybe Mathematician
      case mm of
        Nothing -> print "error parsing JSON"
        Just m  -> (putStrLn.greet) m

    greet m = (show.name) m ++ " was born in the year " ++ (show.born) m

Although Aeson is powerful and generalizable, it may be overkill for some simple JSON interactions. Alternatively, if we wish to use a very minimal JSON parser and printer, we can use Yocto, which can be downloaded from http://hackage.haskell.org/package/yocto.

Reading an XML file using the HXT package

Extensible Markup Language (XML) is an encoding of plain text to provide machine-readable annotations on a document. The standard is specified by W3C (http://www.w3.org/TR/2008/REC-xml-20081126/). In this recipe, we will parse an XML document representing an e-mail conversation and extract all the dates.

Getting ready

We will first set up an XML file called input.xml with the following values, representing an e-mail thread between Databender and Princess on December 18, 2014:

    $ cat input.xml
    <thread>
      <email>
        <to>Databender</to>
        <from>Princess</from>
        <date>Thu Dec 18 15:03:23 EST 2014</date>
        <subject>Joke</subject>
        <body>Why did you divide sin by tan?</body>
      </email>
      <email>
        <to>Princess</to>
        <from>Databender</from>
        <date>Fri Dec 19 3:12:00 EST 2014</date>
        <subject>RE: Joke</subject>
        <body>Just cos.</body>
      </email>
    </thread>

Using Cabal, install the HXT library, which we use for manipulating XML documents:

    $ cabal install hxt

How to do it...

We only need one import, which will be for parsing XML, using the following line of code:

    import Text.XML.HXT.Core

Define and implement main and specify the XML location. For this recipe, the file is retrieved from input.xml. Refer to the following code:

    main :: IO ()
    main = do
      input <- readFile "input.xml"

Apply the readString function to the input and extract all the date documents. We filter items with a specific name using the hasName :: String -> a XmlTree XmlTree function. Also, we extract the text using the getText :: a XmlTree String function, as shown in the following code snippet:

      dates <- runX $ readString [withValidate no] input
        //> hasName "date"
        //> getText

We can now use the list of extracted dates as follows:

      print dates

By running the code, we print the following output:

    $ runhaskell Main.hs
    ["Thu Dec 18 15:03:23 EST 2014", "Fri Dec 19 3:12:00 EST 2014"]

How it works...

The library function runX takes in an Arrow. Think of an Arrow as a more powerful version of a Monad. Arrows allow for stateful, global XML processing. Specifically, the runX function in this recipe takes in IOSArrow XmlTree String and returns an IO action of the String type. We generate this IOSArrow object using the readString function, which performs a series of operations on the XML data.

For a deep dive into the XML document, //> should be used, whereas /> only looks at the current level. We use the //> function to look up the date attributes and display all the associated text.

As defined in the documentation, the hasName function tests whether a node has a specific name, and the getText function selects the text of a text node. Some other functions include the following:

isText: This is used to test for text nodes
isAttr: This is used to test for an attribute tree
hasAttr: This is used to test whether an element node has an attribute node with a specific name
getElemName: This is used to select the name of an element node

All the Arrow functions can be found in the Text.XML.HXT.Arrow.XmlArrow documentation at http://hackage.haskell.org/package/hxt/docs/Text-XML-HXT-Arrow-XmlArrow.html.

Capturing table rows from an HTML page

Mining Hypertext Markup Language (HTML) is often a feat of identifying and parsing only its structured segments. Not all text in an HTML file may be useful, so we find ourselves focusing only on a specific subset. For instance, HTML tables and lists provide a strong and commonly used structure to extract data from, whereas a paragraph in an article may be too unstructured and complicated to process. In this recipe, we will find a table on a web page and gather all its rows to be used in the program.

Getting ready

We will be extracting the values from an HTML table, so start by creating an input.html file containing the following table:

    $ cat input.html
    <!DOCTYPE html>
    <html>
      <body>
        <h1>Course Listing</h1>
        <table>
          <tr>
            <th>Course</th>
            <th>Time</th>
            <th>Capacity</th>
          </tr>
          <tr>
            <td>CS 1501</td>
            <td>17:00</td>
            <td>60</td>
          </tr>
          <tr>
            <td>MATH 7600</td>
            <td>14:00</td>
            <td>25</td>
          </tr>
          <tr>
            <td>PHIL 1000</td>
            <td>9:30</td>
            <td>120</td>
          </tr>
        </table>
      </body>
    </html>

If not already installed, use Cabal to set up the HXT library and the split library, as shown in the following command lines:

    $ cabal install hxt
    $ cabal install split

How to do it...

We will need the hxt package for XML manipulations and the chunksOf function from the split package, as presented in the following code snippet:

    import Text.XML.HXT.Core
    import Data.List.Split (chunksOf)

Define and implement main to read the input.html file:

    main :: IO ()
    main = do
      input <- readFile "input.html"

Feed the HTML data into readString, setting withParseHTML to yes and optionally turning off warnings. Extract all the td tags and obtain the remaining text, as shown in the following code:

      texts <- runX $ readString
        [withParseHTML yes, withWarnings no] input
        //> hasName "td"
        //> getText

The data is now usable as a list of strings. It can be converted into a list of lists, similar to how CSV was presented in the previous CSV recipe, as shown in the following code:

      let rows = chunksOf 3 texts
      print $ findBiggest rows

By folding through the data, identify the course with the largest capacity using the following code snippet:

    findBiggest :: [[String]] -> [String]
    findBiggest [] = []
    findBiggest items = foldl1
      (\a x -> if capacity x > capacity a then x else a) items

    capacity [a,b,c] = toInt c
    capacity _ = -1

    toInt :: String -> Int
    toInt = read

Running the code will display the class with the largest capacity as follows:

    $ runhaskell Main.hs
    ["PHIL 1000","9:30","120"]

How it works...

This is very similar to XML parsing, except we adjust the options of readString to [withParseHTML yes, withWarnings no].

Exact Inference Using Graphical Models

Packt
25 Jun 2014
7 min read
Complexity of inference

A graphical model can be used to answer both probability queries and MAP queries. The most straightforward way to use this model is to generate the joint distribution and sum out all the variables, except the ones we are interested in. However, we need to determine and specify the joint distribution, which is where an exponential blowup happens.

Exact inference is NP-hard in the worst case. By the word exact, we mean specifying the probability values with a certain precision (say, five digits after the decimal point). Suppose we tone down our precision requirements (for example, to only two digits after the decimal point); is the (approximate) inference task any easier? Unfortunately not. Even approximate inference is NP-hard; that is, obtaining values that are even moderately better than random guessing (50 percent, or a probability of 0.5) takes exponential time.

It might seem like inference is a hopeless task, but that is only in the worst case. In general cases, we can use exact inference to solve certain classes of real-world problems (such as Bayesian networks that have a small number of discrete random variables). Of course, for larger problems, we have to resort to approximate inference.

Real-world issues

Since inference is an NP-hard task, inference engines are written in languages that are as close to bare metal as possible, usually C or C++. There are several ways to use them from Python:

Use Python implementations of inference algorithms. Complete and mature packages for these are uncommon.
Use inference engines that have a Python interface, such as Stan (mc-stan.org). This choice strikes a good balance between running Python code and a fast inference implementation.
Use inference engines that do not have a Python interface, which is true for the majority of the inference engines out there. A fairly comprehensive list can be found at http://en.wikipedia.org/wiki/Bayesian_network#Software. The use of Python here is limited to creating a file that describes the model in a format that the inference engine can consume.

In the article on inference, we will stick to the first two choices in the list. We will use native Python implementations (of inference algorithms) to peek into the interiors of the inference algorithms while running toy-sized problems, and then use an external inference engine with Python interfaces to try out a more real-world problem.

The tree algorithm

We will now look at another class of exact inference algorithms based on message passing. Message passing is a general mechanism, and there exist many variations of message passing algorithms. We shall look at a short snippet of the clique tree message passing algorithm (which is sometimes called the junction tree algorithm too). Other versions of the message passing algorithm are used in approximate inference as well.

We initiate the discussion by clarifying some of the terms used. A cluster graph is an arrangement of a network where groups of variables are placed in clusters. It is similar to a factor, where each cluster has a set of variables in its scope. The message passing algorithm is all about passing messages between clusters. As an analogy, consider the gossip going on at a party, where Shelly and Clair are in a conversation. If Shelly knows B, C, and D, and she is chatting with Clair, who knows D, E, and F (note that the only person they know in common is D), they can share information (or pass messages) about their common friend D.

In the message passing algorithm, two clusters are connected by a Separation Set (sepset), which contains the variables common to both clusters. Using the preceding example, the two clusters {B, C, D} and {D, E, F} are connected by the sepset {D}, which contains the only variable common to both clusters.

In the next section, we shall learn about the implementation details of the junction tree algorithm. We will first understand the four stages of the algorithm and then use code snippets to learn about it from an implementation perspective.

The four stages of the junction tree algorithm

In this section, we will discuss the four stages of the junction tree algorithm.

In the first stage, the Bayes network is converted into a secondary structure called a join tree (alternate names for this structure in the literature are junction tree, cluster tree, or clique tree). The transformation from the Bayes network to a join tree proceeds as per the following steps:

We construct a moral graph by changing all the directed edges to undirected edges. The parents of every node that has V-structures entering it are connected with an edge. We have seen an example of this process (in the VE algorithm) called moralization, which is a possible reference to connecting (apparently unmarried) parents that have a child (node).
Then, we selectively add edges to the moral graph to create a triangulated graph. A triangulated graph is an undirected graph where the maximum cycle length between the nodes is 3.
From the triangulated graph, we identify the subsets of nodes called cliques.
Starting with the cliques as clusters, we arrange the clusters to form an undirected tree called the join tree, which satisfies the running intersection property. This property states that if a node appears in two cliques, it should also appear in all the nodes on the path that connects the two cliques.

In the second stage, the potentials at each cluster are initialized. The potentials are similar to a CPD or a table. They have a list of values against each assignment to a variable in their scope. Both clusters and sepsets contain a set of potentials. The term potential is used as opposed to probability because in Markov networks, unlike probabilities, the values of the potentials are not obliged to sum to 1.

The third stage consists of message passing or belief propagation between neighboring clusters. Each message consists of a belief the cluster has about a particular variable. Messages can be passed asynchronously, but a cluster has to wait for information from other clusters before it collates that information and passes it on to the next cluster. It can be useful to think of a tree-structured cluster graph, where the message passing happens in two stages: an upward pass stage and a downward pass stage. Only after a node receives messages from the leaf nodes will it send a message to its parent (in the "upward pass"), and only after the node receives a message from its parents will it send a message to its children (in the "downward pass").

The message passing stage completes when each cluster sepset has consistent beliefs. Recall that a cluster connected to a sepset has variables in common with it. For example, a cluster C and a sepset S share some variables in their scopes; the potential over those shared variables obtained from either the cluster or the sepset has the same value, which is why it is said that the cluster graph has consistent beliefs or that the cliques are calibrated.

Once the whole cluster graph has consistent beliefs, the fourth stage is marginalization, where we can query the marginal distribution for any variable in the graph.

Summary

We first explored the inference problem, where we studied the types of inference. We then learned that inference is NP-hard and understood that, for large networks, exact inference is infeasible.
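To make stage one of the algorithm a little more concrete, the following small, self-contained Java sketch performs just the moralization step described above: every directed edge is made undirected, and the parents of each node are "married" by connecting them with an edge. The graph representation (a map from each node to its list of parents) and the node labels are invented for this illustration; real inference engines work with far richer structures and go on to triangulate the graph and extract cliques.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class Moralizer {

        // parents maps each node to the list of its parents in the Bayes network.
        static Map<String, Set<String>> moralize(Map<String, List<String>> parents) {
            Map<String, Set<String>> undirected = new HashMap<>();
            for (Map.Entry<String, List<String>> entry : parents.entrySet()) {
                String child = entry.getKey();
                List<String> ps = entry.getValue();
                undirected.putIfAbsent(child, new HashSet<>());
                // Drop edge direction: connect the child with each of its parents.
                for (String p : ps) {
                    undirected.putIfAbsent(p, new HashSet<>());
                    undirected.get(child).add(p);
                    undirected.get(p).add(child);
                }
                // "Marry" the parents: connect every pair of parents of this node.
                for (int i = 0; i < ps.size(); i++) {
                    for (int j = i + 1; j < ps.size(); j++) {
                        undirected.get(ps.get(i)).add(ps.get(j));
                        undirected.get(ps.get(j)).add(ps.get(i));
                    }
                }
            }
            return undirected;
        }

        public static void main(String[] args) {
            // A tiny V-structure: A -> C <- B, so C has parents A and B.
            Map<String, List<String>> parents = new HashMap<>();
            parents.put("A", new ArrayList<String>());
            parents.put("B", new ArrayList<String>());
            parents.put("C", Arrays.asList("A", "B"));
            // After moralization, A and B become neighbours even though no directed
            // edge connected them in the original network.
            System.out.println(moralize(parents));
        }
    }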

Introduction to MapReduce

Packt
25 Jun 2014
10 min read
The Hadoop platform

Hadoop can be used for a lot of things. However, when you break it down to its core parts, the primary features of Hadoop are the Hadoop Distributed File System (HDFS) and MapReduce.

HDFS stores read-only files by splitting them into large blocks and distributing and replicating them across a Hadoop cluster. Two services are involved with the filesystem. The first service, the NameNode, acts as a master and keeps the directory tree of all file blocks that exist in the filesystem and tracks where the file data is kept across the cluster. The actual data of the files is stored in multiple DataNode nodes, the second service.

MapReduce is a programming model for processing large datasets with a parallel, distributed algorithm in a cluster. The most prominent trait of Hadoop is that it brings processing to the data: MapReduce executes tasks closest to the data, as opposed to the data travelling to where the processing is performed. Two services are involved in a job execution. A job is submitted to the JobTracker service, which first discovers the location of the data. It then orchestrates the execution of the map and reduce tasks. The actual tasks are executed in multiple TaskTracker nodes.

Hadoop handles infrastructure failures such as network issues and node or disk failures automatically. Overall, it provides a framework for distributed storage within its distributed file system and for the execution of jobs. Moreover, it provides the ZooKeeper service to maintain configuration and distributed synchronization.

Many projects surround Hadoop and complete the ecosystem of available Big Data processing tools, such as utilities to import and export data, NoSQL databases, and event/real-time processing systems. The technologies that move Hadoop beyond batch processing focus on in-memory execution models. Overall, multiple projects exist, covering batch, hybrid, and real-time execution.

MapReduce

Massive parallel processing of large datasets is a complex process. MapReduce simplifies this by providing a design pattern that instructs algorithms to be expressed in map and reduce phases. Map can be used to perform simple transformations on data, and reduce is used to group data together and perform aggregations. By chaining together a number of map and reduce phases, sophisticated algorithms can be achieved.

The shared-nothing architecture of MapReduce prohibits communication between map tasks of the same phase or between reduce tasks of the same phase. Any communication that is required happens at the end of each phase. The simplicity of this model allows Hadoop to translate each phase, depending on the amount of data that needs to be processed, into tens or even hundreds of tasks being executed in parallel, thus achieving scalable performance.

Internally, the map and reduce tasks follow a simplistic data representation. Everything is a key or a value. A map task receives key-value pairs and applies basic transformations, emitting new key-value pairs. Data is then partitioned, and different partitions are transmitted to different reduce tasks. A reduce task also receives key-value pairs, groups them based on the key, and applies basic transformations to those groups.

A MapReduce example

To illustrate how MapReduce works, let's look at an example of a log file of total size 1 GB with the following format:

    INFO MyApp - Entering application.
    WARNING com.foo.Bar - Timeout accessing DB - Retrying
    ERROR com.foo.Bar - Did it again!
    INFO MyApp - Exiting application

Once this file is stored in HDFS, it is split into eight 128 MB blocks and distributed across multiple Hadoop nodes. In order to build a MapReduce job that counts the number of INFO, WARNING, and ERROR log lines in the file, we need to think in terms of map and reduce phases.

In one map phase, we can read local blocks of the file and map each line to a key and a value. We can use the log level as the key and the number 1 as the value. After the map phase is completed, data is partitioned based on the key and transmitted to the reduce tasks. MapReduce guarantees that the input to every reducer is sorted by key. Shuffle is the process of sorting and copying the output of the map tasks to the reducers to be used as input. By setting the value to 1 in the map phase, we can easily calculate the total in the reduce phase. Reducers receive input sorted by key, aggregate counters, and store the results. In the accompanying diagram, every green block represents an INFO message, every yellow block a WARNING message, and every red block an ERROR message.

Implementing the preceding MapReduce algorithm in Java requires the following three classes:

A Map class to map lines into <key,value> pairs; for example, <"INFO",1>
A Reduce class to aggregate counters
A Job configuration class to define input and output types for all <key,value> pairs and the input and output files

MapReduce abstractions

This simple MapReduce example requires more than 50 lines of Java code (mostly because of infrastructure and boilerplate code). In SQL, a similar implementation would just require the following:

    SELECT level, count(*) FROM table GROUP BY level

Hive is a technology originating from Facebook that translates SQL commands, such as the preceding one, into sets of map and reduce phases. SQL offers convenient ubiquity, and it is known by almost everyone. However, SQL is declarative and expresses the logic of a computation without describing its control flow. So, there are use cases that are unusual to implement in SQL, and some problems are too complex to be expressed in relational algebra. For example, SQL handles joins naturally, but it has no built-in mechanism for splitting data into streams and applying different operations to each substream.

Pig is a technology originating from Yahoo that offers a relational data-flow language. It is procedural, supports splits, and provides useful operators for joining and grouping data. Code can be inserted anywhere in the data flow and is appealing because it is easy to read and learn. However, Pig is a purpose-built language; it excels at simple data flows but is inefficient for implementing non-trivial algorithms. In Pig, the same example can be implemented as follows:

    LogLine = load 'file.logs' as (level, message);
    LevelGroup = group LogLine by level;
    Result = foreach LevelGroup generate group, COUNT(LogLine);
    store Result into 'Results.txt';

Both Pig and Hive support extra functionality through loadable user-defined functions (UDFs) implemented in Java classes.

Cascading is implemented in Java and designed to be expressive and extensible. It is based on the design pattern of pipelines that many other technologies follow. The pipeline is inspired by the original chain of responsibility design pattern and allows ordered lists of actions to be executed. It provides a Java-based API for data-processing flows. Developers with functional programming backgrounds quickly introduced new domain-specific languages that leverage its capabilities. Scalding, Cascalog, and PyCascading are popular implementations on top of Cascading, implemented in programming languages such as Scala, Clojure, and Python.

Introducing Cascading

Cascading is an abstraction that empowers us to write efficient MapReduce applications. The API provides a framework for developers who want to think at higher levels and follow Behavior Driven Development (BDD) and Test Driven Development (TDD) to provide more value and quality to the business.

Cascading is a mature library that was released as an open source project in early 2008. It is a paradigm shift and introduces new notions that are easier to understand and work with.

In Cascading, we define reusable pipes where operations on data are performed. Pipes connect with other pipes to create a pipeline. At each end of a pipeline, a tap is used. Two types of taps exist: source, where input data comes from, and sink, where the data gets stored. In the accompanying diagram, three pipes are connected into a pipeline, and two input sources and one output sink complete the flow. A complete pipeline is called a flow, and multiple flows bind together to form a cascade; the second diagram shows three flows forming a cascade.

The Cascading framework translates the pipes, flows, and cascades into sets of map and reduce phases. The flow and cascade planners ensure that no flow or cascade is executed until all of its dependencies are satisfied.

The preceding abstraction makes it easy to use a whiteboard to design and discuss data processing logic. We can now work at a productive, higher level of abstraction and build complex applications for ad targeting, logfile analysis, bioinformatics, machine learning, predictive analytics, web content mining, and extract, transform, and load (ETL) jobs. By abstracting away the complexity of key-value pairs and the map and reduce phases of MapReduce, Cascading provides an API that many other technologies are built on.

What happens inside a pipe

Inside a pipe, data flows in small containers called tuples. A tuple is like a fixed-size, ordered list of elements and is a base element in Cascading. Unlike an array or list, a tuple can hold objects with different types.

Tuples stream within pipes. Each specific stream is associated with a schema. The schema evolves over time; at one point in a pipe, a tuple of size one can receive an operation and transform into a tuple of size three. To illustrate this concept, we will use a JSON transformation job. Each line is originally stored in tuples of size one with the schema 'jsonLine. An operation transforms these tuples into new tuples of size three: 'time, 'user, and 'action. Finally, we extract the epoch, and then the pipe contains tuples of size four: 'epoch, 'time, 'user, and 'action.

Pipe assemblies

Transformation of tuple streams occurs by applying one of the five types of operations, also called pipe assemblies:

Each: To apply a function or a filter to each tuple
GroupBy: To create a group of tuples by defining which element to use, and to merge pipes that contain tuples with similar schemas
Every: To perform aggregations (count, sum) and buffer operations on every group of tuples
CoGroup: To apply SQL-type joins, for example, Inner, Outer, Left, or Right joins
SubAssembly: To chain multiple pipe assemblies into a pipe

To implement the pipe for the logfile example with the INFO, WARNING, and ERROR levels, three assemblies are required: an Each assembly generates a tuple with two elements (level/message), a GroupBy assembly is used on the level, and then an Every assembly is applied to perform the count aggregation. We also need a source tap to read from a file and a sink tap to store the results in another file. Implementing this in Cascading requires 20 lines of code; in Scala/Scalding, the boilerplate is reduced to just the following:

    TextLine(inputFile)
      .mapTo('line -> ('level, 'message)) { line: String => tokenize(line) }
      .groupBy('level) { _.size }
      .write(Tsv(outputFile))

Cascading is the framework that provides the notions and abstractions of tuple streams and pipe assemblies. Scalding is a domain-specific language (DSL) that specializes in the particular domain of pipeline execution and further minimizes the amount of code that needs to be typed.

Cascading extensions

Cascading offers multiple extensions that can be used as taps to either read data from or write data to, such as SQL, NoSQL, and several other distributed technologies that fit nicely with the MapReduce paradigm. A data processing application can, for example, use taps to collect data from a SQL database and some more from the Hadoop file system; then process the data, use a NoSQL database, and complete a machine learning stage. Finally, it can store some resulting data into another SQL database and update a memcache application.

Summary

This article explained the core technologies used in the distributed model of Hadoop.
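For comparison with the Hive, Pig, and Scalding snippets above, here is a condensed sketch of the three Java classes mentioned in the MapReduce abstractions section (a mapper, a reducer, and a job configuration) for the log-level counting example. It is trimmed for readability rather than copied from any particular source, and it assumes each input line starts with its log level, as in the sample shown earlier.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LogLevelCount {

        public static class LevelMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                // Emit <"INFO", 1>, <"WARNING", 1>, or <"ERROR", 1> for each log line.
                String level = line.toString().split(" ", 2)[0];
                context.write(new Text(level), ONE);
            }
        }

        public static class LevelReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text level, Iterable<IntWritable> counts, Context context)
                    throws IOException, InterruptedException {
                int total = 0;
                for (IntWritable count : counts) {
                    total += count.get();
                }
                context.write(level, new IntWritable(total));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "log-level-count");
            job.setJarByClass(LogLevelCount.class);
            job.setMapperClass(LevelMapper.class);
            job.setCombinerClass(LevelReducer.class);
            job.setReducerClass(LevelReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Most of these lines are type declarations and wiring, which is exactly the boilerplate that Hive, Pig, Cascading, and Scalding aim to remove.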
Enterprise Geodatabase

Packt
25 Jun 2014
5 min read
(For more resources related to this topic, see here.) Creating a connection to the enterprise geodatabase A geodatabase connection is a channel that is established between ArcGIS and the enterprise geodatabase. To create a connection, we need to specify the database server and the user credentials. Without this information, we will not be able to create a connection. To create a geodatabase connection using the SDE user, perform the following steps: Open ArcCatalog and expand the Database Connections dialog from the Catalog Tree window. Double-click on Add Database Connection. From the Database Platform drop-down list, select the database; ours is SQL Server. In the Instance field, type the name of the server; here, it is GDBServer. Select the Database authentication option from the Authentication Type drop-down list and type in the SDE credentials. Click on the Database drop-down list. This should be populated automatically as you leave the password field. Select your geodatabase. Click on OK and rename the connection to sde@gdbserver. This is illustrated in the following screenshot: The type of geodatabase connection depends on the roles assigned to the user. Connecting with the sde user will grant you full access to the geodatabase, where you can copy, delete, and change almost anything. Create four more database connections with the users Robb, Joffrey, Tyrion, and Dany. Give them proper names so we can use them later. Migrating a file geodatabase to an enterprise geodatabase We have our enterprise geodatabase. You might have created a few feature classes and tables. But eventually, our clients at Belize need to start working on the new geodatabase. So, we need to migrate the Bestaurants_new.gdb file to this enterprise geodatabase. This can be done with a simple copy and paste operation. Note that these steps work in the exact same way on any other DBMS once it is set up. You can copy and paste from a file geodatabase to any enterprise geodatabase using the following steps: Open ArcCatalog and browse to your Bestaurants_new.gdb geodatabase. Right-click on the Food_and_Drinks feature class and select Copy, as seen in the following screenshot: Now, browse and connect to sde@gdbserver; right-click on an empty area and click on Paste, as seen in the following screenshot: You will be prompted with a list of datasets that will be copied as shown in the following screenshot. Luckily, all the configurations will be copied. This includes domains, subtypes, feature classes, and related tables as follows: After the datasets and configurations have been copied, you will see all your data in the new geodatabase. Note that in an SQL Server enterprise geodatabase, there are two prefixes added to each dataset. First, the database is added, which is sdedb, followed by the schema, which is SDE, and finally the dataset name, as shown in the following screenshot: Assigning privileges Have you tried to connect as Robb or Tyrion to your new geodatabase? If you haven't, try it now. You will see that none of the users you created have access to the Food_and_Drinks feature class or any other dataset. You might have guessed why. That is because SDE has created this data, and only this user can allow other users to see this data. So, how do we allow users to see other users' datasets? This is simple just perform the following steps: From ArcCatalog, connect as sde@gdbserver. 
Right-click on the sdedb.SDE.Food_and_Drinks feature class, point the cursor to Manage, and then click on Privileges as shown in the following screenshot: In the Privileges... dialog, click on Add. Select all four users, Robb, Joffrey, Tyrion, and Dany, and click on OK. Make sure that the Select checkbox is checked for all four users, which means they can see and read this feature class. For Dany, assign Insert, Update, and Delete so that she can also edit this feature class, as shown in the following screenshot. Apply the same privileges to all other datasets as follows and click on OK. Try connecting with Robb; you will now be able to view all datasets. You can use Dany's account to edit your geodatabase using ArcMap. You can create more viewer users who have read-only access to your geodatabase but cannot edit or modify it in any way. Summary Enterprise geodatabases are an excellent choice when you have a multiuser environment. In this article, you learned how to create a geodatabase connection using ArcCatalog to the new enterprise geodatabase. You also learned to migrate your file geodatabase into a fresh enterprise geodatabase. Finally, you learned to assign different privileges to each user and access control to your new enterprise geodatabase. While setting up and configuring an enterprise geodatabase is challenging, working with the enterprise geodatabases in ArcCatalog and ArcMap is similar to working with file geodatabases. Thus, in this article, we took a leap by using an upgraded version of a geodatabase, which is called an enterprise geodatabase. Resources for Article: Further resources on this subject: Server Logs [Article] Google Earth, Google Maps and Your Photos: a Tutorial [Article] Including Google Maps in your Posts Using Apache Roller 4.0 [Article]
Sparrow iOS Game Framework - The Basics of Our Game

Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.) Taking care of cross-device compatibility When developing an iOS game, we need to know which device to target. Besides the obvious technical differences between all of the iOS devices, there are two factors we need to actively take care of: screen size and texture size limit. Let's take a closer look at how to deal with the texture size limit and screen sizes. Understanding the texture size limit Every graphics card has a limit for the maximum size texture it can display. If a texture is bigger than the texture size limit, it can't be loaded and will appear black on the screen. A texture size limit has power-of-two dimensions and is a square such as 1024 pixels in width and in height or 2048 x 2048 pixels. When loading a texture, they don't need to have power-of-two dimensions. In fact, the texture does not have to be a square. However, it is a best practice for a texture to have power-of-two dimensions. This limit holds for big images as well as a bunch of small images packed into a big image. The latter is commonly referred to as a sprite sheet. Take a look at the following sample sprite sheet to see how it's structured: How to deal with different screen sizes While the screen size is always measured in pixels, the iOS coordinate system is measured in points. The screen size of an iPhone 3GS is 320 x 480 pixels and also 320 x 480 points. On an iPhone 4, the screen size is 640 x 960 pixels, but is still 320 by 480 points. So, in this case, each point represents four pixels: two in width and two in height. A 100-point wide rectangle will be 200 pixels wide on an iPhone 4 and 100 pixels on an iPhone 3GS. It works similarly for the devices with large display screens, such as the iPhone 5. Instead of 480 points, it's 568 points. Scaling the viewport Let's explain the term viewport first: the viewport is the visible portion of the complete screen area. We need to be clear about which devices we want our game to run on. We take the biggest resolution that we want to support and scale it down to a smaller resolution. This is the easiest option, but it might not lead to the best results; touch areas and the user interface scale down as well. Apple recommends for touch areas to be at least a 40-point square; so, depending on the user interface, some elements might get scaled down so much that they get harder to touch. Take a look at the following screenshot, where we choose the iPad Retina resolution (2048 x 1536 pixels) as our biggest resolution and scale down all display objects on the screen for the iPad resolution (1024 x 768 pixels): Scaling is a popular option for non-iOS environments, especially for PC and Mac games that support resolutions from 1024 x 600 pixels to full HD. Sparrow and the iOS SDK provide some mechanisms that will facilitate handling Retina and non-Retina iPad devices without the need to scale the whole viewport. Black borders Some games in the past have been designed for a 4:3 resolution display but then made to run on a widescreen device that had more screen space. So, the option was to either scale a 4:3 resolution to widescreen, which will distort the whole screen, or put some black borders on either side of the screen to maintain the original scale factor. Showing black borders is something that is now considered as bad practice, especially when there are so many games out there which scale quite well across different screen sizes and platforms. 
Showing non-interactive screen space If our pirate game is a multiplayer, we may have a player on an iPad and another on an iPhone 5. So, the player with the iPad has a bigger screen and more screen space to maneuver their ship. The worst case will be if the player with the iPad is able to move their ship outside the visual range for the iPhone player to see, which will result in a serious advantage for the iPad player. Luckily for us, we don't require competitive multiplayer functionality. Still, we need to keep a consistent screen space for players to move their ship in for game balance purposes. We wouldn't want to tie the difficulty level to the device someone is playing on. Let's compare the previous screenshot to the black border example. Instead of the ugly black borders, we just show more of the background. In some cases, it's also possible to move some user interface elements to the areas which are not visible on other devices. However, we will need to consider whether we want to keep the same user experience across devices and whether moving these elements will result in a disadvantage for users who don't have this extra screen space on their devices. Rearranging screen elements Rearranging screen elements is probably the most time-intensive and sophisticated way of solving this issue. In this example, we have a big user interface at the top of the screen in the portrait mode. Now, if we were to leave it like this in the landscape mode, the top of the screen will be just the user interface, leaving very little room for the game itself. In this case, we have to be deliberate about what kind of elements we need to see on the screen and which elements are using up too much screen estate. Screen real estate (or screen estate) is the amount of space available on a display for an application or a game to provide output. We will then have to reposition them, cut them up in to smaller pieces, or both. The most prominent example of this technique is Candy Crush (a popular trending game) by King. While this concept applies particularly to device rotation, this does not mean that it can't be used for universal applications. Choosing the best option None of these options are mutually exclusive. For our purposes, we are going to show non-interactive screen space, and if things get complicated, we might also resort to rearranging screen elements depending on our needs. Differences between various devices Let's take a look at the differences in the screen size and the texture size limit between the different iOS devices: Device Screen size (in pixels) Texture size limit (in pixels) iPhone 3GS 480 x 360 2048 x 2048 iPhone 4 (including iPhone 4S) and iPod Touch 4th generation 960 x 640 2048 x 2048 iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch 5th generation 1136 x 640 2048 x 2048 iPad 2 1024 x 768 2048 x 2048 iPad (3rd and 4th generations) and iPad Air 2048 x 1536 4096 x 4096 iPad Mini 1024 x 768 4096 x 4096 Utilizing the iOS SDK Both the iOS SDK and Sparrow can aid us in creating a universal application. Universal application is the term for apps that target more than one device, especially for an app that targets the iPhone and iPad device family. The iOS SDK provides a handy mechanism for loading files for specific devices. Let's say we are developing an iPhone application and we have an image that's called my_amazing_image.png. If we load this image on our devices, it will get loaded—no questions asked. 
However, if it's not a universal application, we can only scale the application using the regular scale button on iPad and iPhone Retina devices. This button appears on the bottom-right of the screen. If we want to target iPad, we have two options: The first option is to load the image as is. The device will scale the image. Depending on the image quality, the scaled image may look bad. In this case, we also need to consider that the device's CPU will do all the scaling work, which might result in some slowdown depending on the app's complexity. The second option is to add an extra image for iPad devices. This one will use the ~ipad suffix, for example, my_amazing_image~ipad.png. When loading the required image, we will still use the filename my_amazing_image.png. The iOS SDK will automatically detect the different sizes of the image supplied and use the correct size for the device. Beginning with Xcode 5 and iOS 7, it is possible to use asset catalogs. Asset catalogs can contain a variety of images grouped into image sets. Image sets contain all the images for the targeted devices. These asset catalogs don't require files with suffixes any more. These can only be used for splash images and application icons. We can't use asset catalogs for textures we load with Sparrow though. The following table shows which suffix is needed for which device: Device Retina File suffix iPhone 3GS No None iPhone 4 (including iPhone 4S) and iPod Touch (4th generation) Yes @2x @2x~iphone iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch (5th generation) Yes -568h@2x iPad 2 No ~ipad iPad (3rd and 4th generations) and iPad Air Yes @2x~ipad iPad Mini No ~ipad How does this affect the graphics we wish to display? The non-Retina image will be 128 pixels in width and 128 pixels in height. The Retina image, the one with the @2x suffix, will be exactly double the size of the non-Retina image, that is, 256 pixels in width and 256 pixels in height. Retina and iPad support in Sparrow Sparrow supports all the filename suffixes shown in the previous table, and there is a special case for iPad devices, which we will take a closer look at now. When we take a look at AppDelegate.m in our game's source, note the following line: [_viewController startWithRoot:[Game class] supportHighResolutions:YES doubleOnPad:YES]; The first parameter, supportHighResolutions, tells the application to load Retina images (with the @2x suffix) if they are available. The doubleOnPad parameter is the interesting one. If this is set to true, Sparrow will use the @2x images for iPad devices. So, we don't need to create a separate set of images for iPad, but we can use the Retina iPhone images for the iPad application. In this case, the width and height are 512 and 384 points respectively. If we are targeting iPad Retina devices, Sparrow introduces the @4x suffix, which requires larger images and leaves the coordinate system at 512 x 384 points. App icons and splash images If we are talking about images of different sizes for the actual game content, app icons and splash images are also required to be in different sizes. Splash images (also referred to as launch images) are the images that show up while the application loads. The iOS naming scheme applies for these images as well, so for Retina iPhone devices such as iPhone 4, we will name an image as Default@2x.png, and for iPhone 5 devices, we will name an image as Default-568h@2x.png. 
For the correct size of app icons, take a look at the following table: Device Retina App icon size iPhone 3GS No 57 x 57 pixels iPhone 4 (including iPhone 4S) and iPod Touch 4th generation Yes 120 x 120 pixels iPhone 5 (including iPhone 5C and iPhone 5S) and iPod Touch 5th generation Yes 120 x 120 pixels iPad 2 No 76 x 76 pixels iPad (3rd and 4th generation) and iPad Air Yes 152 x 152 pixels iPad Mini No 76 x 76 pixels The bottom line The more devices we want to support, the more graphics we need, which directly increases the application file size, of course. Adding iPad support to our application is not a simple task, but Sparrow does some groundwork. One thing we should keep in mind though: if we are only targeting iOS 7.0 and higher, we don't need to include non-Retina iPhone images any more. Using @2x and @4x will be enough in this case, as support for non-Retina devices will soon end. Summary This article deals with setting up our game to work on iPhone, iPod Touch, and iPad in the same manner. Resources for Article: Further resources on this subject: Mobile Game Design [article] Bootstrap 3.0 is Mobile First [article] New iPad Features in iOS 6 [article]
Uploading multiple files

Packt
24 Jun 2014
8 min read
(For more resources related to this topic, see here.) Regarding the first task, the multiple selection can be activated using an HTML5 input file attribute (multiple) and the JSF 2.2 pass-through attribute feature. When this attribute is present and its value is set to multiple, the file chooser can select multiple files. So, this task requires some minimal adjustments: <html > ... <h:form id="uploadFormId" enctype="multipart/form-data"> <h:inputFile id="fileToUpload" required="true" f5:multiple="multiple" requiredMessage="No file selected ..." value="#{uploadBean.file}"/> <h:commandButton value="Upload" action="#{uploadBean.upload()}"/> </h:form> The second task is a little bit tricky, because when multiple files are selected, JSF will overwrite the previous Part instance with each file in the uploaded set. This is normal, since you use an object of type Part, but you need a collection of Part instances. Fixing this issue requires us to focus on the renderer of the file component. This renderer is named FileRenderer (an extension of TextRenderer), and the decode method implementation is the key for our issue (the bold code is very important for us), as shown in the following code: @Override public void decode(FacesContext context, UIComponent component) { rendererParamsNotNull(context, component); if (!shouldDecode(component)) { return; } String clientId = decodeBehaviors(context, component); if (clientId == null) { clientId = component.getClientId(context); } assert(clientId != null); ExternalContext externalContext = context.getExternalContext(); Map<String, String> requestMap = externalContext.getRequestParameterMap(); if (requestMap.containsKey(clientId)) { setSubmittedValue(component, requestMap.get(clientId)); } HttpServletRequest request = (HttpServletRequest) externalContext.getRequest(); try { Collection<Part> parts = request.getParts(); for (Part cur : parts) { if (clientId.equals(cur.getName())) { component.setTransient(true); setSubmittedValue(component, cur); } } } catch (IOException ioe) { throw new FacesException(ioe); } catch (ServletException se) { throw new FacesException(se); } } The highlighted code causes the override Part issue, but you can easily modify it to submit a list of Part instances instead of one Part, as follows: try { Collection<Part> parts = request.getParts(); List<Part> multiple = new ArrayList<>(); for (Part cur : parts) { if (clientId.equals(cur.getName())) { component.setTransient(true); multiple.add(cur); } } this.setSubmittedValue(component, multiple); } catch (IOException | ServletException ioe) { throw new FacesException(ioe); } Of course, in order to modify this code, you need to create a custom file renderer and configure it properly in faces-config.xml. Afterwards, you can define a list of Part instances in your bean using the following code: ... private List<Part> files; public List<Part> getFile() { return files; } public void setFile(List<Part> files) { this.files = files; } ... Each entry in the list is a file; therefore, you can write them on the disk by iterating the list using the following code: ... 
for (Part file : files) { try (InputStream inputStream = file.getInputStream(); FileOutputStream outputStream = new FileOutputStream("D:" + File.separator + "files"+ File.separator + getSubmittedFileName())) { int bytesRead = 0; final byte[] chunck = new byte[1024]; while ((bytesRead = inputStream.read(chunck)) != -1) { outputStream.write(chunck, 0, bytesRead); } FacesContext.getCurrentInstance().addMessage(null, new FacesMessage("Upload successfully ended: " + file.getSubmittedFileName())); } catch (IOException e) { FacesContext.getCurrentInstance().addMessage(null, new FacesMessage("Upload failed !")); } } ... Upload and the indeterminate progress bar When users upload small files, the process happens pretty fast; however, when large files are involved, it may take several seconds, or even minutes, to end. In this case, it is a good practice to implement a progress bar that indicates the upload status. The simplest progress bar is known as an indeterminate progress bar, because it shows that the process is running, but it doesn't provide information for estimating the time left or the amount of processed bytes. In order to implement a progress bar, you need to develop an AJAX-based upload. The JSF AJAX mechanism allows us to determine when the AJAX request begins and when it completes. This can be achieved on the client side; therefore, an indeterminate progress bar can be easily implemented using the following code: <script type="text/javascript"> function progressBar(data) { if (data.status === "begin") { document.getElementById("uploadMsgId").innerHTML=""; document.getElementById("progressBarId"). setAttribute("src", "./resources/progress_bar.gif"); } if (data.status === "complete") { document.getElementById("progressBarId").removeAttribute("src"); } } </script> ... <h:body> <h:messages id="uploadMsgId" globalOnly="true" showDetail="false" showSummary="true" style="color:red"/> <h:form id="uploadFormId" enctype="multipart/form-data"> <h:inputFile id="fileToUpload" required="true" requiredMessage="No file selected ..." value="#{uploadBean.file}"/> <h:message showDetail="false" showSummary="true" for="fileToUpload" style="color:red"/> <h:commandButton value="Upload" action="#{uploadBean.upload()}"> <f:ajax execute="fileToUpload" onevent="progressBar" render=":uploadMsgId @form"/> </h:commandButton> </h:form> <div> <img id="progressBarId" width="250px;" height="23"/> </div> </h:body> A possible output is as follows: Upload and the determinate progress bar A determinate progress bar is much more complicated. Usually, such a progress bar is based on a listener capable to monitor the transferred bytes (if you have worked with Apache Commons' FileUpload, you must have had the chance to implement such a listener). In JSF 2.2, FacesServlet was annotated with @MultipartConfig for dealing multipart data (upload files), but there is no progress listener interface for it. Moreover, FacesServlet is declared final; therefore, we cannot extend it. Well, the possible approaches are pretty limited by these aspects. In order to implement a server-side progress bar, we need to implement the upload component in a separate class (servlet) and provide a listener. Alternatively, on the client side, we need a custom POST request that tricks FacesServlet that the request is formatted by jsf.js. 
In this article, you will see a workaround based on HTML5 XMLHttpRequest Level 2 (can upload/download streams as Blob, File, and FormData), HTML5 progress events (for upload it returns total transferred bytes and uploaded bytes), HTML5 progress bar, and a custom Servlet 3.0. If you are not familiar with these HTML5 features, then you have to check out some dedicated documentation. After you get familiar with these HTML5 features, it will be very easy to understand the following client-side code. First we have the following JavaScript code: <script type="text/javascript"> function fileSelected() { hideProgressBar(); updateProgress(0); document.getElementById("uploadStatus").innerHTML = ""; var file = document.getElementById('fileToUploadForm: fileToUpload').files[0]; if (file) { var fileSize = 0; if (file.size > 1048576) fileSize = (Math.round(file.size * 100 / (1048576)) / 100).toString() + 'MB'; else fileSize = (Math.round(file.size * 100 / 1024) / 100).toString() + 'KB'; document.getElementById('fileName').innerHTML = 'Name: ' + file.name; document.getElementById('fileSize').innerHTML = 'Size: ' + fileSize; document.getElementById('fileType').innerHTML = 'Type: ' + file.type; } } function uploadFile() { showProgressBar(); var fd = new FormData(); fd.append("fileToUpload", document.getElementById('fileToUploadForm: fileToUpload').files[0]); var xhr = new XMLHttpRequest(); xhr.upload.addEventListener("progress", uploadProgress, false); xhr.addEventListener("load", uploadComplete, false); xhr.addEventListener("error", uploadFailed, false); xhr.addEventListener("abort", uploadCanceled, false); xhr.open("POST", "UploadServlet"); xhr.send(fd); } function uploadProgress(evt) { if (evt.lengthComputable) { var percentComplete = Math.round(evt.loaded * 100 / evt.total); updateProgress(percentComplete); } } function uploadComplete(evt) { document.getElementById("uploadStatus").innerHTML = "Upload successfully completed!"; } function uploadFailed(evt) { hideProgressBar(); document.getElementById("uploadStatus").innerHTML = "The upload cannot be complete!"; } function uploadCanceled(evt) { hideProgressBar(); document.getElementById("uploadStatus").innerHTML = "The upload was canceled!"; } var updateProgress = function(value) { var pBar = document.getElementById("progressBar"); document.getElementById("progressNumber").innerHTML=value+"%"; pBar.value = value; } function hideProgressBar() { document.getElementById("progressBar").style.visibility = "hidden"; document.getElementById("progressNumber").style.visibility = "hidden"; } function showProgressBar() { document.getElementById("progressBar").style.visibility = "visible"; document.getElementById("progressNumber").style.visibility = "visible"; } </script> Further, we have the upload component that uses the preceding JavaScript code: <h:body> <hr/> <div id="fileName"></div> <div id="fileSize"></div> <div id="fileType"></div> <hr/> <h:form id="fileToUploadForm" enctype="multipart/form-data"> <h:inputFile id="fileToUpload" onchange="fileSelected();"/> <h:commandButton type="button" onclick="uploadFile()" value="Upload" /> </h:form> <hr/> <div id="uploadStatus"></div> <table> <tr> <td> <progress id="progressBar" style="visibility: hidden;" value="0" max="100"></progress> </td> <td> <div id="progressNumber" style="visibility: hidden;">0 %</div> </td> </tr> </table> <hr/> </h:body> A possible output can be seen in the following screenshot: The servlet behind this solution is UploadServlet that was presented earlier. 
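The UploadServlet referenced above is not reproduced in this excerpt, so the following is only a minimal sketch of what such a Servlet 3.x endpoint might look like; the URL pattern, target directory, and response handling are assumptions. Note that Part.getSubmittedFileName() requires Servlet 3.1; on plain Servlet 3.0, the filename has to be parsed from the content-disposition header.

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import javax.servlet.ServletException;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

// Hypothetical upload endpoint matched by xhr.open("POST", "UploadServlet")
@WebServlet("/UploadServlet")
@MultipartConfig
public class UploadServlet extends HttpServlet {

    // Assumed target folder; matches the one used in the JSF examples above
    private static final String TARGET_DIR = "D:" + File.separator + "files";

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // "fileToUpload" must match the key appended to the FormData object
        Part filePart = request.getPart("fileToUpload");
        Path target = Paths.get(TARGET_DIR, filePart.getSubmittedFileName());

        try (InputStream in = filePart.getInputStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        response.setStatus(HttpServletResponse.SC_OK);
    }
}

Because the browser sends the file as an ordinary multipart/form-data POST, the progress events fired by XMLHttpRequest reflect the bytes actually transferred to this servlet.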
For multiple file uploads and progress bars, you can extend this example, or choose a built-in solution, such as PrimeFaces Upload, RichFaces Upload, or jQuery Upload Plugin. Summary In this article, we saw how to upload multiple files using JSF 2.2 and the concepts of indeterminate and determinate progress bars. Resources for Article: Further resources on this subject: The Business Layer (Java EE 7 First Look) [article] The architecture of JavaScriptMVC [article] Differences in style between Java and Scala code [article]
Serving and processing forms

Packt
24 Jun 2014
13 min read
(For more resources related to this topic, see here.) Spring supports different view technologies, but if we are using JSP-based views, we can make use of the Spring tag library tags to make up our JSP pages. These tags provide many useful, common functionalities such as form binding, evaluating errors outputting internationalized messages, and so on. In order to use these tags, we must add references to this tag library in our JSP pages as follows: <%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %> <%@taglib prefix="spring" uri="http://www.springframework.org/tags" %> The data transfer took place from model to view via the controller. The following line is a typical example of how we put data into the model from a controller: model.addAttribute(greeting,"Welcome") Similarly the next line shows how we retrieve that data in the view using the JSTL expression: <p> ${greeting} </p> JavaServer Pages Standard Tag Library (JSTL) is also a tag library provided by Oracle. And it is a collection of useful JSP tags that encapsulates the core functionality common to many JSP pages. We can add a reference to the JSTL tag library in our JSP pages as <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>. However, what if we want to put data into the model from the view? How do we retrieve that data from the controller? For example, consider a scenario where an admin of our store wants to add new product information in our store by filling and submitting an HTML form. How can we collect the values filled in the HTML form elements and process it in the controller? This is where the Spring tag library tags help us to bind the HTML tag element's values to a form-backing bean in the model. Later, the controller can retrieve the form-backing bean from the model using the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Form-backing beans (sometimes called form beans) are used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields on the form and the properties on our domain object. Another approach is to create separate classes for form beans, which are sometimes called Data Transfer Objects (DTOs). Time for action – serving and processing forms The Spring tag library provides some special <form> and <input> tags that are more or less similar to HTML form and input tags, but it has some special attributes to bind the form elements data with the form-backing bean. 
Let's create a Spring web form in our application to add new products to our product list by performing the following steps: We open our ProductRepository interface and add one more method declaration in it as follows: void addProduct(Product product); We then add an implementation for this method in the InMemoryProductRepository class as follows: public void addProduct(Product product) { listOfProducts.add(product); } We open our ProductService interface and add one more method declaration in it as follows: void addProduct(Product product); And, we add an implementation for this method in the ProductServiceImpl class as follows: public void addProduct(Product product) { productRepository.addProduct(product); } We open our ProductController class and add two more request mapping methods as follows: @RequestMapping(value = "/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { productService.addProduct(newProduct); return "redirect:/products"; } Finally, we add one more JSP view file called addProduct.jsp under src/main/webapp/WEB-INF/views/ and add the following tag reference declaration in it as the very first line: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> Now, we add the following code snippet under the tag declaration line and save addProduct.jsp (note that I have skipped the <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage that you add binding tags for the skipped fields when you try out this exercise): <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <link rel="stylesheet"href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css"> <title>Products</title> </head> <body> <section> <div class="jumbotron"> <div class="container"> <h1>Products</h1> <p>Add products</p> </div> </div> </section> <section class="container"> <form:form modelAttribute="newProduct" class="form-horizontal"> <fieldset> <legend>Add new product</legend> <div class="form-group"> <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label> <div class="col-lg-10"> <form:input id="productId" path="productId" type="text" class="form:input-large"/> </div> </div> <!-- Similarly bind <form:input> tag for name,unitPrice,manufacturer,category,unitsInStock and unitsInOrder fields--> <div class="form-group"> <label class="control-label col-lg-2" for="description">Description</label> <div class="col-lg-10"> form:textarea id="description" path="description" rows = "2"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="discontinued">Discontinued</label> <div class="col-lg-10"> <form:checkbox id="discontinued" path="discontinued"/> </div> </div> <div class="form-group"> <label class="control-label col-lg-2" for="condition">Condition</label> <div class="col-lg-10"> <form:radiobutton path="condition" value="New" />New <form:radiobutton path="condition" value="Old" />Old <form:radiobutton path="condition" value="Refurbished" />Refurbished </div> </div> <div class="form-group"> <div class="col-lg-offset-2 col-lg-10"> <input type="submit" id="btnAdd" class="btn btn-primary" value ="Add"/> </div> </div> 
</fieldset> </form:form> </section> </body> </html> Now, we run our application and enter the URL http://localhost:8080/webstore/products/add. We will be able to see a web page that displays a web form where we can add the product information as shown in the following screenshot: Add the product's web form Now, we enter all the information related to the new product that we want to add and click on the Add button; we will see the new product added in the product listing page under the URL http://localhost:8080/webstore/products. What just happened? In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. I will give you a brief note on what we have done in steps 1 to 4. In step 1, we created a method declaration addProduct in our ProductRepository interface to add new products. In step 2, we implemented the addProduct method in our InMemoryProductRepository class; the implementation is just to update the existing listOfProducts by adding a new product to the list. Steps 3 and 4 are just a service layer extension for ProductRepository. In step 3, we declared a similar method, addProduct, in our ProductService interface and implemented it in step 4 to add products to the repository via the productRepository reference. Okay, coming back to the important step; we have done nothing but added two request mapping methods, namely, getAddNewProductForm and processAddNewProductForm, in step 5 as follows: @RequestMapping(value = "/add", method = RequestMethod.GET) public String getAddNewProductForm(Model model) { Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); return "addProduct"; } @RequestMapping(value = "/add", method = RequestMethod.POST) public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) { productService.addProduct(productToBeAdded); return "redirect:/products"; } If you observe these methods carefully, you will notice a peculiar thing, which is that both the methods have the same URL mapping value in their @RequestMapping annotation (value = "/add"). So, if we enter the URL http://localhost:8080/webstore/products/add in the browser, which method will Spring MVC map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). If you will notice again, even though both methods have the same URL mapping, they differ in request method. So, what is happening behind the screen is that when we enter the URL http://localhost:8080/webstore/products/add in the browser, it is considered as a GET request. So, Spring MVC maps this request to the getAddNewProductForm method, and within this method, we simply attach a new empty Product domain object to the model under the attribute name, newProduct. Product newProduct = new Product(); model.addAttribute("newProduct", newProduct); So in the view addproduct.jsp, we can access this model object, newProduct. Before jumping into the processAddNewProductForm method, let's review the addproduct.jsp view file for some time so that we are able to understand the form processing flow without confusion. In addproduct.jsp, we have just added a <form:form> tag from the Spring tag library using the following line of code: <form:form modelAttribute="newProduct" class="form-horizontal"> Since this special <form:form> tag is acquired from the Spring tag library, we need to add a reference to this tag library in our JSP file. 
That's why we have added the following line at the top of the addProducts.jsp file in step 6: <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %> In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of modelAttribute in the <form:form> tag. If you recall correctly, you will notice that this value of modelAttribute and the attribute name we used to store the newProduct object in the model from our getAddNewProductForm method are the same. So, the newProduct object that we attached to the model in the controller method (getAddNewProductForm) is now bound to the form. This object is called the form-backing bean in Spring MVC. Okay, now notice each <form:input> tag inside the <form:form> tag shown in the following code. You will observe that there is a common attribute in every tag. This attribute name is path: <form:input id="productId" path="productId" type="text" class="form:input-large"/> The path attribute just indicates the field name that is relative to the form-backing bean. So, the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now is the time to come back and review our processAddNewProductForm method. When will this method be invoked? This method will be invoked once we press the submit button of our form. Yes, since every form submission is considered as a POST request, this time the browser will send a POST request to the same URL, that is, http://localhost:8080/webstore/products/add. So, this time, the processAddNewProductForm method will get invoked since it is a POST request. Inside the processAddNewProductForm method, we simply call the service method addProduct to add the new product to the repository, as follows: productService.addProduct(productToBeAdded); However, the interesting question here is, how is the productToBeAdded object populated with the data that we entered in the form? The answer lies within the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Note the method signature of the processAddNewProductForm method shown in the following line of code: public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) Here, if you notice the value attribute of the @ModelAttribute annotation, you will observe a pattern. The values of the @ModelAttribute annotation and modelAttribute from the <form:form> tag are the same. So, Spring MVC knows that it should assign the form-bound newProduct object to the productToBeAdded parameter of the processAddNewProductForm method. The @ModelAttribute annotation is not only used to retrieve an object from a model, but if we want to, we can even use it to add objects to the model. For instance, we rewrite our getAddNewProductForm method to something like the following code with the use of the @ModelAttribute annotation: @RequestMapping(value = "/add", method = RequestMethod.GET) public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) { return "addProduct"; } You can notice that we haven't created any new empty Product domain object and attached it to the model. All we have done was added a parameter of the type Product and annotated it with the @ModelAttribute annotation so that Spring MVC would know that it should create an object of Product and attach it to the model under the name newProduct. 
One more thing that needs to be observed in the processAddNewProductForm method is the logical view name, redirect:/products, that it returns. So, what are we trying to tell Spring MVC by returning a string redirect:/products? To get the answer, observe the logical view name string carefully. If we split this string with the : (colon) symbol, we will get two parts; the first part is the prefix redirect and the second part is something that looks like a request path, /products. So, instead of returning a view name, we simply instruct Spring to issue a redirect request to the request path, /products, which is the request path for the list method of our ProductController class. So, after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring uses a special view object, RedirectView (org.springframework.web.servlet.view.RedirectView), to issue a redirect command behind the screen. Instead of landing in a web page after the successful submission of a web form, we are spawning a new request to the request path /products with the help of RedirectView. This pattern is called Redirect After Post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form; sometimes, if we press the browser's refresh button or back button after submitting the form, there are chances that the same form will be resubmitted. Summary This article introduced you to Spring and Spring form tag libraries in web form handling. You also learned how to bind domain objects with views and how to use message bundles to externalize label caption texts. Resources for Article: Further resources on this subject: Spring MVC - Configuring and Deploying the Application [article] Getting Started With Spring MVC - Developing the MVC components [article] So, what is Spring for Android? [article]
Introducing variables

Packt
24 Jun 2014
6 min read
(For more resources related to this topic, see here.) In order to store data, you have to store data in the right kind of variables. We can think of variables as boxes, and what you put in these boxes depends on what type of box it is. In most native programming languages, you have to declare a variable and its type. Number variables Let's go over some of the major types of variables. The first type is number variables. These variables store numbers and not letters. That means, if you tried to put a name in, let's say "John Bura", then the app simply won't work. Integer variables There are numerous different types of number variables. Integer variables, called Int variables, can be positive or negative whole numbers—you cannot have a decimal at all. So, you could put -1 as an integer variable but not 1.2. Real variables Real variables can be positive or negative, and they can be decimal numbers. A real variable can be 1.0, -40.4, or 100.1, for instance. There are other kinds of number variables as well. They are used in more specific situations. For the most part, integer and real variables are the ones you need to know—make sure you don't get them mixed up. If you were to run an app with this kind of mismatch, chances are it won't work. String variables There is another kind of variable that is really important. This type of variable is called a string variable. String variables are variables that comprise letters or words. This means that if you want to record a character's name, then you will have to use a string variable. In most programming languages, string variables have to be in quotes, for example, "John Bura". The quote marks tell the computer that the characters within are actually strings that the computer can use. When you put a number 1 into a string, is it a real number 1 or is it just a fake number? It's a fake number because strings are not numbers—they are strings. Even though the string shows the number 1, it isn't actually the number 1. Strings are meant to display characters, and numbers are meant to do math. Strings are not meant to do math—they just hold characters. If you tried to do math with a string, it wouldn't work (except in JavaScript, which we will talk about shortly). Strings shouldn't be used for calculations—they are meant to hold and display characters. If we have a string "1", it will be recorded as a character rather than an integer that can be used for calculations. Boolean variables The last main type of variable that we need to talk about is Boolean variables. Boolean variables are either true or false, and they are very important when it comes to games. They are used where there can only be two options. The following are some examples of Boolean variables: isAlive isShooting isInAppPurchaseCompleted isConnectedToInternet Most of these variables start off with the word is. This is usually done to signify that the variable that we are using is a Boolean. When you make games, you tend to use a lot of Boolean variables because there are so many states that game objects can be in. Often, these states have only two options, and the best thing to do is use a Boolean. Sometimes, you need to use an integer instead of a Boolean. Usually, 0 equals false and 1 equals true. Other variables When it comes to game production, there are a lot of specific variables that differ from environment to environment. Sometimes, there are GameObject variables, and there can also be a whole bunch of more specific variables. 
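Construct 2 hides explicit declarations from us, but to make the distinction between these types concrete, here is a small sketch of how the same kinds of variables might be declared in a typed language such as Java; the variable names are just examples.

public class VariableTypes {

    public static void main(String[] args) {
        int score = 0;              // integer: whole numbers only, no decimals
        double timescale = 1.2;     // real: positive or negative decimal numbers
        boolean isDead = false;     // Boolean: only true or false
        boolean isShooting = false;
        String name = "John Bura";  // string: characters, always in quotes

        // Strings are not numbers: "1" + 1 concatenates to "11" instead of adding
        System.out.println(name + " scored " + score + " at timescale " + timescale);
        System.out.println("Dead: " + isDead + ", shooting: " + isShooting);
    }
}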
Declaring variables If you want to store any kind of data in variables, you have to declare them first. In the backend of Construct 2, there are a lot of variables that are already declared for you. This means that Construct 2 takes out the work of declaring variables. The variables that are taken care of for you include the following: Keyboard Mouse position Mouse angle Type of web browser Writing variables in code When we use Construct 2, a lot of the backend busywork has already been done for us. So, how do we declare variables in code? Usually, variables are declared at the top of the coding document, as shown in the following code: Int score; Real timescale = 1.2; Bool isDead; Bool isShooting = false; String name = "John Bura"; Let's take a look at all of them. The type of variable is listed first. In this case, we have the Int, Real, Bool (Boolean), and String variables. Next, we have the name of the variable. If you look carefully, you can see that certain variables have an = (equals sign) and some do not. When we have a variable with an equals sign, we initialize it. This means that we set the information in the variable right away. Sometimes, you need to do this and at other times, you do not. For example, a score does not need to be initialized because we are going to change the score as the game progresses. As you already know, you can initialize a Boolean variable to either true or false—these are the only two states a Boolean variable can be in. You will also notice that there are quotes around the string variable. Let's take a look at some examples that won't work: Int score = -1.2; Bool isDead = "false"; String name = John Bura; There is something wrong with all these examples. First of all, the Int variable cannot be a decimal. Second, the Bool variable has quotes around it. Lastly, the String variable has no quotes. In most environments, this will cause the program to not work. However, in HTML5 or JavaScript, the variable is changed to fit the situation. Summary In this article, we learned about the different types of variables and even looked at a few correct and incorrect variable declarations. If you are making a game, get used to making and setting lots of variables. The best part is that Construct 2 makes handling variables really easy. Resources for Article: Further resources on this subject: 2D game development with Monkey [article] Microsoft XNA 4.0 Game Development: Receiving Player Input [article] Flash Game Development: Making of Astro-PANIC! [article]
Sprites

Packt
24 Jun 2014
6 min read
The goal of this article is to learn how to work with sprites and get to know their main properties. After reading this article, you will be able to add sprites to your games. In this article, we will cover the following topics: Setting up the initial project Sprites and their main properties Adding sprites to the scene Adding sprites as a child node of another sprite Manipulating sprites (moving, flipping, and so on) Performance considerations when working with many sprites Creating spritesheets and using the sprite batch node to optimize performance Using basic animation Creating the game project We could create many separate mini projects, each demonstrating a single Cocos2D aspect, but this way we won't learn how to make a complete game. Instead, we're going to create a game that will demonstrate every aspect of Cocos2D that we learn. The game we're going to make will be about hunting. Not that I'm a fan of hunting, but taking into account the material we need to cover and practically use in the game's code, a hunting game looks like the perfect candidate. The following is a screenshot from the game we're going to develop. It will have several levels demonstrating several different aspects of Cocos2D in action: Time for action – creating the Cocohunt Xcode project Let's start creating this game by creating a new Xcode project using the Cocos2D template, just as we did with HelloWorld project, using the following steps: Start Xcode and navigate to File | New | Project… to open the project creation dialog. Navigate to the iOS | cocos2d v3.x category on the left of the screen and select the cocos2d iOS template on the right. Click on the Next button. In the next dialog, fill out the form as follows: Product Name: Cocohunt Organization Name: Packt Publishing Company Identifier: com.packtpub Device Family: iPhone Click on the Next button and pick a folder where you'd like to save this project. Then, click on the Create button. Build and run the project to make sure that everything works. After running the project, you should see the already familiar Hello World screen, so we won't show it here. Make sure that you select the correct simulator version to use. This project will support iPhone, iPhone Retina (3.5-inch), iPhone Retina (4-inch), and iPhone Retina (4-inch, 64-bit) simulators, or an actual iPhone 3GS or newer device running iOS 5.0 or higher. What just happened? Now, we have a project that we'll be working on. The project creation part should be very similar to the process of creating the HelloWorld project, so let's keep the tempo and move on. Time for action – creating GameScene As we're going to work on this project for some time, let's keep everything clean and tidy by performing the following steps: First of all, let's remove the following files as we won't need them: HelloWorldScene.h HelloWorldScene.m IntroScene.h IntroScene.m We'll use groups to separate our classes. This will allow us to keep things organized. To create a group in Xcode, you should right-click on the root project folder in Xcode, Cocohunt in our case, and select the New Group menu option (command + alt + N). Refer to the following sceenshot: Go ahead and create a new group and name it Scenes. After the group is created, let's place our first scene in it. We're going to create a new Objective-C class called GameScene and make it a subclass of CCScene. Right-click on the Scenes group that we've just created and select the New File option. 
Right-clicking on the group and selecting New File instead of using File | New | File will place our new file in the selected group after creation. Select Cocoa Touch category on the left of the screen and the Objective-C class on the right. Then click on the Next button. In the next dialog, name the the class as GameScene and make it a subclass of the CCScene class. Then click on the Next button. Make sure that you're in the Cocohunt project folder to save the file and click on the Create button. You can create the Scenes folder while in the save dialog using New Folder button and save the GameScene class there. This way, the hierarchy of groups in Xcode will match the physical folders hierarchy on the disk. This is the way I'm going to do this so that you can easily find any file in the book's supporting file's projects. However, the groups and files organization within groups will be identical, so you can always just open the Cocohunt.xcodeproj project and review the code in Xcode. This should create the GameScene.h and GameScene.m files in the Scenes group, as you can see in the following screenshot: Now, switch to the AppDelegate.m file and remove the following header imports at the top: #import "IntroScene.h" #import "HelloWorldScene.h" It is important to remove these #import directives or we will get errors as we removed the files they are referencing. Import the GameScene.h header as follows: #import "GameScene.h" Then find the startScene: method and replace it with following: -(CCScene *)startScene { return [[GameScene alloc] init]; } Build and run the game. After the splash screen, you should see the already familiar black screen as follows: What just happened? We've just created another project using the Cocos2D template. Most of the steps should be familiar as we have already done them in the past. After creating the project, we removed the unneeded files generated by the Cocos2D template, just as you will do most of the time when creating a new project, since most of the time you don't need those example scenes in your project. We're going to work on this project for some time and it is best to start organizing things well right away. This is why we've created a new group to contain our game scene files. We'll add more groups to the project later. As a final step, we've created our GameScene scene and displayed it on the screen at the start of the game. This is very similar to what we did in our HelloWorld project, so you shouldn't have any difficulties with it.
End User Transactions

Packt
23 Jun 2014
13 min read
(For more resources related to this topic, see here.) End user transaction code or simply T-code is a functionality provided by SAP that calls a new screen to carry out day-to-day operational activities. A transaction code is a four-character command entered in SAP by the end user to perform routine tasks. It can also be a combination of characters and numbers, for example, FS01. Each module has a different T-code that is uniquely named. For instance, the FICO module's T-code is FI01, while the Project Systems module's T-code will be CJ20. The T-code, as we will call it throughout the article, is a technical name that is entered in the command field to initiate a new GUI window. In this article, we will cover all the important T-codes that end users or administrators use on a daily basis. Further, you will also learn more about the standard reports that SAP has delivered to ease daily activities. Daily transactional codes On a daily basis, an end user needs to access the T-code to perform daily transactions. All the T-code is entered in a command field. A command field is a space designed by SAP for entering T-codes. There are multiple ways to enter a T-code; we will gradually learn about the different approaches. The first approach is to enter the T-code in the command field, as shown in the following screenshot: Second, the T-codes can be accessed via SAP Easy Access. By double-clicking on a node, the associated application is called and the start of application message is populated at the bottom of the screen. SAP Easy Access is the first screen you see when you log on. The following screenshot shows the SAP Easy Access window: We don't have to remember any T-codes. SAP has given a functionality to store the T-codes by adding it under Favorites. To add a T-code to Favorites, navigate to Favorites | Insert transaction, as shown in the following screenshot, or simply press Ctrl + Shift + F4 and then enter the T-code that we wish to add as favorite: There are different ways to call a technical screen using a T-code. They are shown in the following table: Command+T-code Description /n+T-code, for example, /nPA20 If we wish to call the technical screen in the same session, we may use the /n+T-code function. /o+T-code, for example, /oFS01 If we wish to call the screen in a different session, we may use the /n+T-code function. Frequently used T-codes Let's look closely at the important or frequently used T-codes for administration or transactional purposes. The Recruitment submodule The following are the essential T-codes in the Recruitment submodule: T-code Description PB10 This T-code is used for initial data entry. It performs actions similar to the PB40T-code. The mandatory fields ought to be filled by the user to proceed to the next infotype. PB20 This T-code is used for display purposes only. PB30 This T-code is used to make changes to an applicant's data, for example, changing a wrongly entered date of birth or incorrect address. PBA1 This T-code provides the functionality to bulk process an applicants' data. Multiple applicants can be processed at the same time unlike the PB30 T-code, which processes every applicant's data individually. Applicants' IDs along with their names are fetched using this T-code for easy processing. PBA2 This T-code is useful when listing applicants based on their advertising medium for bulk processing. It helps to filter applicants based on a particular advertising channel such as a portal. 
PBAW: This T-code is used to maintain the advertisements used by the client to process applicants' data.
PBAY: All the vacant positions can be listed using this T-code. If positions are not flagged as vacant in the Organizational Management (OM) submodule, they can be maintained via this T-code.
PBAA: A recruitment medium, such as a job portal site, that is linked with an advertisement medium is evaluated using this T-code.
PBA7: This is an important T-code used to transfer an applicant to employee. The applicant gets converted to an employee using this T-code; this is where the integration between the Recruitment and Personnel Administration submodules comes into the picture.
PBA8: To confirm whether an applicant has been transferred to employee, PBA8 needs to be executed. The system displays a message stating that processing has been carried out successfully for the applicants.

After the PBA8 T-code is executed, we will see a message similar to the one shown in the following screenshot:

The Organization Management submodule

We will cover some of the important T-codes used to design and develop the organization structure:

PPOCE: This T-code is used to create an organizational structure. It is a graphically supported interface with icons to easily differentiate between object types such as org unit and position.
PPOC_OLD: SAP provides multiple interfaces to create a structure. This T-code is one such interface that is pretty simple and easy to use.
PP01: This is also referred to as Expert Mode, because one needs in-depth knowledge of the object types (remembered by the mnemonic SPOCK, where, for example, S represents position and O represents organizational unit) and of the A/B relationships (where A is the bottom-up approach and B is the top-down approach) to work in this interface.
PO10: This T-code is used to build structures using object types individually, based on SPOCK. It is used to create an org unit; this T-code creates the object type O, organizational unit.
PO13: This is used to create the position object type.
PO03: This T-code is used to create the job object type.
PP03: This is an action-based T-code that helps infotypes get populated one after another. All of the infotypes, such as 1000-Object, 1001-Relationships, and 1002-Description, can be created using this interface.
PO14: Tasks, which are the day-to-day activities performed by the personnel, can be maintained using this T-code.

The Personnel Administration submodule

The Personnel Administration submodule deals with everything related to the master data of employees. Some of the frequently used T-codes are listed as follows:

PA20: The master data of an employee is displayed using this T-code.
PA30: The master data is maintained via this T-code. Employee details such as address and date of birth can be edited using this T-code.
PA40: Personnel actions are performed using this T-code. Personnel actions such as hiring and promotion, known as action types, are executed for employees.
PA42: This T-code, known as the fast entry for actions solution, helps a company maintain a large amount of data. The information captured using this solution is highly accurate.
PA70: This T-code, known as the fast entry functionality, allows the maintenance of master data for multiple employees at the same time. For example, the Recurring Payments and Deductions (0014) infotype can be maintained for multiple employees. The usage of the PA70 T-code is shown in the following screenshot.
Multiple employees can be entered, and the corresponding wage type, amount, currency, and so on can be provided for these employees. Using this functionality saves the administrator's time.

The Time Management submodule

The Time Management submodule is used to capture the time an employee has spent at the workplace or make a note of their absenteeism. The important T-codes that maintain time data are covered as follows:

PT01: The work schedule of the employee is created using this T-code. The work schedule is simply the duration of work, say, for instance, 9 a.m. to 6 p.m.
PTMW: The Time Manager's Workplace allows us to have multiple views, such as a one-day view and a multiday view. It is used to administer and manage time.
PP61: This T-code is used to change a shift plan for the employee.
PA61: This T-code, known as maintain time data, is used to maintain time data for employees. Only time-related infotypes, such as Absences, Attendances, and Overtime, are maintained via this T-code.
PA71: This T-code, known as fast entry of time data, is used to capture multiple employees' time-related data.
PT50: This T-code, known as quota overview, is used to display the quota entitlements and leave balances of an employee.
PT62: The attendance check T-code is used to create a list of employees who are absent, along with their reasons and the attendance time.
PT60: This T-code is used for time evaluation. It runs a program that evaluates the time data of employees; wage types are also processed using this program.
PT_ERL00: Time evaluation messages are displayed using this T-code.
PT_CLSTB2: Time evaluation results can be accessed via this T-code.
CAC1: Using this T-code, a data entry profile is created. Data entry profiles are maintained for employees to capture their daily working hours, absences, and so on.
CATA: This T-code is used to transfer data to target components such as PS, HR, and CO.

The Payroll Accounting submodule

The gross and net calculations of wages are performed using this submodule. We will cover all the important T-codes that are used on a daily basis:

PU03: This T-code can be used to change the payroll status of an employee if necessary. It lets us change master data that already exists, for example, locking a personnel number. One must exercise caution when working with this T-code; it is sensitive because it is related to an employee's pay. Time data for the employee is also controlled using this T-code.
PA03: The control record is accessed via this T-code. The control record holds the key characteristics of how a payroll is processed. This T-code is normally not authorized for administrators.
PC00_MXX_SIMU: This is the T-code used for the simulation run of a payroll. The test flag is automatically set when this T-code is executed.
PC00_MXX_CALC: A live payroll run can be performed using this T-code. The test flag is still available to be used if required.
PC00_MXX_PA03_RELEA: This T-code is normally used by end users to release the control record. Master data and time data are locked when this T-code is executed, so changes cannot be made while the control record is released.
PC00_MXX_PA03_CORR: This T-code is used to make any changes to the master data or time data. The status has to be set back to "release" to run the payroll for the payroll period.
PC00_MXX_PA03_END: Once all the activities are performed for the payroll period, the control record must be exited in order to proceed to the subsequent periods.
PC00_MXX_CEDT: The remuneration statement, or payslip, can be displayed using this T-code.
PE51: The payslip is designed using this T-code. The payments, deductions, and net pay sections can be designed using this T-code.
PC00_MXX_CDTA: The data medium exchange for banks can be generated using this T-code.
PUOC_99: The off-cycle payroll, or on-demand payroll as it is called in SAP, is used to make payments or deductions in a nonregular pay period, such as in the middle of the payroll period.
PC00_M99_CIPE: The payroll results are posted to finance using this T-code.
PCP0: The payroll posting runs are displayed using this T-code. The release of posting documents is controlled using this T-code.
PC00_M99_CIPC: The completeness check is performed using this T-code. We can find the payroll results that have not been posted using this T-code.
OH11/PU30: The wage type maintenance tool is useful when creating wage types or pay components such as housing and dearness allowance.
PE01: The schema, which is the warehouse of the payroll logic, is accessed and/or maintained via this T-code.
PE02: The Personnel Calculation Rule (PCR) is accessed via this T-code. PCRs are used to perform small calculations.
PE04: The functions and operations used in schemas and PCRs can be accessed via this T-code. The documentation for most of these functions and operations can also be accessed here.
PC00_M99_DLGA20: This shows the wage types used and their processing class and cumulation class assignments. The wage types used in a payroll are analyzed using this T-code.
PC00_M99_DKON: The wage types mapped to general ledger accounts for FICO integration can be analyzed using this T-code.
PCXX: Country-specific payroll can be accessed via this T-code.
PC00: Payroll for all countries, such as those in Europe, the Americas, and so on, can be accessed via this T-code.
PC_PAYRESULT: The payroll results of an employee can be analyzed via this T-code. The following screenshot shows how the payroll results are displayed when the T-code is executed.

The "XX" part in PCXX denotes the country grouping. For example, it is 10 for the USA, 01 for Germany, and so on. SAP delivers localized, country-specific payroll solutions, and hence, each country has a specific number. The country-specific settings are enabled using MOLGA, the technical name for the country grouping, which needs to be activated; it is the foundation of the SAP HCM solution. The grouping 99 is always used for the off-cycle T-codes, regardless of country; the same applies to posting.

The following screenshot shows the output of the PC_PAYRESULT T-code:

The Talent Management submodule

The Talent Management submodule deals with assessing the performance of employees, for example, through feedback from supervisors, peers, and so on. We will explore the T-codes used in this submodule. They are described as follows:

PHAP_CATALOG: This is used to create an appraisal template that can be filled in by the respective persons, based on Key Result Areas (KRAs) such as attendance, certification, and performance.
PPEM: Career and succession planning for an entire org unit can be performed via this T-code.
PPCP: Career planning for a person can be performed via this T-code. The qualifications and preferences can be checked, based on which suitable persons can be shortlisted.
PPSP: Succession planning can be performed via this T-code. The successor for a particular position can be determined using this T-code. Different object types, such as position and job, can be used to plan the successor.
OOB1: The appraisal form is accessed via this T-code. The possible combinations of appraiser and appraisee are determined based on the evaluation path.
APPSEARCH: This T-code is used to evaluate appraisal templates based on different statuses such as "in preparation" and "completed".
PHAP_CATALOG_PA: This is used to create an appraisal template that can be filled in by the respective persons based on KRAs such as attendance, certification, and performance. The allowed appraisers and appraisees can be defined.
OOHAP_SETTINGS_PA: The integration check-related switches can be accessed via this T-code.
APPCREATE: Once the created appraisal template is released, the template can be found in this T-code.
Packt
23 Jun 2014
10 min read
Save for later

Kendo UI DataViz – Advance Charting

Packt
23 Jun 2014
10 min read
(For more resources related to this topic, see here.)

Creating a chart to show stock history

The Kendo UI library provides a specialized chart widget that can be used to display the stock price data for a particular stock over a period of time. In this recipe, we will take a look at creating a Stock chart and customizing it.

Getting started

Include the CSS files, kendo.dataviz.min.css and kendo.dataviz.default.min.css, in the head section. These files are used in styling some of the parts of a stock history chart.

How to do it…

A Stock chart is made up of two charts: a pane that shows you the stock history and another pane that is used to navigate through the chart by changing the date range. The stock price for a particular stock on a day can be denoted by the following five attributes:

Open: This shows you the value of the stock when the trading starts for the day
Close: This shows you the value of the stock when the trading closes for the day
High: This shows you the highest value the stock was able to attain on the day
Low: This shows you the lowest value the stock reached on the day
Volume: This shows you the total number of shares of that stock traded on the day

Let's assume that a service returns this data in the following format:

[
  {
    "Date": "2013/01/01",
    "Open": 40.11,
    "Close": 42.34,
    "High": 42.5,
    "Low": 39.5,
    "Volume": 10000
  }
  . . .
]

We will use the preceding data to create a Stock chart. The kendoStockChart function is used to create a Stock chart, and it is configured with a set of options similar to the area chart or Column chart. In addition to the series data, you can specify the navigator option to show a navigation pane below the chart that contains the entire stock history:

$("#chart").kendoStockChart({
  title: {
    text: 'Stock history'
  },
  dataSource: {
    transport: {
      read: '/services/stock?q=ADBE'
    }
  },
  dateField: "Date",
  series: [{
    type: "candlestick",
    openField: "Open",
    closeField: "Close",
    highField: "High",
    lowField: "Low"
  }],
  navigator: {
    series: {
      type: 'area',
      field: 'Volume'
    }
  }
});

In the preceding code snippet, the DataSource object refers to the remote service that would return the stock data for a set of days. The series option specifies the series type as candlestick; a candlestick chart is used here to indicate the stock price for a particular day. The mappings for openField, closeField, highField, and lowField are specified; they will be used in plotting the chart and also to show a tooltip when the user hovers over it. The navigator option is specified to create an area chart, which uses volume data to plot the chart. The dateField option is used to specify the mapping between the date fields in the chart and the one in the response.

How it works…

When you load the page, you will see two panes being shown; the navigator is below the main chart. By default, the chart displays data for all the dates in the DataSource object, as shown in the following screenshot:

In the preceding screenshot, a candlestick chart is created and it shows you the stock price over a period of time. Also, notice that in the navigator pane, all date ranges are selected by default, and hence, they are reflected in the chart (candlestick) as well. When you hover over the series, you will notice that the stock quote for the selected date is shown. This includes the date and other fields such as Open, High, Low, and Close. The area of the chart is adjusted to show you the stock price for various dates such that the dates are evenly distributed.
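For readers wiring the recipe up from scratch, the following is a minimal page skeleton that loads the required styles and scripts and renders the chart. The file paths, the #chart container, and the /services/stock?q=ADBE URL are assumptions and should be replaced with the locations used in your own project:

<!DOCTYPE html>
<html>
<head>
  <!-- Assumed file locations; use the copies shipped with your Kendo UI distribution -->
  <link rel="stylesheet" href="styles/kendo.dataviz.min.css" />
  <link rel="stylesheet" href="styles/kendo.dataviz.default.min.css" />
  <script src="js/jquery.min.js"></script>
  <script src="js/kendo.dataviz.min.js"></script>
</head>
<body>
  <!-- The Stock chart is rendered into this element -->
  <div id="chart"></div>
  <script>
    // Same configuration as in the recipe; the read URL is a placeholder service
    $("#chart").kendoStockChart({
      title: { text: "Stock history" },
      dataSource: { transport: { read: "/services/stock?q=ADBE" } },
      dateField: "Date",
      series: [{
        type: "candlestick",
        openField: "Open",
        closeField: "Close",
        highField: "High",
        lowField: "Low"
      }],
      navigator: { series: { type: "area", field: "Volume" } }
    });
  </script>
</body>
</html>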
In the previous case, the dates range from January 1, 2013 to January 31, 2013. However, when you hover over the series, you will notice that some of the dates are omitted. To overcome this, you can either increase the width of the chart area or use the navigator to reduce the date range. The former option is not advisable if the date range spans several months or years. To reduce the date range in the navigator, move the two date range selectors towards each other to narrow down the dates, as shown in the following screenshot:

When you try to narrow down the dates, you will see a tooltip in the chart, indicating the date range that you are trying to select. The candlestick chart is adjusted to show you the stock price for the selected date range. Also, notice that the opacity of the selected date range in the navigator remains the same while the rest of the area's opacity is reduced. Once the date range is selected, the selected pane can be moved in the navigator.

There's more…

There are several options available to customize the behavior and the look and feel of the Stock Chart widget.

Specifying the date range in the navigator when initializing the chart

By default, all date ranges in the chart are selected and the user will have to narrow them down in the navigator pane. When you work with a large dataset, you will want to show the stock data for a specific date range when the chart is rendered. To do this, specify the select option in navigator:

navigator: {
  series: {
    type: 'area',
    field: 'Volume'
  },
  select: {
    from: '2013/01/07',
    to: '2013/01/14'
  }
}

In the preceding code snippet, the from and to date ranges are specified. Now, when you render the page, you will see that the same dates are selected in the navigator pane.

Customizing the look and feel of the Stock Chart widget

There are various options available to customize the navigator pane in the Stock Chart widget. Let's increase the height of the pane and also include a title text for it:

navigator: {
  . .
  pane: {
    height: '50px',
    title: {
      text: 'Stock Volume'
    }
  }
}

Now when you render the page, you will see that the title has been added and the height of the navigator pane has been increased.

Using the Radial Gauge widget

The Radial Gauge widget allows you to build a dashboard-like application wherein you want to indicate a value that lies in a specific range. For example, a car's dashboard can contain a couple of Radial Gauge widgets that can be used to indicate the current speed and RPM.

How to do it…

To create a Radial Gauge widget, invoke the kendoRadialGauge function on the selected DOM element. A Radial Gauge widget contains some components, and it can be configured by providing options, as shown in the following code snippet:

$("#chart").kendoRadialGauge({
  scale: {
    startAngle: 0,
    endAngle: 180,
    min: 0,
    max: 180
  },
  pointer: {
    value: 20
  }
});

Here the scale option is used to configure the range for the Radial Gauge widget. The startAngle and endAngle options are used to indicate the angles at which the Radial Gauge widget's range should start and end. By default, their values are 30 and 210, respectively. The other two options, that is, min and max, are used to indicate the range of values over which the value can be plotted. The pointer option is used to indicate the current value in the Radial Gauge widget. There are several options available to configure the Radial Gauge widget; these include positioning the labels and configuring the look and feel of the widget.
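Before looking at how this works, here is a sketch of the car-dashboard scenario mentioned earlier, with two gauges rendered side by side; the element IDs, the ranges, and the sample pointer values are assumptions made for illustration only. The rest of the recipe continues with the single gauge configured in the preceding snippet.

<div id="speed"></div>
<div id="rpm"></div>
<script>
  // Speedometer: 0 to 240 km/h, with a sample reading of 80
  $("#speed").kendoRadialGauge({
    scale: { min: 0, max: 240 },
    pointer: { value: 80 }
  });
  // Tachometer: 0 to 8000 RPM, with a sample reading of 2500
  $("#rpm").kendoRadialGauge({
    scale: { min: 0, max: 8000 },
    pointer: { value: 2500 }
  });
</script>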
How it works…

When you render the page, you will see a Radial Gauge widget that shows you the scale from 0 to 180 and the pointer pointing to the value 20. Here, the values from 0 to 180 are evenly distributed, that is, the major ticks are in steps of 20. There are 10 minor ticks, that is, ticks between two major ticks. The widget shows values in the clockwise direction. Also, the pointer value 20 is indicated on the scale.

There's more…

The Radial Gauge widget can be customized to a great extent by including various options when initializing the widget.

Changing the major and minor unit values

Specify the majorUnit and minorUnit options in the scale:

scale: {
  startAngle: 0,
  endAngle: 180,
  min: 0,
  max: 180,
  majorUnit: 30,
  minorUnit: 10
}

The scale option specifies the majorUnit value as 30 (instead of the default 20) and minorUnit as 10. This will now add labels at every 30 units and show two minor ticks between the two major ticks, each at a distance of 10 units, as shown in the following screenshot:

The ticks shown in the preceding screenshot can also be customized:

scale: {
  . .
  minorTicks: {
    size: 30,
    width: 1,
    color: 'green'
  },
  majorTicks: {
    size: 100,
    width: 2,
    color: 'red'
  }
}

Here, the size option is used to specify the length of the tick marker, width is used to specify the thickness of the tick, and the color option is used to change the color of the tick. Now when you render the page, you will see the changes for the major and minor ticks.

Changing the color of the radial using the ranges option

The scale attribute can include the ranges option to specify a radial color for the various ranges on the Radial Gauge widget:

scale: {
  . .
  ranges: [
    { from: 0, to: 60, color: '#00F' },
    { from: 60, to: 130, color: '#0F0' },
    { from: 130, to: 200, color: '#F00' }
  ]
}

In the preceding code snippet, the ranges array contains three objects that specify the color to be applied to the circumference of the widget. The from and to values are used to specify the range of tick values for which the color should be applied. Now when you render the page, you will see the Radial Gauge widget showing the colors for the various ranges along the circumference of the widget, as shown in the following screenshot:

In the preceding screenshot, the startAngle and endAngle fields are changed to 10 and 250, respectively. The widget can be further customized by moving the labels outside. This can be done by specifying the labels attribute with position set to outside. In the preceding screenshot, the labels are positioned outside; hence, the radial appears inside.

Updating the pointer value using a Slider widget

The pointer value is set when the Radial Gauge widget is initialized. It is possible to change the pointer value of the widget at runtime using a Slider widget. The changes in the Slider widget can be observed, and the pointer value of the Radial Gauge can be updated accordingly. Let's reuse the Radial Gauge widget created earlier. A Slider widget is created using an input element:

<input id="slider" value="0" />

The next step is to initialize the previously mentioned input element as a Slider widget:

$('#slider').kendoSlider({
  min: 0,
  max: 200,
  showButtons: false,
  smallStep: 10,
  tickPlacement: 'none',
  change: updateRadialGauge
});

The min and max values specify the range of values that can be set for the slider. The smallStep attribute specifies the minimum increment value of the slider. The change attribute specifies the function that should be invoked when the slider value changes.
The updateRadialGauge function should then update the value of the pointer in the Radial Gauge widget:

function updateRadialGauge() {
  $('#chart').data('kendoRadialGauge')
    .value($('#slider').val());
}

The function gets the instance of the widget and then sets its value to the value obtained from the Slider widget. When the slider value is changed to, say, 100, you will notice that the change is reflected in the Radial Gauge widget.
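As a variation, the slider's change event handler also receives the new value in its event argument, so the gauge can be updated without reading the input element again. The following is a minimal sketch under the assumption that the #chart and #slider elements from this recipe are already on the page:

$('#slider').kendoSlider({
  min: 0,
  max: 200,
  smallStep: 10,
  change: function (e) {
    // e.value holds the slider value after the change
    $('#chart').data('kendoRadialGauge').value(e.value);
  }
});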

article-image-upgrading-previous-versions
Packt
23 Jun 2014
8 min read
Save for later

Upgrading from Previous Versions

Packt
23 Jun 2014
8 min read
(For more resources related to this topic, see here.)

This article guides you through the requirements and steps necessary to upgrade your VMM 2008 R2 SP1 installation to VMM 2012 R2. There is no direct upgrade path from VMM 2008 R2 SP1 to VMM 2012 R2: you must first upgrade to VMM 2012, then to VMM 2012 SP1, and finally to VMM 2012 R2.

VMM 2008 R2 SP1 -> VMM 2012 -> VMM 2012 SP1 -> VMM 2012 R2 is the correct upgrade path.

Upgrade notes:

VMM 2012 cannot be upgraded directly to VMM 2012 R2; upgrading it to VMM 2012 SP1 first is required
VMM 2012 can be installed on Windows Server 2008 R2
VMM 2012 SP1 requires Windows Server 2012
VMM 2012 R2 requires at least Windows Server 2012 (Windows Server 2012 R2 is recommended)
Windows Server 2012 hosts can be managed by VMM 2012 SP1
Windows Server 2012 R2 hosts require VMM 2012 R2
System Center App Controller versions must match the VMM version

To debug a VMM installation, the logs are located in %ProgramData%\VMMLogs, and you can use the CMTrace.exe tool to monitor the content of the files in real time, including SetupWizard.log and vmmServer.log.

VMM 2012 is a huge product upgrade, and there have been many improvements; this article only covers the VMM upgrade. If you have previous versions of other System Center family components installed in your environment, make sure you follow the correct upgrade and installation order for them as well. System Center 2012 R2 has some new components, for which the installation order is also critical. It is critical that you take the steps documented by Microsoft in Upgrade Sequencing for System Center 2012 R2 at http://go.microsoft.com/fwlink/?LinkId=328675 and use the following upgrade order:

1. Service Management Automation
2. Orchestrator
3. Service Manager
4. Data Protection Manager (DPM)
5. Operations Manager
6. Configuration Manager
7. Virtual Machine Manager (VMM)
8. App Controller
9. Service Provider Foundation
10. Windows Azure Pack for Windows Server
11. Service Bus Clouds
12. Windows Azure Pack
13. Service Reporting

Reviewing the upgrade options

This recipe will guide you through the upgrade options for VMM 2012 R2. Keep in mind that there is no direct upgrade path from VMM 2008 R2 to VMM 2012 R2.

How to do it...

Read through the following recommendations in order to upgrade your current VMM installation.

In-place upgrade from VMM 2008 R2 SP1 to VMM 2012

Use this method if your system meets the requirements for a VMM 2012 upgrade and you want to deploy it on the same server. The supported VMM version to upgrade from is VMM 2008 R2 SP1. If you need to upgrade VMM 2008 R2 to VMM 2008 R2 SP1, refer to http://go.microsoft.com/fwlink/?LinkID=197099. In addition, keep in mind that if you are running the SQL Server Express edition, you will need to upgrade SQL Server to a fully supported version beforehand, as the Express edition is not supported in VMM 2012. Once the system requirements are met and all of the prerequisites are installed, the upgrade process is straightforward. For the detailed steps, refer to the Upgrading to VMM 2012 R2 recipe.

Upgrading from VMM 2008 R2 SP1 to VMM 2012 on a different computer

Sometimes, you may not be able to do an in-place upgrade to VMM 2012 or even to VMM 2012 SP1. In this case, it is recommended that you use the following instructions:

Uninstall the current VMM installation, retaining the database, and then restore the database on a supported version of SQL Server. Next, install the VMM 2012 prerequisites on a new server (or on the same server, as long as it meets the hardware and OS requirements).
Finally, install VMM 2012, providing the retained database information in the Database configuration dialog, and the VMM setup will upgrade the database. When the installation process is finished, upgrade the Hyper-V hosts with the latest VMM agents. The following figure illustrates the upgrade process from VMM 2008 R2 SP1 to VMM 2012:

When performing an upgrade from VMM 2008 R2 SP1 with a local VMM database to a different server, the encrypted data will not be preserved, as the encryption keys are stored locally. The same rule applies when upgrading from VMM 2012 to VMM 2012 SP1 and from VMM 2012 SP1 to VMM 2012 R2 if you are not using Distributed Key Management (DKM) in VMM 2012.

Upgrading from VMM 2012 to VMM 2012 SP1

To upgrade to VMM 2012 SP1, you should already have VMM 2012 up and running. VMM 2012 SP1 requires Windows Server 2012 and Windows ADK 8.0. If planning an in-place upgrade, back up the VMM database; uninstall VMM 2012 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 SP1 and App Controller.

Upgrading from VMM 2012 SP1 to VMM 2012 R2

To upgrade to VMM 2012 R2, you should already have VMM 2012 SP1 up and running. VMM 2012 R2 requires at least Windows Server 2012 as the OS (Windows Server 2012 R2 is recommended) and Windows ADK 8.1. If planning an in-place upgrade, back up the VMM database; uninstall VMM 2012 SP1 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 R2 and App Controller.

Some more planning considerations are as follows:

Virtual Server 2005 R2: VMM 2012 no longer supports Microsoft Virtual Server 2005 R2. If you have Virtual Server 2005 R2 or an unsupported ESXi version running and have not removed these hosts before the upgrade, they will be removed automatically during the upgrade process.
VMware ESX and vCenter: For VMM 2012, the supported versions of VMware are ESXi 3.5 to ESXi 4.1 and vCenter 4.1. For VMM 2012 SP1/R2, the supported VMware versions are ESXi 4.1 to ESXi 5.1, and vCenter 4.1 to 5.0.
SQL Server Express: This is not supported since VMM 2012. A full version is required.
Performance and Resource Optimization (PRO): The PRO configurations are not retained during an upgrade to VMM 2012. If you have an Operations Manager (SCOM) integration configured, it will be removed during the upgrade process. Once the upgrade process is finished, you can integrate SCOM with VMM again.
Library server: Since VMM 2012, a library server on Windows Server 2003 is no longer supported. If you have one running and continue with the upgrade, you will not be able to use it. To use the same library server in VMM 2012, move it to a server running a supported OS before starting the upgrade.
Choosing a service account and DKM settings during an upgrade: During an upgrade to VMM 2012, on the Configure service account and distributed key management page of the setup, you are required to create a VMM service account (preferably a domain account) and choose whether you want to use DKM to store the encryption keys in Active Directory (AD).
Make sure to log on with the same account that was used during the VMM 2008 R2 installation: this needs to be done because, in some situations after the upgrade, the encrypted data (for example, the passwords in the templates) may not be available, depending on the selected VMM service account, and you will be required to re-enter it manually.
For the service account, you can use either the Local System account or a domain account: a domain account is the recommended option, and when deploying a highly available VMM management server, it is the only option available. Note that DKM is not available in versions prior to VMM 2012.

Upgrading to a highly available VMM 2012: If you're thinking of upgrading to a highly available (HA) VMM, consider the following:

Failover cluster: You must deploy the failover cluster before starting the upgrade.
VMM database: You cannot deploy SQL Server for the VMM database on highly available VMM management servers. If you plan on upgrading the current VMM server to an HA VMM, you need to first move the database to another server. As a best practice, it is recommended that you keep the SQL Server cluster separate from the VMM cluster.
Library server: In a production or highly available environment, you need to consider making all of the VMM components highly available as well, not only the VMM management server. After upgrading to an HA VMM management server, it is recommended, as a best practice, that you relocate the VMM library to a clustered file server. In order to keep the custom fields and properties of the saved VMs, deploy those VMs to a host and save them to a new VMM 2012 library.
VMM Self-Service Portal: This is not supported since VMM 2012 SP1. It is recommended that you install System Center App Controller instead.

How it works...

There are two methods to upgrade to VMM 2012 from VMM 2008 R2 SP1: an in-place upgrade and an upgrade to another server. Before starting, review the initial steps and the VMM 2012 prerequisites, and perform a full backup of the VMM database. Uninstall VMM 2008 R2 SP1 (retaining the data) and restore the VMM database to another SQL Server running a supported version. During the installation, point to that database in order to have it upgraded. After the upgrade is finished, upgrade the host agents. VMM will be rolled back automatically in the event of a failure during the upgrade process and reverted to its original installation/configuration.

There's more...

The names of the VMM services have changed in VMM 2012. If you have any applications or scripts that refer to these service names, update them accordingly as shown in the following list:

2008 R2 SP1: the Virtual Machine Manager service (service name vmmservice) and the Virtual Machine Manager Agent service (service name vmmagent)
2012, 2012 SP1, and 2012 R2: the System Center Virtual Machine Manager service (service name scvmmservice) and the System Center Virtual Machine Manager Agent service (service name scvmmagent)

See also

To move the file-based resources (for example, ISO images, scripts, and VHDs/VHDXs), refer to http://technet.microsoft.com/en-us/library/hh406929
To move the virtual machine templates, refer to Exporting and Importing Service Templates in VMM at http://go.microsoft.com/fwlink/p/?LinkID=212431