Extending ElasticSearch with Scripting

Packt
06 Feb 2015
21 min read
In this article by Alberto Paro, the author of ElasticSearch Cookbook Second Edition, we will cover the following recipes:

- Installing additional script plugins
- Managing scripts
- Sorting data using scripts
- Computing return fields with scripting
- Filtering a search via scripting

Introduction

ElasticSearch has a powerful way of extending its capabilities with custom scripts, which can be written in several programming languages. The most common ones are Groovy, MVEL, JavaScript, and Python. In this article, we will see how it is possible to create custom scoring algorithms, special processed return fields, custom sorting, and complex update operations on records. The scripting concept of ElasticSearch can be seen as an advanced stored-procedure system in the NoSQL world; for advanced usage of ElasticSearch, it is very important to master it.

Installing additional script plugins

ElasticSearch provides native scripting (Java code compiled in a JAR) and Groovy, but a lot of other interesting languages are also available, such as JavaScript and Python. In older ElasticSearch releases, prior to version 1.4, the official scripting language was MVEL, but because it was not well maintained by the MVEL developers, and because it was impossible to sandbox it and prevent security issues, MVEL was replaced with Groovy. Groovy scripting is now provided by default in ElasticSearch. The other scripting languages can be installed as plugins.

Getting ready

You will need a working ElasticSearch cluster.

How to do it...

In order to install JavaScript language support for ElasticSearch (1.3.x), perform the following steps:

1. From the command line, simply enter the following command:

    bin/plugin --install elasticsearch/elasticsearch-lang-javascript/2.3.0

   This will print the following result:

    -> Installing elasticsearch/elasticsearch-lang-javascript/2.3.0...
    Trying http://download.elasticsearch.org/elasticsearch/elasticsearch-lang-javascript/elasticsearch-lang-javascript-2.3.0.zip...
    Downloading ....DONE
    Installed lang-javascript

   If the installation is successful, the output will end with Installed; otherwise, an error is returned.

2. To install Python language support for ElasticSearch, just enter the following command:

    bin/plugin -install elasticsearch/elasticsearch-lang-python/2.3.0

The version number depends on the ElasticSearch version. Take a look at the plugin's web page to choose the correct version.

How it works...

Language plugins allow you to extend the number of supported languages that can be used in scripting. During ElasticSearch startup, an internal service called PluginService loads all the installed language plugins. In order to install or upgrade a plugin, you need to restart the node.

The ElasticSearch community provides the common scripting languages (a list of the supported scripting languages is available on the ElasticSearch plugin page at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-plugins.html), and others are available in GitHub repositories (a simple search on GitHub allows you to find them). The following are the most commonly used languages for scripting:

- Groovy (http://groovy.codehaus.org/): This language is embedded in ElasticSearch by default. It is a simple language that provides scripting functionality and is one of the fastest available language extensions. Groovy is a dynamic, object-oriented programming language with features similar to those of Python, Ruby, Perl, and Smalltalk. It also provides support for writing functional code.
- JavaScript (https://github.com/elasticsearch/elasticsearch-lang-javascript): This is available as an external plugin. The JavaScript implementation is based on Java Rhino (https://developer.mozilla.org/en-US/docs/Rhino) and is really fast.
- Python (https://github.com/elasticsearch/elasticsearch-lang-python): This is available as an external plugin, based on Jython (http://jython.org). It allows Python to be used as a script engine. Judging by several benchmark results, it is slower than the other languages.

There's more...

Groovy is preferred if the script is not too complex; otherwise, a native plugin provides a better environment in which to implement complex logic and data management. The performance of every language is different; the fastest is native Java. Among the dynamic scripting languages, Groovy is faster than JavaScript and Python.

In order to access document properties in Groovy scripts, the same approach works as in the other scripting languages:

- doc.score: This stores the document's score.
- doc['field_name'].value: This extracts the value of the field_name field from the document. If the value is an array, or if you want to extract the value as an array, you can use doc['field_name'].values.
- doc['field_name'].empty: This returns true if the field_name field has no value in the document.
- doc['field_name'].multivalue: This returns true if the field_name field contains multiple values.

If the field contains a geopoint value, additional methods are available, as follows:

- doc['field_name'].lat: This returns the latitude of a geopoint. If you need the values as an array, you can use doc['field_name'].lats.
- doc['field_name'].lon: This returns the longitude of a geopoint. If you need the values as an array, you can use doc['field_name'].lons.
- doc['field_name'].distance(lat,lon): This returns the plane distance, in miles, from a latitude/longitude point. To calculate the distance in kilometers, use doc['field_name'].distanceInKm(lat,lon) instead.
- doc['field_name'].arcDistance(lat,lon): This returns the arc distance, in miles, from a latitude/longitude point. To calculate the distance in kilometers, use doc['field_name'].arcDistanceInKm(lat,lon) instead.
- doc['field_name'].geohashDistance(geohash): This returns the distance, in miles, from a geohash value. To calculate the distance in kilometers, use doc['field_name'].geohashDistanceInKm(geohash) instead.

By using these helper methods, it is possible to create advanced scripts that boost a document by distance, which can be very handy when developing geolocalized applications.
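To make these helpers concrete, the following request is an illustrative sketch, not part of the original recipes: it sorts documents by their distance in kilometers from a given point. The index name follows the examples used later in this article, while the location geopoint field and the coordinates are assumptions invented for the example:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : { "match_all" : {} },
      "sort" : {
        "_script" : {
          "script" : "doc[\"location\"].arcDistanceInKm(p_lat, p_lon)",
          "lang" : "groovy",
          "type" : "number",
          "params" : { "p_lat" : 45.46, "p_lon" : 9.18 },
          "order" : "asc"
        }
      }
    }'

The origin coordinates are passed in as script parameters, so the same script can be reused for different points without being changed.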
Managing scripts

Depending on your scripting usage, there are several ways to customize ElasticSearch to use your script extensions. In this recipe, we will see how to provide scripts to ElasticSearch via files, indexes, or inline.

Getting ready

You will need a working ElasticSearch cluster populated with the populate script (chapter_06/populate_aggregations.sh), available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...

To manage scripting, perform the following steps:

1. Dynamic scripting is disabled by default for security reasons; we need to activate it in order to use dynamic scripting languages such as JavaScript or Python. To do this, we need to turn off the disable flag (script.disable_dynamic: false) in the ElasticSearch configuration file (config/elasticsearch.yml) and restart the cluster. To increase security, ElasticSearch does not allow you to specify scripts for non-sandboxed languages.

2. Scripts can be placed in the scripts directory inside the configuration directory. To provide a script in a file, we'll put a my_script.groovy script in the config/scripts location with the following content:

    doc["price"].value * factor

3. If dynamic scripting is enabled (as done in the first step), ElasticSearch allows you to store scripts in a special index, .scripts. To put my_script in the index, execute the following command in the terminal:

    curl -XPOST localhost:9200/_scripts/groovy/my_script -d '{
      "script" : "doc[\"price\"].value * factor"
    }'

4. The script can then be used by simply referencing it in the script_id field; use the following command:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : { "match_all" : {} },
      "sort" : {
        "_script" : {
          "script_id" : "my_script",
          "lang" : "groovy",
          "type" : "number",
          "ignore_unmapped" : true,
          "params" : { "factor" : 1.1 },
          "order" : "asc"
        }
      }
    }'

How it works...

ElasticSearch allows you to load scripts in different ways; each method has its pros and cons. The most secure way to load or import scripts is to provide them as files in the config/scripts directory. This directory is continuously scanned for new files (by default, every 60 seconds). The scripting language is automatically detected from the file extension, and the script name depends on the filename. If the file is put in subdirectories, the directory path becomes part of the script name; for example, config/scripts/mysub1/mysub2/my_script.groovy yields the script name mysub1_mysub2_my_script. If the script is provided via the filesystem, it can be referenced in code via the "script": "script_name" parameter.

Scripts can also be stored in the special .scripts index. These are the REST endpoints:

- To retrieve a script: GET http://<server>/_scripts/<language>/<id>
- To store a script: PUT http://<server>/_scripts/<language>/<id>
- To delete a script: DELETE http://<server>/_scripts/<language>/<id>

An indexed script can be referenced in code via the "script_id": "id_of_the_script" parameter.

The recipes that follow use inline scripting because it is easier to work with during the development and testing phases. Generally, a good practice is to develop using inline dynamic scripting in requests, because it is faster to prototype. Once the script is ready and no further changes are needed, it can be stored in the index, since it is simpler to call and manage. In production, a best practice is to disable dynamic scripting and store the script on disk (generally, by dumping the indexed script to disk).

See also

- The scripting page on the ElasticSearch website at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html
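Before moving on to the next recipe, one more variant is worth sketching: step 2 placed a my_script.groovy file in config/scripts, and file-based scripts are referenced by name rather than by script_id. Assuming the same test index as above, the call would look roughly like this:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : { "match_all" : {} },
      "sort" : {
        "_script" : {
          "script" : "my_script",
          "lang" : "groovy",
          "type" : "number",
          "params" : { "factor" : 1.1 },
          "order" : "asc"
        }
      }
    }'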
Sorting data using scripts

ElasticSearch provides scripting support for the sorting functionality. In real-world applications, there is often a need to modify the default sort by match score using an algorithm that depends on the context and some external variables. Some common scenarios are as follows:

- Sorting places near a point
- Sorting by most-read articles
- Sorting items by custom user logic
- Sorting items by revenue

Getting ready

You will need a working ElasticSearch cluster and an index populated with the script, which is available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...

In order to sort using scripting, perform the following steps:

1. If you want to order your documents by the price field multiplied by a factor parameter (that is, sales tax), the search will be as shown in the following code:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : { "match_all" : {} },
      "sort" : {
        "_script" : {
          "script" : "doc[\"price\"].value * factor",
          "lang" : "groovy",
          "type" : "number",
          "ignore_unmapped" : true,
          "params" : { "factor" : 1.1 },
          "order" : "asc"
        }
      }
    }'

   In this case, we have used a match_all query and a sort script.

2. If everything is correct, the result returned by ElasticSearch should be as shown in the following code:

    {
      "took" : 7,
      "timed_out" : false,
      "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
      },
      "hits" : {
        "total" : 1000,
        "max_score" : null,
        "hits" : [ {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "161",
          "_score" : null,
          "_source" : … truncated …,
          "sort" : [ 0.0278578661440021 ]
        }, {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "634",
          "_score" : null,
          "_source" : … truncated …,
          "sort" : [ 0.08131364254827411 ]
        }, {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "465",
          "_score" : null,
          "_source" : … truncated …,
          "sort" : [ 0.1094966959069832 ]
        } ]
      }
    }

How it works...

The sort scripting allows you to define several parameters, as follows:

- order (defaults to "asc"; "asc" or "desc"): This determines whether the order must be ascending or descending.
- script: This contains the code to be executed.
- type: This defines the type to which the value is converted.
- params (optional, a JSON object): This defines the parameters that need to be passed.
- lang (defaults to groovy): This defines the scripting language to be used.
- ignore_unmapped (optional): This ignores unmapped fields in a sort. This flag allows you to avoid errors due to missing fields in shards.

Extending sort with scripting allows a broader approach to scoring your hits. ElasticSearch scripting permits the use of any code that you want; you can create custom complex algorithms to score your documents.
There's more...

Groovy provides a lot of built-in functions (mainly taken from Java's Math class) that can be used in scripts, as shown in the following list:

- time(): The current time in milliseconds
- sin(a): Returns the trigonometric sine of an angle
- cos(a): Returns the trigonometric cosine of an angle
- tan(a): Returns the trigonometric tangent of an angle
- asin(a): Returns the arc sine of a value
- acos(a): Returns the arc cosine of a value
- atan(a): Returns the arc tangent of a value
- toRadians(angdeg): Converts an angle measured in degrees to an approximately equivalent angle measured in radians
- toDegrees(angrad): Converts an angle measured in radians to an approximately equivalent angle measured in degrees
- exp(a): Returns Euler's number raised to the power of a value
- log(a): Returns the natural logarithm (base e) of a value
- log10(a): Returns the base 10 logarithm of a value
- sqrt(a): Returns the correctly rounded positive square root of a value
- cbrt(a): Returns the cube root of a double value
- IEEEremainder(f1, f2): Computes the remainder operation on two arguments, as prescribed by the IEEE 754 standard
- ceil(a): Returns the smallest (closest to negative infinity) value that is greater than or equal to the argument and is equal to a mathematical integer
- floor(a): Returns the largest (closest to positive infinity) value that is less than or equal to the argument and is equal to a mathematical integer
- rint(a): Returns the value that is closest to the argument and is equal to a mathematical integer
- atan2(y, x): Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta)
- pow(a, b): Returns the value of the first argument raised to the power of the second argument
- round(a): Returns the closest integer to the argument
- random(): Returns a random double value
- abs(a): Returns the absolute value of a value
- max(a, b): Returns the greater of two values
- min(a, b): Returns the smaller of two values
- ulp(d): Returns the size of the unit in the last place of the argument
- signum(d): Returns the signum function of the argument
- sinh(x): Returns the hyperbolic sine of a value
- cosh(x): Returns the hyperbolic cosine of a value
- tanh(x): Returns the hyperbolic tangent of a value
- hypot(x, y): Returns sqrt(x^2 + y^2) without an intermediate overflow or underflow

If you want to retrieve records in a random order, you can use a script with a random method, as shown in the following code:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : { "match_all" : {} },
      "sort" : {
        "_script" : {
          "script" : "Math.random()",
          "lang" : "groovy",
          "type" : "number",
          "params" : {}
        }
      }
    }'

In this example, for every hit, the new sort value is computed by executing the Math.random() scripting function.
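The same pattern extends to the other built-in functions in the preceding list. As a purely illustrative sketch (the formula is an invented example, mirroring the Math.random() call above), you could dampen large price differences in a sort with a base-10 logarithm:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : { "match_all" : {} },
      "sort" : {
        "_script" : {
          "script" : "Math.log10(doc[\"price\"].value + 1)",
          "lang" : "groovy",
          "type" : "number",
          "params" : {},
          "order" : "desc"
        }
      }
    }'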
See also

- The official ElasticSearch documentation at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-scripting.html

Computing return fields with scripting

ElasticSearch allows you to define complex expressions that can be used to return a new calculated field value. These special fields are called script_fields, and they can be expressed with a script in every available ElasticSearch scripting language.

Getting ready

You will need a working ElasticSearch cluster and an index populated with the script (chapter_06/populate_aggregations.sh), which is available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...

In order to compute return fields with scripting, perform the following steps:

1. Return the following script fields:
   - "my_calc_field": This concatenates the text of the "name" and "description" fields
   - "my_calc_field2": This multiplies the "price" value by the "discount" parameter

2. From the command line, execute the following code:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : { "match_all" : {} },
      "script_fields" : {
        "my_calc_field" : {
          "script" : "doc[\"name\"].value + \" -- \" + doc[\"description\"].value"
        },
        "my_calc_field2" : {
          "script" : "doc[\"price\"].value * discount",
          "params" : { "discount" : 0.8 }
        }
      }
    }'

3. If everything works correctly, this is how the result returned by ElasticSearch should look:

    {
      "took" : 4,
      "timed_out" : false,
      "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
      },
      "hits" : {
        "total" : 1000,
        "max_score" : 1.0,
        "hits" : [ {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "4",
          "_score" : 1.0,
          "fields" : {
            "my_calc_field" : "entropic -- accusantium",
            "my_calc_field2" : 5.480038242170081
          }
        }, {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "9",
          "_score" : 1.0,
          "fields" : {
            "my_calc_field" : "frankie -- accusantium",
            "my_calc_field2" : 34.79852410178313
          }
        }, {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "11",
          "_score" : 1.0,
          "fields" : {
            "my_calc_field" : "johansson -- accusamus",
            "my_calc_field2" : 11.824173084636591
          }
        } ]
      }
    }

How it works...

Script fields are similar to executing an SQL function on a field during a select operation. In ElasticSearch, after the search phase is executed and the hits to be returned are calculated, if any fields (standard or scripted) are defined, they are calculated and returned. A script field, which can be defined in any supported language, is processed by passing it the source of the document and, if other parameters are defined in the script (as with the discount factor in this example), they are passed to the script function. The script function is a code snippet; it can contain anything the language allows you to write, but it must evaluate to a value (or a list of values).

See also

- The Installing additional script plugins recipe in this article, to install additional languages for scripting
- The Sorting data using scripts recipe, for a reference to the extra built-in functions available in Groovy scripts

Filtering a search via scripting

ElasticSearch scripting allows you to extend the traditional filter with custom scripts. Using scripting to create a custom filter is a convenient way to write scripting rules that are not provided by Lucene or ElasticSearch, and to implement business logic that is not available in the query DSL.

Getting ready

You will need a working ElasticSearch cluster and an index populated with the (chapter_06/populate_aggregations.sh) script, which is available at https://github.com/aparo/elasticsearch-cookbook-second-edition.

How to do it...
In order to filter a search using a script, perform the following steps:

1. Write a search with a filter that excludes documents whose age value is less than the parameter value:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : {
        "filtered" : {
          "filter" : {
            "script" : {
              "script" : "doc[\"age\"].value > param1",
              "params" : { "param1" : 80 }
            }
          },
          "query" : { "match_all" : {} }
        }
      }
    }'

   In this example, all the documents in which the value of age is greater than param1 qualify to be returned.

2. If everything works correctly, the result returned by ElasticSearch should be as shown here:

    {
      "took" : 30,
      "timed_out" : false,
      "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
      },
      "hits" : {
        "total" : 237,
        "max_score" : 1.0,
        "hits" : [ {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "9",
          "_score" : 1.0,
          "_source" : { … "age": 83, … }
        }, {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "23",
          "_score" : 1.0,
          "_source" : { … "age": 87, … }
        }, {
          "_index" : "test-index",
          "_type" : "test-type",
          "_id" : "47",
          "_score" : 1.0,
          "_source" : { … "age": 98, … }
        } ]
      }
    }

How it works...

The script filter is a language script that returns a Boolean value (true/false). For every hit, the script is evaluated, and if it returns true, the hit passes the filter. This type of scripting can only be used as a Lucene filter, not as a query, because it doesn't affect the search (the exceptions are constant_score and custom_filters_score).

These are the script filter's fields:

- script: This contains the code to be executed
- params: These are optional parameters to be passed to the script
- lang (defaults to groovy): This defines the language of the script

The script code can be any code in your preferred, supported scripting language that returns a Boolean value.

There's more...

Other languages are used in the same way as Groovy. For the current example, I have chosen a standard comparison that works in several languages. To execute the same script using the JavaScript language, use the following code:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : {
        "filtered" : {
          "filter" : {
            "script" : {
              "script" : "doc[\"age\"].value > param1",
              "lang" : "javascript",
              "params" : { "param1" : 80 }
            }
          },
          "query" : { "match_all" : {} }
        }
      }
    }'

For Python, use the following code:

    curl -XGET 'http://127.0.0.1:9200/test-index/test-type/_search?pretty=true&size=3' -d '{
      "query" : {
        "filtered" : {
          "filter" : {
            "script" : {
              "script" : "doc[\"age\"].value > param1",
              "lang" : "python",
              "params" : { "param1" : 80 }
            }
          },
          "query" : { "match_all" : {} }
        }
      }
    }'

See also

- The Installing additional script plugins recipe in this article, to install additional languages for scripting
- The Sorting data using scripts recipe in this article, for a reference to the extra built-in functions available in Groovy scripts

Summary

In this article, you have learned the ways you can use scripting to extend ElasticSearch's functional capabilities using different programming languages.

Mobile Administration

Packt
06 Feb 2015
17 min read
In this article by Paul Goodey, author of the book Salesforce CRM – The Definitive Admin Handbook - Third Edition, we will look at the administration of Salesforce Mobile solutions, which can significantly improve productivity and user satisfaction and help users access data and application functionality out of the office.

In the past, mobile devices that were capable of accessing software applications were very expensive. Often, these devices were regarded as a nice-to-have accessory by management and were seen as a company perk by field-based teams. Today, mobile devices are far more prevalent within the business environment, and organizations are increasingly realizing the benefits of using mobile phones and devices to access business applications.

Salesforce has taken the lead in recognizing how mobiles have become the new standard for being connected in people's personal and professional lives. It has also highlighted how, increasingly, the users of its apps are living lives connected to the Internet but, rather than sitting at a desk in the office, they are in between meetings, on the road, in planes, in trains, in cabs, or even in the queue for lunch. As a result, Salesforce has developed innovative mobile solutions that help you and your users embrace this mobile-first world in Salesforce CRM.

Accessing Salesforce Mobile products

Salesforce offers two varieties of mobile solutions, namely mobile browser apps and downloadable apps. Mobile browser apps, as the name suggests, are accessed using a web browser that is available on a mobile device. Downloadable apps are accessed by first downloading the client software from, say, the Apple App Store or Google Play and then installing it onto the mobile device. Mobile browser apps and downloadable apps offer various features and benefits and, as we'll see, are available for various Salesforce mobile product and device combinations.

Most mobile devices these days have some degree of web browser capability, which can be used to access Salesforce CRM; however, some Salesforce mobile products are optimized for use with certain devices. With a Salesforce mobile browser app, your users do not need to install anything. Supported mobile browsers for Salesforce are generally available on Android, Apple, BlackBerry, and Microsoft Windows 8.1 devices. Downloadable apps, on the other hand, require the app to be first downloaded from the App Store for Apple® devices or from Google Play™ for Android™ devices and then installed on the mobile device.

Salesforce mobile products overview

Salesforce has provided certain mobile products as downloadable apps only, while others are provided as both downloadable and mobile browser-based apps. The following list outlines the various mobile app products used to access Salesforce CRM on mobile devices:

- SalesforceA
- Salesforce Touch
- Salesforce1
- Salesforce Classic

Salesforce Touch is no longer available and is mentioned here for completeness, as this product has recently been incorporated into the Salesforce1 product.

SalesforceA

SalesforceA is a downloadable system administration app that allows you to manage your organization's users and view certain information for your Salesforce organization from your mobile device. SalesforceA is intended to be used by system administrators, as it is restricted to users with the Manage Users permission.
The SalesforceA app provides the facilities to carry out user tasks, such as deactivating or freezing users, resetting passwords, unlocking users, editing user details, calling and emailing users, and assigning permission sets. These user task buttons are displayed as action icons, presented in the action bar at the bottom of the mobile device screen.

In addition to the user tasks, you can view the system status and also switch between your user accounts in multiple organizations. This allows you to access different organizations and communities without having to log out and log back in to each user account. By staying logged in to multiple accounts in different organizations, you will save time by easily switching to the particular organization user account that you need to access.

SalesforceA supported devices

At the time of writing, the following devices are supported by Salesforce for use with the SalesforceA downloadable app:

- Android phones
- Apple iPhone
- Apple iPod Touch

SalesforceA can be installed from Google Play™ for Android™ phones and the Apple® App Store for Apple devices.

Salesforce Touch

Salesforce Touch is the name of an earlier Salesforce mobile product and is no longer available. With the Spring 2014 release, Salesforce Touch was incorporated into the Salesforce1 app. Hence, both the Salesforce Touch mobile browser and Salesforce Touch downloadable apps are no longer available; however, the functionality they once offered is available in Salesforce1, which is covered in this article.

Salesforce1

Salesforce1 is Salesforce's next-generation mobile CRM platform, designed for Salesforce's customers, developers, and ISVs (independent software vendors) to connect mobile apps, browser apps, and third-party app services. Salesforce1 has been developed for a mobile-first environment and demonstrates how Salesforce's focus as a platform provider aims to connect enterprises with systems that can be programmed through APIs, along with mobile apps and services that can be utilized by marketing, sales, and customer service.

There are two ways to use Salesforce1: either via a mobile browser app that users can access by logging in to Salesforce from a supported mobile browser, or via downloadable apps that users can install from the App Store or Google Play. Either way, Salesforce1 allows users to access and update Salesforce data from an interface that has been optimized for navigating and working on touchscreen mobile devices. Using Salesforce1, records can be viewed, edited, and created. Users can manage their activities, view their dashboards, and use Chatter. Salesforce1 also supports many standard objects and list views, all custom objects, the integration of other mobile apps, and many of your organization's Salesforce customizations, including Visualforce tabs and pages.
Salesforce1 supported devices

At the time of writing, the following devices are supported by Salesforce for the Salesforce1 mobile browser app:

- Android phones
- Apple iPad
- Apple iPhone
- BlackBerry Z10
- Windows 8.1 phones (Beta support)

Also, at the time of writing, Salesforce specifies the following devices as supported for the Salesforce1 downloadable app:

- Android phones
- Apple iPad
- Apple iPhone

Salesforce1 data availability

Your organization's edition and the user's license type, profile, and permission sets determine the data that is available to the user within Salesforce1. Generally, users have the same visibility of objects, record types, fields, and page layouts that they have while using the full Salesforce browser app. However, at the time of writing, not all data is available in the current release of the Salesforce1 app.

In Winter 2015, these key objects are fully accessible from the Salesforce1 navigation menu: Accounts, Campaigns, Cases, Contacts, Contracts, Leads, Opportunities, Tasks, and Users. Dashboards and Events, however, are restricted to being viewable only from the Salesforce1 navigation menu. Custom objects are fully accessible if they have a tab that the user can access. New users who are yet to build a history of recent objects initially see a set of default objects in the Recent section of the Salesforce1 navigation menu.

The majority of standard and custom fields, and most of the related lists for the supported objects, are available on these records; however, at the time of writing, the following exceptions exist:

- Rich text area field support varies (detailed shortly)
- Links on formula fields are not supported
- State and country picklist fields are not supported
- Related lists in Salesforce1 are restricted (detailed shortly)

Rich text area field support varies

Support for rich text area fields varies by the version of Salesforce1 and the type of device. For Android downloadable apps, you can view and edit rich text area fields. For Android mobile browser apps, you can only view rich text area fields; editing is not currently supported. For iOS downloadable apps, you can view but not edit rich text area fields. For iOS mobile browser apps, you can both view and edit rich text area fields. Finally, for both BlackBerry and Windows 8.1 mobile browser apps, you can neither view nor edit rich text area fields.

Related lists in Salesforce1

Related lists in Salesforce1 are restricted and display the first four fields that are defined on the page layout for that object. The number of fields shown cannot be increased. If Chatter is enabled, users can also access feeds, people, groups, and Salesforce Files.

When users work with records in the full Salesforce app, it can take up to 15 days for this data to appear in the Recent section; to make records appear under the Recent section sooner, ask users to pin them from their search results in the full Salesforce site.

Salesforce1 administration

You can manage your organization's access to the Salesforce1 apps; there are two areas of administration: the mobile browser app, which users access by logging in to Salesforce from a supported mobile browser, and the downloadable app, which users install from the App Store or Google Play. The upcoming sections describe the ways to control user access to each of these mobile apps.
Salesforce1 mobile browser app access

You can control whether users can access the Salesforce1 mobile browser app when they log in to Salesforce from a mobile browser. To select or deselect this feature, navigate to Setup | Mobile Administration | Salesforce1 | Settings.

By selecting the Enable the Salesforce1 mobile browser app checkbox, all users are activated to access Salesforce1 from their mobile browsers. Deselecting this option turns off the mobile browser app, which means that users will automatically access the full Salesforce site from their mobile browser. By default, the mobile browser app is turned on in all Salesforce organizations.

Salesforce1 desktop browser access

Selecting the Enable the Salesforce1 mobile browser app checkbox, as described in the previous section, also permits activated users to access Salesforce1 from their desktop browsers. Users can navigate to the Salesforce1 app within their desktop browser by appending /one/one.app to the end of the Salesforce URL. For example, for a Salesforce organization served from the na10 server, you would enter the desktop browser URL https://na10.salesforce.com/one/one.app.

Salesforce1 downloadable app access

The Salesforce1 app is distributed as a managed package and, within Salesforce, it is implemented as a connected app. You might already see the Salesforce1 connected app in your list of installed apps, as it might have been automatically installed in your organization. The list of included apps can change with each Salesforce release but, to simplify administration, each package is asynchronously installed in Salesforce organizations whenever any user in that organization first accesses Salesforce1. However, to manually install or reinstall the Salesforce1 package for connected apps, you can install it from the AppExchange. To view the details for the Salesforce1 app in the connected app settings, navigate to Setup | Manage Apps | Connected Apps, where the apps that connect to your Salesforce organization are listed.

Salesforce1 notifications

Notifications allow all users in your organization to receive mobile notifications in Salesforce1, for example, whenever they are mentioned in Chatter or whenever they receive approval requests. To activate mobile notifications, navigate to Setup | Mobile Administration | Notifications | Settings.

The settings for notifications can be set as follows:

- Enable in-app notifications: Set this option to keep users notified about relevant Salesforce activity while they are using Salesforce1.
- Enable push notifications: Set this option to keep users notified of relevant Salesforce activity when they are not using the Salesforce1 downloadable app.
- Include full content in push notifications: Keep this checkbox unchecked if you do not want users to receive full content in push notifications. This can prevent users from receiving potentially sensitive data that might be in comments, for example. If you set this option, a pop-up dialog appears displaying terms and conditions, where you must click on OK or Cancel.

Salesforce1 branding

This option allows you to customize the appearance of the Salesforce1 app so that it complies with any company branding requirements that might be in place. Salesforce1 branding is supported in downloadable app Version 5.2 or higher and also in the mobile browser app.
To specify Salesforce1 branding, navigate to Setup | Mobile Administration | Salesforce1 | Branding.

Salesforce1 compact layouts

In Salesforce1, compact layouts are used to display the key fields of a record and are specifically designed for viewing records on touchscreen mobile devices. As space is limited on mobile devices and quick recognition of records is important, the first four fields that you assign to a compact layout are displayed. If a mobile user does not have the required access to one of the first four fields assigned to a compact layout, the next field (if more than four fields have been set on the layout) is used.

If you are yet to create custom compact layouts, records are displayed using a read-only, predefined system default compact layout. After you have created a custom compact layout, you can set it as the primary compact layout for that object. As with the full Salesforce CRM site, if you have record types associated with an object, you can alter the primary compact layout assignment and assign specific compact layouts to different record types. You can also clone a compact layout from its detail page.

The following field types cannot be included in compact layouts: text area, long text area, rich text area, and multiselect picklists.

Salesforce1 offline access

In Salesforce1, the mechanism that handles offline access is driven by each user's most recently used records. These records are cached for offline access; at the time of writing, they are read-only. The cached data is encrypted and secured through persistent storage by the Salesforce1 downloadable apps.

Offline access is available in Salesforce1 downloadable apps Version 6.0 and higher and was first released in Summer 2014. Offline access is enabled by default when the Salesforce1 downloadable app is installed. To manage these settings, navigate to Setup | Mobile Administration | Offline, and check or uncheck Enable Offline Sync for Salesforce1.

When offline access is enabled, data based on the objects in the Recent section of the Salesforce1 navigation menu and on the user's most recently viewed records is downloaded to each user's mobile device. The data is encrypted and stored in a secure, persistent cache on the mobile device.

Setting up Salesforce1 with the Salesforce1 Wizard

The Salesforce1 Wizard simplifies the setting up of the Salesforce1 mobile app. The wizard offers a visual tour of the key setup steps and is useful if you are new to Salesforce1 or need to quickly set up the core Salesforce1 settings. The Salesforce1 Wizard guides you through the following Salesforce1 configuration steps:

- Choose which items appear in the navigation menu
- Configure global actions
- Create a contact custom compact layout
- Optionally, invite users to start using the Salesforce1 app

To access the Salesforce1 Wizard, navigate to Setup | Salesforce1 Setup and click on Launch Quick Start Wizard within the Salesforce1 Setup page. Upon clicking on the Let's Get Started section link, you will be presented with the Salesforce1 Setup visual tour, as described in the next section.

The Quick Start Wizard

The Quick Start Wizard guides you through the minimum configuration steps required to set up Salesforce1.
Clicking on the Launch Quick Start Wizard button initiates a step-by-step guide through the essential setup tasks for Salesforce1. The five steps are:

1. Customize the Navigation Menu: This step sets up the navigation menu for all users in your organization. To reorder items, drag them up and down. To remove items, drag them to the Available Items list.

2. Arrange Global Actions: Global actions provide users with quick access to Salesforce functions; in this step, you choose and arrange the Salesforce1 global actions. Actions might have a different appearance, depending on your version of Salesforce1.

3. Create a Custom Compact Layout for Contacts: Compact layouts are used to show the key fields of a record in the highlights area at the top of the record detail. In this step, you are able to create a custom compact layout for contacts to show, for example, a contact's name, e-mail, and phone number. After you have completed the Quick Start Wizard, you can create compact layouts for other objects as required.

4. Review: In this step, you are given the chance to preview the changes and verify the results. The review step screen gives you a live preview that uses your current access as the logged-in user.

5. Send Invitations: This is the final step of the Quick Start Wizard, which provides you with a basic setup of Salesforce1 and allows you to get feedback on what you have implemented. In this step, you can invite your users to start using the Salesforce1 app. This step can be skipped, and you can always send invitations later from the Salesforce1 Setup page. You can also implement additional options to customize the app, such as incorporating your own branding.

Differences between Salesforce1 and the full Salesforce CRM browser app

In the Winter 2015 release and at the time of writing, Salesforce1 does not have all of the features of the full Salesforce CRM site; moreover, in some areas, it includes functionality that is not available in, or is different from, the complete Salesforce site. As an example, on the full Salesforce CRM site, compact layouts determine which fields appear in the Chatter feed item that appears after a user creates a record via a publisher action, whereas compact layouts in Salesforce1 are used to display the key fields on a record. For details about the features that differ between the full Salesforce CRM site and Salesforce1, refer to Salesforce1 Limits and Differences from the Full Salesforce Site within the Salesforce Help sections.

Summary

In this article, we looked at the ways in which mobile has become the new normal way to stay connected in both our personal and professional lives. Salesforce has recognized this well; we are all spending time connected to the cloud and using business applications but, instead of sitting at a desk, users are often on the go. To help their customers become successful businesses in this mobile-first world, Salesforce has produced mobile solutions that can help users get things done regardless of where they are and what they are doing. We looked at SalesforceA, an admin-specific app that can help you manage users and monitor the status of Salesforce while on the move.
We discussed Salesforce Touch, which has been replaced by Salesforce1, and we also covered the features and benefits of Salesforce1, which is available as both a downloadable app and a browser app.

Three.js - Materials and Texture

Packt
06 Feb 2015
11 min read
In this article by Jos Dirksen, author of the book Three.js Cookbook, we will learn how Three.js offers a large number of different materials and supports many different types of textures. These textures provide a great way to create interesting effects and graphics. In this article, we'll show you recipes that allow you to get the most out of these components provided by Three.js.

Using HTML canvas as a texture

Most often when you use textures, you use static images. With Three.js, however, it is also possible to create interactive textures. In this recipe, we will show you how you can use an HTML5 canvas element as the input for your texture. Any change to this canvas is automatically reflected, after you inform Three.js about the change, in the texture used on the geometry.

Getting ready

For this recipe, we need an HTML5 canvas element that can be displayed as a texture. We could create one ourselves and add some output but, for this recipe, we've chosen something else: a simple JavaScript library that outputs a clock to a canvas element (see the 04.03-use-html-canvas-as-texture.html example). The JavaScript used to render the clock was based on the code from this site: http://saturnboy.com/2013/10/html5-canvas-clock/. To include the code that renders the clock in our page, we need to add the following to the head element:

    <script src="../libs/clock.js"></script>

How to do it...

To use a canvas as a texture, we need to perform a couple of steps:

1. The first thing we need to do is create the canvas element:

    var canvas = document.createElement('canvas');
    canvas.width = 512;
    canvas.height = 512;

   Here, we create an HTML canvas element programmatically and define a fixed width and height.

2. Now that we've got a canvas, we need to render on it the clock that we use as the input for this recipe. The library is very easy to use; all you have to do is pass in the canvas element we just created:

    clock(canvas);

3. At this point, we've got a canvas that renders and updates an image of a clock. What we need to do now is create a geometry and a material and use this canvas element as a texture for this material:

    var cubeGeometry = new THREE.BoxGeometry(10, 10, 10);
    var cubeMaterial = new THREE.MeshLambertMaterial();
    cubeMaterial.map = new THREE.Texture(canvas);
    var cube = new THREE.Mesh(cubeGeometry, cubeMaterial);

   To create a texture from a canvas element, all we need to do is create a new instance of THREE.Texture and pass in the canvas element we created in step 1. We assign this texture to the cubeMaterial.map property, and that's it.

4. If you run the recipe at this step, you might see the clock rendered on the sides of the cube. However, the clock won't update itself. We need to tell Three.js that the canvas element has been changed. We do this by adding the following to the rendering loop:

    cubeMaterial.map.needsUpdate = true;

   This informs Three.js that our canvas texture has changed and needs to be updated the next time the scene is rendered.

With these four simple steps, you can easily create interactive textures and use everything you can create on a canvas element as a texture in Three.js.

How it works...

How this works is actually pretty simple. Three.js uses WebGL to render scenes and apply textures. WebGL has native support for using HTML canvas elements as textures, so Three.js just passes the provided canvas element on to WebGL, where it is processed like any other texture.
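For completeness, here is a minimal sketch of the rendering loop referred to in step 4. It assumes the renderer, scene, camera, and the cube and cubeMaterial objects from this recipe already exist; only the needsUpdate line is essential to the recipe:

    function render() {
      requestAnimationFrame(render);
      // flag the canvas texture as dirty so WebGL re-uploads it this frame
      cubeMaterial.map.needsUpdate = true;
      cube.rotation.y += 0.01; // optional: rotate the cube to show all faces
      renderer.render(scene, camera);
    }
    render();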
Making part of an object transparent

You can create a lot of interesting visualizations using the various materials available with Three.js. In this recipe, we'll look at how you can use the materials available with Three.js to make part of an object transparent. This will allow you to create complex-looking geometries with relative ease.

Getting ready

Before we dive into the required steps in Three.js, we first need the texture that we will use to make an object partially transparent. For this recipe, we use a texture that was created in Photoshop. You don't have to use Photoshop; the only thing you need to keep in mind is that you use an image with a transparent background. Using this texture, in this recipe, we'll show you how to render a sphere of which only part is visible, so that you can look through the sphere and see its back side (04.08-make-part-of-object-transparent.html).

How to do it...

Let's look at the steps you need to take to accomplish this:

1. The first thing we do is create the geometry. For this recipe, we use THREE.SphereGeometry:

    var sphereGeometry = new THREE.SphereGeometry(6, 20, 20);

   Just like in all the other recipes, you can use whatever geometry you want.

2. In the second step, we create the material:

    var mat = new THREE.MeshPhongMaterial();
    mat.map = new THREE.ImageUtils.loadTexture(
      "../assets/textures/partial-transparency.png");
    mat.transparent = true;
    mat.side = THREE.DoubleSide;
    mat.depthWrite = false;
    mat.color = new THREE.Color(0xff0000);

   As you can see in this fragment, we create THREE.MeshPhongMaterial and load the texture we saw in the Getting ready section of this recipe. To render this correctly, we also need to set the side property to THREE.DoubleSide so that the inside of the sphere is also rendered, and we need to set the depthWrite property to false. This tells WebGL that we still want to test our vertices against the WebGL depth buffer, but we don't write to it. You often need to set this to false when working with more complex transparent objects or particles.

3. Finally, add the sphere to the scene:

    var sphere = new THREE.Mesh(sphereGeometry, mat);
    scene.add(sphere);

With these simple steps, you can create really interesting effects by just experimenting with textures and geometries.

There's more...

With Three.js, it is possible to repeat textures (refer to the Setup repeating textures recipe). You can use this to create interesting-looking objects. The code required to set a texture to repeat is the following:

    var mat = new THREE.MeshPhongMaterial();
    mat.map = new THREE.ImageUtils.loadTexture(
      "../assets/textures/partial-transparency.png");
    mat.transparent = true;
    mat.map.wrapS = mat.map.wrapT = THREE.RepeatWrapping;
    mat.map.repeat.set(4, 4);
    mat.depthWrite = false;
    mat.color = new THREE.Color(0x00ff00);

By changing the mat.map.repeat.set values, you define how often the texture is repeated.

Using a cubemap to create reflective materials

With the approach Three.js uses to render scenes in real time, it is difficult and very computationally intensive to create reflective materials. Three.js, however, provides a way to cheat and approximate reflectivity. For this, Three.js uses cubemaps. In this recipe, we'll explain how to create cubemaps and use them to create reflective materials.

Getting ready

A cubemap is a set of six images that can be mapped to the inside of a cube.
Cubemaps can be created from a panorama picture. In Three.js, we map such a set of images to the inside of a cube or sphere and use that information to calculate reflections. The 04.10-use-reflections.html example shows what this looks like when rendered in Three.js: the objects in the center of the scene reflect the environment they are in. This surrounding environment is often called a skybox.

To get ready, the first thing we need to do is get a cubemap. If you search on the Internet, you can find some ready-to-use cubemaps, but it is also very easy to create one yourself. To do this, go to http://gonchar.me/panorama/. On this page, you can upload a panoramic picture, and it will be converted to a set of pictures you can use as a cubemap. Perform the following steps:

1. First, get a 360-degree panoramic picture. Once you have one, upload it to the http://gonchar.me/panorama/ website by clicking on the large OPEN button.
2. Once it is uploaded, the tool will convert the panorama picture to a cubemap.
3. When the conversion is done, you can download the various cubemap sides. The recipes in this book use the naming convention provided by the Cube map sides option, so download them. You'll end up with six images with names such as right.png, left.png, top.png, bottom.png, front.png, and back.png.

Once you've got the sides of the cubemap, you're ready to perform the steps in the recipe.

How to do it...

To use the cubemap we created in the previous section and create reflecting material, we need to perform a fair number of steps, but it isn't that complex:

1. The first thing you need to do is create an array from the cubemap images you downloaded:

    var urls = [
      '../assets/cubemap/flowers/right.png',
      '../assets/cubemap/flowers/left.png',
      '../assets/cubemap/flowers/top.png',
      '../assets/cubemap/flowers/bottom.png',
      '../assets/cubemap/flowers/front.png',
      '../assets/cubemap/flowers/back.png'
    ];

2. With this array, we can create a cubemap texture like this:

    var cubemap = THREE.ImageUtils.loadTextureCube(urls);
    cubemap.format = THREE.RGBFormat;

3. From this cubemap, we can use THREE.BoxGeometry and a custom THREE.ShaderMaterial object to create a skybox (the environment surrounding our meshes):

    var shader = THREE.ShaderLib["cube"];
    shader.uniforms["tCube"].value = cubemap;
    var material = new THREE.ShaderMaterial({
      fragmentShader: shader.fragmentShader,
      vertexShader: shader.vertexShader,
      uniforms: shader.uniforms,
      depthWrite: false,
      side: THREE.DoubleSide
    });

    // create the skybox
    var skybox = new THREE.Mesh(
      new THREE.BoxGeometry(10000, 10000, 10000), material);
    scene.add(skybox);

   Three.js provides a custom shader (a piece of WebGL code) that we can use for this. As you can see in the code snippet, to use this WebGL code, we need to define a THREE.ShaderMaterial object. With this material, we create a giant THREE.BoxGeometry object that we add to the scene.

4. Now that we've created the skybox, we can define the reflecting objects:

    var sphereGeometry = new THREE.SphereGeometry(4, 15, 15);
    var envMaterial = new THREE.MeshBasicMaterial({envMap: cubemap});
    var sphere = new THREE.Mesh(sphereGeometry, envMaterial);

   As you can see, we also pass in the cubemap we created as a property (envMap) to the material. This informs Three.js that this object is positioned inside a skybox, defined by the images that make up the cubemap.
5. The last step is to add the object to the scene, and that's it:

    scene.add(sphere);

In the example at the beginning of this recipe, you saw three geometries. You can use this approach with all different types of geometries; Three.js will determine how to render the reflective area.

How it works...

Three.js itself doesn't really do that much to render the cubemap object. It relies on standard functionality provided by WebGL. In WebGL, there is a construct called samplerCube. With samplerCube, you can sample, based on a specific direction, which color matches the cubemap object. Three.js uses this to determine the color value for each part of the geometry. The result is that, on each mesh, you can see a reflection of the surrounding cubemap, computed using the WebGL textureCube function. In Three.js, this results in the following call (taken from the WebGL shader in GLSL):

    vec4 cubeColor = textureCube(tCube, vec3(-vReflect.x, vReflect.yz));

A more in-depth explanation of how this works can be found at http://codeflow.org/entries/2011/apr/18/advanced-webgl-part-3-irradiance-environment-map/#cubemap-lookup.

There's more...

In this recipe, we created the cubemap object by providing six separate images. There is, however, an alternative way to create the cubemap object. If you've got a 360-degree panoramic image, you can use the following code to directly create a cubemap object from that image:

    var texture = THREE.ImageUtils.loadTexture('360-degrees.png',
      new THREE.UVMapping());

Normally, when you create a cubemap object, you use the code shown in this recipe to map it to a skybox. This usually gives the best results but requires some extra code. You can also use THREE.SphereGeometry to create a skybox, like this:

    var mesh = new THREE.Mesh(
      new THREE.SphereGeometry(500, 60, 40),
      new THREE.MeshBasicMaterial({ map: texture }));
    mesh.scale.x = -1;

This applies the texture to a sphere and, with mesh.scale, turns this sphere inside out.

Besides reflection, you can also use a cubemap object for refraction (think about light bending through water drops or glass objects). All you have to do to make a refractive material is load the cubemap object like this:

    var cubemap = THREE.ImageUtils.loadTextureCube(urls,
      new THREE.CubeRefractionMapping());

And define the material in the following way:

    var envMaterial = new THREE.MeshBasicMaterial({envMap: cubemap});
    envMaterial.refractionRatio = 0.95;

Summary

In this article, we learned about the different textures and materials supported by Three.js.
Working with WebStart and the Browser Plugin

Packt
06 Feb 2015
12 min read
In this article by Alex Kasko, Stanislav Kobylyanskiy, and Alexey Mironchenko, authors of the book OpenJDK Cookbook, we will cover the following topics:

Building the IcedTea browser plugin on Linux
Using the IcedTea Java WebStart implementation on Linux
Preparing the IcedTea Java WebStart implementation for Mac OS X
Preparing the IcedTea Java WebStart implementation for Windows

Introduction

For a long time, for end users, the Java applets technology was the face of the whole Java world. For a lot of non-developers, the word Java itself is a synonym for the Java browser plugin that allows running Java applets inside web browsers. The Java WebStart technology is similar to the Java browser plugin, but it runs remotely loaded Java applications as separate applications outside of web browsers.

The OpenJDK open source project does not contain implementations of either the browser plugin or the WebStart technologies. The Oracle Java distribution, which otherwise closely matches the OpenJDK codebase, provides its own closed source implementation of these technologies. The IcedTea-Web project contains free and open source implementations of the browser plugin and WebStart technologies. The IcedTea-Web browser plugin supports only GNU/Linux operating systems, whereas the WebStart implementation is cross-platform.

While the IcedTea implementation of WebStart is well-tested and production-ready, it has numerous incompatibilities with the Oracle WebStart implementation. These differences can be seen as corner cases; some of them are:

Different behavior when parsing not well-formed JNLP descriptor files: The Oracle implementation is generally more lenient with malformed descriptors.
Differences in JAR (re)downloading and caching behavior: The Oracle implementation uses caching more aggressively.
Differences in sound support: This is due to differences in sound support between Oracle Java and IcedTea on Linux. Linux historically has multiple different sound providers (ALSA, PulseAudio, and so on) and IcedTea has wider support for different providers, which can lead to sound misconfiguration.

The IcedTea-Web browser plugin (as it is built on WebStart) has these incompatibilities too. On top of them, it can have more incompatibilities in relation to browser integration. User interface forms and general browser-related operations such as access from/to JavaScript code should work fine with both implementations. But historically, the browser plugin was widely used for security-critical applications like online bank clients. Such applications usually require security facilities from browsers, such as access to certificate stores or hardware crypto-devices, that can differ from browser to browser depending on the OS (for example, support only for Windows), browser version, Java version, and so on. Because of that, many real-world applications can have problems running in the IcedTea-Web browser plugin on Linux.

Both WebStart and the browser plugin are built on the idea of downloading (possibly untrusted) code from remote locations, and proper privilege checking and sandboxed execution of that code is a notoriously complex task. Security issues reported in the Oracle browser plugin (the most widely known are the issues from 2012) are also fixed separately in IcedTea-Web.

Building the IcedTea browser plugin on Linux

The IcedTea-Web project is not inherently cross-platform; it is developed on Linux and for Linux, and so it can be built quite easily on popular Linux distributions.
The two main parts of it (stored in corresponding directories in the source code repository) are netx and plugin. NetX is a pure Java implementation of the WebStart technology. We will look at it more thoroughly in the following recipes of this article. Plugin is an implementation of the browser plugin using the NPAPI plugin architecture that is supported by multiple browsers. Plugin is written partly in Java and partly in native code (C++), and it officially supports only Linux-based operating systems. NPAPI is often considered dated, overcomplicated, and insecure, and modern web browsers have enough built-in capabilities to not require external plugins; browsers have gradually reduced their support for NPAPI. Despite that, at the time of writing this book, the IcedTea-Web browser plugin worked on all major Linux browsers (Firefox and derivatives, Chromium and derivatives, and Konqueror).

We will build the IcedTea-Web browser plugin from sources using Ubuntu 12.04 LTS amd64.

Getting ready

For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed.

How to do it...

The following procedure will help you to build the IcedTea-Web browser plugin:

Install prepackaged binaries of OpenJDK 7:
sudo apt-get install openjdk-7-jdk
Install the GCC toolchain and build dependencies:
sudo apt-get build-dep openjdk-7
Install the specific dependency for the browser plugin:
sudo apt-get install firefox-dev
Download and decompress the IcedTea-Web source code tarball:
wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz
tar xzvf icedtea-web-1.4.2.tar.gz
Run the configure script to set up the build environment:
./configure
Run the build process:
make
Install the newly built plugin into the /usr/local directory:
sudo make install
Configure the Firefox web browser to use the newly built plugin library:
mkdir ~/.mozilla/plugins
cd ~/.mozilla/plugins
ln -s /usr/local/IcedTeaPlugin.so libjavaplugin.so
Check whether the IcedTea-Web plugin has appeared under Tools | Add-ons | Plugins.
Open the http://java.com/en/download/installed.jsp web page to verify that the browser plugin works.

How it works...

The IcedTea browser plugin requires the IcedTea Java implementation to be compiled successfully. The prepackaged OpenJDK 7 binaries in Ubuntu 12.04 are based on IcedTea, so we installed them first. The plugin uses the GNU Autoconf build system that is common among free software tools. The firefox-dev package is required to access the NPAPI headers. The built plugin may be installed into Firefox for the current user only, without requiring administrator privileges. For that, we created a symbolic link to our plugin in the place where Firefox expects to find the libjavaplugin.so plugin library.

There's more...

The plugin can also be installed into other browsers with NPAPI support, but installation instructions can be different for different browsers and different Linux distributions. As the NPAPI architecture does not depend on the operating system, in theory, a plugin can be built for non-Linux operating systems. But currently, no such ports are planned.

Using the IcedTea Java WebStart implementation on Linux

On the Java platform, the JVM needs to perform the class load process for each class it wants to use. This process is opaque for the JVM and actual bytecode for loaded classes may come from one of many sources.
For example, this method allows the Java Applet classes to be loaded from a remote server to the Java process inside the web browser. Remote class loading also may be used to run remotely loaded Java applications in standalone mode, without integration with the web browser. This technique is called Java WebStart and was developed under Java Specification Request (JSR) number 56. To run a Java application remotely, WebStart requires an application descriptor file that should be written using the Java Network Launching Protocol (JNLP) syntax. This file is used to define the remote server to load the application from, along with some metainformation. The WebStart application may be launched from a web page by clicking on the JNLP link, or without the web browser using a JNLP file obtained beforehand. In either case, running the application is completely separate from the web browser, but uses a sandboxed security model similar to Java Applets.

The OpenJDK project does not contain a WebStart implementation; the Oracle Java distribution provides its own closed-source WebStart implementation. An open source WebStart implementation exists as part of the IcedTea-Web project. It was initially based on the NETwork eXecute (NetX) project. Contrary to the Applet technology, WebStart does not require any web browser integration. This allowed developers to implement the NetX module using pure Java without native code. For integration with Linux-based operating systems, IcedTea-Web implements the javaws command as a shell script that launches the netx.jar file with proper arguments. In this recipe, we will build the NetX module from the official IcedTea-Web source tarball.

Getting ready

For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed.

How to do it...

The following procedure will help you to build a NetX module:

Install prepackaged binaries of OpenJDK 7:
sudo apt-get install openjdk-7-jdk
Install the GCC toolchain and build dependencies:
sudo apt-get build-dep openjdk-7
Download and decompress the IcedTea-Web source code tarball:
wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz
tar xzvf icedtea-web-1.4.2.tar.gz
Run the configure script to set up a build environment, excluding the browser plugin from the build:
./configure --disable-plugin
Run the build process:
make
Install the newly built NetX module into the /usr/local directory:
sudo make install
Run the WebStart application example from the Java tutorial:
javaws http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp

How it works...

The javaws shell script is installed under the /usr/local directory. When launched with a path or a link to a JNLP file, javaws launches the netx.jar file, adding it to the boot classpath (for security reasons) and providing the JNLP link as an argument.

Preparing the IcedTea Java WebStart implementation for Mac OS X

The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Mac OS X. IcedTea-Web provides the javaws launcher implementation only for Linux-based operating systems. In this recipe, we will create a simple implementation of the WebStart launcher script for Mac OS X.

Getting ready

For this recipe, we will need Mac OS X Lion with Java 7 (the prebuilt OpenJDK or Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using instructions from the previous recipe.
How to do it...

The following procedure will help you to run WebStart applications on Mac OS X:

Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp.
Test that this application can be run from the terminal using netx.jar:
java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp
Create the wslauncher.sh bash script with the following contents:

#!/bin/bash
if [ "x$JAVA_HOME" = "x" ] ; then
    JAVA="$( which java 2>/dev/null )"
else
    JAVA="$JAVA_HOME"/bin/java
fi
if [ "x$JAVA" = "x" ] ; then
    echo "Java executable not found"
    exit 1
fi
if [ "x$1" = "x" ] ; then
    echo "Please provide JNLP file as first argument"
    exit 1
fi
$JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1

Mark the launcher script as executable:
chmod 755 wslauncher.sh
Run the application using the launcher script:
./wslauncher.sh dynamictree_webstart.jnlp

How it works...

The netx.jar file contains a Java application that can read JNLP files and download and run the classes described in the JNLP. But for security reasons, netx.jar cannot be launched directly as an application (using the java -jar netx.jar syntax). Instead, netx.jar is added to the privileged boot classpath and is run specifying the main class directly. This allows us to download and run applications in sandboxed mode. The wslauncher.sh script tries to find the Java executable file using the PATH and JAVA_HOME environment variables and then launches the specified JNLP through netx.jar.

There's more...

The wslauncher.sh script provides a basic solution to run WebStart applications from the terminal. To integrate netx.jar into your operating system environment properly (to be able to launch WebStart apps using JNLP links from the web browser), a native launcher or a custom platform scripting solution may be used. Such solutions lie outside the scope of this book.

Preparing the IcedTea Java WebStart implementation for Windows

The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Windows; we used it on Linux and Mac OS X in the previous recipes in this article. In this recipe, we will create a simple implementation of the WebStart launcher script for Windows.

Getting ready

For this recipe, we will need a version of Windows running with Java 7 (the prebuilt OpenJDK or Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using instructions from the previous recipe in this article.

How to do it...

The following procedure will help you to run WebStart applications on Windows:

Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp.
Test that this application can be run from the terminal using netx.jar:
java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp
Create the wslauncher.bat batch script with contents along these lines:

@echo off
if "%JAVA_HOME%"=="" (
    set JAVA=java
) else (
    set JAVA=%JAVA_HOME%\bin\java
)
if "%1"=="" (
    echo Please provide JNLP file as first argument
    exit /b 1
)
"%JAVA%" -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot %1

Run the application using the launcher script:
wslauncher.bat dynamictree_webstart.jnlp

How it works...

The netx.jar module must be added to the boot classpath as it cannot be run directly because of security reasons. The wslauncher.bat script tries to find the Java executable using the JAVA_HOME environment variable and then launches the specified JNLP through netx.jar.

There's more...

The wslauncher.bat script may be registered as the default application to run JNLP files. This will allow you to run WebStart applications from the web browser. But the current script will show the batch window for a short period of time before launching the application. It also does not support looking for Java executables in the Windows Registry. A more advanced script without those problems may be written using Visual Basic script (or any other native scripting solution) or as a native executable launcher. Such solutions lie outside the scope of this book.

Summary

In this article, we covered the configuration and installation of WebStart and browser plugin components, which are the biggest parts of the IcedTea project.
Getting Your Own Video and Feeds

Packt
06 Feb 2015
18 min read
"One server to satisfy them all" could have been the name of this article by David Lewin, the author of BeagleBone Media Center. We now have a great media server where we can share any media, but we would like to be more independent so that we can choose the functionalities the server can have. The goal of this article is to let you cross the bridge, where you are going to increase your knowledge by getting your hands dirty. After all, you want to build your own services, so why not create your own contents as well. (For more resources related to this topic, see here.) More specifically, here we will begin by building a webcam streaming service from scratch, and we will see how this can interact with what we have implemented previously in the server. We will also see how to set up a service to retrieve RSS feeds. We will discuss the services in the following sections: Installing and running MJPG-Streamer Detecting the hardware device and installing drivers and libraries for a webcam Configuring RSS feeds with Leed Detecting the hardware device and installing drivers and libraries for a webcam Even though today many webcams are provided with hardware encoding capabilities such as the Logitech HD Pro series, we will focus on those without this capability, as we want to have a low budget project. You will then learn how to reuse any webcam left somewhere in a box because it is not being used. At the end, you can then create a low cost video conference system as well. How to know your webcam As you plug in the webcam, the Linux kernel will detect it, so you can read every detail it's able to retrieve about the connected device. We are going to see two ways to retrieve the webcam we have plugged in: the easy one that is not complete and the harder one that is complete. "All magic comes with a price."                                                                                     –Rumpelstiltskin, Once Upon a Time Often, at a certain point in your installation, you have to choose between the easy or the hard way. Most of the time, powerful Linux commands or tools are not thought to be easy at first but after some experiments you'll discover that they really can make your life better. Let's start with the fast and easy way, which is lsusb : debian@arm:~$ lsusb Bus 001 Device 002: ID 046d:0802 Logitech, Inc. Webcam C200 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub This just confirms that the webcam is running well and is seen correctly from the USB. Most of the time we want more details, because a hardware installation is not exactly as described in books or documentations, so you might encounter slight differences. This is why the second solution comes in. Among some of the advantages, you are able to know each step that has taken place when the USB device was discovered by the board and Linux, such as in a hardware scenario: debian@arm:~$ dmesg A UVC device (here, a Logitech C200) has been used to obtain these messages Most probably, you won't exactly have the same outputs, but they should be close enough so that you can interpret them easily when they are referred to: New USB device found: This is the main message. In case of any issue, we will check its presence elsewhere. This message indicates that this is a hardware error and not a software or configuration error that you need to investigate. idVendor and idProduct: This message indicates that the device has been detected. 
This information is interesting because you can look up the manufacturer's details. Most recent webcams are compatible with the Linux USB Video Class (UVC); you can check yours at http://www.ideasonboard.org/uvc/#devices. Among all the messages, you should also look for the one that says Registered new interface driver, because failing to find it can be a clue that Linux could detect the device but wasn't able to install it. The new device will be detected as /dev/video0. Nevertheless, at the start, you may see your webcam under a different device name, according to your BeagleBone configuration, for example, if a video capable cape is already plugged in.

Setting up your webcam

Now we know what is seen from the USB level. The next step is to use the crucial Video4Linux driver, which is like a Swiss army knife for anything related to video capture:

debian@arm:~$ sudo apt-get install v4l-utils

The primary use of this tool is to inquire about what the webcam can provide and some of its capabilities:

debian@arm:~$ v4l2-ctl --all

There are four distinctive sections that let you know how your webcam will be used according to the current settings:

Driver info (1): This contains the following information:
Name, vendor, and product IDs that we find in the system message
The driver info (the kernel's version)
Capabilities: the device is able to provide video streaming

Video capture supported format(s) (2): This contains the following information:
What resolution(s) are to be used. As this example uses an old webcam, there is not much to choose from, but you can easily have a lot of choices with today's devices.
The pixel format, which is all about how the data is encoded; more details can be retrieved about format capabilities (see the next paragraph).
The remaining stuff is relevant only if you want to know things in precise detail.

Crop capabilities (3): This contains your current settings. Indeed, you can define the video crop window that will be used. If needed, use the crop settings:
--set-crop-output=top=<x>,left=<y>,width=<w>,height=<h>

Video input (4): This contains the following information:
The input number. Here we have used 0, which is the one that we found previously.
Its current status.
The famous frames per second, which gives you a local ratio. This is not what you will obtain when you use a server, as network latencies will downgrade this ratio value.

You can grab the capabilities for each parameter. For instance, if you want to see all the video formats the webcam can provide, type this command:

debian@arm:~$ v4l2-ctl --list-formats

Here, we see that we can also use the MJPEG format directly provided by the cam. While this part is not mandatory, such a hardware tour is interesting because you know what you can do with your device. It is also a good habit to be able to retrieve diagnostics when the webcam shows some bad signs. If you would like to get more in-depth knowledge about your device, install the uvcdynctrl package, which lets you retrieve all the formats and frame rates supported.
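If you want the supported formats along with every frame size and frame interval in a single listing, v4l2-ctl can report that directly. This one-liner is our addition to the recipe and assumes a reasonably recent v4l-utils package:

debian@arm:~$ v4l2-ctl --list-formats-ext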
Installing and running MJPG-Streamer

Now that we have checked the chain from the hardware level up to the driver, we can install the software that will make use of Video4Linux for video streaming. Here comes MJPG-Streamer. This application aims to provide you with a JPEG stream on the network, available for browsers and all video applications. Besides this, we are also interested in this solution as it's made for systems with less advanced CPUs, so we can start MJPG-Streamer as a service. With this streamer, you can also use built-in hardware compression and even control webcam features such as pan, tilt, rotation, and zoom.

Installing MJPG-Streamer

Before installing MJPG-Streamer, we will install all the necessary dependencies:

debian@arm:~$ sudo apt-get install subversion libjpeg8-dev imagemagick

Next, we will retrieve the code from the project:

debian@arm:~$ svn checkout http://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code

You can now build the executable from the sources you just downloaded by performing the following steps:

Enter the local directory you have downloaded:
debian@arm:~$ cd mjpg-streamer-code/mjpg-streamer
Then enter the following command:
debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ make

When the compilation is complete, we end up with some new files produced by the compilation: the executable and some plugins. That's all that is needed, so the application is now considered ready. We can now try it out. Not so much to do after all, don't you think?

Starting the application

This section aims at getting you started quickly with MJPG-Streamer. At the end, we'll see how to start it as a service on boot. Before getting started, the server requires some plugins to be copied into the dedicated lib directory for this purpose:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ sudo cp input_uvc.so output_http.so /usr/lib

The MJPG-Streamer application has to know the path where these files can be found, so we define the following environment variable:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

Enough preparation! Time to start streaming:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www"

As the script starts, the input parameters that will be taken into consideration are displayed. You can now identify this information, as it has been explained previously:

The detected device from V4L2
The resolution that will be displayed, according to your settings
Which port will be opened
Some controls that depend on your camera capabilities (tilt, pan, and so on)

If you need to change the port used by MJPG-Streamer, add -p xxxx at the end of the command, which is shown as follows:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www -p 1234"

Let's add some security

If you want to add some security, then you should set the credentials:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w ./www -c debian:temppwd"

Credentials can always be stolen and used without your consent. The best way to ensure that your stream stays confidential all along would be to encrypt it. So if you intend to use strong encryption for secured applications, the crypto-cape is worth taking a look at: http://datko.net/2013/10/03/howto_crypto_beaglebone_black/.

"I'm famous" – your first stream

That's it. The webcam is made accessible to everyone across the network from BeagleBone; you can access the video from your browser and connect to http://192.168.0.15:8080/. You will then see the default welcome screen, bravo!

Your first contact with the MJPG-Server

You might wonder how you would get informed about which port to use among those already assigned.
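A simple answer (our own suggestion rather than part of the original recipe) is to list the ports already in use, and then pick a free one for the -p option:

debian@arm:~$ netstat -tln

Any port that does not show up in the Local Address column is free for MJPG-Streamer.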
Using our stream across the network

Now that the webcam is available across the network, you have several options to handle this:

You can use the direct flow available from the home page. On the left-hand side menu, just click on the stream tab.
Using VLC, you can open the stream with the direct link available at http://192.168.0.15:8080/?action=stream. The VideoLAN menu tab is an M3U-playlist link generator that you can click on. This will generate a playlist file you can open thereafter. In this case, VLC is efficient, as you can transcode the webcam stream to any format you need. Although it's not mandatory, this solution is the most efficient, as it frees the BeagleBone's CPU so that your server can focus on providing services.
Using MediaDrop, we can integrate this new stream in our shiny MediaDrop server, knowing that currently MediaDrop doesn't support direct local streams. You can create a new post with the related URL link in the message body, as shown in the following screenshot:

Starting the streaming service automatically on boot

In the beginning, we saw that MJPG-Streamer needs only one command line to be started. We can put it in a bash script, but starting it as a service on boot is far better. For this, use a console text editor – nano or vim – and create a file dedicated to this service. Let's call it start_mjpgstreamer and add the following commands:

#! /bin/sh
# /etc/init.d/start_mjpgstreamer
export LD_LIBRARY_PATH="/home/debian/mjpg-streamer/mjpg-streamer-code/mjpg-streamer:$LD_LIBRARY_PATH"
EXEC_PATH="/home/debian/mjpg-streamer/mjpg-streamer-code/mjpg-streamer"
$EXEC_PATH/mjpg_streamer -i "input_uvc.so" -o "output_http.so -w $EXEC_PATH/www"

You can then use administrator rights to add it to the services:

debian@arm:~$ sudo /etc/init.d/start_mjpgstreamer start

On the next reboot, MJPG-Streamer will be started automatically.

Exploring new capabilities to install

For those about to explore, we salute you!

Plugins

Remember that at the beginning of this article, we began the demonstration with two plugins:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www"

If we take a moment to look at these plugins, we will understand that the first plugin is responsible for handling the webcam directly from the driver. Simply ask for help and options as follows:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer --input "input_uvc.so --help"

The second plugin is about the web server settings:

The path to the directory that contains the final web server HTML pages. This implies that you can modify the existing pages with a little effort or create new ones based on those provided.
Force a special port to be used. As said previously, ports are dedicated on a server; you define here which one will be used for this service.

You can discover many others by asking:

debian@arm:~$ ./mjpg_streamer --output "output_http.so --help"

Apart from input_uvc and output_http, you have other available plugins to play with. Let's take a look at the plugins directory.

Another tool for the webcam

The MJPG-Streamer project is dedicated to streaming over the network, but it is not the only one. For instance, do you have any specific needs, such as monitoring your house/son/cat/Jon Snow figurine? buuuuzzz: if you answered yes to the last one, you just defined yourself as a geek. Well, in that case, the Motion project is for you; just install the motion package and start it with the default motion.conf configuration.
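On Debian, that boils down to something like the following (our addition; package names can vary between releases):

debian@arm:~$ sudo apt-get install motion
debian@arm:~$ motion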
You will then record videos and pictures of any moving object/person that is detected. Like MJPG-Streamer, Motion aims to be a low CPU consumer, so it works very well on BeagleBone Black.

Configuring RSS feeds with Leed

Our server can handle videos, pictures, and music from any source, and it would be cool to have another tool to retrieve news from some RSS providers. This can be done with Leed, an RSS project designed for servers. You can have a final result as shown in the following screenshot:

This project has a "quick and easy" installation spirit, so you can give it a try without hassle. Leed (for Light Feed) allows you to access RSS feeds from any browser, so no RSS reader application is needed, and every user in your network can read them as well. You install it on the server and feeds are automatically updated. Well, the truth behind the scenes is that a cron task does this for you. You will be guided to set up this synchronization after the installation.

Creating the environment for Leed in three steps

We already have Apache, MySQL, and PHP installed, and we need a few other prerequisites to run Leed:

Create a database for Leed
Download the project code and set permissions
Install Leed itself

Creating a database for Leed

You will begin by opening a MySQL session:

debian@arm:~$ mysql -u root -p

What we need here is to have a dedicated Leed user with its own database. This user will be connected using the following:

create user 'debian_leed'@'localhost' IDENTIFIED BY 'temppwd';
create database leed_db;
use leed_db;
grant create, insert, update, select, delete on leed_db.* to debian_leed@localhost;
exit

Downloading the project code and setting permissions

We prepared our server to have its environment ready for Leed, so after getting the latest version, we'll get it working with Apache by performing the following steps:

From your home directory, retrieve the latest project code. It will also create a dedicated directory:
debian@arm:~$ git clone https://github.com/ldleman/Leed.git
debian@arm:~$ ls
mediadrop mjpg-streamer Leed music
Now, we need to put this new directory where the Apache server can find it:
debian@arm:~$ sudo mv Leed /var/www/
Change the permissions for the application:
debian@arm:~$ sudo chmod 777 /var/www/Leed/ -R

Installing Leed

When you go to the server address (http://192.168.0.15/Leed/install.php), you'll get the following installation screen:

We now need to fill in the database details that we previously defined and add the Administrator credentials as well. Now save and quit. Don't worry about the explanations, we'll discuss these settings thereafter. It's important that all items from the prerequisites list on the right are green. Otherwise, a warning message will be displayed about the wrong permissions settings, as shown in the following screenshot:

After the configuration, the installation is complete: Leed is now ready for you.

Setting up a cron job for feed updates

If you want automatic updates for your feeds, you'll need to define a synchronization task with cron:

Modify the cron jobs:
debian@arm:~$ sudo crontab -e
Add the following line:
0 * * * * wget -q -O /var/www/Leed/logsCron "http://192.168.0.15/Leed/action.php?action=synchronize"

Save it and your feeds will be refreshed every hour.
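You can double-check that the entry was recorded, a quick sanity check of our own that is not part of the original steps:

debian@arm:~$ sudo crontab -l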
Finally, a little cleanup: remove install.php for security matters:

debian@arm:~$ rm /var/www/Leed/install.php

Using Leed to add your RSS feed

When you need to add some feeds, from the Manage menu, in Feed Options (on the right-hand side), select Preferences, and you just have to paste the RSS link and add it with the button. You might find it useful to organize your feeds into groups, as we did for movies in MediaDrop. The Rename button will serve to achieve this goal. For example, here a TV Shows category has been created, so every feed related to this type will be organized on the main screen.

Some Leed preferences settings in a server environment

You will be asked to choose between two synchronization modes: Complete and Graduated.

Complete: This is to be used on a usual computer, as it will update all your feeds in a row, which is a CPU-consuming task
Graduated: This looks for the 10 oldest feeds and updates them if required

You also have the possibility of allowing anonymous people to read your feeds. Setting Allow anonymous readers to Yes will let your guests access your feeds but not add any.

Extending Leed with plugins

If you want to extend Leed's capabilities, you can use the Leed Market—as the author defined it—from Feed options in the Manage menu. There, you'll be directed to the Leed Market space. Installation is just a matter of downloading the ZIP file with all the plugins:

debian@arm:~/Leed$ wget https://github.com/ldleman/Leed-market/archive/master.zip
debian@arm:~/Leed$ sudo unzip master.zip

Let's use the AdBlock plugin for this example:

Copy the content of the AdBlock plugin directory where Leed can see it:
debian@arm:~/Leed$ sudo cp -r Leed-market-master/adblock /var/www/Leed/plugins
Connect yourself and set up the plugin by navigating to Manage | Available Plugins and then activate adblock with Enable, as follows:

In this article, we covered:

Some words about the hardware
How to know your webcam
Configuring RSS feeds with Leed

Summary

In this article, we had some good experiments with the hardware part of the server "from the ground up," and finally ended by successfully setting up the webcam service on boot. We discovered hardware detection and a way to "talk" with our local webcam, and thus were able to see what happens when we plug a device into the BeagleBone. Along the way, we also discovered Video4Linux, used it to retrieve information about the device, and learned about configuring devices. We encountered MJPG-Streamer as well. Finally, it's better to be on our own instead of being dependent on some GUI interfaces, where you always wonder where you need to click. Our efforts have been rewarded, as we ended up with a web page we can use and modify according to our tastes. RSS news can also be provided by our server so that you can manage all your feeds in one place, read them anywhere, and even organize dedicated groups. Plenty of concepts have been seen for hardware and software. Think of this article as a concrete example you can use and adapt to understand how Linux works. I hope you enjoyed this freedom of choice, as you drag ideas and drop them into your BeagleBone as services. We entered the DIY area, showing you ways to explore further. You can argue, saying that we can choose the software but still use off-the-shelf commercial devices.

Resources for Article:

Further resources on this subject: Using PVR with Raspbmc [Article], Pulse width modulator [Article], Making the Unit Very Mobile - Controlling Legged Movement [Article]
The Five Kinds of Python Functions Python 3.4 Edition

Packt
06 Feb 2015
33 min read
This article is written by Steven Lott, author of the book Functional Python Programming. You can find more about him at http://slott-softwarearchitect.blogspot.com. (For more resources related to this topic, see here.)

What's This About?

We're going to look at the various ways that Python 3 lets us define things which behave like functions. The proper term here is Callable – we're looking at objects that can be called like a function. We'll look at the following Python constructs:

Function definitions
Higher-order functions
Function wrappers (around methods)
Lambdas
Callable objects
Generator functions and the yield statement

And yes, we're aware that the list above has six items on it. That's because higher-order functions in Python aren't really all that complex or different. In some languages, functions that take functions as arguments involve special syntax. In Python, it's simple and common and barely worth mentioning as a separate topic. We'll look at when it's appropriate and inappropriate to use one or the other of these various functional forms.

Some background

Let's take a quick peek at a basic bit of mathematical formalism. We'll look at a function as an abstract formalism. We often annotate it like this:

y = f(x)

This shows us that f() is a function. It has one argument, x, and will map this to a single value, y. Some mathematical functions are written in front, for example, y = sin x. Some are written in other places around the argument, for example, y = |x|. In Python, the syntax is more consistent; we use a function like this:

>>> abs(-5)
5

We've applied the abs() function to an argument value of -5. The argument value was mapped to a value of 5.

Terminology

Consider the following function:

f(a,b) = (q,r)

In this definition, the argument is a pair of values, (a,b). This is called the domain. We can summarize it as the domain of values for which the function is defined. Outside this domain, the function is not defined. In Python, we get a TypeError exception if we provide one value or three values as the argument. The function maps the domain pair to a pair of values, (q,r). This is the range of the function. We can call this the range of values that could be returned by the function.

Mathematical function features

As we look at the abstract mathematical definition of functions, we note that functions are generally assumed to have no hysteresis; they have no history or memory of prior use. This is sometimes called the property of being idempotent: the results are always the same for a given argument value. We see this in Python as a common feature. But it's not universally true. We'll look at a number of exceptions to the rule of idempotence. Here's an example of the usual situation:

>>> int("10f1", 16)
4337

The value returned from the evaluation of int("10f1", 16) never changes. There are, however, some common examples of non-idempotent functions in Python.

Examples of hysteresis

Here are three common situations where a function has hysteresis. In some cases, results vary based on history. In other cases, results vary based on events in some external environment, such as the following:

Random number generators. We don't want them to produce the same value over and over again. The Python random.randrange() function is obviously not idempotent.
OS functions depend on the state of the machine as a whole.
The os.listdir() function returns values that depend on the use of functions such as os.unlink(), os.rename(), and open() (among several others). While the rules are generally simple, it requires a stateful object outside the narrow world of the code itself.

These are examples of Python functions that don't completely fit the formal mathematical definition; they lack idempotence, and their values depend on history, other functions, or both.

Function Definitions

Python has two statements that are essential features of function definition. The def statement specifies the domain and the return statement(s) specify the range. A simplified gloss of the syntax is as follows:

def name(params):
    body
    return expression

In effect, the function's domain is defined by the parameters provided in the def statement. This list of parameter names is not all the information on the domain, however. Even if we use one of the Python extensions to add type annotations, that's still not all the information. There may be if statements in the body of the function that impose additional explicit restrictions. There may be other functions that impose their own kind of implicit restrictions. If, for example, the body included math.sqrt(), then there would be an implicit restriction on some values being non-negative.

The return statements provide the function's range. An empty return statement means a range of simply None values. When there are multiple return statements, the range is the union of the ranges on all the return statements. This mapping between Python syntax and mathematical concepts isn't very complete. We need more information about a function.

Example definition

Here's an example of a function definition:

def odd(n):
    """odd(n) -> boolean, true if n is odd."""
    return n % 2 == 1

What does this definition tell us? Several things, such as:

Domain: We know that this function accepts n, a single object.
Range: A Boolean value, True if n is an odd number. This is the most likely interpretation. It's also remotely possible that the class of n has repurposed the __mod__() or __rmod__() methods, in which case the semantics can be pretty obscure.

Because of the inherent ambiguity in Python, this function has provided a triple-quoted """Docstring""" with a summary of the function. This is a best practice, and should be followed universally except in articles like this where it gets too long-winded to include a docstring everywhere. In this case, the docstring doesn't state unambiguously that n is intended to be a number. There are two ways to handle this gap; they are as follows:

Actually include words like "n is a number" in the docstring
Include in the docstring test cases that show the required behavior

Either is acceptable. Both are preferable.

Using a function

To complete this example, here's how we'd use this odd little function named odd():

>>> odd(3)
True
>>> odd(4)
False

This kind of example text can be included in the docstring to create two test cases that offer insight into what the function really means.

The lack of declarations

More verbose type declarations—as used in many popular programming languages—aren't actually enough information to fully specify a function's domain and range. To be rigorously complete, we need type definitions that include optional predicates. Take a look at the following expression:

isinstance(n, int) and n >= 0

The assert statement is a good place for this kind of additional argument domain checking.
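Here's how such a guard could look, a minimal sketch of our own, assuming we want odd() to accept only non-negative integers:

def odd(n):
    """odd(n) -> boolean, true if n is odd."""
    # Reject anything outside the intended domain early.
    assert isinstance(n, int) and n >= 0, "n must be a non-negative integer"
    return n % 2 == 1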
This isn't the perfect solution, because assert statements can be disabled very easily. It can help during design and testing, and it can help people to read your code. The fussy formal declarations of data types used in other languages are not really needed in Python. Python replaces an up-front claim about required types with a runtime search for appropriate class methods. This works because each Python object has all the type information bound into it. Static compile-time type information is redundant, since the runtime type information is complete.

A Python function definition is pretty spare. It includes the minimal amount of information about the function. There are no formal declarations of parameter types or return type. This odd little function will work with any object that implements the % operator:

Generally, this means any object that implements __mod__() or __rmod__().
This means most subclasses of numbers.Number.
It also means instances of any class that happen to provide these methods. That could become very weird, but it's still possible. We hesitate to think about non-numeric objects that work with the number-like % operator.

Some Python features

In Python, the functions we declare are proper first-class objects. This means that they are objects that can be assigned to variables and placed into collections. Quite a few clever things can be done with function objects. One of the most elegant things is to use a function as an argument or a return value from another function. The ability to do this means that we can easily create and use higher-order functions in Python. For folks who know languages such as C (and C++), functions aren't proper first-class objects there. A pointer to a function, however, is a first-class object in C. But the function itself is a block of code that can't easily be manipulated. We'll look at a number of simple ways in which we can write—and use—higher-order functions in Python.

Functions are objects

Consider the following example:

>>> not_even = odd
>>> not_even(3)
True

We've assigned the odd little function object to a new variable, not_even. This creates an alias for a function. While this isn't always the best idea, there are times when we might want to provide an alternate name for a function as part of maintaining reverse compatibility with a previous release of a library.

Using functions

Consider the following function definition:

def some_test(function, value):
    print(function, value)
    return function(value)

This function's domain includes arguments named function and value. We can see that it prints the arguments, then applies the function argument to the given value. When we use the preceding function, it looks like this:

>>> some_test(odd, 3)
<function odd at 0x613978> 3
True

The some_test() function accepted a function as an argument. When we printed the function, we got a summary, <function odd at 0x613978>, that shows us some information about the object. We also show a summary of the argument value, 3. When we applied the function to a value, we got the expected result. We can—of course—extend this concept. In particular, we can apply a single function to many values.
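Since functions are ordinary objects, we can also drop them into a collection and pick one out at runtime. Here's a tiny illustration of our own, not from the original text:

>>> functions = [abs, odd, int]
>>> [f(3) for f in functions]
[3, True, 3]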
Higher-order Functions

Higher-order functions become particularly useful when we apply them to collections of objects. The built-in map() function applies a simple function to each value in an argument sequence. Here's an example:

>>> list(map(odd, [1,2,3,4]))
[True, False, True, False]

We've used the map() function to apply the odd() function to each value in the sequence. This is a lot like evaluating:

>>> [odd(x) for x in [1,2,3,4]]

We've created a list comprehension instead of applying the higher-order map() function. This is equivalent to the following snippet:

[odd(1), odd(2), odd(3), odd(4)]

Here, we've manually applied the odd() function to each value in a sequence.

Yes, that's a diesel engine alternator and some hoses. We'll use this alternator as a subject for some concrete examples of higher-order functions.

Diesel engine background

Some basic diesel engine mechanics. The following is some basic information:

The engine turns the alternator. The alternator generates pulses that drive the tachometer. Amongst other things, like charging the batteries.
The alternator provides an indirect measurement of engine RPMs. Direct measurement would involve connecting to a small geared shaft. It's difficult and expensive.
We already have a tachometer; it's just incorrect. The new alternator has new wheels. The ratios between engine and alternator have changed.

We're not interested in installing a new tachometer. Instead, we'll create a conversion from a number on the tachometer, which is calibrated to the old alternator, to a proper number of engine RPMs. This has to allow for the change in ratio between the original tachometer and the new tach. Let's collect some data and see what we can figure out about engine RPMs.

New alternator

First approximation: all we did was get new wheels. We can presume that the old tachometer was correct. Since the new wheel is smaller, we'll have higher alternator RPMs. That means higher readings on the old tachometer. Here's the key question: How far wrong are the RPMs? The old wheel was approximately 3.5 inches and the new wheel is approximately 2.5 inches. We can compute the potential ratio between what the tach says and what the engine is really doing:

>>> 3.5/2.5
1.4
>>> 1/_
0.7142857142857143

That's nice. Is it right? Can we really just multiply displayed RPMs by 0.7 to get actual engine RPMs? Let's create the conversion card first, then collect some more data.

Use case

Given an RPM reading on the tachometer, what's the real RPM of the engine? Use the following function to find the RPM:

def eng(r):
    return r/1.4

Use it like the following:

>>> eng(2100)
1500.0

This seems useful. Tach says 2100, engine (theoretically) spinning at 1500, more or less. Let's confirm our hypothesis with some real data.

Data collection

Over a period of time, we recorded tachometer readings and actual RPMs using a visual RPM measuring device. The visual device requires a strip of reflective tape on one of the engine wheels. It uses a laser and counts returns per minute. Simple. Elegant. Accurate. It's really inconvenient. But it got some data we could digest. Skipping some boring statistics, we wind up with the following function that maps displayed RPMs to actual RPMs:

def eng2(r):
    return 0.7724*r**1.0134

Here's a sample result:

>>> eng2(2100)
1797.1291903589386

When the tach says 2100, the engine is measured as spinning at about 1800 RPM. That's not quite the same as the theoretical model. But it's so close that it gives us a lot of confidence in this version. Of course, the number displayed is hideous. All that floating-point cruft is crazy. What can we do? Rounding is only part of the solution. We need to think through the use case. After all, we use this standing at the helm of the boat; how much detail is appropriate?
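Rounding alone gets us this far, a quick illustration of the point that reuses the eng2() function defined above:

>>> round(eng2(2100), -2)
1800.0

Better, but it's still a float, and it still implies more precision than the dial can deliver.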
Limits and ranges

The engine has governors and only runs between 800 and 2500 RPM. There's a very tight limit here. Realistically, we're talking about this small range of values:

>>> list(range(800, 2500, 200))
[800, 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400]

There's no sensible reason for providing any more detailed engine RPMs. It's a sailboat; top speed is 7.5 knots (nautical miles per hour). Wind and current have far more impact on the boat speed than the difference between 1600 and 1700 RPMs. The tach can't be read to closer than 100-200 RPM. It's not digital; it's a red pointer near little tick lines. There's no reason to preserve more than a few bits of precision.

Example of Tach translation

Given the engine RPMs and the conversion function, we can deduce that the tachometer display will be between 1000 and 3200. This will map to engine RPMs in the range of about 800 to 2500. We can confirm this with a mapping like this:

>>> list(map(eng2, range(1000,3200,200)))
[847.3098694826986, 1019.258964596305, 1191.5942982618956, 1364.2609728487703, 1537.2178605443924, 1710.4329833319157, 1883.8807562755746, 2057.5402392829747, 2231.3939741669838, 2405.4271806626366, 2579.627182659544]

We've applied the eng2() mapping from tach to engine RPM. For tach readings between 1000 and 3200 in steps of 200, we've computed the actual engine RPMs. For those who use spreadsheets a lot, the range() function is like filling a column with values. The map(eng2, …) function is like filling an adjacent column with a calculation. We've created the result of applying a function to each value of a given range. As shown, this is a little difficult to use. We need to do a little more cleanup. What other function do we need to apply to the results?

Round to 100

Here's a function that will round to the nearest 100:

def next100(n):
    return int(round(n, -2))

We could call this a kind of composite function, built from a partial application of the round() and int() functions. If we map this function to the previous results, we get something a little easier to work with. How does this look?

>>> tach = range(1000,3200,200)
>>> list(map(next100, map(eng2, tach)))
[800, 1000, 1200, 1400, 1500, 1700, 1900, 2100, 2200, 2400, 2600]

This expression is a bit complex; let's break it down into three discrete steps:

First, map the eng2() function to tach numbers between 1000 and 3200. The result is effectively a sequence of values (it's not actually a list, it's a generator, a potential list)
Second, map the next100() function to the results of the previous mapping
Finally, collect a single list object from the results

We've applied two functions, eng2() and next100(), to a list of values. In principle, we've created a kind of composite function, (next100 ∘ eng2)(rpm). Python doesn't support function composition directly, hence the complex-looking map of map syntax.

Interleave sequences of values

The final step is to create a table that shows both the tachometer reading and the computed engine RPMs. We need to interleave the input and output values into a single list of pairs. Here are the tach readings we're working with, as a list:

>>> tach = range(1000,3200,200)

Here are the engine RPMs:

>>> engine = list(map(next100, map(eng2, tach)))

Here's how we can interleave the two to create something that shows our tachometer reading and engine RPMs:

>>> list(zip(tach, engine))
[(1000, 800), (1200, 1000), (1400, 1200), (1600, 1400), (1800, 1500), (2000, 1700), (2200, 1900), (2400, 2100), (2600, 2200), (2800, 2400), (3000, 2600)]

The rest is pretty-printing.
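For completeness, here's one way that pretty-printing might go; this is just a sketch of our own, as the article doesn't prescribe a format:

>>> for t, e in zip(tach, engine):
...     print("tach {0:>4d} -> engine {1:>4d}".format(t, e))

This prints one row of the conversion card per line, with the columns aligned.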
What's important is that we could take functions like eng() or eng2() and apply them to columns of numbers, creating columns of results.

Map is lazy

We have a few other observations about the Python higher-order functions. First, these functions are lazy: they don't compute any results until required by other statements or expressions. Because they don't actually create intermediate list objects, they may be quite fast. The laziness feature is true for the built-in higher-order functions map() and filter(). It's also true for many of the functions in the itertools library. Many of these functions don't simply create a list object; they yield values as requested. For debugging purposes, we use list() to see what's being produced. If we don't apply list() to the result of a lazy function, we simply see that it's a lazy function. Here's an example:

>>> map(lambda x: x*1.4, range(1000,3200,200))
<map object at 0x102130610>

We don't see a proper result here, because the lazy map() function didn't do anything. The list(), tuple(), or set() functions will force a lazy map() function to actually get up off the couch and compute something.

Function Wrappers

There are a number of Python functions which are syntactic sugar for method functions. One example is the len() function. This function behaves as if it had the following definition:

def len(obj):
    return obj.__len__()

The function acts like it's simply invoking the object's built-in __len__() method. There are several Python functions that exist only to make the syntax a little more readable. Postfix syntax purists would prefer to see syntax such as some_list.len(). Those who like their code to look a little more mathematical prefer len(some_list). Some people will go so far as to claim that the presence of prefix functions means that Python isn't strictly object-oriented. This is false; Python is very strictly object-oriented. It doesn't—however—use only postfix method notation. We can write function wrappers to make some method functions a little more palatable. Another good example is the divmod() function. This relies on two method functions, such as the following:

a.__divmod__(b)
b.__rdivmod__(a)

The usual operator rules apply here. If the class for object a implements __divmod__(), then that's used to compute the result. If not, then the same test is made for the class of object b; if there's an implementation, that will be used to compute the results. Otherwise, it's undefined and we'll get an exception.

Why wrap a method?

Function wrappers for methods are syntactic sugar. They exist to make object methods look like simple functions. In some cases, the functional view is more succinct and expressive. Sometimes the object involved is obvious. For example, the os module functions provide access to OS-level libraries. The OS object is concealed inside the module. Sometimes the object is implied. For example, the random module makes a Random instance for us. We can simply call random.randint() without worrying about the object that was required for this to work properly.

Lambdas

A lambda is an anonymous function with a degenerate body. It's like a function in some respects, and it's unlike a function in the following two ways:

A lambda has no name
A lambda has no statements

A lambda's body is a single expression, nothing more.
This expression can have parameters, however, which is why a lambda is a handy form of a callable function. The syntax is essentially as follows:

lambda params: expression

Here's a concrete example:

lambda r: 0.7724*r**1.0134

You may recognize this as the eng2() function defined previously. We don't always need a complete, formal function. Sometimes, we just need an expression that has parameters.

Speaking theoretically, a lambda is a one-argument function. When we have multi-argument functions, we can transform them into a series of one-argument lambda forms. This transformation can be helpful for optimization. None of that applies to Python. We'll move on.

Using a Lambda with map

Here are two equivalent results:

map(eng2, tach)
map(lambda r: 0.7724*r**1.0134, tach)

Here's a previous example, using the lambda instead of the function:

>>> tach = range(1000, 3200, 200)
>>> list(map(lambda r: 0.7724*r**1.0134, tach))
[847.3098694826986, 1019.258964596305, 1191.5942982618956, 1364.2609728487703, 1537.2178605443924, 1710.4329833319157, 1883.8807562755746, 2057.5402392829747, 2231.3939741669838, 2405.4271806626366, 2579.627182659544]

You could scroll back to see that the results are the same. If we're doing a small thing once only, a lambda object might be clearer than a complete function definition. The emphasis here is on small and once only. If we start trying to reuse a lambda object, or feel the need to assign a lambda object to a variable, we should really consider a function definition and the associated docstring and doctest features.

Another use of Lambdas

A common use of lambdas is with three other higher-order operations: the list.sort() method and the built-in min() and max() functions. We might use them with a list object as follows:

some_list.sort(key=lambda x: expr)
min(some_list, key=lambda x: expr)
max(some_list, key=lambda x: expr)

In each case, we're using a lambda object to embed an expression into the argument values for a function. In some cases, the expression might be very sophisticated; in other cases, it might be something as trivial as lambda x: x[1]. When the expression is trivial, a lambda object is a good idea. If the expression is going to get reused, however, a lambda object might be a bad idea.

You can do this… But…

The following kind of statement makes sense:

some_name = lambda x: 3*x+1

We've created a callable object that takes a single argument value and returns a numeric value, much like the following definition:

def some_name(x):
    return 3*x+1

There are some differences, most notably the following:

A lambda object is all on one line of code. A possible advantage.
There's no docstring. A disadvantage for lambdas of any complexity.
Nor is there any doctest in the missing docstring. A significant problem for a lambda object that requires testing. There are ways to test lambdas with doctest outside a docstring, but it seems simpler to switch to a full function definition.
We can't easily apply decorators to it. To do it, we lose the @decorator syntax.
We can't use any Python statements in it. In particular, no try-except block is possible.

For these reasons, we suggest limiting the use of lambdas to truly trivial situations.

Callable Objects

A callable object fits the model of a function. The unifying feature of all of the things we've looked at is that they're callable. Functions are the primary example of being callable, but objects can also be callable. Callable objects can be subclasses of collections.abc.Callable. Because of Python's flexibility, this isn't a requirement; it's merely a good idea.
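We can check this property of any object at the REPL with the built-in callable() function; this small sketch is our own, not from the original text:

>>> callable(len)
True
>>> callable("some string")
False
>>> callable(lambda x: x+1)
True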
To be callable, a class only needs to provide a __call__() method. Here's a complete callable class definition:

from collections.abc import Callable

class Engine(Callable):
    def __call__(self, tach):
        return 0.7724*tach**1.0134

We've imported the collections.abc.Callable class. This will provide some assurance that any class that extends this abstract superclass will provide a definition for the __call__() method. This is a handy error-checking feature. Our class extends Callable by providing the needed __call__() method. In this case, the __call__() method performs a calculation on the single parameter value, returning a single result. Here's a callable object built from this class:

eng = Engine()

This creates an object that we can use like a function. We can evaluate eng(1000) to get the engine RPMs when the tach reads 1000.

Callable objects step-by-step

There are two parts to creating a callable object. We'll emphasize these for folks who are new to object-oriented programming:

Define a class. Generally, we make this a subclass of collections.abc.Callable. Technically, we only need to implement a __call__() method. It helps to use the proper superclass because it might help catch a few common mistakes.
Create an instance of the class. This instance will be a callable object.

The object that's created will be very similar to a defined function, and very similar to a lambda object that's been assigned to a variable. While it will be similar to a def statement, it will have one important additional feature: hysteresis. This can be the source of endless bugs. It can also be a way to improve performance.

Callables can have hysteresis

Here's an example of a callable object that uses hysteresis as a kind of optimization:

class Factorial(Callable):
    def __init__(self):
        self.previous = {}

    def __call__(self, n):
        if n not in self.previous:
            self.previous[n] = self.compute(n)
        return self.previous[n]

    def compute(self, n):
        if n == 0:
            return 1
        return n*self.__call__(n-1)

Here's how we can use this:

>>> fact = Factorial()
>>> fact(5)
120

We create an instance of the class, and then call the instance to compute a value for us.

The initializer

The initialization method looks like this:

    def __init__(self):
        self.previous = {}

This function creates a cache of previously computed values. This is a technique called memoization. If we've already computed a result once, it's in the self.previous cache; we don't need to compute it again, we already know the answer.

The Callable interface

The required __call__() method looks like this:

    def __call__(self, n):
        if n not in self.previous:
            self.previous[n] = self.compute(n)
        return self.previous[n]

We've checked the memoization cache first. If the value is not there, we're forced to compute the answer and insert it into the cache. The final answer is always a value in the cache.

A common what-if question is: what if we have a function of multiple arguments? There are two minuscule changes to support more complex arguments: use def __call__(self, *n): and self.compute(*n). Since we're only computing factorial, there's no need to over-generalize.

The Compute method

The essential computation has been allocated to a method called compute(). It looks like this:

    def compute(self, n):
        if n == 0:
            return 1
        return n*self.__call__(n-1)

This does the real work of the callable object: it computes n!. In this case, we've used a pretty standard recursive factorial definition.
This recursion relies on the __call__() method to check the cache for previous values. If we don't expect to compute values larger than 1000! (a 2,568-digit number, by the way), the recursion works nicely. If we think we need to compute really large factorials, we'll need to use a different approach. Execute the following code to compute very large factorials:

functools.reduce(operator.mul, range(1, n+1))

Either way, we can depend on the internal memoization to leverage previous results.

Note the potential issue

Hysteresis, a memory of what came before, is available to callable objects. We call functions and lambdas stateless, whereas callable objects can be stateful. This may be desirable to optimize performance. We can memoize the previous results, or we can design an object that's simply confusing. Consider a function like divmod() that returns two values. We could try to define a callable object that first returns the quotient and, on the second call with the same arguments, returns the remainder:

>>> crazy_divmod(355, 113)
3
>>> crazy_divmod(355, 113)
16

This is technically possible. But it's crazy. Warning: Stay away. We generally expect idempotence: functions do the same thing each time. Implementing memoization didn't alter the basic idempotence of our factorial function.

Generator Functions

Here's a fun generator, the Collatz function. The function creates a sequence using a simple pair of rules. We could call this rule Half-Or-Three-Plus-One (HOTPO). We'll call it collatz():

def collatz(n):
    if n % 2 == 0:
        return n//2
    else:
        return 3*n+1

Each integer argument yields a next integer. These can form a chain. For example, if we start with collatz(13), we get 40. The value of collatz(40) is 20. Here's the sequence of values:

13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1

At 1, it loops: 1 → 4 → 2 → 1 …

Interestingly, all chains seem to lead, eventually, to 1. To explore this, we need a simple function that will build a chain from a given starting value.

Successive values

Here's a function that will build a list object. This iterates through values in the sequence until it reaches 1, when it terminates:

def col_list(n):
    seq = [n]
    while n != 1:
        n = collatz(n)
        seq.append(n)
    return seq

This is not wrong. But it's not really the most useful implementation. This always creates a sequence object. In many cases, we don't really want an object; we only want information about the sequence. We might only want the length, or the largest number, or the sum. This is where a generator function might be more useful. A generator function yields elements from the sequence instead of building the sequence as a single object.

Generator functions

To create a generator function, we write a function that has a loop; inside the loop, there's a yield statement. A function with a yield statement is effectively an iterable object; it can be used in a for statement to produce values. It doesn't create a big list object; it creates the items that can be accumulated into a list or tuple object.

A generator function is lazy: it doesn't compute anything unless forced to by another function needing results. We can iterate through as many (or as few) of the results as we need. For example, list(some_generator()) forces all values to be returned. For another example of a lazy sequence, look at how range() objects work. If we evaluate range(10), we only get a lazy range object. If we evaluate list(range(10)), we get a list object.
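Here's a minimal REPL sketch of that laziness, using the collatz() function from earlier; the chain variable name is our own:

>>> chain = map(collatz, [13, 40, 20])
>>> next(chain)
40
>>> list(chain)
[20, 10]

Nothing is computed until next() or list() demands a value; the list() call collects only the items that haven't been consumed yet.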
The Collatz generator

Here's a generator function that will produce sequences of values using the collatz() rule shown previously:

def col_iter(n):
    yield n
    while n != 1:
        n = collatz(n)
        yield n

When we use this in a for loop or with the list() function, this will yield the argument number. While the number is not 1, it will apply the collatz() function and yield successive values in the chain. When it has yielded 1, it will terminate.

One common pattern for generator functions is to replace all list-accumulation statements with yield statements. Instead of building a list one item at a time, we yield each item.

The col_iter() function is lazy. We don't get an answer unless we use list() or tuple(), or some variation of a for statement context.

Using a generator function

Here's how this function looks in practice:

>>> for i in col_iter(3):
...     print(i)
...
3
10
5
16
8
4
2
1

We've used the generator function in a for loop so that it will yield all of the values until it terminates.

Collatz function sequences

Now we can do some exploration of this Collatz sequence. Here are a few evaluations of the col_iter() function:

>>> list(col_iter(3))
[3, 10, 5, 16, 8, 4, 2, 1]
>>> list(col_iter(5))
[5, 16, 8, 4, 2, 1]
>>> list(col_iter(6))
[6, 3, 10, 5, 16, 8, 4, 2, 1]
>>> list(col_iter(13))
[13, 40, 20, 10, 5, 16, 8, 4, 2, 1]

There's an interesting pattern here. It seems that from 16, we know the rest. Generalizing this: from any number we've already seen, we know the rest. Wait. This means that memoization might be a big help in exploring the values created by this sequence. When we start combining function design patterns like this, we're doing functional programming. We're stepping outside the box of purely object-oriented Python.

Alternate styles

Here is an alternative version of the collatz() function:

def collatz2(n):
    return n//2 if n%2 == 0 else 3*n+1

This simply collapses the if statements into a single conditional expression, and may not help much. We also have this:

collatz3 = lambda n: n//2 if n%2 == 0 else 3*n+1

We've collapsed the expression into a lambda object. Helpful? Perhaps not. On the other hand, the function doesn't really need all of the overhead of a full function definition and multiple statements. The lambda object seems to capture everything relevant.

Functions as objects

There's a higher-level function that will produce values until some ending condition is met. We can plug in one of the versions of the collatz() function, and a termination test, into this general-purpose function:

def recurse_until(ending, the_function, n):
    yield n
    while not ending(n):
        n = the_function(n)
        yield n

This requires two plug-in functions, as follows:

ending() is a function that tests whether we're done, for example, lambda n: n == 1
the_function() is a form of the Collatz function

We've completely uncoupled the general idea of recursively applying a function from a specific function and a specific terminating condition.

Using the recurse_until() function

We can apply this higher-order recurse_until() function like this:

>>> recurse_until(lambda n: n==1, collatz2, 9)
<generator object recurse_until at 0x1021278c0>

What's that? That's how a lazy generator looks: it didn't return any values, because we didn't demand any values. We need to use it in a loop or some kind of expression that iterates through all available values. The list() function, for example, will collect all of the values.
Getting the list of values

Here's how we make the lazy generator do some work:

>>> list(_)
[9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]

The _ variable is the previously computed value. It relieves us from the burden of having to write an assignment statement. We can write an expression, see the results, and know the results were automatically saved in the _ variable.

Project Euler #14

Which starting number, under one million, produces the longest chain? Try it without memoization. It's really simple:

>>> collatz_len = [len(list(recurse_until(lambda n: n==1, collatz2, i)))
...     for i in range(1,11)]
>>> results = zip(collatz_len, range(1,11))
>>> max(results)
(20, 9)
>>> list(col_iter(9))
[9, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]

We defined collatz_len as a list, written as a list comprehension. The comprehension evaluates len(something) for i in range(1,11). This means we'll be collecting ten values into the list, each of which is the length of something.

The something is a list object built from the recurse_until(lambda n: n==1, collatz2, i) function that we discussed. This will compute the sequence of Collatz values starting from i and proceeding until the value in the sequence is 1.

We've zipped the lengths with the original values of i. This will create pairs of lengths and starting numbers. The maximum length will now have a starting value associated with it, so that we can confirm that the results match our expectations.

And yes, this Project Euler problem could, in principle, be solved in a single line of code. Will this scale to 100? 1,000? 1,000,000? How much will memoization help?

Summary

In this article, we've looked at five (or six) kinds of Python callables. They all fit the y = f(x) model of a function to varying degrees. When is it appropriate to use each of these different ways to express the same essential concept?

Functions are created with def and return. It shouldn't come as a surprise that this should cover most cases. This allows a docstring comment and doctest test cases. We could call these def functions, since they're built with the def statement.

Higher-order functions, such as map(), filter(), and the functions in the itertools library, are generally written as plain-old def functions. They're higher-order because they accept functions as arguments or return functions as results. Otherwise, they're just functions.

Function wrappers, such as len(), divmod(), pow(), str(), and repr(), are function syntax wrapped around object methods. These are def'd functions with very tiny bodies. We use them because pow(a, 2) seems clearer than a.__pow__(2).

Lambdas are appropriate for one-time use of something so simple that it doesn't deserve being wrapped in a def statement body. In some cases, we have a small nugget of code that seems clearer when written as a lambda expression rather than a more complete function definition. Simple filter rules and simple computations are often more clearly shown as a lambda object.

The Callable objects have a special property that other functions lack: hysteresis. They can retain the results of previous calculations. We've used this hysteresis property to implement memoizing. This can be a huge performance boost. Callable objects can be used badly, however, to create objects that have simply bizarre behavior. Most functions should strive for idempotence: the same arguments should yield the same results.
Generator functions are created with a def statement and at least one yield statement. These functions are iterable. They can be used in a for statement to examine each resulting value. They can also be used with functions like list(), tuple(), and set() to create an actual object from the iterable sequence of values. We might combine them with higher-order functions to do complex processing, one item at a time.

It's important to work with each of these kinds of callables. If you only have one tool (a hammer), then every problem has to be reshaped into a nail before you can solve it. Once you have multiple tools available, you can pick the tool that provides the most succinct and expressive solution to the problem.

Resources for Article:

Further resources on this subject:
Expert Python Programming [article]
Python Network Programming Cookbook [article]
Learning Python Design Patterns [article]
Qlik Sense's Vision
Packt
06 Feb 2015
12 min read
In this article by Christopher Ilacqua, Henric Cronström, and James Richardson, authors of the book Learning Qlik® Sense, we will look at the evolving requirements that compel organizations to readdress how they deliver business intelligence and support data-driven decision-making. This is important as it supplies some of the reasons why Qlik® Sense is relevant and important to their success. The purpose of covering these factors is so that you can consider and plan for them in your organization. Among other things, in this article, we will cover the following topics:

The ongoing data explosion
The rise of in-memory processing
Barrierless BI through Human-Computer Interaction
The consumerization of BI and the rise of self-service
The use of information as an asset
The changing role of IT

(For more resources related to this topic, see here.)

Evolving market factors

Technologies are developed and evolved in response to the needs of the environment they are created and used within. The most successful new technologies anticipate upcoming changes in order to help people take advantage of altered circumstances or reimagine how things are done. Any market is defined by both the suppliers (in this case, Qlik®) and the buyers, that is, the people who want to get more use and value from their information. Buyers' wants and needs are driven by a variety of macro and micro factors, and these are always in flux, in some markets more than others. This is obviously and apparently the case in the world of data, BI, and analytics, which has been changing at a great pace due to a number of factors discussed further in the rest of this article. Qlik Sense has been designed to be the means through which organizations, and the people that are a part of them, thrive in a changed environment.

Big, big, and even bigger data

A key factor is that there's simply much more data, in many more forms, to analyze than before. We're in the middle of an ongoing, accelerating data boom. According to Science Daily, 90 percent of the world's data was generated over the past two years. The fact is that with technologies such as Hadoop and NoSQL databases, we now have unprecedented access to cost-effective data storage. With vast amounts of data now storable and available for analysis, people need a way to sort the signal from the noise. People from a wider variety of roles (not all of them BI users or business analysts) are demanding better, greater access to data, regardless of where it comes from. Qlik Sense's fundamental design centers on bringing varied data together for exploration in an easy and powerful way.

The slow spinning down of the disk

At the same time, we are seeing a shift in how computation occurs and, potentially, how information is managed. Fundamentals of the computing architectures that we've used for decades, the spinning disk and moving read head, are becoming outmoded. This means of storing and accessing data has been around since Edison invented the cylinder phonograph in 1877. It's about time this changed. This technology has served us very well; it was elegant and reliable, but it has limitations. Speed limitations, primarily. Fundamentals that we take for granted today in BI, such as relational and multidimensional storage models, were built around these limitations. So were our IT skills, whether we realized it at the time. With the use of in-memory processing and 64-bit addressable memory spaces, these limitations are gone! This means a complete change in how we think about analysis.
Processing data in memory means we can do analysis that was impractical or impossible with the old approach. With in-memory computing, analysis that would've taken days before now takes just seconds (or much less). However, why does it matter? Because it allows us to use time more effectively; after all, time is the most finite resource of all. In-memory computing enables us to ask more questions, test more scenarios, do more experiments, debunk more hypotheses, explore more data, and run more simulations in the short window available to us. For IT, it means no longer trying to second-guess what users will do months or years in advance and trying to premodel it in order to achieve acceptable response times. People hate watching the hourglass spin. Qlik Sense's predecessor, QlikView®, was built on the exploitation of in-memory processing; Qlik Sense has it at its core too.

Ubiquitous computing and the Internet of Things

You may know that more than a billion people use Facebook, but did you know that the majority of those people do so from a mobile device? The growth in the number of devices connected to the Internet is absolutely astonishing. According to Cisco's Zettabyte Era report, Internet traffic from wireless devices will exceed traffic from wired devices in 2014. If we were writing this article even as recently as a year ago, we'd probably be talking about mobile BI as a separate thing from desktop or laptop delivered analytics. The fact of the matter is that we've quickly gone beyond that. For many people now, the most common way to use technology is on a mobile device, and they expect the kind of experience they've become used to on their iOS or Android device to be mirrored in complex software, such as the technology they use for visual discovery and analytics. From its inception, Qlik Sense has had mobile usage at the center of its design ethos. It's the first data discovery software to be built for mobile, and that's evident in how it uses HTML5 to automatically render output for the device being used, whatever it is. Plug a laptop running Qlik Sense into a 70-inch OLED TV and the visual output is resized and re-expressed to optimize the new form factor.

So mobile is the new normal. This may be astonishing, but it's just the beginning. Mobile technology isn't just a medium to deliver information to people, but an acceleration of data production for analysis too. By 2020, pretty much everyone, and an increasing number of things, will be connected to the Internet. There are 7 billion people on the planet today. Intel predicts that by 2020, more than 31 billion devices will be connected to the Internet. So, that's not just devices used by people directly to consume or share information. More and more things will be put online and will communicate their state: cars, fridges, lampposts, shoes, rubbish bins, pets, plants, heating systems; you name it. These devices will generate a huge amount of data from sensors that monitor all kinds of measurable attributes: temperature, velocity, direction, orientation, and time. This means an increasing opportunity to understand a huge gamut of data, but without the right technology and approaches, it will be complex to analyze what is going on. Old methods of analysis won't work, as they don't move quickly enough. The variety and volume of information that can be analyzed will explode at an exponential rate. The rise of this type of big data makes us redefine how we build, deliver, and even promote analytics.
It is an opportunity for those organizations that can exploit it through analysis; analysis can sort the signals from the noise and make sense of the patterns in the data. Qlik Sense is designed as just such a signal booster; it lets users zoom and pan through information that would otherwise be too large for them to easily understand.

Unbound Human-Computer Interaction

We touched on the boundary between computing power and the humans using it in the previous section. Increasingly, we're removing barriers between humans and technology. Take the rise of touch devices. Users don't want to just view data presented to them in a static form. Instead, they want to "feel" the data and interact with it. The same is increasingly true of BI. The adoption of BI tools has been too low because the technology has been hard to use. Adoption has been low because, in the past, BI tools often required people to conform to the tool's way of working, rather than reflecting the user's way of thinking.

The aspiration for Qlik Sense (when part of the QlikView.Next project) was that the software should be both "gorgeous and genius". The genius part obviously refers to the built-in intelligence, the smarts, the software will have. The gorgeous part is misunderstood, or at least oversimplified. Yes, it means cosmetically attractive (which is important) but, much more importantly, it means enjoyable to use and experience. In other words, Qlik Sense should never be jarring to users but seamless, perhaps almost transparent to them, inducing a state of mental flow that encourages thinking about the question being considered rather than the tool used to answer it. The aim was to be of most value to people. Qlik Sense will empower users to explore their data and uncover hidden insights, naturally.

Evolving customer requirements

It is not only the external market drivers that impact how we use information. Our organizations and the people that work within them are also changing in their attitude towards technology, how they express ideas through data, and how, increasingly, they make use of data as a competitive weapon.

Consumerization of BI and the rise of self-service

The consumerization of any technology space is all about how enterprises are affected by, and can take advantage of, new technologies and models that originate and develop in the consumer market, rather than in the enterprise IT sector. The reality is that individuals react quicker than enterprises to changes in technology. As such, consumerization cannot be stopped, nor is it something to be adopted. It can be embraced. While it's not viable to build a BI strategy around consumerization alone, its impact must be considered. Consumerization makes itself felt in three areas:

Technology: Most investment in innovation occurs in the consumer space first, with enterprise vendors incorporating consumer-derived features after the fact. (Think about how vendors added the browser as a UI for business software applications.)
Economics: Consumer offerings are often less expensive, or free (to try), with a low barrier of entry. This drives prices down, including in enterprise sectors, and alters selection behavior.
People: Demographics, that is, the flow of the Millennial Generation into the workplace and the blurring of home/work boundaries and roles; such users may be seen from a traditional IT perspective as rogue users, with demands to bring their own PC (BYOPC) or device.
In line with consumerization, BI users want to be able to pick up and just use the technology to create and share engaging solutions; they don't want to read the manual. This places a high degree of importance on the Human-Computer Interaction (HCI) aspects of a BI product (refer to the preceding list), governed access to information, and deployment design. Add mobility to this and you get a brand new sourcing and adoption dynamic in BI, one that Qlik engendered and Qlik Sense is designed to take advantage of. Think about how Qlik Sense Desktop was made available as a freemium offer.

Information as an asset and differentiator

As times change, so do differentiators. For example, car manufacturers in the 1980s differentiated themselves based on reliability, making sure their cars started every single time. Today, we expect that our cars will start; reliability is now a commodity. The same is true for ERP systems. Originally, companies implemented ERPs to improve reliability, but in today's post-ERP world, companies are shifting to differentiating their businesses based on information. This means our focus changes from apps to analytics. And analytics apps, like those delivered by Qlik Sense, help companies access the data they need to set themselves apart from the competition. However, to get maximum return from information, the analysis must be delivered fast enough, and in sync with the operational tempo people need. Things are speeding up all the time. For example, take the fashion industry. Large mainstream fashion retailers used to work two seasons per year. Those that stuck to that were destroyed by fast-fashion retailers. The same is true for old-style, system-of-record BI tools; they just can't cope with today's demands for speed and agility.

The rise of information activism

A new, tech-savvy generation is entering the workforce, and their expectations are different from those of past generations. The Beloit College Mindset List for the entering class of 2017 gives the perspective of students entering college this year, how they see the world, and the reality they've known all their lives. For this year's freshman class, Java has never been just a cup of coffee and a tablet is no longer something you take in the morning. This new generation of workers grew up with the Internet and is less likely to be passive with data. They bring their own devices everywhere they go, and expect it to be easy to mash up data, communicate, and collaborate with their peers.

The evolution and elevation of the role of IT

We've all read about how the role of IT is changing, and the question CIOs today must ask themselves is: "How do we drive innovation?" IT must transform from being gatekeepers (doers) to storekeepers (enablers), providing business users with the self-service tools they need to be successful. However, to achieve this transformation, they need to stock helpful tools and provide consumable information products or apps. Qlik Sense is a key part of the armory that IT needs to provide to be successful in this transformation.

Summary

In this article, we looked at the factors that provide the wider context for the use of Qlik Sense. The factors covered arise out of both increasing technical capability and demands to compete in a globalized, information-centric world, where out-analyzing your competitors is a key success factor.

Resources for Article:

Further resources on this subject:
Securing QlikView Documents [article]
Conozca QlikView [article]
Introducing QlikView elements [article]
Contexts and Dependency Injection in NetBeans
Packt
06 Feb 2015
18 min read
In this article by David R. Heffelfinger, the author of Java EE 7 Development with NetBeans 8, we will introduce Contexts and Dependency Injection (CDI) and other aspects of it. CDI can be used to simplify integrating the different layers of a Java EE application. For example, CDI allows us to use a session bean as a managed bean, so that we can take advantage of EJB features, such as transactions, directly in our managed beans. In this article, we will cover the following topics:

Introduction to CDI
Qualifiers
Stereotypes
Interceptor binding types
Custom scopes

(For more resources related to this topic, see here.)

Introduction to CDI

JavaServer Faces (JSF) web applications employing CDI are very similar to JSF applications without CDI; the main difference is that instead of using JSF managed beans for our model and controllers, we use CDI named beans. What makes CDI applications easier to develop and maintain are the excellent dependency injection capabilities of the CDI API. Just as with other JSF applications, CDI applications use facelets as their view technology. The following example illustrates typical markup for a JSF page using CDI (the namespace declarations, stripped during extraction, have been restored using the standard Java EE 7 facelets namespaces):

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html">
    <h:head>
        <title>Create New Customer</title>
    </h:head>
    <h:body>
        <h:form>
            <h3>Create New Customer</h3>
            <h:panelGrid columns="3">
                <h:outputLabel for="firstName" value="First Name"/>
                <h:inputText id="firstName" value="#{customer.firstName}"/>
                <h:message for="firstName"/>
                <h:outputLabel for="middleName" value="Middle Name"/>
                <h:inputText id="middleName" value="#{customer.middleName}"/>
                <h:message for="middleName"/>
                <h:outputLabel for="lastName" value="Last Name"/>
                <h:inputText id="lastName" value="#{customer.lastName}"/>
                <h:message for="lastName"/>
                <h:outputLabel for="email" value="Email Address"/>
                <h:inputText id="email" value="#{customer.email}"/>
                <h:message for="email"/>
                <h:panelGroup/>
                <h:commandButton value="Submit"
                    action="#{customerController.navigateToConfirmation}"/>
            </h:panelGrid>
        </h:form>
    </h:body>
</html>

As we can see, the preceding markup doesn't look any different from the markup used for a JSF application that does not use CDI. The page renders as follows (shown after entering some data):

In our page markup, we have JSF components that use Unified Expression Language expressions to bind themselves to CDI named bean properties and methods.
Let's take a look at the customer bean first:

package com.ensode.cdiintro.model;

import java.io.Serializable;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
public class Customer implements Serializable {

    private String firstName;
    private String middleName;
    private String lastName;
    private String email;

    public Customer() {
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getMiddleName() {
        return middleName;
    }

    public void setMiddleName(String middleName) {
        this.middleName = middleName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}

The @Named annotation marks this class as a CDI named bean. By default, the bean's name will be the class name with its first character switched to lowercase (in our example, the name of the bean is "customer", since the class name is Customer). We can override this behavior if we wish by passing the desired name to the value attribute of the @Named annotation, as follows:

@Named(value="customerBean")

A CDI named bean's methods and properties are accessible via facelets, just like regular JSF managed beans. Just like JSF managed beans, CDI named beans can have one of several scopes, listed below. The preceding named bean has a scope of request, as denoted by the @RequestScoped annotation.

Request (@RequestScoped): Request scoped beans are shared through the duration of a single request. A single request could refer to an HTTP request, an invocation to a method in an EJB, a web service invocation, or sending a JMS message to a message-driven bean.
Session (@SessionScoped): Session scoped beans are shared across all requests in an HTTP session. Each user of an application gets their own instance of a session scoped bean.
Application (@ApplicationScoped): Application scoped beans live through the whole application lifetime. Beans in this scope are shared across user sessions.
Conversation (@ConversationScoped): The conversation scope can span multiple requests, and is typically shorter than the session scope.
Dependent (@Dependent): Dependent scoped beans are not shared. Any time a dependent scoped bean is injected, a new instance is created.

As we can see, CDI has equivalent scopes to all JSF scopes. Additionally, CDI adds two scopes of its own. The first CDI-specific scope is the conversation scope, which allows us to have a scope that spans across multiple requests but is shorter than the session scope. The second CDI-specific scope is the dependent scope, which is a pseudo scope. A CDI bean in the dependent scope is a dependent object of another object; beans in this scope are instantiated when the object they belong to is instantiated, and they are destroyed when the object they belong to is destroyed.

Our application has two CDI named beans. We already discussed the customer bean.
The other CDI named bean in our application is the controller bean:

package com.ensode.cdiintro.controller;

import com.ensode.cdiintro.model.Customer;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class CustomerController {

    @Inject
    private Customer customer;

    public Customer getCustomer() {
        return customer;
    }

    public void setCustomer(Customer customer) {
        this.customer = customer;
    }

    public String navigateToConfirmation() {
        //In a real application we would
        //save customer data to the database here.

        return "confirmation";
    }
}

In the preceding class, an instance of the Customer class is injected at runtime; this is accomplished via the @Inject annotation. This annotation allows us to easily use dependency injection in CDI applications. Since the Customer class is annotated with the @RequestScoped annotation, a new instance of Customer will be injected for every request.

The navigateToConfirmation() method in the preceding class is invoked when the user clicks on the Submit button on the page. The navigateToConfirmation() method works just like an equivalent method in a JSF managed bean would; that is, it returns a string, and the application navigates to an appropriate page based on the value of that string. Like with JSF, by default, the target page's name is the return value of this method with an .xhtml extension. For example, if no exceptions are thrown in the navigateToConfirmation() method, the user is directed to a page named confirmation.xhtml:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html">
    <h:head>
        <title>Success</title>
    </h:head>
    <h:body>
        New Customer created successfully.
        <h:panelGrid columns="2" border="1" cellspacing="0">
            <h:outputLabel for="firstName" value="First Name"/>
            <h:outputText id="firstName" value="#{customer.firstName}"/>
            <h:outputLabel for="middleName" value="Middle Name"/>
            <h:outputText id="middleName" value="#{customer.middleName}"/>
            <h:outputLabel for="lastName" value="Last Name"/>
            <h:outputText id="lastName" value="#{customer.lastName}"/>
            <h:outputLabel for="email" value="Email Address"/>
            <h:outputText id="email" value="#{customer.email}"/>
        </h:panelGrid>
    </h:body>
</html>

Again, there is nothing special we need to do to access the named bean's properties from the preceding markup. It works just as if the bean was a JSF managed bean. The preceding page renders as follows:

As we can see, CDI applications work just like JSF applications. However, CDI applications have several advantages over JSF; for example (as we mentioned previously), CDI beans have additional scopes not found in JSF. Additionally, using CDI allows us to decouple our Java code from the JSF API. Also, as we mentioned previously, CDI allows us to use session beans as named beans.

Qualifiers

In some instances, the type of bean we wish to inject into our code may be an interface or a Java superclass, but we may be interested in injecting a subclass or a class implementing the interface. For cases like this, CDI provides qualifiers we can use to indicate the specific type we wish to inject into our code.
A CDI qualifier is an annotation that must be decorated with the @Qualifier annotation. This annotation can then be used to decorate the specific subclass or interface implementation. In this section, we will develop a Premium qualifier for our customer bean; premium customers could get perks that are not available to regular customers, for example, discounts.

Creating a CDI qualifier with NetBeans is very easy; all we need to do is go to File | New File, select the Contexts and Dependency Injection category, and select the Qualifier Type file type. In the next step in the wizard, we need to enter a name and a package for our qualifier. After these two simple steps, NetBeans generates the code for our qualifier:

package com.ensode.cdiintro.qualifier;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.inject.Qualifier;

@Qualifier
@Retention(RUNTIME)
@Target({METHOD, FIELD, PARAMETER, TYPE})
public @interface Premium {
}

Qualifiers are standard Java annotations. Typically, they have a retention of runtime and can target methods, fields, parameters, or types. The only difference between a qualifier and a standard annotation is that qualifiers are decorated with the @Qualifier annotation.

Once we have our qualifier in place, we need to use it to decorate the specific subclass or interface implementation, as shown in the following code:

package com.ensode.cdiintro.model;

import com.ensode.cdiintro.qualifier.Premium;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
@Premium
public class PremiumCustomer extends Customer {

    private Integer discountCode;

    public Integer getDiscountCode() {
        return discountCode;
    }

    public void setDiscountCode(Integer discountCode) {
        this.discountCode = discountCode;
    }
}

Once we have decorated the specific instance we need to qualify, we can use our qualifier in the client code to specify the exact type of dependency we need:

package com.ensode.cdiintro.controller;

import com.ensode.cdiintro.model.Customer;
import com.ensode.cdiintro.model.PremiumCustomer;
import com.ensode.cdiintro.qualifier.Premium;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class PremiumCustomerController {

    private static final Logger logger = Logger.getLogger(
            PremiumCustomerController.class.getName());

    @Inject
    @Premium
    private Customer customer;

    public String saveCustomer() {
        PremiumCustomer premiumCustomer = (PremiumCustomer) customer;

        logger.log(Level.INFO, "Saving the following information \n"
                + "{0} {1}, discount code = {2}",
                new Object[]{premiumCustomer.getFirstName(),
                    premiumCustomer.getLastName(),
                    premiumCustomer.getDiscountCode()});

        //If this was a real application, we would have code to save
        //customer data to the database here.

        return "premium_customer_confirmation";
    }
}

Since we used our @Premium qualifier to decorate the customer field, an instance of the PremiumCustomer class is injected into that field. This is because this class is also decorated with the @Premium qualifier.

As far as our JSF pages go, we simply access our named bean as usual, using its name, as shown in the following code:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html">
    <h:head>
        <title>Create New Premium Customer</title>
    </h:head>
    <h:body>
        <h:form>
            <h3>Create New Premium Customer</h3>
            <h:panelGrid columns="3">
                <h:outputLabel for="firstName" value="First Name"/>
                <h:inputText id="firstName" value="#{premiumCustomer.firstName}"/>
                <h:message for="firstName"/>
                <h:outputLabel for="middleName" value="Middle Name"/>
                <h:inputText id="middleName" value="#{premiumCustomer.middleName}"/>
                <h:message for="middleName"/>
                <h:outputLabel for="lastName" value="Last Name"/>
                <h:inputText id="lastName" value="#{premiumCustomer.lastName}"/>
                <h:message for="lastName"/>
                <h:outputLabel for="email" value="Email Address"/>
                <h:inputText id="email" value="#{premiumCustomer.email}"/>
                <h:message for="email"/>
                <h:outputLabel for="discountCode" value="Discount Code"/>
                <h:inputText id="discountCode" value="#{premiumCustomer.discountCode}"/>
                <h:message for="discountCode"/>
                <h:panelGroup/>
                <h:commandButton value="Submit"
                    action="#{premiumCustomerController.saveCustomer}"/>
            </h:panelGrid>
        </h:form>
    </h:body>
</html>

In this example, we are using the default name for our bean, which is the class name with the first letter switched to lowercase. Now, we are ready to test our application.

After submitting the page, we can see the confirmation page.

Stereotypes

A CDI stereotype allows us to create new annotations that bundle up several CDI annotations. For example, if we need to create several CDI named beans with a scope of session, we would have to use two annotations in each of these beans, namely @Named and @SessionScoped. Instead of having to add two annotations to each of our beans, we could create a stereotype and annotate our beans with it.

To create a CDI stereotype in NetBeans, we simply need to create a new file by selecting the Contexts and Dependency Injection category and the Stereotype file type. Then, we need to enter a name and package for our new stereotype.
At this point, NetBeans generates the following code:

package com.ensode.cdiintro.stereotype;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.enterprise.inject.Stereotype;

@Stereotype
@Retention(RUNTIME)
@Target({METHOD, FIELD, TYPE})
public @interface NamedSessionScoped {
}

Now, we simply need to add the CDI annotations that we want the classes annotated with our stereotype to use. In our case, we want them to be named beans and have a scope of session; therefore, we add the @Named and @SessionScoped annotations, as shown in the following code:

package com.ensode.cdiintro.stereotype;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.enterprise.context.SessionScoped;
import javax.enterprise.inject.Stereotype;
import javax.inject.Named;

@Named
@SessionScoped
@Stereotype
@Retention(RUNTIME)
@Target({METHOD, FIELD, TYPE})
public @interface NamedSessionScoped {
}

Now we can use our stereotype in our own code:

package com.ensode.cdiintro.beans;

import com.ensode.cdiintro.stereotype.NamedSessionScoped;
import java.io.Serializable;

@NamedSessionScoped
public class StereotypeClient implements Serializable {

    private String property1;
    private String property2;

    public String getProperty1() {
        return property1;
    }

    public void setProperty1(String property1) {
        this.property1 = property1;
    }

    public String getProperty2() {
        return property2;
    }

    public void setProperty2(String property2) {
        this.property2 = property2;
    }
}

We annotated the StereotypeClient class with our NamedSessionScoped stereotype, which is equivalent to using the @Named and @SessionScoped annotations.

Interceptor binding types

One of the advantages of EJBs is that they allow us to easily perform aspect-oriented programming (AOP) via interceptors. CDI allows us to write interceptor binding types; this lets us bind interceptors to beans, so that the beans do not have to depend on the interceptor directly. Interceptor binding types are annotations that are themselves annotated with @InterceptorBinding.

Creating an interceptor binding type in NetBeans involves creating a new file, selecting the Contexts and Dependency Injection category, and selecting the Interceptor Binding Type file type. Then, we need to enter a class name and select or enter a package for our new interceptor binding type. At this point, NetBeans generates the code for our interceptor binding type:

package com.ensode.cdiintro.interceptorbinding;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.interceptor.InterceptorBinding;

@Inherited
@InterceptorBinding
@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface LoggingInterceptorBinding {
}

The generated code is fully functional; we don't need to add anything to it.
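As an aside, an interceptor binding type can also declare members; members that should not affect which interceptor is selected are annotated with javax.enterprise.util.Nonbinding. The following sketch is purely illustrative (the ConfigurableLogging name and its logLevel member are our own, not part of this article's example application):

package com.ensode.cdiintro.interceptorbinding;

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.enterprise.util.Nonbinding;
import javax.interceptor.InterceptorBinding;

@Inherited
@InterceptorBinding
@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface ConfigurableLogging {
    //A @Nonbinding member carries configuration without
    //affecting interceptor resolution.
    @Nonbinding String logLevel() default "INFO";
}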
In order to use our interceptor binding type, we need to write an interceptor and annotate it with our interceptor binding type, as shown in the following code:

package com.ensode.cdiintro.interceptor;

import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding;
import java.io.Serializable;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@LoggingInterceptorBinding
@Interceptor
public class LoggingInterceptor implements Serializable {

    private static final Logger logger = Logger.getLogger(
            LoggingInterceptor.class.getName());

    @AroundInvoke
    public Object logMethodCall(InvocationContext invocationContext)
            throws Exception {

        logger.log(Level.INFO, new StringBuilder("entering ").append(
                invocationContext.getMethod().getName()).append(
                " method").toString());

        Object retVal = invocationContext.proceed();

        logger.log(Level.INFO, new StringBuilder("leaving ").append(
                invocationContext.getMethod().getName()).append(
                " method").toString());

        return retVal;
    }
}

As we can see, other than being annotated with our interceptor binding type, the preceding class is a standard interceptor, similar to the ones we use with EJB session beans.

In order for our interceptor binding type to work properly, we need to add a CDI configuration file (beans.xml) to our project. Then, we need to register our interceptor in beans.xml as follows (the namespace attributes were garbled in extraction and have been restored here using the standard CDI 1.1 schema):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
    <interceptors>
        <class>
            com.ensode.cdiintro.interceptor.LoggingInterceptor
        </class>
    </interceptors>
</beans>

To register our interceptor, we need to set bean-discovery-mode to all in the generated beans.xml and add the <interceptors> tag in beans.xml, with one or more nested <class> tags containing the fully qualified names of our interceptors.

The final step before we can use our interceptor binding type is to annotate the class to be intercepted with our interceptor binding type:

package com.ensode.cdiintro.controller;

import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding;
import com.ensode.cdiintro.model.Customer;
import com.ensode.cdiintro.model.PremiumCustomer;
import com.ensode.cdiintro.qualifier.Premium;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@LoggingInterceptorBinding
@Named
@RequestScoped
public class PremiumCustomerController {

    private static final Logger logger = Logger.getLogger(
            PremiumCustomerController.class.getName());

    @Inject
    @Premium
    private Customer customer;

    public String saveCustomer() {
        PremiumCustomer premiumCustomer = (PremiumCustomer) customer;

        logger.log(Level.INFO, "Saving the following information \n"
                + "{0} {1}, discount code = {2}",
                new Object[]{premiumCustomer.getFirstName(),
                    premiumCustomer.getLastName(),
                    premiumCustomer.getDiscountCode()});

        //If this was a real application, we would have code to save
        //customer data to the database here.

        return "premium_customer_confirmation";
    }
}

Now, we are ready to use our interceptor.
After executing the preceding code and examining the GlassFish log, we can see our interceptor binding type in action. The lines "entering saveCustomer method" and "leaving saveCustomer method" were added to the log by our interceptor, which was indirectly invoked by our interceptor binding type.

Custom scopes

In addition to providing several prebuilt scopes, CDI allows us to define our own custom scopes. This functionality is primarily meant for developers building frameworks on top of CDI, not for application developers. Nevertheless, NetBeans provides a wizard for us to create our own CDI custom scopes. To create a new CDI custom scope, we need to go to File | New File, select the Contexts and Dependency Injection category, and select the Scope Type file type. Then, we need to enter a package and a name for our custom scope. After clicking on Finish, our new custom scope is created, as shown in the following code:

package com.ensode.cdiintro.scopes;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.inject.Scope;

@Inherited
@Scope // or @javax.enterprise.context.NormalScope
@Retention(RUNTIME)
@Target({METHOD, FIELD, TYPE})
public @interface CustomScope {
}

To actually use our scope in our CDI applications, we would need to create a custom context, which, as mentioned previously, is primarily a concern for framework developers and not for Java EE application developers. Therefore, it is beyond the scope of this article. Interested readers can refer to JBoss Weld CDI for Java Platform, Ken Finnigan, Packt Publishing. (JBoss Weld is a popular CDI implementation, and it is included with GlassFish.)

Summary

In this article, we covered NetBeans support for CDI, a new Java EE API introduced in Java EE 6. We provided an introduction to CDI and explained the additional functionality that the CDI API provides over standard JSF. We also covered how to disambiguate CDI injected beans via CDI qualifiers. Additionally, we covered how to group together CDI annotations via CDI stereotypes. We also saw how CDI can help us with AOP via interceptor binding types. Finally, we covered how NetBeans can help us create custom CDI scopes.

Resources for Article:

Further resources on this subject:
Java EE 7 Performance Tuning and Optimization [article]
Java EE 7 Developer Handbook [article]
Java EE 7 with GlassFish 4 Application Server [article]

Android Virtual Device Manager

Packt
06 Feb 2015
8 min read
This article, written by Belén Cruz Zapata, the author of the book Android Studio Essentials, teaches us the uses of the AVD Manager tool. It also introduces us to Google Play services. (For more resources related to this topic, see here.)

The Android Virtual Device Manager (AVD Manager) is an Android tool, accessible from Android Studio, to manage the Android virtual devices that will be executed in the Android emulator. To open the AVD Manager from Android Studio, navigate to the Tools | Android | AVD Manager menu option. You can also click on the shortcut from the toolbar.

The AVD Manager displays the list of the existing virtual devices. Since we have not created any virtual device, initially the list will be empty. To create our first virtual device, click on the Create Virtual Device button to open the configuration dialog. The first step is to select the hardware configuration of the virtual device. The hardware definitions are listed on the left side of the window. Select one of them, such as the Nexus 5, to examine its details on the right side, as shown in the following screenshot. Hardware definitions can be classified into one of these categories: Phone, Tablet, Wear, or TV.

We can also configure our own hardware device definitions from the AVD Manager. We can create a new definition using the New Hardware Profile button. The Clone Device button creates a duplicate of an existing device. Click on the New Hardware Profile button to examine the existing configuration parameters. The most important parameters that define a device are:

- Device Name: The name of the device.
- Screen Size: The screen size in inches. This value determines the size category of the device. Type a value of 4.0 and notice how the Size value (on the right side) is normal. Now type a value of 7.0 and the Size field changes its value to large. This parameter, along with the screen resolution, also determines the density category.
- Resolution: The screen resolution in pixels. This value determines the density category of the device. With a screen size of 4.0 inches, type a value of 768 x 1280 and notice how the density value is 400 dpi. Change the screen size to 6.0 inches and the density value changes to hdpi. Now change the resolution to 480 x 800 and the density value is mdpi.
- RAM: The RAM memory size of the device.
- Input: Indicates whether the home, back, or menu buttons of the device are available via software or hardware.
- Supported device states: Check the allowed states.
- Cameras: Select whether the device has a front camera or a back camera.
- Sensors: The sensors available in the device: accelerometer, gyroscope, GPS, and proximity sensor.
- Default Skin: Select additional hardware controls.

Create a new device with a screen size of 4.7 inches, a resolution of 800 x 1280, a RAM value of 500 MiB, software buttons, and both portrait and landscape states enabled. Name it My Device. Click on the Finish button. The hardware definition has been added to the list of configurations.

Click on the Next button to continue the creation of the new virtual device. The next step is to select the virtual device system image and the target Android platform. Each platform has its own architecture, so the system images that are installed on your system will be listed along with the rest of the images that can be downloaded (with the Show downloadable system images box checked). Download and select one of the images of the Lollipop release and click on the Next button. Finally, the last step is to verify the configuration of the virtual device.
Enter the name of the Android virtual device in the AVD Name field. Give the virtual device a meaningful name so you can recognize it easily, such as AVD_nexus5_api21. Click on the Show Advanced Settings button. The settings that we can configure for the virtual device are the following:

- Emulation Options: The Store a snapshot for faster startup option saves the state of the emulator in order to load it faster the next time. The Use Host GPU option tries to use GPU hardware acceleration to run the emulator faster.
- Custom skin definition: Select whether additional hardware controls are displayed in the emulator.
- Memory and Storage: Select the memory parameters of the virtual device. Leave the default values, unless a warning message is shown; in that case, follow the instructions in the message. For example, select 1536M for the RAM memory and 64 for the VM Heap. The Internal Storage can also be configured, for example: 200 MiB. Select the size of the SD Card, or select a file to behave as the SD card.
- Device: Select one of the available device configurations. These configurations are the ones we tested in the layout editor preview. Select the Nexus 5 device to load its parameters in the dialog.
- Target: Select the device's Android platform. We have to create one virtual device with the minimum platform supported by our application and another virtual device with the target platform of our application. For this first virtual device, select the target platform, Android 4.4.2 - API Level 19.
- CPU/ABI: Select the device architecture. The value of this field is set when we select the target platform. Each platform has its own architecture, so if we do not have it installed, the following message will be shown: No system images installed for this target. To solve this, open the SDK Manager and search for one of the architectures of the target platform, ARM EABI v7a System Image or Intel x86 Atom System Image.
- Keyboard: Select whether a hardware keyboard is displayed in the emulator. Check it.
- Skin: Select whether additional hardware controls are displayed in the emulator. You can select the Skin with dynamic hardware controls option.
- Front Camera: Select whether the emulator has a front camera or a back camera. The camera can be emulated, or it can be real through the use of a webcam from the computer. Select None for both cameras.
- Network: Select the speed of the simulated network and the delay in processing data across the network.

The new virtual device is now listed in the AVD Manager. Select the recently created virtual device to enable the remaining actions:

- Start: Run the virtual device.
- Edit: Edit the virtual device configuration.
- Duplicate: Creates a new device configuration displaying the last step of the creation process. You can change its configuration parameters and then verify the new device.
- Wipe Data: Removes the user files from the virtual device.
- Show on Disk: Opens the virtual device directory on your system.
- View Details: Opens a dialog detailing the virtual device characteristics.
- Delete: Delete the virtual device.

Click on the Start button. The emulator will be opened as shown in the following screenshot. Wait until it is completely loaded, and then you will be able to try it. In Android Studio, open the main layout with the graphical editor and click on the list of the devices. As the following screenshot shows, our custom device definition appears and we can select it to preview the layout.

Navigation Editor

The Navigation Editor is a tool to create and structure the layouts of the application using a graphical viewer. To open this tool, navigate to the Tools | Android | Navigation Editor menu. The tool opens a file in XML format named main.nvg.xml. This file is stored in your project at /.navigation/app/raw/. Since there is only one layout and one activity in our project, the Navigation Editor only shows this main layout. If you select the layout, detailed information about it is displayed on the right panel of the editor. If you double-click on the layout, the XML layout file will be opened in a new tab.

We can create a new activity by right-clicking on the editor and selecting the New Activity option. We can also add transitions from the controls of a layout by shift-clicking on a control and then dragging it to the target activity. Open the main layout and create a new button with the label Open Activity:

<Button
    android:id="@+id/button_open"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_below="@+id/button_accept"
    android:layout_centerHorizontal="true"
    android:text="Open Activity" />

Open the Navigation Editor and add a second activity. Now the Navigation Editor displays both activities, as the next screenshot shows. Now we can add the navigation between them. Shift-drag from the new button of the main activity to the second activity. A blue line and a pink circle have been added to represent the new navigation. Select the navigation relationship to see its details on the right panel, as shown in the following screenshot. The right panel shows the source activity, the destination activity, and the gesture that triggers the navigation.

Now open our main activity class and notice the new code that has been added to implement the recently created navigation. The onCreate method now contains the following code:

findViewById(R.id.button_open).setOnClickListener(
        new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                MainActivity.this.startActivity(
                        new Intent(MainActivity.this, Activity2.class));
            }
        });

This code sets the onClick method of the new button, from where the second activity is launched.
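As a small extension of this generated code (not something the Navigation Editor produces for us), data can be passed along with the Intent. The extra key and the values below are made-up examples:

// Hypothetical variation: launch Activity2 with an extra value attached.
Intent intent = new Intent(MainActivity.this, Activity2.class);
intent.putExtra("message", "Hello from MainActivity");
MainActivity.this.startActivity(intent);

// In Activity2.onCreate(), the value can be read back:
String message = getIntent().getStringExtra("message");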
Summary

This article taught us about the Navigation Editor tool. It also showed how to integrate Google Play services with a project in Android Studio. In this article, we also got acquainted with the AVD Manager tool.

Resources for Article: Further resources on this subject: Android Native Application API [article] Creating User Interfaces [article] Android 3.0 Application Development: Multimedia Management [article]

Creating Games with Cocos2d-x is Easy and 100-percent Free

Packt
06 Feb 2015
5 min read
This article, written by Raydelto Hernandez, the author of Cocos2d-x Android Game Development, explains the history of game development with Cocos2d-x, shows why it is a beneficial framework for game development, and explains that it is free and open source, which makes it all the more attractive.

The launch of the Apple App Store back in 2008 expanded the reach of indie game developers, who since then have been able to reach millions of users and compete with large companies, outperforming them in some cases. This reality led to the trend of creating reusable game engines, such as Cocos2D-iPhone, written natively in Objective-C by the Argentine developer Ricardo Quesada; it allowed many independent developers to reach the top of the download charts.

Picking an existing game engine is a smart choice for indies and large companies alike, since it allows them to focus on the game logic rather than rewriting core features over and over again; thus, there are many game engines out there with all kinds of licenses and characteristics. The most popular game engines for mobile systems right now are Unity, Marmalade, and Cocos2d-x; all three have the capabilities to create 2D and 3D games. Determining which one is the best in terms of ease of use and available tools may be arguable, but there is one objective fact that can be easily verified: among these three engines, Cocos2d-x is the only one that you can use for free, no matter how much money you make using it.

We highlighted in this article's title that Cocos2d-x is completely free. This emphasis was made because the other two frameworks also allow some form of free usage; nevertheless, both at some point require payment for the usage license. In order to understand why Cocos2d-x is still free and open source, we need to understand how this tool was born.

Ricardo, an enthusiastic Python programmer, often participated in challenges to create games from scratch in only one week. Back in those days, Ricardo and his team rewrote the core engine for each game, until they came up with the idea of encapsulating core game capabilities in a framework that could be used in any two-dimensional game, and making it open source so contributions could be received worldwide. That is why Cocos2d was originally written for fun.

With the launch of the first iPhone in 2007, Ricardo led the development of the port of the Cocos2d Python framework to the iPhone platform using its native language, Objective-C. Cocos2d-iPhone quickly became popular among indie game developers, some of whom turned themselves into appillionaires, as Chris Stevens called those individuals and enterprises that made millions of dollars during the App Store bubble period. This phenomenon made game development companies look at this framework, created by hobbyists, as a tool for creating their products.

Zynga was one of the first big companies to adopt Cocos2d as their framework for delivering their famous FarmVille game to the iPhone in 2009; this company has traded on NASDAQ since 2011 and has more than 2,000 employees. In July 2010, a C++ port of the Cocos2d iPhone engine, called Cocos2d-x, was written in China with the objective of taking the power of the framework to other platforms, such as the Android operating system, which at that time was gaining market share at a spectacular rate.
In 2011, this Cocos2d port was acquired by Chukong Technologies, the third-largest mobile game development company in China, which later hired the original Cocos2d-iPhone author to join their team. Today, Cocos2d-x-based games dominate the top-grossing charts of Google Play and the App Store, especially in Asia. Recognized companies and leading studios such as Konami, Zynga, BANDAI NAMCO, Wooga, Disney Mobile, and Square Enix are using Cocos2d-x in their games.

Currently, there are 400,000 developers working on adding new functionalities and making this framework as stable as possible, including engineers from Google, ARM, Intel, BlackBerry, and Microsoft, who officially support the ports to their products, such as Windows Phone, Windows, and the Windows Metro interface, and they are planning to support Cocos2d-x for the Xbox during this year.

Cocos2d-x is a very straightforward engine that requires only a small learning curve to grasp. I teach game development courses at many universities using this framework; during the first week, the students are capable of creating a game with the complexity of the famous title Doodle Jump. This can be achieved easily because the framework provides us with all the core components required for a game, such as physics, audio handling, collision detection, animations, networking, data storage, user input, map rendering, scene transitions, 3D rendering, particle system rendering, font handling, menu creation, displaying forms, thread handling, and so on, abstracting us from the low-level logic and allowing us to focus on the game logic.

In conclusion, if you are willing to learn how to develop games for mobile platforms, I strongly recommend that you learn and use the Cocos2d-x framework, because it is easy to use, is totally free, and is open source, which means that you can better understand it by reading its source, you can modify it if needed, and you have the guarantee that you will never be forced to pay a license fee if your game becomes a hit. Another big advantage of this framework is its highly available documentation, including the Packt Publishing collection of Cocos2d-x game development books.

Summary

This article talked about the different uses of Cocos2d-x and how it is used worldwide for game development today. It also discussed the use of Cocos2d-x as a free and open source platform for game development.

Multiplying Performance with Parallel Computing

Packt
06 Feb 2015
22 min read
In this article, by Aloysius Lim and William Tjhi, authors of the book R High Performance Programming, we will learn how to write and execute parallel R code, where different parts of the code run simultaneously. So far, we have learned various ways to optimize the performance of R programs running serially, that is, in a single process. This does not take full advantage of the computing power of modern CPUs with multiple cores. Parallel computing allows us to tap into all the computational resources available and to speed up the execution of R programs by many times. We will examine the different types of parallelism and how to implement them in R, and we will take a closer look at a few performance considerations when designing the parallel architecture of R programs. (For more resources related to this topic, see here.)

Data parallelism versus task parallelism

Many modern software applications are designed to run computations in parallel in order to take advantage of the multiple CPU cores available on almost any computer today. Many R programs can similarly be written to run in parallel. However, the extent of possible parallelism depends on the computing task involved. On one side of the scale are embarrassingly parallel tasks, where there are no dependencies between the parallel subtasks; such tasks can be made to run in parallel very easily. An example of this is building an ensemble of decision trees in a random forest algorithm: randomized decision trees can be built independently of one another, in parallel across tens or hundreds of CPUs, and combined to form the random forest. On the other end of the scale are tasks that cannot be parallelized, because each step of the task depends on the results of the previous step. One such example is a depth-first search of a tree, where the subtree to search at each step depends on the path taken in previous steps. Most algorithms fall somewhere in between, with some steps that must run serially and some that can run in parallel. With this in mind, careful thought must be given to designing parallel code that works correctly and efficiently.

Often, an R program has some parts that have to run serially and other parts that can run in parallel. Before making the effort to parallelize any R code, it is useful to have an estimate of the potential performance gains. Amdahl's law provides a way to estimate the best attainable performance gain when you convert code from serial to parallel execution. It divides a computing task into its serial and potentially parallel parts, and states that the time needed to execute the task in parallel will be no less than this formula:

T(n) = T(1)(P + (1-P)/n), where:

- T(n) is the time taken to execute the task using n parallel processes
- P is the proportion of the whole task that is strictly serial

The theoretical best possible speedup of the parallel algorithm is thus:

S(n) = T(1) / T(n) = 1 / (P + (1-P)/n)

For example, given a task that takes 10 seconds to execute on one processor, where half of the task can run in parallel, the best possible time to run it on four processors is T(4) = 10(0.5 + (1-0.5)/4) = 6.25 seconds. The theoretical best possible speedup of the parallel algorithm with four processors is 1 / (0.5 + (1-0.5)/4) = 1.6x.

The following figure shows how the theoretical best possible execution time decreases as more CPU cores are added. Notice that the execution time reaches a limit just above five seconds. This corresponds to the half of the task that must run serially, where parallelism does not help.

Best possible execution time versus number of CPU cores
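Amdahl's law is easy to explore directly in R. Here is a quick sketch; the 50-percent-serial task and the range of core counts are just the example values used above:

amdahl.speedup <- function(P, n) {
    # Best possible speedup for a task with serial proportion P on n cores
    1 / (P + (1 - P) / n)
}
cores <- 1:16
round(amdahl.speedup(0.5, cores), 2)
# The values approach, but never exceed, 1 / P = 2x for P = 0.5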
In general, Amdahl's law means that the fastest execution time for any parallelized algorithm is limited by the time needed for the serial portions of the algorithm. Bear in mind that Amdahl's law provides only a theoretical estimate. It does not account for the overheads of parallel computing (such as starting and coordinating tasks), and it assumes that the parallel portions of the algorithm are infinitely scalable. In practice, these factors might significantly limit the performance gains of parallelism, so use Amdahl's law only to get a rough estimate of the maximum speedup possible.

There are two main classes of parallelism: data parallelism and task parallelism. Understanding these concepts helps to determine what types of tasks can be modified to run in parallel.

In data parallelism, a dataset is divided into multiple partitions. Different partitions are distributed to multiple processors, and the same task is executed on each partition of data. Take, for example, the task of finding the maximum value in a vector dataset, say one that has one billion numeric data points. A serial algorithm to do this would look like the following code, which iterates over every element of the data in sequence to search for the largest value. (This code is intentionally verbose to illustrate how the algorithm works; in practice, the max() function in R, though also serial in nature, is much faster.)

serialmax <- function(data) {
    max <- -Inf
    for (i in data) {
        if (i > max)
            max <- i
    }
    return(max)
}

One way to parallelize this algorithm is to split the data into partitions. If we have a computer with eight CPU cores, we can split the data into eight partitions of 125 million numbers each. Here is the pseudocode for how to perform the same task in parallel:

# Run this in parallel across 8 CPU cores
part.results <- run.in.parallel(serialmax(data.part))
# Compute global max
global.max <- serialmax(part.results)

This pseudocode runs eight instances of serialmax() in parallel, one for each data partition, to find the local maximum value in each partition. Once all the partitions have been processed, the algorithm finds the global maximum value by finding the largest value among the local maxima. This parallel algorithm works because the global maximum of a dataset must be the largest of the local maxima from all the partitions. The following figure depicts data parallelism pictorially.

The key behind data parallel algorithms is that each partition of data can be processed independently of the other partitions, and the results from all the partitions can be combined to compute the final results. This is similar to the mechanism of the MapReduce framework from Hadoop. Data parallelism allows algorithms to scale up easily as data volume increases: as more data is added to the dataset, more computing nodes can be added to a cluster to process new partitions of data.

Data parallelism
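For the curious, here is a minimal, concrete sketch of the same idea using the parallel package, which is covered in detail later in this article. The data, the core count, and the use of max() in place of serialmax() are assumptions made for brevity:

library(parallel)
data <- runif(1e7)                          # example data, not the book's
cl <- makeCluster(8)                        # assumes an 8-core machine
parts <- clusterSplit(cl, data)             # one partition per worker
part.results <- parSapply(cl, parts, max)   # local maxima, in parallel
global.max <- max(part.results)             # largest of the local maxima
stopCluster(cl)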
Other examples of computations and algorithms that can be run in a data parallel way include:

- Element-wise matrix operations, such as addition and subtraction: The matrices can be partitioned and the operations applied to each pair of partitions.
- Means: The sums and the number of elements in each partition can be added to find the global sum and number of elements, from which the mean can be computed.
- K-means clustering: After data partitioning, the K centroids are distributed to all the partitions. Finding the closest centroid is performed in parallel and independently across the partitions. The centroids are updated by first calculating the sums and the counts of their respective members in parallel, and then consolidating them in a single process to get the global means.
- Frequent itemset mining using the Partition algorithm: In the first pass, the frequent itemsets are mined from each partition of data to generate a global set of candidate itemsets; in the second pass, the supports of the candidate itemsets are summed from each partition to filter out the globally infrequent ones.

The other main class of parallelism is task parallelism, where tasks are distributed to and executed on different processors in parallel. The tasks on each processor might be the same or different, and the data that they act on might also be the same or different. The key difference between task parallelism and data parallelism is that the data is not divided into partitions.

An example of a task parallel algorithm performing the same task on the same data is the training of a random forest model. A random forest is a collection of decision trees built independently on the same data. During the training process for a particular tree, a random subset of the data is chosen as the training set, and the variables to consider at each branch of the tree are also selected randomly. Hence, even though the same data is used, the trees are different from one another. In order to train a random forest of, say, 100 decision trees, the workload could be distributed to a computing cluster with 100 processors, with each processor building one tree. All the processors perform the same task on the same data (or exact copies of the data), but the data is not partitioned.

The parallel tasks can also be different. For example, computing a set of summary statistics on the same set of data can be done in a task parallel way. Each process can be assigned to compute a different statistic: the mean, standard deviation, percentiles, and so on. Pseudocode of a task parallel algorithm might look like this:

# Run 4 tasks in parallel across 4 cores
for (task in tasks)
    run.in.parallel(task)
# Collect the results of the 4 tasks
results <- collect.parallel.output()
# Continue processing after all 4 tasks are complete
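A concrete version of this pseudocode might look like the following sketch, again using the parallel package introduced below. The data and the choice of statistics are assumptions for illustration:

library(parallel)
x <- rnorm(1e6)                      # example data
cl <- makeCluster(4)
clusterExport(cl, "x")               # give every worker a copy of the data
tasks <- list(mean = mean, sd = sd, median = median, mad = mad)
# Each worker computes a different statistic on the same data
results <- parLapply(cl, tasks, function(f) f(x))
stopCluster(cl)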
Implementing data parallel algorithms

Several R packages allow code to be executed in parallel. The parallel package that comes with R provides the foundation for most parallel computing capabilities in other packages. Let's see how it works with an example.

This example involves finding documents that match a regular expression. Regular expression matching is a fairly computationally expensive task, depending on the complexity of the regular expression. The corpus, or set of documents, for this example is a sample of the Reuters-21578 dataset for the topic corporate acquisitions (acq) from the tm package. Because this dataset contains only 50 documents, they are replicated 100,000 times to form a corpus of 5 million documents, so that parallelizing the code will lead to meaningful savings in execution times.

library(tm)
data("acq")
textdata <- rep(sapply(content(acq), content), 1e5)

The task is to find documents that match the regular expression \d+(,\d+)? mln dlrs, which represents monetary amounts in millions of dollars. In this regular expression, \d+ matches a string of one or more digits, and (,\d+)? optionally matches a comma followed by one or more digits. For example, the strings 12 mln dlrs, 1,234 mln dlrs, and 123,456,789 mln dlrs all match the regular expression. First, we will measure the execution time needed to find these documents serially with grepl() (note the doubled backslashes required to escape \d inside an R string):

pattern <- "\\d+(,\\d+)? mln dlrs"
system.time(res1 <- grepl(pattern, textdata))
##   user  system elapsed
## 65.601   0.114  65.721

Next, we will modify the code to run in parallel and measure the execution time on a computer with four CPU cores:

library(parallel)
detectCores()
## [1] 4
cl <- makeCluster(detectCores())
part <- clusterSplit(cl, seq_along(textdata))
text.partitioned <- lapply(part, function(p) textdata[p])
system.time(res2 <- unlist(
    parSapply(cl, text.partitioned, grepl, pattern = pattern)))
##   user  system elapsed
##  3.708   8.007  50.806
stopCluster(cl)

In this code, the detectCores() function reveals how many CPU cores are available on the machine where the code is executed. Before running any parallel code, makeCluster() is called to create a local cluster of processing nodes with all four CPU cores. The corpus is then split into four partitions using the clusterSplit() function to determine the ideal split of the corpus, such that each partition has roughly the same number of documents.

The actual parallel execution of grepl() on each partition of the corpus is carried out by the parSapply() function. Each processing node in the cluster is given a copy of the partition of data that it is supposed to process, along with the code to be executed and any other variables that are needed to run the code (in this case, the pattern argument). When all four processing nodes have completed their tasks, the results are combined in a similar fashion to sapply(). Finally, the cluster is destroyed by calling stopCluster().

It is good practice to ensure that stopCluster() is always called in production code, even if an error occurs during execution. This can be done as follows:

doSomethingInParallel <- function(...) {
    cl <- makeCluster(...)
    on.exit(stopCluster(cl))
    # do something
}

In this example, running the task in parallel on four processors resulted in a 23 percent reduction in the execution time. This is not in proportion to the amount of compute resources used to perform the task; with four times as many CPU cores working on it, a perfectly parallelizable task might see as much as a 75 percent reduction in runtime. However, remember Amdahl's law: the speed of parallel code is limited by the serial parts, which include the overheads of parallelization.

In this case, calling makeCluster() with the default arguments creates a socket-based cluster. When such a cluster is created, additional copies of R are run as workers. The workers communicate with the master R process using network sockets, hence the name. The worker R processes are initialized with the relevant packages loaded, and data partitions are serialized and sent to each worker process. These overheads can be significant, especially in data parallel algorithms where large volumes of data need to be transferred to the worker processes.

Besides parSapply(), parallel also provides the parApply() and parLapply() functions; these functions are analogous to the standard sapply(), apply(), and lapply() functions, respectively. In addition, the parLapplyLB() and parSapplyLB() functions provide load balancing, which is useful when the execution of each parallel task takes a variable amount of time. Finally, parRapply() and parCapply() are parallel row and column apply() functions for matrices.
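To illustrate the load-balanced variants, here is a small sketch with artificial task durations; the sleep times are stand-ins for uneven workloads and are made up for illustration:

library(parallel)
cl <- makeCluster(4)
durations <- c(4, 1, 1, 1, 1, 4, 1, 1)   # seconds; deliberately uneven
# With parSapplyLB(), each worker picks up the next task as soon as it
# finishes its current one, instead of receiving a fixed block of tasks.
res <- parSapplyLB(cl, durations, function(d) { Sys.sleep(d); d })
stopCluster(cl)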
On non-Windows systems, parallel supports another type of cluster that often incurs less overhead: forked clusters. In these clusters, new worker processes are forked from the parent R process with a copy of the data. However, the data is not actually copied in memory unless it is modified by a child process. This means that, compared to socket-based clusters, initializing child processes is quicker and the memory usage is often lower.

Another advantage of using forked clusters is that parallel provides a convenient and concise way to run tasks on them via the mclapply(), mcmapply(), and mcMap() functions. (These functions start with mc because they were originally a part of the multicore package.) There is no need to explicitly create and destroy the cluster, as these functions do this automatically. We can simply call mclapply() and state the number of worker processes to fork via the mc.cores argument:

system.time(res3 <- unlist(
    mclapply(text.partitioned, grepl, pattern = pattern,
             mc.cores = detectCores())))
##    user  system elapsed
## 127.012   0.350  33.264

This shows a 49 percent reduction in execution time compared to the serial version, and a 35 percent reduction compared to parallelizing using a socket-based cluster. For this example, forked clusters provide the best performance. Due to differences in system configuration, you might see very different results when you try the examples in your own environment. When you develop parallel code, it is important to test the code in an environment that is similar to the one that it will eventually run in.
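Since forked clusters are unavailable on Windows, portable code often needs a guard. This is a sketch of one way to do it; the helper name is made up:

library(parallel)
# Hypothetical helper: use forked workers where available, and fall back
# to serial execution on Windows, where forking is not supported.
run.parallel <- function(X, FUN, ...) {
    if (.Platform$OS.type == "windows") {
        lapply(X, FUN, ...)
    } else {
        mclapply(X, FUN, ..., mc.cores = detectCores())
    }
}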
Implementing task parallel algorithms

Let's now see how to implement a task parallel algorithm using both socket-based and forked clusters. We will look at how to run the same task and different tasks on the workers in a cluster.

Running the same task on workers in a cluster

To demonstrate how to run the same task on a cluster, the task for this example is to generate 500 million Poisson random numbers. We will do this by using L'Ecuyer's combined multiple-recursive generator, which is the only random number generator in base R that supports multiple streams for generating random numbers in parallel. The random number generator is selected by calling the RNGkind() function. We cannot just use any random number generator in parallel, because the randomness of the data depends on the algorithm used to generate the random data and the seed value given to each parallel task. Most other algorithms were not designed to produce random numbers in multiple parallel streams, and might produce multiple highly correlated streams of numbers, or worse, multiple identical streams!

First, we will measure the execution time of the serial algorithm:

RNGkind("L'Ecuyer-CMRG")
nsamples <- 5e8
lambda <- 10
system.time(random1 <- rpois(nsamples, lambda))
##   user  system elapsed
## 51.905   0.636  52.544

To generate the random numbers on a cluster, we will first distribute the task evenly among the workers. In the following code, the integer vector samples.per.process contains the number of random numbers that each worker needs to generate on a four-core CPU. The seq() function produces ncores+1 numbers evenly distributed between 0 and nsamples, with the first number being 0 and the next ncores numbers indicating the approximate cumulative number of samples across the worker processes. The round() function rounds off these numbers into integers, and diff() computes the differences between them to give the number of random numbers that each worker process should generate.

ncores <- detectCores()
cl <- makeCluster(ncores)
samples.per.process <-
    diff(round(seq(0, nsamples, length.out = ncores + 1)))

Before we can generate the random numbers on the cluster, each worker needs a different seed from which it can generate a stream of random numbers. The seeds need to be set on all the workers before running the task, to ensure that all the workers generate different random numbers. For a socket-based cluster, we can call clusterSetRNGStream() to set the seeds for the workers, then run the random number generation task on the cluster. When the task is completed, we call stopCluster() to shut down the cluster:

clusterSetRNGStream(cl)
system.time(random2 <- unlist(
    parLapply(cl, samples.per.process, rpois,
              lambda = lambda)))
##  user  system elapsed
## 5.006   3.000  27.436
stopCluster(cl)

Using four parallel processes in a socket-based cluster reduces the execution time by 48 percent. The performance of this type of cluster for this example is better than in the data parallel example because there is less data to copy to the worker processes: only an integer that indicates how many random numbers to generate.

Next, we run the same task on a forked cluster (again, this is not supported on Windows). The mclapply() function can set the random number seeds for each worker for us when the mc.set.seed argument is set to TRUE; we do not need to call clusterSetRNGStream(). Otherwise, the code is similar to that of the socket-based cluster:

system.time(random3 <- unlist(
    mclapply(samples.per.process, rpois,
             lambda = lambda,
             mc.set.seed = TRUE, mc.cores = ncores)))
##   user  system elapsed
## 76.283   7.272  25.052

On our test machine, the execution time of the forked cluster is slightly faster than, but close to, that of the socket-based cluster, indicating that the overheads for this task are similar for both types of clusters.

Running different tasks on workers in a cluster

So far, we have executed the same task on each parallel process. The parallel package also allows different tasks to be executed on different workers. For this example, the task is to generate not only Poisson random numbers, but also uniform, normal, and exponential random numbers. As before, we start by measuring the time needed to perform this task serially:

RNGkind("L'Ecuyer-CMRG")
nsamples <- 5e7
pois.lambda <- 10
system.time(random1 <- list(pois = rpois(nsamples, pois.lambda),
                            unif = runif(nsamples),
                            norm = rnorm(nsamples),
                            exp = rexp(nsamples)))
##   user  system elapsed
## 14.180   0.384  14.570

In order to run different tasks on different workers of a socket-based cluster, a list of function calls and their associated arguments must be passed to parLapply(). This is a bit cumbersome, but parallel unfortunately does not provide an easier interface for running different tasks on a socket-based cluster. In the following code, the function calls are represented as a list of lists, where the first element of each sublist is the name of the function to run on a worker, and the second element contains the function arguments. The do.call() function is used to call the given function with the given arguments.
ncores <- detectCores()
cl <- makeCluster(ncores)
calls <- list(pois = list("rpois", list(n = nsamples,
                                        lambda = pois.lambda)),
              unif = list("runif", list(n = nsamples)),
              norm = list("rnorm", list(n = nsamples)),
              exp = list("rexp", list(n = nsamples)))
clusterSetRNGStream(cl)
system.time(
    random2 <- parLapply(cl, calls,
                         function(call) {
                             do.call(call[[1]], call[[2]])
                         }))
##  user  system elapsed
## 2.185   1.629  10.403
stopCluster(cl)

On forked clusters on non-Windows machines, the mcparallel() and mccollect() functions offer a more intuitive way to run different tasks on different workers. For each task, mcparallel() sends the given task to an available worker. Once all the workers have been assigned their tasks, mccollect() waits for the workers to complete their tasks and collects the results from all the workers.

mc.reset.stream()
system.time({
    jobs <- list()
    jobs[[1]] <- mcparallel(rpois(nsamples, pois.lambda),
                            "pois", mc.set.seed = TRUE)
    jobs[[2]] <- mcparallel(runif(nsamples),
                            "unif", mc.set.seed = TRUE)
    jobs[[3]] <- mcparallel(rnorm(nsamples),
                            "norm", mc.set.seed = TRUE)
    jobs[[4]] <- mcparallel(rexp(nsamples),
                            "exp", mc.set.seed = TRUE)
    random3 <- mccollect(jobs)
})
##   user  system elapsed
## 14.535   3.569   7.97

Notice that we also had to call mc.reset.stream() to set the seeds for random number generation in each worker. This was not necessary when we used mclapply(), which calls mc.reset.stream() for us. However, mcparallel() does not, so we need to call it ourselves.

Summary

In this article, we learned about two classes of parallelism: data parallelism and task parallelism. Data parallelism is good for tasks that can be performed in parallel on partitions of a dataset: the dataset is split into partitions, and each partition is processed by a different worker process. Task parallelism, on the other hand, distributes a set of similar or different tasks amongst the worker processes. In either case, Amdahl's law states that the maximum improvement in speed that can be achieved by parallelizing code is limited by the proportion of that code that can be parallelized.

Resources for Article: Further resources on this subject: Using R for Statistics, Research, and Graphics [Article] Learning Data Analytics with R and Hadoop [Article] Aspects of Data Manipulation in R [Article]

Setting up our development environment and creating a game activity

Packt
06 Feb 2015
17 min read
In this article by John Horton, author of the book Learning Java by Building Android Games, we will learn how to set up our development environment by installing JDK and Android Studio. We will also learn how to create a new game activity and layout the same on a game screen UI. (For more resources related to this topic, see here.) Setting up our development environment The first thing we need to do is prepare our PC to develop for Android using Java. Fortunately, this is made quite simple for us. The next two tutorials have Windows-specific instructions and screenshots. However, it shouldn't be too difficult to vary the steps slightly to suit Mac or Linux. All we need to do is: Install a software package called the Java Development Kit (JDK), which allows us to develop in Java. Install Android Studio, a program designed to make Android development fast and easy. Android Studio uses the JDK and some other Android-specific tools that automatically get installed when we install Android Studio. Installing the JDK The first thing we need to do is get the latest version of the JDK. To complete this guide, perform the following steps: You need to be on the Java website, so visit http://www.oracle.com/technetwork/java/javase/downloads/index.html. Find the three buttons shown in the following screenshot and click on the one that says JDK (highlighted). They are on the right-hand side of the web page. Click on the DOWNLOAD button under the JDK option: You will be taken to a page that has multiple options to download the JDK. In the Product/File description column, you need to click on the option that matches your operating system. Windows, Mac, Linux and some other less common options are all listed. A common question here is, "do I have 32- or 64-bit windows?". To find out, right-click on your My Computer (This PC on Windows 8) icon, click on the Properties option, and look under the System heading in the System type entry, as shown in the following screenshot: Click on the somewhat hidden Accept License Agreement checkbox: Now click on the download option for your OS and system type as previously determined. Wait for the download to finish. In your Downloads folder, double-click on the file you just downloaded. The latest version at time of writing this for a 64-bit Windows PC was jdk-8u5-windows-x64. If you are using Mac/Linux or have a 32-bit OS, your filename will vary accordingly. In the first of several install dialogs, click on the Next button and you will see the next dialog box: Accept the defaults shown in the previous screenshot by clicking on Next. In the next dialog box, you can accept the default install location by clicking on Next. Next is the last dialog of the Java installer. Click on Close. The JDK is now installed. Next we will make sure that Android Studio is able to use the JDK. Right-click on your My Computer (This PC on Windows 8) icon and navigate to Properties | Advanced system settings | Environment variables | New (under System variables, not under User variables). Now you can see the New System Variable dialog, as shown in the following screenshot: Type JAVA_HOME for Variable name and enter C:Program FilesJavajdk1.8.0_05 for the Variable value field. If you installed the JDK somewhere else, then the file path you enter in the Variable value: field will need to point to wherever you put it. Your exact file path will likely have a different ending to match the latest version of Java at the time you downloaded it. Click on OK to save your new settings. 
Android Studio

We learned that Android Studio is a tool that simplifies Android development and uses the JDK to allow us to write and build Java programs. There are other tools you can use instead of Android Studio, and there are pros and cons to them all. For example, another extremely popular option is Eclipse, and as with so many things in programming, a strong argument can be made as to why you should use Eclipse instead of Android Studio. I use both, but here is what I hope you will love about Android Studio:

- It has a very neat and, despite still being under development, very refined and clean interface.
- It is much easier to get started with compared to Eclipse, because several Android tools that would otherwise need to be installed separately are already included in the package.
- Android Studio is being developed by Google, based on another product called IntelliJ IDEA. There is a chance it will be the standard way to develop for Android in the not-too-distant future.

If you want to use Eclipse, that's fine. However, some of the keyboard shortcuts and user interface buttons will obviously be different. If you do not have Eclipse installed already and have no prior experience with Eclipse, then I even more strongly recommend that you go ahead with Android Studio.

Installing Android Studio

So without any delay, let's get Android Studio installed, and then we can begin our first game project:

1. Visit https://developer.android.com/sdk/installing/studio.html and click on the button labeled Download Android Studio to start the download. This will take you to another web page with a very similar-looking button to the one you just clicked on.
2. Accept the license by checking the checkbox, commence the download by clicking on the button labeled Download Android Studio for Windows, and wait for the download to complete. The exact text on the button will probably vary depending on the current latest version.
3. In the folder in which you just downloaded Android Studio, right-click on the android-studio-bundle-135.12465-windows.exe file and click on Run as administrator. The end of your filename will vary depending upon the version of Android Studio and your operating system.
4. When asked if you want to Allow the following program from an unknown publisher to make changes to your computer, click on Yes.
5. On the next screen, click on Next.
6. On the screen shown in the following screenshot, you can choose which users of your PC can use Android Studio. Choose whatever is right for you, as all options will work, and then click on Next.
7. In the next dialog, leave the default settings and then click on Next. Then, on the Choose start menu folder dialog box, leave the defaults and click on Install.
8. On the Installation complete dialog, click on Finish to run Android Studio for the first time.
9. The next dialog is for users who have already used Android Studio, so assuming you are a first-time user, select the I do not have a previous version of Android Studio or I do not want to import my settings checkbox, and then click on OK.

That was the last piece of software we needed.

Math game – asking a question

Now that we have all that knowledge under our belts, we can use it to improve our math game.
First, we will create a new Android activity to be the actual game screen, as opposed to the start menu screen. We will then use the UI designer to lay out a simple game screen so that we can use our Java skills with variables, types, declaration, initialization, operators, and expressions to make our math game generate a question for the player. We can then link the start menu and game screens together with a push button.

Creating the new game activity

We will first need to create a new Java file for the game activity code and a related layout file to hold the game activity UI:

1. Run Android Studio and select your Math Game Chapter 2 project. It might have been opened by default. Now we will create the new Android activity that will contain the actual game screen, which will run when the player taps the Play button on our main menu screen.
2. To create a new activity, we need another layout file and another Java file. Fortunately, Android Studio will help us do this. To get started with creating all the files we need for a new activity, right-click on the src folder in the Project Explorer and then go to New | Activity.
3. Now click on Blank Activity and then on Next.
4. We now need to tell Android Studio a little bit about our new activity by entering information in the above dialog box. Change the Activity Name field to GameActivity. Notice how the Layout Name field is automatically changed for us to activity_game and the Title field is automatically changed to GameActivity.
5. Click on Finish. Android Studio has created two files for us and has also registered our new activity in a manifest file, so we don't need to concern ourselves with it.
6. If you look at the tabs at the top of the editor window, you will see that GameActivity.java has been opened up, ready for us to edit, as shown in the following screenshot. Ensure that GameActivity.java is active in the editor window by clicking on the GameActivity.java tab shown previously.
7. Here, we can see some code that is unnecessary; removing it will make our working environment simpler and cleaner. We will simply use the code from MainActivity.java as a template for GameActivity.java and then make some minor changes. Click on the MainActivity.java tab in the editor window, highlight all of the code using Ctrl + A on the keyboard, and copy it using Ctrl + C.
8. Now click on the GameActivity.java tab, highlight all of the code in the editor window using Ctrl + A, and paste the copied code over the currently highlighted code using Ctrl + V.
9. Notice that there is an error in our code, denoted by the red underlining, as shown in the following screenshot. This is because we pasted code referring to MainActivity into our file that is called GameActivity. Simply change the text MainActivity to GameActivity and the error will disappear.
10. Take a moment to see if you can work out what other minor change is necessary before I tell you. Remember that setContentView loads our UI design. What we need to do is change setContentView to load the new design (that we will build next) instead of the home screen design. Change setContentView(R.layout.activity_main); to setContentView(R.layout.activity_game);. Save your work and we are ready to move on.

Note the Project Explorer and where Android Studio puts the two new files it created for us. I have highlighted two folders in the next screenshot.
In future, I will simply refer to them as our Java code folder and our layout files folder. You might wonder why we didn't simply copy and paste the MainActivity.java file to begin with and save ourselves going through the process of creating a new activity. The reason is that Android Studio does things behind the scenes. Firstly, it makes the layout template for us. It also registers the new activity for use through a file we will see later, called AndroidManifest.xml. This is necessary for the new activity to be able to work in the first place. All things considered, the way we did it is probably the quickest.

After our edits, the code is nearly the same as the code for the home menu screen. We state the package name and import some useful classes provided by Android:

package com.packtpub.mathgamechapter3a.mathgamechapter3a;

import android.app.Activity;
import android.os.Bundle;

We create a new activity, this time called GameActivity:

public class GameActivity extends Activity {

Then we override the onCreate method and use the setContentView method to set our UI design as the contents of the player's screen. Currently, however, this UI is empty:

super.onCreate(savedInstanceState);
setContentView(R.layout.activity_game);

We can now think about the layout of our actual game screen.

Laying out the game screen UI

As we know, our math game will ask questions and offer the player some multiple-choice answers. There are lots of extra features we could add, such as difficulty levels, high scores, and much more. But for now, let's just stick to asking a simple, predefined question and offering a choice of three predefined possible answers.

Keeping the UI design to the bare minimum suggests a layout. Our target UI will look somewhat like this: the section of the mock-up that displays 2 x 2 is the question, and it is made up of three TextViews (the two numbers and the multiplication sign; the = sign is a separate view as well). Finally, the three options for the answer are made up of Button layout elements. This time, as we are going to be controlling them using our Java code, there are a few extra things we need to do to them. So let's go through it step by step:

1. Open the file that will hold our game UI in the editor window. Do this by double-clicking on activity_game.xml. This is located in our UI layout folder, which can be found in the Project Explorer.
2. Delete the Hello World TextView, as it is not required.
3. Find the Large Text element on the palette. It can be found under the Widgets section. Drag three of these elements onto the UI design area and arrange them near the top of the design, as shown in the next screenshot. It does not have to be exact; just ensure that they are in a row and not overlapping.

Notice in the Component Tree window that each of the three TextViews has been assigned a name automatically by Android Studio. They are textView, textView2, and textView3.
It is this id that we will use to interact with our UI from our Java code. So we want to change all the IDs of our TextViews to something useful and easy to remember. If you look back at our design, you will see that the UI element with the textView id is going to hold the number for the first part of our math question. So change the id to textPartA. Notice the lowercase t in text, the uppercase P in Part, and the uppercase A. You can use any combination of cases and you can actually name the IDs anything you like. But just as with naming conventions with Java variables, sticking to conventions here will make things less error-prone as our program gets more complicated. Now select textView2 and change id to textOperator. Select the element currently with id textView3 and change it to textPartB. This TextView will hold the later part of our question. Now add another Large Text from the palette. Place it after the row of the three TextViews that we have just been editing. This Large Text will simply hold our equals to sign and there is no plan to ever change it. So we don't need to interact with it in our Java code. We don't even need to concern ourselves with changing the ID or knowing what it is. If this situation changed, we could always come back at a later time and edit its ID. However, this new TextView currently displays Large Text and we want it to display an equals to sign. So in the Properties window, find the text property and enter the value =. We have changed the text property, and you might also like to change the text property for textPartA, textPartB, and textOperator. This is not absolutely essential because we will soon see how we can change it via our Java code; however, if we change the text property to something more appropriate, then our UI designer will look more like it will when the game runs on a real device. So change the text property of textPartA to 2, textPartB to 2, and textOperator to x. Your UI design and Component tree should now look like this: For the buttons to contain our multiple choice answers, drag three buttons in a row, below the = sign. Line them up neatly like our target design. Now, just as we did for the TextViews, find the id properties of each button, and from left to right, change the id properties to buttonChoice1, buttonChoice2, and buttonChoice3. Why not enter some arbitrary numbers for the text property of each button so that the designer more accurately reflects what our game will look like, just as we did for our other TextViews? Again, this is not absolutely essential as our Java code will control the button appearance. We are now actually ready to move on. But you probably agree that the UI elements look a little lost. It would look better if the buttons and text were bigger. All we need to do is adjust the textSize property for each TextView and for each Button. Then, we just need to find the textSize property for each element and enter a number with the sp syntax. If you want your design to look just like our target design from earlier, enter 70sp for each of the TextView textSize properties and 40sp for each of the Buttons textSize properties. When you run the game on your real device, you might want to come back and adjust the sizes up or down a bit. But we have a bit more to do before we can actually try out our game. Save the project and then we can move on. As before, we have built our UI. This time, however, we have given all the important parts of our UI a unique, useful, and easy to identify ID. 
As we will see, we are now able to communicate with our UI through our Java code.

Summary

In this article, we learned how to set up our development environment by installing the JDK and Android Studio. In addition to this, we also learned how to create a new game activity and lay out the game screen UI.

Resources for Article:

Further resources on this subject:

Sound Recorder for Android [article]
Reversing Android Applications [article]
3D Modeling [article]

Run Xcode Run

Packt
05 Feb 2015
9 min read
In this article by Jorge Jordán, author of the book Cocos2d Game Development Blueprints, we will see how to run the newly created project in Xcode. (For more resources related to this topic, see here.)

Click on Run at the top-left of the Xcode window and it will run the project in the iOS Simulator, which defaults to an iOS 6.1 iPhone. Voilà! You've just built your first Hello World example with Cocos2d v3, but before going further, let's take a look at the code to understand how it works. We will be using the iOS Simulator to run the game unless otherwise specified.

Understanding the default project

We are going to take an overview of the classes available in a new project, but don't worry if you don't understand everything; the objective of this section is just to get familiar with the look of a Cocos2d game. If you open the main.m class under the Supporting Files group, you will see:

int main(int argc, char *argv[]) {
    @autoreleasepool {
        int retVal = UIApplicationMain(argc, argv, nil, @"AppDelegate");
        return retVal;
    }
}

As you can see, the @autoreleasepool block means that ARC is enabled by default on new Cocos2d projects, so we don't have to worry about releasing objects or enabling ARC. ARC is the acronym for Automatic Reference Counting and it's an iOS compiler feature that provides automatic memory management of objects. It works by adding code at compile time, ensuring every object lives as long as necessary, but not longer.

On the other hand, the block hands control to AppDelegate, a class that inherits from CCAppDelegate, which implements the UIApplicationDelegate protocol. In other words, the starting point of our game and the place to set up our app is located in AppDelegate, like a typical iOS application. If you open AppDelegate.m, you will see the following method, which is called when the game has been launched:

-(BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    [self setupCocos2dWithOptions:@{
        CCSetupShowDebugStats: @(YES),
    }];
    return YES;
}

Here, the only initial configuration specified is to enable the debug stats, through the option CCSetupShowDebugStats: @(YES) that you can see in the previous block of code. The number on the top indicates the number of draw calls, and the two labels below it are the time needed to update the frame and the frame rate, respectively. The maximum frame rate an iOS device can have is 60 and it's a measure of the smoothness a game can attain: the higher the frame rate, the smoother the game. Keep the top and the bottom values in mind, as the number of draw calls and the frame rate will tell you how efficient your game is.

The next thing to take care of is the startScene method:

-(CCScene *)startScene {
    // The initial scene will be GameScene
    return [IntroScene scene];
}

This method should be overridden to indicate the first scene we want to display in our game.
In this case, it points to IntroScene, where the init method looks like the following code:

- (id)init {
    // Apple recommends assigning self with super's return value
    self = [super init];
    if (!self) {
        return(nil);
    }

    // Create a colored background (Dark Gray)
    CCNodeColor *background = [CCNodeColor nodeWithColor:[CCColor colorWithRed:0.2f green:0.2f blue:0.2f alpha:1.0f]];
    [self addChild:background];

    // Hello world
    CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Chalkduster" fontSize:36.0f];
    label.positionType = CCPositionTypeNormalized;
    label.color = [CCColor redColor];
    label.position = ccp(0.5f, 0.5f); // Middle of screen
    [self addChild:label];

    // Helloworld scene button
    CCButton *helloWorldButton = [CCButton buttonWithTitle:@"[ Start ]" fontName:@"Verdana-Bold" fontSize:18.0f];
    helloWorldButton.positionType = CCPositionTypeNormalized;
    helloWorldButton.position = ccp(0.5f, 0.35f);
    [helloWorldButton setTarget:self selector:@selector(onSpinningClicked:)];
    [self addChild:helloWorldButton];

    // done
    return self;
}

This code first calls the initialization method of the superclass by sending the [super init] message. Then it creates a gray-colored background with a CCNodeColor class, which is basically a solid color node, but this background won't be shown until it's added to the scene, which is exactly what [self addChild:background] does.

The red "Hello World" label you can see in the previous screenshot is an instance of the CCLabelTTF class, whose position will be centered on the screen thanks to label.position = ccp(0.5f, 0.5f). Cocos2d provides the ccp(coord_x, coord_y) macro, which is a precompiler macro for CGPointMake, and the two can be used interchangeably.

The last code block creates a CCButton that will call onSpinningClicked once we click on it. This source code isn't hard at all, but what will happen when we click on the Start button? Don't be shy, go back to the iOS Simulator and find out!

If you take a look at the onSpinningClicked method in IntroScene.m, you will understand what happened:

- (void)onSpinningClicked:(id)sender {
    // start spinning scene with transition
    [[CCDirector sharedDirector] replaceScene:[HelloWorldScene scene]
        withTransition:[CCTransition transitionPushWithDirection:CCTransitionDirectionLeft duration:1.0f]];
}

This code replaces the current scene (IntroScene) with HelloWorldScene, pushing the new scene in with a horizontal scroll transition that lasts 1.0 second.
Let's take a look at HelloWorldScene.m to understand the behavior we just experienced:

@implementation HelloWorldScene {
    CCSprite *_sprite;
}

- (id)init {
    // Apple recommends assigning self with super's return value
    self = [super init];
    if (!self) {
        return(nil);
    }

    // Enable touch handling on scene node
    self.userInteractionEnabled = YES;

    // Create a colored background (Dark Gray)
    CCNodeColor *background = [CCNodeColor nodeWithColor:[CCColor colorWithRed:0.2f green:0.2f blue:0.2f alpha:1.0f]];
    [self addChild:background];

    // Add a sprite
    _sprite = [CCSprite spriteWithImageNamed:@"Icon-72.png"];
    _sprite.position = ccp(self.contentSize.width/2, self.contentSize.height/2);
    [self addChild:_sprite];

    // Animate sprite with action
    CCActionRotateBy* actionSpin = [CCActionRotateBy actionWithDuration:1.5f angle:360];
    [_sprite runAction:[CCActionRepeatForever actionWithAction:actionSpin]];

    // Create a back button
    CCButton *backButton = [CCButton buttonWithTitle:@"[ Menu ]" fontName:@"Verdana-Bold" fontSize:18.0f];
    backButton.positionType = CCPositionTypeNormalized;
    backButton.position = ccp(0.85f, 0.95f); // Top Right of screen
    [backButton setTarget:self selector:@selector(onBackClicked:)];
    [self addChild:backButton];

    // done
    return self;
}

This piece of code is very similar to the one we saw in IntroScene.m, which is why we just need to focus on the differences. At the top of the class, we declare a private instance of the CCSprite class, which is also a subclass of CCNode; its main role is to render 2D images on the screen. The CCSprite class is one of the most-used classes in Cocos2d game development, as it provides a visual representation and a physical shape to the objects in view.

Then, in the init method, you will see the instruction self.userInteractionEnabled = YES, which is used to enable the current scene to detect and manage touches by implementing the touchBegan method.

The next thing to highlight is how we initialize a CCSprite class using an image, positioning it in the center of the screen. If you read a couple more lines, you will understand why the icon rotates as soon as the scene is loaded: we create a 360-degree rotation action with CCActionRotateBy that lasts 1.5 seconds. But why is this rotation repeated over and over? This happens thanks to CCActionRepeatForever, which will execute the rotation action as long as the scene is running.

The last piece of code in the init method doesn't need an explanation, as it creates a CCButton that will execute onBackClicked once clicked. This method replaces the HelloWorldScene scene with IntroScene in a similar way as we saw before, with only one difference: the transition happens from left to right.

Did you try to touch the screen? Try it and you will understand why touchBegan has the following code:

-(void) touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
    CGPoint touchLoc = [touch locationInNode:self];

    // Move our sprite to touch location
    CCActionMoveTo *actionMove = [CCActionMoveTo actionWithDuration:1.0f position:touchLoc];
    [_sprite runAction:actionMove];
}

This is one of the methods you need to implement to manage touches. The others are touchMoved, touchEnded, and touchCancelled. When the user begins touching the screen, the sprite will move to the registered coordinates thanks to a commonly used action: CCActionMoveTo.
This action just needs to know the position that we want to move our sprite to and the duration of the movement.

Now that we have had an overview of the initial project code, it is time to go deeper into some of the classes we have shown. Did you realize that CCNode is the parent class of several of the classes we have seen? You will understand why if you keep reading.

Summary

In this article, we had our first contact with a Cocos2d project. We ran a new project and took an overview of it, understanding some of the classes that are part of this framework.

Resources for Article:

Further resources on this subject:

Dragging a CCNode in Cocos2D-Swift [Article]
Animations in Cocos2d-x [Article]
Why should I make cross-platform games? [Article]
What is Kali Linux

Packt
05 Feb 2015
1 min read
This article, created by Aaron Johns, the author of Mastering Wireless Penetration Testing for Highly Secured Environments, introduces Kali Linux and the steps needed to get started. Kali Linux is a security penetration testing distribution built on Debian Linux. It covers many different varieties of security tools, each of which is organized by category. Let's begin by downloading and installing Kali Linux! (For more resources related to this topic, see here.)

Downloading Kali Linux

Congratulations, you have now started your first hands-on experience in this article! I'm sure you are excited, so let's begin!

1. Visit http://www.kali.org/downloads/.
2. Look under the Official Kali Linux Downloads section.
3. In this demonstration, I will be downloading and installing Kali Linux 1.0.6 32 Bit ISO, so click on the Kali Linux 1.0.6 32 Bit ISO hyperlink to download it.

Depending on your Internet connection, this may take an hour to download, so please prepare yourself ahead of time so that you do not have to wait on this download. Those who have a slow Internet connection may want to consider downloading from a faster source within the local area. Restrictions on downloading may apply in public locations, so please make sure you have permission to download Kali Linux before doing so.

Installing Kali Linux in VMware Player

Once you have finished downloading Kali Linux, you will want to make sure you have VMware Player installed. VMware Player is where you will be installing Kali Linux. If you are not familiar with VMware Player, it is simply a type of virtualization software that emulates an operating system without requiring another physical system. You can create multiple operating systems and run them simultaneously. Perform the following steps:

1. Start off by opening VMware Player from your desktop. VMware Player should open and display a graphical user interface.
2. Click on Create a New Virtual Machine on the right.
3. Select I will install the operating system later and click on Next.
4. Select Linux and then Debian 7 from the drop-down menu, then click on Next to continue.
5. Type Kali Linux for the virtual machine name.
6. Browse for the Kali Linux ISO file that was downloaded earlier, then click on Next.
7. Change the disk size from 25 GB to 50 GB and then click on Next.
8. Click on Finish. Kali Linux should now be displayed in your VMware Player library. From here, you can click on Customize Hardware... to increase the RAM or hard disk space, or change the network adapters according to your system's hardware.
9. Click on Play virtual machine.
10. Click on Player at the top-left and then navigate to Removable Devices | CD/DVD IDE | Settings....
11. Check the box next to Connected, select Use ISO image file, browse for the Kali Linux ISO, then click on OK.
12. Click on Restart VM at the bottom of the screen, or click on Player and then navigate to Power | Restart Guest.
13. After restarting the virtual machine, select Live (686-pae) and then press Enter. It should boot into Kali Linux and take you to the desktop screen.

Congratulations! You have successfully installed Kali Linux.

Updating Kali Linux

Before we can get started with any of the demonstrations in this book, we must update Kali Linux to help keep the software packages up to date.

1. Open VMware Player from your desktop.
2. Select Kali Linux and click on the green arrow to boot it.
3. Once Kali Linux has booted up, open a new Terminal window.
4. Type sudo apt-get update and press Enter.
5. Then type sudo apt-get upgrade and press Enter.
6. You will be prompted to specify whether you want to continue. Type y and press Enter.
7. Repeat these commands until there are no more updates:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

Congratulations! You have successfully updated Kali Linux!

Summary

This was just the introduction to help prepare you before we get deeper into advanced technical demonstrations and hands-on examples. We did our first hands-on work by installing and updating Kali Linux in VMware Player.

Resources for Article:

Further resources on this subject:

Veil-Evasion [article]
Penetration Testing and Setup [article]
Wireless and Mobile Hacks [article]
The Chain of Responsibility Pattern

Packt
05 Feb 2015
12 min read
In this article by Sakis Kasampalis, author of the book Mastering Python Design Patterns, we will see a detailed description of the Chain of Responsibility design pattern with the help of a real-life example as well as a software example. Also, its use cases and implementation are discussed. (For more resources related to this topic, see here.)

When developing an application, most of the time we know in advance which method should satisfy a particular request. However, this is not always the case. For example, think of any broadcast computer network, such as the original Ethernet implementation [j.mp/wikishared]. In broadcast computer networks, all requests are sent to all nodes (broadcast domains are excluded for simplicity), but only the nodes that are interested in a sent request process it. All computers that participate in a broadcast network are connected to each other using a common medium, such as the cable that connects the three nodes in the following figure.

If a node is not interested in a request or does not know how to handle it, it can perform the following actions:

- Ignore the request and do nothing
- Forward the request to the next node

The way in which the node reacts to a request is an implementation detail. However, we can use the analogy of a broadcast computer network to understand what the Chain of Responsibility pattern is all about. The Chain of Responsibility pattern is used when we want to give a chance to multiple objects to satisfy a single request, or when we don't know in advance which object (from a chain of objects) should process a specific request. The principle is the following (a minimal sketch follows the list):

- There is a chain (linked list, tree, or any other convenient data structure) of objects.
- We start by sending a request to the first object in the chain.
- The object decides whether it should satisfy the request or not.
- The object forwards the request to the next object.
- This procedure is repeated until we reach the end of the chain.
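Here is a minimal sketch of that principle in Python. The names (Node, can_handle, send) are illustrative only and not part of the book's code; the article's real implementation, based on widgets and events, comes later:

class Node:
    """One processing element in the chain."""
    def __init__(self, can_handle, successor=None):
        self.can_handle = can_handle  # predicate deciding whether this node acts
        self.successor = successor    # the next node in the chain, if any

    def send(self, request):
        if self.can_handle(request):
            print('handled:', request)
        elif self.successor:
            self.successor.send(request)  # forward the request to the next node
        else:
            print('end of chain reached for:', request)

# The client only knows about the head of the chain.
chain = Node(lambda r: r == 'ping', Node(lambda r: r == 'pong'))
chain.send('pong')  # forwarded once, then handled
chain.send('quit')  # end of chain reached for: quit

Note that the client never learns which node did the work; it simply hands the request to the head, which is exactly the decoupling described above.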
At the application level, instead of talking about cables and network nodes, we can focus on objects and the flow of a request. The following figure, courtesy of www.sourcemaking.com [j.mp/smchain], shows how the client code sends a request to all the processing elements (also known as nodes or handlers) of an application.

Note that the client code only knows about the first processing element, instead of having references to all of them, and each processing element only knows about its immediate next neighbor (called the successor), not about every other processing element. This is usually a one-way relationship, which in programming terms means a singly linked list, in contrast to a doubly linked list; a singly linked list does not allow navigation in both ways, while a doubly linked list allows that. This chain organization is used for a good reason: it achieves decoupling between the sender (client) and the receivers (processing elements) [GOF95, page 254].

A real-life example

ATMs and, in general, any kind of machine that accepts/returns banknotes or coins (for example, a snack vending machine) use the Chain of Responsibility pattern. There is always a single slot for all banknotes, as shown in the following figure, courtesy of www.sourcemaking.com. When a banknote is dropped, it is routed to the appropriate receptacle. When it is returned, it is taken from the appropriate receptacle [j.mp/smchain], [j.mp/c2chain]. We can think of the single slot as the shared communication medium and the different receptacles as the processing elements. The result contains cash from one or more receptacles. For example, in the preceding figure, we see what happens when we request $175 from the ATM.

A software example

I tried to find some good examples of Python applications that use the Chain of Responsibility pattern but I couldn't, most likely because Python programmers don't use this name. So, my apologies, but I will use other programming languages as a reference.

The servlet filters of Java are pieces of code that are executed before an HTTP request arrives at a target. When using servlet filters, there is a chain of filters. Each filter performs a different action (user authentication, logging, data compression, and so forth), and either forwards the request to the next filter until the chain is exhausted, or breaks the flow if there is an error (for example, the authentication failed three consecutive times) [j.mp/soservl].

Apple's Cocoa and Cocoa Touch frameworks use Chain of Responsibility to handle events. When a view receives an event that it doesn't know how to handle, it forwards the event to its superview. This goes on until a view is capable of handling the event or the chain of views is exhausted [j.mp/chaincocoa].

Use cases

By using the Chain of Responsibility pattern, we give a chance to a number of different objects to satisfy a specific request. This is useful when we don't know in advance which object should satisfy a request. An example is a purchase system. In purchase systems, there are many approval authorities. One approval authority might be able to approve orders up to a certain value, let's say $100. If the order is for more than $100, the order is sent to the next approval authority in the chain, which can approve orders up to $200, and so forth; a sketch of such a chain follows at the end of this section.

Another case where Chain of Responsibility is useful is when we know that more than one object might need to process a single request. This is what happens in event-based programming. A single event, such as a left mouse click, can be caught by more than one listener.

It is important to note that the Chain of Responsibility pattern is not very useful if all the requests can be taken care of by a single processing element, unless we really don't know which element that is. The value of this pattern is the decoupling that it offers. Instead of having a many-to-many relationship between a client and all processing elements (and the same is true regarding the relationship between a processing element and all other processing elements), a client only needs to know how to communicate with the start (head) of the chain. The following figure demonstrates the difference between tight and loose coupling. The idea behind loosely coupled systems is to simplify maintenance and make it easier for us to understand how they function [j.mp/loosecoup].
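To make the purchase example concrete, here is a minimal sketch of such an approval chain. The class name, the authority names, and the limits are illustrative assumptions, not code from the book:

class Approver:
    """An approval authority that handles orders up to a given limit."""
    def __init__(self, name, limit, successor=None):
        self.name = name
        self.limit = limit
        self.successor = successor  # the next approval authority in the chain

    def approve(self, amount):
        if amount <= self.limit:
            print('{} approved the ${} order'.format(self.name, amount))
        elif self.successor:
            self.successor.approve(amount)  # forward the request up the chain
        else:
            print('No one can approve a ${} order'.format(amount))

# The client only talks to the head of the chain.
director = Approver('Director', 200)
manager = Approver('Manager', 100, director)

manager.approve(80)   # Manager approved the $80 order
manager.approve(150)  # Director approved the $150 order
manager.approve(999)  # No one can approve a $999 order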
Implementation

There are many ways to implement Chain of Responsibility in Python, but my favorite implementation is the one by Vespe Savikko [j.mp/savviko]. Vespe's implementation uses dynamic dispatching in a Pythonic style to handle requests [j.mp/ddispatch]. Let's implement a simple event-based system using Vespe's implementation as a guide. The following is the UML class diagram of the system.

The Event class describes an event. We'll keep it simple, so in our case an event has only a name:

class Event:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

The Widget class is the core class of the application. The parent aggregation shown in the UML diagram indicates that each widget can have a reference to a parent object, which, by convention, we assume is a Widget instance. Note, however, that according to the rules of inheritance, an instance of any of the subclasses of Widget (for example, an instance of MsgText) is also an instance of Widget. The default value of parent is None:

class Widget:
    def __init__(self, parent=None):
        self.parent = parent

The handle() method uses dynamic dispatching through hasattr() and getattr() to decide who is the handler of a specific request (event). If the widget that is asked to handle an event does not support it, there are two fallback mechanisms. If the widget has a parent, then the handle() method of the parent is executed. If the widget has no parent but a handle_default() method, handle_default() is executed:

    def handle(self, event):
        handler = 'handle_{}'.format(event)
        if hasattr(self, handler):
            method = getattr(self, handler)
            method(event)
        elif self.parent:
            self.parent.handle(event)
        elif hasattr(self, 'handle_default'):
            self.handle_default(event)

At this point, you might have realized why the Widget and Event classes are only associated (no aggregation or composition relationships) in the UML class diagram. The association is used to show that the Widget class "knows" about the Event class but does not have any strict references to it, since an event needs to be passed only as a parameter to handle().

MainWindow, MsgText, and SendDialog are all widgets with different behaviors. Not all of these three widgets are expected to be able to handle the same events, and even if they can handle the same event, they might behave differently. MainWindow can handle only the close and default events:

class MainWindow(Widget):
    def handle_close(self, event):
        print('MainWindow: {}'.format(event))

    def handle_default(self, event):
        print('MainWindow Default: {}'.format(event))

SendDialog can handle only the paint event:

class SendDialog(Widget):
    def handle_paint(self, event):
        print('SendDialog: {}'.format(event))

Finally, MsgText can handle only the down event:

class MsgText(Widget):
    def handle_down(self, event):
        print('MsgText: {}'.format(event))

The main() function shows how we can create a few widgets and events, and how the widgets react to those events. All events are sent to all the widgets. Note the parent relationship of each widget. The sd object (an instance of SendDialog) has as its parent the mw object (an instance of MainWindow). However, not all objects need to have a parent that is an instance of MainWindow.
For example, the msg object (an instance of MsgText) has the sd object as a parent:

def main():
    mw = MainWindow()
    sd = SendDialog(mw)
    msg = MsgText(sd)

    for e in ('down', 'paint', 'unhandled', 'close'):
        evt = Event(e)
        print('\nSending event -{}- to MainWindow'.format(evt))
        mw.handle(evt)
        print('Sending event -{}- to SendDialog'.format(evt))
        sd.handle(evt)
        print('Sending event -{}- to MsgText'.format(evt))
        msg.handle(evt)

The following is the full code of the example (chain.py):

class Event:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

class Widget:
    def __init__(self, parent=None):
        self.parent = parent

    def handle(self, event):
        handler = 'handle_{}'.format(event)
        if hasattr(self, handler):
            method = getattr(self, handler)
            method(event)
        elif self.parent:
            self.parent.handle(event)
        elif hasattr(self, 'handle_default'):
            self.handle_default(event)

class MainWindow(Widget):
    def handle_close(self, event):
        print('MainWindow: {}'.format(event))

    def handle_default(self, event):
        print('MainWindow Default: {}'.format(event))

class SendDialog(Widget):
    def handle_paint(self, event):
        print('SendDialog: {}'.format(event))

class MsgText(Widget):
    def handle_down(self, event):
        print('MsgText: {}'.format(event))

def main():
    mw = MainWindow()
    sd = SendDialog(mw)
    msg = MsgText(sd)

    for e in ('down', 'paint', 'unhandled', 'close'):
        evt = Event(e)
        print('\nSending event -{}- to MainWindow'.format(evt))
        mw.handle(evt)
        print('Sending event -{}- to SendDialog'.format(evt))
        sd.handle(evt)
        print('Sending event -{}- to MsgText'.format(evt))
        msg.handle(evt)

if __name__ == '__main__':
    main()

Executing chain.py gives us the following results:

>>> python3 chain.py

Sending event -down- to MainWindow
MainWindow Default: down
Sending event -down- to SendDialog
MainWindow Default: down
Sending event -down- to MsgText
MsgText: down

Sending event -paint- to MainWindow
MainWindow Default: paint
Sending event -paint- to SendDialog
SendDialog: paint
Sending event -paint- to MsgText
SendDialog: paint

Sending event -unhandled- to MainWindow
MainWindow Default: unhandled
Sending event -unhandled- to SendDialog
MainWindow Default: unhandled
Sending event -unhandled- to MsgText
MainWindow Default: unhandled

Sending event -close- to MainWindow
MainWindow: close
Sending event -close- to SendDialog
MainWindow: close
Sending event -close- to MsgText
MainWindow: close

There are some interesting things that we can see in the output. For instance, sending a down event to MainWindow ends up being handled by the default MainWindow handler. Another nice case is that although a close event cannot be handled directly by SendDialog and MsgText, all the close events end up being handled properly by MainWindow. That's the beauty of using the parent relationship as a fallback mechanism.

If you want to spend some more creative time on the event example, you can replace the dumb print statements and add some actual behavior to the listed events. Of course, you are not limited to the listed events. Just add your favorite event and make it do something useful! Another exercise is to add a MsgText instance during runtime that has MainWindow as the parent. Is this hard? Do the same for an event (add a new event to an existing widget). Which is harder?
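As a hedged hint for these two exercises (the snippet below reuses the classes from chain.py; the spin event name is just an example, not from the book), both turn out to be easy thanks to dynamic dispatching:

# Exercise 1: a widget can join the chain at runtime; the parent link
# is just a constructor argument.
mw = MainWindow()
msg2 = MsgText(mw)
msg2.handle(Event('close'))  # MainWindow: close -- the fallback still works

# Exercise 2: a new event only needs a matching handle_<name> method,
# which we can even attach to the class at runtime.
def handle_spin(self, event):
    print('MsgText: {}'.format(event))

MsgText.handle_spin = handle_spin
msg2.handle(Event('spin'))   # MsgText: spin

Because handle() looks up 'handle_{}'.format(event) with hasattr() at call time, neither exercise requires touching the dispatch logic itself.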
Summary

In this article, we covered the Chain of Responsibility design pattern. This pattern is useful for modeling requests and handling events when the number and type of handlers isn't known in advance. Examples of systems that fit well with Chain of Responsibility are event-based systems, purchase systems, and shipping systems.

In the Chain of Responsibility pattern, the sender has direct access to the first node of a chain. If the request cannot be satisfied by the first node, it is forwarded to the next node. This continues until either the request is satisfied by a node or the whole chain is traversed. This design is used to achieve loose coupling between the sender and the receiver(s).

ATMs are an example of Chain of Responsibility. The single slot that is used for all banknotes can be considered the head of the chain. From there, depending on the transaction, one or more receptacles are used to process the transaction; the receptacles can be considered the processing elements of the chain.

Java's servlet filters use the Chain of Responsibility pattern to perform different actions (for example, compression and authentication) on an HTTP request. Apple's Cocoa frameworks use the same pattern to handle events such as button presses and finger gestures.

Resources for Article:

Further resources on this subject:

Exploring Model View Controller [Article]
Analyzing a Complex Dataset [Article]
Automating Your System Administration and Deployment Tasks Over SSH [Article]