The Art of Android Development Using Android Studio

Packt
28 Oct 2015
5 min read
In this article by Mike van Drongelen, the author of the book Android Studio Cookbook, you will see why Android Studio is the number one IDE for developing Android apps. It is available for free to anyone who wants to develop professional Android apps. Android Studio is not just a stable and fast IDE (based on JetBrains IntelliJ IDEA); it also comes with great extras such as Gradle, better refactoring methods, and a much better layout editor, to name just a few. If you have been using Eclipse before, you are going to love this IDE.

Android Studio tip
Want to refactor your code? Use the shortcut Ctrl + T (for Windows: Ctrl + Alt + Shift + T) to see what options you have. You can, for example, rename a class or method, or extract code from a method.

Any type of Android app can be developed using Android Studio. Think of apps for phones, phablets, tablets, TVs, cars, glasses, and other wearables such as watches. Or consider an app that uses a cloud-based backend such as Parse or App Engine, a watch face app, or even a complete media center solution for TV.

So, what is in the book?
The sky is the limit, and the book will help you make the right choices while developing your apps. For example, on smaller screens, provide smart navigation and use fragments to make apps look great on a tablet too. Or see how content providers can help you manage and persist data and how to share data among applications. The observer pattern that comes with content providers will save you a lot of time.

Android Studio tip
Do you often need to return to a particular place in your code? Create a bookmark with Cmd + F3 (for Windows: F11). To display a list of bookmarks to choose from, use Cmd + F3 (for Windows: Shift + F11).

Material design
The book will also elaborate on material design. Create great-looking apps using the CardView and RecyclerView widgets. Find out how to create special effects and how to perform smooth transitions. A chapter is dedicated to the Camera2 API and how to capture and preview photos. In addition, you will learn how to apply filters and how to share the results on Facebook.

Android Studio tip
Are you looking for something? Press Shift twice and start typing what you're searching for. Or, to display all recent files, use the Cmd + E shortcut (for Windows: Ctrl + E).

Quality and performance
You will learn about patterns and how support annotations can help you improve the quality of your code. Testing your app is just as important as developing one, and it will take your app to the next level. Aim for a five-star rating in the Google Play Store later. The book shows you how to do unit testing based on JUnit or Robolectric and how to use code analysis tools such as Android Lint. You will also learn about memory optimization using the Android Device Monitor, and how to detect and fix the issues it reveals.

Android Studio tip
You can easily extract code from a method that has become too large. Just mark the code that you want to move and use the shortcut Cmd + Alt + M (for Windows: Ctrl + Alt + M).

Having a physical Android device to test your apps is strongly recommended, but with thousands of Android devices available, testing on real devices could be pretty expensive. Genymotion is a really fast and easy-to-use emulator that comes with many real-world device configurations.

Did all your unit tests succeed? No more OutOfMemoryExceptions? No memory leaks found? Then it is about time to distribute your app to your beta testers. The final chapters explain how to configure your app for a beta release by creating the build types and build flavors that you need. Finally, distribute your app to your beta testers using Google Play and learn from their feedback.

Did you know?
Android Marshmallow (Android 6.0) introduces runtime permissions, which will change the way users grant permissions to an app.

The book The Art of Android Development Using Android Studio contains around 30 real-world recipes, clarifying all the topics discussed. It is a great start for programmers who have been using Eclipse for Android development before, but it is also suitable for new Android developers who already know Java syntax.

Summary
The book explains everything you need to know to find your way around Android Studio and to create high-quality, great-looking apps.

Resources for Article:
Further resources on this subject:
Introducing an Android platform [article]
Testing with the Android SDK [article]
Android Virtual Device Manager [article]

An Introduction to Kibana

Packt
28 Oct 2015
28 min read
In this article by Yuvraj Gupta, author of the book Kibana Essentials, we look at Kibana, a tool that is part of the ELK stack, which consists of Elasticsearch, Logstash, and Kibana. It is built and developed by Elastic. Kibana is a visualization platform that is built on top of Elasticsearch and leverages the functionalities of Elasticsearch.

Logstash is used to push data directly into Elasticsearch. This data is not limited to log data, but can include any type of data. Elasticsearch stores the data that comes as input from Logstash, and Kibana uses the data stored in Elasticsearch to provide visualizations. So, Logstash provides an input stream of data to Elasticsearch, from which Kibana accesses the data and uses it to create visualizations. Kibana acts as an over-the-top layer of Elasticsearch, providing beautiful visualizations for data (structured or unstructured) stored in it.

Kibana is an open source analytics product used to search, view, and analyze data. It provides various types of visualizations to visualize data in the form of tables, charts, maps, histograms, and so on. It also provides a web-based interface that can easily handle a large amount of data. It helps create dashboards that are easy to build and helps query data in real time. Dashboards are nothing but an interface to the underlying JSON documents and are used for saving, templating, and exporting. They are simple to set up and use, which lets us play with the data stored in Elasticsearch in minutes without requiring any coding.

Kibana is an Apache-licensed product that aims to provide a flexible interface combined with the powerful searching capabilities of Elasticsearch. It requires a web server (included in the Kibana 4 package) and any modern web browser, that is, a browser that supports industry standards and renders pages consistently, to work. It connects to Elasticsearch using the REST API. It helps to visualize data in real time with the use of dashboards to provide real-time insights. As Kibana uses the functionalities of Elasticsearch, it is easier to learn Kibana by understanding the core functionalities of Elasticsearch.

In this article, we are going to take a look at the following topics:
The basic concepts of Elasticsearch
Installation of Java
Installation of Elasticsearch
Installation of Kibana
Importing a JSON file into Elasticsearch

Understanding Elasticsearch
Elasticsearch is a search server built on top of Lucene (licensed under Apache), which is completely written in Java. It supports distributed searches in a multitenant environment. It is a scalable search engine, allowing high flexibility to add machines easily. It provides a full-text search engine combined with a RESTful web interface and JSON documents. Elasticsearch harnesses the functionality of the Lucene Java libraries, adding proper APIs, scalability, and flexibility on top of the Lucene full-text search library. All querying done using Elasticsearch, that is, searching text, matching text, creating indexes, and so on, is implemented by Apache Lucene. Without Elastic Shield or another proxy mechanism set up, any user with access to the Elasticsearch API can view all the data stored in the cluster.
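Because Kibana talks to Elasticsearch purely through its REST API, anything Kibana does can also be reproduced with plain HTTP calls. The following minimal C# sketch (not part of the original article; the class and method names are illustrative) issues the same kind of requests as the curl examples shown later in this article, assuming Elasticsearch is running locally on its default port (9200, as described later):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    public static class ElasticsearchRestDemo
    {
        public static async Task RunAsync()
        {
            using (var http = new HttpClient())
            {
                // Cluster information, the same response you get by opening http://localhost:9200 in a browser
                Console.WriteLine(await http.GetStringAsync("http://localhost:9200"));

                // Index a document, equivalent to the curl -XPUT example shown later in this article
                var body = new StringContent("{\"name\": \"Kibana\"}", Encoding.UTF8, "application/json");
                var put = await http.PutAsync("http://localhost:9200/testing/test/1", body);
                Console.WriteLine(put.StatusCode);

                // Search the index, equivalent to the curl -XGET _search example
                Console.WriteLine(await http.GetStringAsync("http://localhost:9200/testing/_search"));
            }
        }
    }

Calling ElasticsearchRestDemo.RunAsync().Wait() from a console application is enough to try this out; the point is simply that every Kibana visualization ultimately boils down to requests like these.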
The basic concepts of Elasticsearch
Let's explore some of the basic concepts of Elasticsearch:

Field: This is the smallest single unit of data stored in Elasticsearch. It is similar to a column in a traditional relational database. Every document contains key-value pairs, which are referred to as fields. A field can contain a single value, such as an integer [27] or a string ["Kibana"], or multiple values, such as an array [1, 2, 3, 4, 5]. The field type is responsible for specifying which type of data can be stored in a particular field, for example, integer, string, date, and so on.

Document: This is the simplest unit of information stored in Elasticsearch. It is a collection of fields. It is considered similar to a row of a table in a traditional relational database. A document can contain any type of entry, such as a document for a single restaurant, another document for a single cuisine, and yet another for a single order. Documents are in JavaScript Object Notation (JSON), which is a language-independent data interchange format. JSON contains key-value pairs. Every document that is stored in Elasticsearch is indexed. Every document contains a type and an ID. An example of a document with JSON values is as follows:

    {
      "name": "Yuvraj",
      "age": 22,
      "birthdate": "2015-07-27",
      "bank_balance": 10500.50,
      "interests": ["playing games", "movies", "travelling"],
      "movie": {"name": "Titanic", "genre": "Romance", "year": 1997}
    }

In the preceding example, we can see that the document supports JSON, having key-value pairs, which are explained as follows:
The name field is of the string type
The age field is of the numeric type
The birthdate field is of the date type
The bank_balance field is of the float type
The interests field contains an array
The movie field contains an object (dictionary)

Type: This is similar to a table in a traditional relational database. It contains a list of fields, which is defined for every document. A type is a logical segregation of indexes, whose interpretation/semantics entirely depends on you. For example, say you have data about the world and you put all of it into an index. In this index, you can define a type for continent-wise data, another type for country-wise data, and a third type for region-wise data. Types are used with a mapping API, which specifies the type of each field. An example of a type mapping is as follows:

    {
      "user": {
        "properties": {
          "name": { "type": "string" },
          "age": { "type": "integer" },
          "birthdate": { "type": "date" },
          "bank_balance": { "type": "float" },
          "interests": { "type": "string" },
          "movie": {
            "properties": {
              "name": { "type": "string" },
              "genre": { "type": "string" },
              "year": { "type": "integer" }
            }
          }
        }
      }
    }

Now, let's take a look at the core data types specified in Elasticsearch:

    Type        Definition
    string      Contains text, for example, "Kibana"
    integer     Contains a 32-bit integer, for example, 7
    long        Contains a 64-bit integer
    float       An IEEE float, for example, 2.7
    double      A double-precision float
    boolean     Can be true or false
    date        A UTC date/time, for example, "2015-06-30T13:10:10"
    geo_point   A latitude/longitude pair

Index: This is a collection of documents (one or more than one). It is similar to a database in the analogy with traditional relational databases. For example, you can have an index for user information, one for transaction information, and one for product data. An index has a mapping; this mapping is used to define multiple types. In other words, an index can contain single or multiple types. An index is defined by a name, which is always used when referring to an index to perform search, update, and delete operations on documents. You can define as many indexes as you require. Indexes also act as logical namespaces that map documents to primary shards, which have zero or more replica shards for replicating data. With respect to traditional databases, the basic analogy is as follows:

    MySQL => Databases => Tables => Columns/Rows
    Elasticsearch => Indexes => Types => Documents with Fields

You can store a single document or multiple documents within a type or index. As a document is within an index, it must also be assigned to a type within that index. Moreover, the maximum number of documents that you can store in a single index is 2,147,483,519 (roughly 2.1 billion), which is just below Integer.MAX_VALUE.

ID: This is an identifier for a document. It is used to identify each document. If it is not defined, it is autogenerated for every document. The combination of index, type, and ID must be unique for each document.

Mapping: Mappings are similar to schemas in a traditional relational database. Every document in an index has a type. A mapping defines the fields, the data type of each field, and how the field should be handled by Elasticsearch. By default, a mapping is automatically generated whenever a document is indexed. If the default settings are overridden, then the mapping's definition has to be provided explicitly.

Node: This is a running instance of Elasticsearch. Each node is part of a cluster. On a standalone machine, each Elasticsearch server instance corresponds to a node. Multiple nodes can be started on a single standalone machine or a single cluster. The node is responsible for storing data and contributes to the indexing/searching capabilities of a cluster. By default, whenever a node is started, it is identified and assigned a random Marvel Comics character name. You can change the configuration file to name nodes as per your requirement. A node also needs to be configured to join a cluster, which is identifiable by the cluster name. By default, all nodes join the elasticsearch cluster; that is, if any number of nodes are started on a network/machine, they will automatically join the elasticsearch cluster.

Cluster: This is a collection of one or multiple nodes that share a single cluster name. Each cluster automatically chooses a master node, which is replaced if it fails; that is, if the master node fails, another random node will be chosen as the new master node, thus providing high availability. The cluster is responsible for holding all of the data stored and provides a unified view for search capabilities across all nodes. By default, the cluster name is elasticsearch, and it is the identifiable parameter for all nodes in a cluster. All nodes, by default, join the elasticsearch cluster. While using a cluster in the production phase, it is advisable to change the cluster name for ease of identification, but the default name can be used for any other purpose, such as development or testing. The Elasticsearch cluster contains single or multiple indexes, which contain single or multiple types. All types contain single or multiple documents, and every document contains single or multiple fields.

Sharding: This is an important concept of Elasticsearch when it comes to understanding how Elasticsearch allows the scaling of nodes for large amounts of data, termed big data.
An index can store any amount of data, but if it exceeds its disk limit, searching becomes slow and is affected. For example, suppose the disk limit is 1 TB and an index contains a large number of documents that may not fit completely within 1 TB on a single node. To counter such problems, Elasticsearch provides shards. These break the index into multiple pieces. Each shard acts as an independent index that is hosted on a node within a cluster. Elasticsearch is responsible for distributing shards among nodes. There are two purposes of sharding: allowing horizontal scaling of the content volume, and improving performance by providing parallel operations across the various shards that are distributed over nodes (single or multiple, depending on the number of nodes running). Elasticsearch also moves shards among nodes in the event of the addition of new nodes or a node failure.

There are two types of shards, as follows:

Primary shard: Every document is stored within a primary shard. By default, every index has five primary shards. This parameter is configurable and can be changed to define more or fewer shards as per the requirement. The number of primary shards has to be defined before the creation of an index; if no parameters are defined, five primary shards are created automatically. Whenever a document is indexed, it is usually done on a primary shard first, followed by the replicas. The number of primary shards defined in an index cannot be altered once the index is created.

Replica shard: Replica shards are an important feature of Elasticsearch. They help provide high availability across the nodes in the cluster. By default, every primary shard has one replica shard. However, every primary shard can have zero or more replica shards as required. In an environment where a failure directly affects the enterprise, it is highly recommended to use a system that provides a failover mechanism to achieve high availability. To counter this problem, Elasticsearch provides a mechanism in which it creates single or multiple copies of indexes; these are termed replica shards or replicas. A replica shard is a full copy of the primary shard. Replica shards can be altered dynamically. There are two purposes of creating replicas: they provide high availability in the event of a failure of a node or a primary shard (if a primary shard fails, a replica shard is automatically promoted to primary), and they increase performance by handling search requests through parallel operations on replica shards. A replica shard is never kept on the same node as the primary shard from which it was copied.

Inverted index: This is also a very important concept in Elasticsearch. It is used to provide fast full-text search. Instead of searching the text directly, it searches an index. It creates an index that lists the unique words occurring in a document, along with the list of documents in which each word occurs. For example, suppose we have three documents. They have a text field, and it contains the following:

    I am learning Kibana
    Kibana is an amazing product
    Kibana is easy to use

To create an inverted index, the text field of each document is broken into words (also known as terms), a list of unique words is created, and a listing is made of the documents in which each term occurs, as shown in this table:

    Term       Doc 1   Doc 2   Doc 3
    I          X
    am         X
    learning   X
    Kibana     X       X       X
    is                 X       X
    an                 X
    amazing            X
    product            X
    easy                       X
    to                         X
    use                        X

Now, if we search for is Kibana, Elasticsearch will use the inverted index to display the results:

    Term     Doc 1   Doc 2   Doc 3
    is               X       X
    Kibana   X       X       X

With inverted indexes, Elasticsearch uses the functionality of Lucene to provide fast full-text search results. An inverted index uses an index based on keywords (terms) instead of a document-based index.

REST API: This stands for Representational State Transfer. It is a stateless client-server protocol that uses HTTP requests to store, view, and delete data. It supports CRUD operations (short for Create, Read, Update, and Delete) over HTTP. It is used to communicate with Elasticsearch and is implemented by all languages. It communicates with Elasticsearch over port 9200 (by default), which is accessible from any web browser. Elasticsearch can also be communicated with directly from the command line using the curl command. cURL is a command-line tool used to send, view, or delete data using URL syntax, following the HTTP structure. A cURL request is similar to an HTTP request, which is as follows:

    curl -X <VERB> '<PROTOCOL>://<HOSTNAME>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'

The terms marked within the <> tags are variables, which are defined as follows:
VERB: This is used to provide an appropriate HTTP method, such as GET (to get data), POST, PUT (to store data), or DELETE (to delete data).
PROTOCOL: This is used to define whether the HTTP or HTTPS protocol is used to send requests.
HOSTNAME: This is used to define the hostname of a node present in the Elasticsearch cluster. By default, the hostname of Elasticsearch is localhost.
PORT: This is used to define the port on which Elasticsearch is running. By default, Elasticsearch runs on port 9200.
PATH: This is used to define the index, type, and ID where the documents will be stored, searched, or deleted. It is specified as index/type/ID.
QUERY_STRING: This is used to define any additional query parameters for searching data.
BODY: This is used to define a JSON-encoded request body.

In order to put data into Elasticsearch, the following curl command is used:

    curl -XPUT 'http://localhost:9200/testing/test/1' -d '{"name": "Kibana" }'

Here, testing is the name of the index, test is the name of the type within the index, and 1 indicates the ID number. To search for the data stored by the preceding command, the following curl command is used:

    curl -XGET 'http://localhost:9200/testing/_search?

The preceding commands are provided just to give you an overview of the format of the curl command.

Prerequisites for installing Kibana 4.1.1
The following pieces of software need to be installed before installing Kibana 4.1.1:
Java 1.8u20+
Elasticsearch v1.4.4+
A modern web browser: IE 10+, Firefox, Chrome, Safari, and so on

The installation process will be covered separately for Windows and Ubuntu so that both types of users are able to understand the process easily.

Installation of Java
In this section, the JDK needs to be installed so as to access Elasticsearch.
Oracle Java 8 (update 20 onwards) will be installed, as it is the recommended version for Elasticsearch from version 1.4.4 onwards.

Installation of Java on Ubuntu 14.04
Install Java 8 using the terminal and the apt package manager in the following manner:

Add the Oracle Java Personal Package Archive (PPA) to the apt repository list:

    sudo add-apt-repository -y ppa:webupd8team/java

In this case, we use a third-party repository; however, the WebUpd8 team is trusted to install Java. It does not include any Java binaries. Instead, the PPA downloads directly from Oracle and installs it. You will initially be prompted for your password when running the sudo command (only when you have not logged in as root), and on successful addition to the repository, you will receive an OK message, which means that the repository has been imported.

Update the apt package database to include all the latest files under the packages:

    sudo apt-get update

Install the latest version of Oracle Java 8:

    sudo apt-get -y install oracle-java8-installer

During the installation, you will be prompted to accept the license agreement. To check whether Java has been successfully installed, type the following command in the terminal:

    java -version

If the version is displayed, Java has been installed successfully.

Installation of Java on Windows
We can install Java on Windows by going through the following steps:

Download the latest version of the Java JDK from the Oracle site at http://www.oracle.com/technetwork/java/javase/downloads/index.html. Click on the DOWNLOAD button of the JDK. You will be redirected to the download page, where you have to first click on the Accept License Agreement radio button and then on the Windows version to download the .exe file.

Double-click on the downloaded file and it will open as an installer. Click on Next, accept the license by reading it, and keep clicking on Next until it shows that the JDK has been installed successfully.

Now, to run Java on Windows, you need to set the path of JAVA in the environment variable settings of Windows. First, open the properties of My Computer. Select Advanced system settings, then the Advanced tab, and click on the Environment Variables option.

After opening Environment Variables, click on New (under System variables) and give the variable name as JAVA_HOME and the variable value as C:\Program Files\Java\jdk1.8.0_45 (do check where the JDK has been installed on your system and provide the path corresponding to the installed version).

Then, double-click on the Path variable (under System variables) and move to the end of the textbox. Insert a semicolon if one is not already there, and add the location of the bin folder of the JDK, like this: %JAVA_HOME%\bin. Next, click on OK in all the windows opened. Do not delete anything already present within the Path variable textbox.

To check whether Java is installed or not, type the following command in Command Prompt:

    java -version

If the version is displayed, Java has been installed successfully.

Installation of Elasticsearch
In this section, Elasticsearch, which is required to access Kibana, will be installed. Elasticsearch v1.5.2 will be installed, and this section covers the installation on Ubuntu and Windows separately.

Installation of Elasticsearch on Ubuntu 14.04
To install Elasticsearch on Ubuntu, perform the following steps:

Download Elasticsearch v1.5.2 as a .tar file using the following command in the terminal:

    curl -L -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.5.2.tar.gz

curl is a package that may not be installed on Ubuntu. To use curl, you need to install the curl package, which can be done using the following command:

    sudo apt-get -y install curl

Extract the downloaded .tar file using this command:

    tar -xvzf elasticsearch-1.5.2.tar.gz

This will extract the files and folders into the current working directory. Navigate to the bin directory within the elasticsearch-1.5.2 directory:

    cd elasticsearch-1.5.2/bin

Now run Elasticsearch to start the node and cluster, using the following command:

    ./elasticsearch

The Elasticsearch node starts and is given a random Marvel Comics character name. If this terminal is closed, Elasticsearch will stop running as this node will shut down. However, if you have multiple Elasticsearch nodes running, then shutting down one node will not shut down Elasticsearch. To verify the Elasticsearch installation, open http://localhost:9200 in your browser.

Installation of Elasticsearch on Windows
The installation on Windows follows steps similar to those for Ubuntu. To use curl commands on Windows, we will install Git. Git will also be used to import a sample JSON file into Elasticsearch using elasticdump, as described in the Importing a JSON file into Elasticsearch section.

Installation of Git
To run curl commands on Windows, first download and install Git by performing the following steps:

Download the Git ZIP package from https://git-scm.com/download/win.
Double-click on the downloaded file, which will walk you through the installation process. Keep clicking on Next, leaving the default options, until the Finish button is clicked.
To validate the Git installation, right-click on any folder; you should be able to see the Git options, such as Git Bash, in the context menu.

The following steps are required to install Elasticsearch on Windows:

Open Git Bash and enter the following command in the terminal:

    curl -L -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.5.2.zip

Extract the downloaded ZIP package by either unzipping it using WinRAR, 7-Zip, and so on (if you don't have any of these, download one of them) or using the following command in Git Bash:

    unzip elasticsearch-1.5.2.zip

This will extract the files and folders into the directory. Open the extracted folder, navigate to the bin folder, and click on the elasticsearch.bat file to run Elasticsearch.

The Elasticsearch node starts and is given a random Marvel Comics character name. Again, if this window is closed, Elasticsearch will stop running as this node will shut down. However, if you have multiple Elasticsearch nodes running, then shutting down one node will not shut down Elasticsearch. To verify the Elasticsearch installation, open http://localhost:9200 in your browser.

Installation of Kibana
In this section, Kibana will be installed.
We will install Kibana v4.1.1, and this section covers the installation on Ubuntu and Windows separately.

Installation of Kibana on Ubuntu 14.04
To install Kibana on Ubuntu, follow these steps:

Download Kibana version 4.1.1 as a .tar file using the following command in the terminal:

    curl -L -O https://download.elasticsearch.org/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz

Extract the downloaded .tar file using this command:

    tar -xvzf kibana-4.1.1-linux-x64.tar.gz

The preceding command will extract the files and folders into the current working directory. Navigate to the bin directory within the kibana-4.1.1-linux-x64 directory:

    cd kibana-4.1.1-linux-x64/bin

Now run Kibana using the following command:

    ./kibana

Make sure that Elasticsearch is running. If it is not running and you try to start Kibana, an error will be displayed after you run the preceding command. To verify the Kibana installation, open http://localhost:5601 in your browser.

Installation of Kibana on Windows
To install Kibana on Windows, perform the following steps:

Open Git Bash and enter the following command in the terminal:

    curl -L -O https://download.elasticsearch.org/kibana/kibana/kibana-4.1.1-windows.zip

Extract the downloaded ZIP package by either unzipping it using WinRAR or 7-Zip (download one if you don't have it), or using the following command in Git Bash:

    unzip kibana-4.1.1-windows.zip

This will extract the files and folders into the directory. Open the extracted folder, navigate to the bin folder, and click on the kibana.bat file to run Kibana.

Make sure that Elasticsearch is running. If it is not running and you try to start Kibana, an error will be displayed after you click on the kibana.bat file. Again, to verify the Kibana installation, open http://localhost:5601 in your browser.

Additional information
You can change the Elasticsearch configuration for your production environment, where you have to change parameters such as the cluster name, node name, network address, and so on. This can be done using the information mentioned in the upcoming sections.

Changing the Elasticsearch configuration
To change the Elasticsearch configuration, perform the following steps:

Run the following command in the terminal to open the configuration file:

    sudo vi ~/elasticsearch-1.5.2/config/elasticsearch.yml

Windows users can open the elasticsearch.yml file from the config folder.

The cluster name can be changed by editing the cluster.name entry, for example from #cluster.name: elasticsearch to cluster.name: "your_cluster_name". In this example, the cluster name is changed to test. Then, save the file.

To verify that the cluster name has been changed, run Elasticsearch as mentioned in the earlier section and open http://localhost:9200 in the browser. You will notice that cluster_name has changed to test, as specified earlier.

Changing the Kibana configuration
To change the Kibana configuration, follow these steps:

Run the following command in the terminal to open the configuration file:

    sudo vi ~/kibana-4.1.1-linux-x64/config/kibana.yml

Windows users can open the kibana.yml file from the config folder.

In this file, you can change various parameters, such as the port on which Kibana works, the host address on which Kibana works, the URL of the Elasticsearch instance that you wish to connect to, and so on. For example, the port on which Kibana works can be changed by changing the port setting: port: 5601 can be changed to any other port, such as port: 5604. Then save the file.

To check whether Kibana is running on port 5604, run Kibana as mentioned earlier and open http://localhost:5604 in the browser to verify.

Importing a JSON file into Elasticsearch
To import a JSON file into Elasticsearch, we will use the elasticdump package. It is a set of import and export tools used for Elasticsearch. It makes it easier to copy, move, and save indexes. To install elasticdump, we require npm and Node.js as prerequisites.

Installation of npm
In this section, npm along with Node.js will be installed. This section covers the installation of npm and Node.js on Ubuntu and Windows separately.

Installation of npm on Ubuntu 14.04
To install npm on Ubuntu, perform the following steps:

Add the official Node.js PPA:

    sudo curl --silent --location https://deb.nodesource.com/setup_0.12 | sudo bash -

This command adds the official Node.js repository to the system and updates the apt package database to include all the latest files under the packages. At the end of the execution of this command, we are prompted to install Node.js and npm.

Install Node.js by entering this command in the terminal:

    sudo apt-get install --yes nodejs

This will automatically install Node.js and npm, as npm is bundled with Node.js.

To check whether Node.js has been installed successfully, type the following command in the terminal:

    node -v

Upon successful installation, it will display the version of Node.js. Now, to check whether npm has been installed successfully, type the following command in the terminal:

    npm -v

Upon successful installation, it will show the version of npm.

Installation of npm on Windows
To install npm on Windows, follow these steps:

Download the Windows Installer (.msi) file from https://nodejs.org/en/download/.
Double-click on the downloaded file and keep clicking on Next to install the software.
To validate the successful installation of Node.js, open Git Bash and enter:

    node -v

Upon successful installation, you will be shown the version of Node.js.

To validate the successful installation of npm, enter the following in Git Bash:

    npm -v

Upon successful installation, it will show the version of npm.

Installing elasticdump
In this section, elasticdump will be installed. It will be used to import a JSON file into Elasticsearch. It requires npm and Node.js to be installed. This section covers the installation on Ubuntu and Windows separately.

Installing elasticdump on Ubuntu 14.04
Perform these steps to install elasticdump on Ubuntu:

Install elasticdump by typing the following command in the terminal:

    sudo npm install elasticdump -g

Then run elasticdump by typing this command in the terminal:

    elasticdump

Import a sample data (JSON) file into Elasticsearch; it can be downloaded from https://github.com/guptayuvraj/Kibana_Essentials and is named tweet.json. It is imported into Elasticsearch using the following command in the terminal:

    elasticdump --bulk=true --input="/home/yuvraj/Desktop/tweet.json" --output=http://localhost:9200/

Here, input provides the location of the file. Data is imported into Elasticsearch from the tweet.json file, and the dump complete message is displayed when all the records have been imported successfully. Elasticsearch should be running while importing the sample file.

Installing elasticdump on Windows
To install elasticdump on Windows, perform the following steps:

Install elasticdump by typing the following command in Git Bash:

    npm install elasticdump -g

Then run elasticdump by typing this command in Git Bash:

    elasticdump

Import the sample data (JSON) file into Elasticsearch, which can be downloaded from https://github.com/guptayuvraj/Kibana_Essentials and is named tweet.json. It is imported into Elasticsearch using the following command in Git Bash:

    elasticdump --bulk=true --input="C:\Users\ygupta\Desktop\tweet.json" --output=http://localhost:9200/

Here, input provides the location of the file. Data is imported into Elasticsearch from the tweet.json file, and the dump complete message is displayed when all the records have been imported successfully. Elasticsearch should be running while importing the sample file.

To verify that the data has been imported into Elasticsearch, open http://localhost:5601 in your browser. When Kibana is opened, you have to configure an index pattern. If the data has been imported, you can enter the index name, which is mentioned in the tweet.json file as index: tweet. After the page loads, you can see, on the left under Index Patterns, the name of the index that has been imported (tweet). Now enter the index name tweet. Kibana will then automatically detect the timestamped field and provide you with an option to select it. If there are multiple fields, you can select one by clicking on Time-field name, which provides a drop-down list of all the available fields. Finally, click on Create to create the index in Kibana. After you have clicked on Create, it will display the various fields present in this index. If you do not get the Time-field name and Create options after entering the index name as tweet, it means that the data has not been imported into Elasticsearch.

Summary
In this article, you learned about Kibana, along with the basic concepts of Elasticsearch. These help in the easy understanding of Kibana. We also looked at the prerequisites for installing Kibana, followed by a detailed explanation of how to install each component individually on Ubuntu and Windows.

Resources for Article:
Further resources on this subject:
Understanding Ranges [article]
Working On Your Bot [article]
Welcome to the Land of BludBorne [article]

Putting Your Database at the Heart of Azure Solutions

Packt
28 Oct 2015
19 min read
In this article by Riccardo Becker, author of the book Learning Azure DocumentDB, we will see how to build a real scenario around an Internet of Things use case. This scenario will build a basic Internet of Things platform that can help accelerate building your own. In this article, we will cover the following:

Have a look at a fictitious scenario
Learn how to combine Azure components with DocumentDB
Demonstrate how to migrate data to DocumentDB

Introducing an Internet of Things scenario
Before we start exploring different capabilities to support a real-life scenario, we will briefly explain the scenario we will use throughout this article.

IoT, Inc.
IoT, Inc. is a fictitious start-up company that is planning to build solutions in the Internet of Things domain. The first solution they will build is a registration hub, where IoT devices can be registered. These devices can be diverse, ranging from home automation devices up to devices that control traffic lights and street lights. The main use case for this solution is offering the capability for devices to register themselves against a hub. The hub will be built with DocumentDB as its core component and a Web API to expose this functionality. Before devices can register themselves, they need to be whitelisted in order to prevent malicious devices from registering.

The first version of the solution contains the following components:

A Web API containing methods to whitelist, register, unregister, and suspend devices
DocumentDB, containing all the device information, including information regarding other Microsoft Azure resources
Event Hubs, a Microsoft Azure service that enables a scalable publish-subscribe mechanism to ingress and egress millions of events per second
Power BI, Microsoft's online offering to expose reporting capabilities and the ability to share reports

Obviously, we will focus on the core of the solution, which is DocumentDB, but it is nice to touch on some of the other Azure components as well, to see how well they co-operate and how easy it is to set up a demonstration for IoT scenarios. The devices themselves are chosen randomly and will be mimicked by an emulator written in C#. The Web API will expose the functionality required to let devices register themselves with the solution and start sending data afterwards (which will be ingested into the Event Hub and reported on using Power BI).

Technical requirements
To be able to service potentially millions of devices, it is necessary that a registration request from a device is stored in a separate collection based on the country where the device is located or manufactured. Every device is modeled in the same way, whereas additional metadata can be provided upon registration or afterwards when updating. The requirements are:

To achieve country-based partitioning, we will create a custom PartitionResolver.
To extend the basic security model, we reduce the amount of sensitive information in our configuration files.
To enhance searching capabilities, because we want to service multiple types of devices, each with their own metadata and device-specific information, querying on all the information is desired to support full-text search and enable users to quickly search and find their devices.

Designing the model
Every device is modeled in a similar way so that we are able to service multiple types of devices. The device model contains at least the deviceid and a location. Furthermore, the device model contains a dictionary where additional device properties can be stored. The next code snippet shows the device model:

    [JsonProperty("id")]
    public string DeviceId { get; set; }

    [JsonProperty("location")]
    public Point Location { get; set; }

    //practically store any metadata information for this device
    [JsonProperty("metadata")]
    public IDictionary<string, object> MetaData { get; set; }

The Location property is of type Microsoft.Azure.Documents.Spatial.Point because we want to run spatial queries later on in this section, for example, getting all the devices within 10 kilometers of a building.

Building a custom partition resolver
To meet the first technical requirement (partition data based on the country), we need to build a custom partition resolver. To be able to build one, we need to implement the IPartitionResolver interface and add some logic. The resolver will take the Location property of the device model and retrieve the country that corresponds with the latitude and longitude provided upon registration. In the following code snippet, you see the full implementation of the GeographyPartitionResolver class:

    public class GeographyPartitionResolver : IPartitionResolver
    {
        private readonly DocumentClient _client;
        private readonly BingMapsHelper _helper;
        private readonly Database _database;

        public GeographyPartitionResolver(DocumentClient client, Database database)
        {
            _client = client;
            _database = database;
            _helper = new BingMapsHelper();
        }

        public object GetPartitionKey(object document)
        {
            //get the country for this document
            //document should be of type DeviceModel
            if (document.GetType() == typeof(DeviceModel))
            {
                //get the Location and translate it to a country
                var country = _helper.GetCountryByLatitudeLongitude(
                    (document as DeviceModel).Location.Position.Latitude,
                    (document as DeviceModel).Location.Position.Longitude);
                return country;
            }
            return String.Empty;
        }

        public string ResolveForCreate(object partitionKey)
        {
            //get the country for this partition key
            //check if there is a collection for the country found
            var countryCollection = _client.CreateDocumentCollectionQuery(_database.SelfLink)
                .ToList()
                .Where(cl => cl.Id.Equals(partitionKey.ToString()))
                .FirstOrDefault();
            if (null == countryCollection)
            {
                countryCollection = new DocumentCollection { Id = partitionKey.ToString() };
                countryCollection =
                    _client.CreateDocumentCollectionAsync(_database.SelfLink, countryCollection).Result;
            }
            return countryCollection.SelfLink;
        }

        /// <summary>
        /// Returns a list of collection links for the designated partition key (one per country)
        /// </summary>
        public IEnumerable<string> ResolveForRead(object partitionKey)
        {
            var countryCollection = _client.CreateDocumentCollectionQuery(_database.SelfLink)
                .ToList()
                .Where(cl => cl.Id.Equals(partitionKey.ToString()))
                .FirstOrDefault();

            return new List<string>
            {
                countryCollection.SelfLink
            };
        }
    }

In order to have the DocumentDB client use this custom PartitionResolver, we need to assign it. The code is as follows:

    GeographyPartitionResolver resolver = new GeographyPartitionResolver(docDbClient, _database);
    docDbClient.PartitionResolvers[_database.SelfLink] = resolver;

To add a typical device and have the resolver sort out which country is involved and whether or not the collection already exists (creating a collection for the country if needed), use the next code snippet:

    var deviceInAmsterdam = new DeviceModel
    {
        DeviceId = Guid.NewGuid().ToString(),
        Location = new Point(4.8951679, 52.3702157)
    };

    Document modelAmsDocument = docDbClient.CreateDocumentAsync(_database.SelfLink,
        deviceInAmsterdam).Result;

    //get all the devices in Amsterdam
    var doc = docDbClient.CreateDocumentQuery<DeviceModel>(
        _database.SelfLink, null, resolver.GetPartitionKey(deviceInAmsterdam));

Now that we have created a country-based PartitionResolver, we can start working on the Web API that exposes the registration method.

Building the Web API
A Web API is an online service that can be used by any client running any framework that supports the HTTP programming stack. Currently, REST is the common way of interacting with APIs, so we will build a REST API. Building a good API should aim for platform independence. A well-designed API should also be able to extend and evolve without affecting existing clients. First, we need to whitelist the devices that should be able to register themselves against our device registry. The whitelist should at least contain a device ID, a unique identifier for a device that is used to match during the whitelisting process. A good candidate for a device ID is the MAC address of the device or some random GUID.

Registering a device
The registration Web API contains a POST method that does the actual registration. First, it creates access to an Event Hub (not explained here) and stores the credentials needed inside the DocumentDB document. The document is then created inside the designated collection (based on the location). To learn more about Event Hubs, please visit https://azure.microsoft.com/en-us/services/event-hubs/.
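Before looking at that POST method, here is a short aside on the spatial queries mentioned when designing the model, such as finding all the devices within 10 kilometers of a building. The following is only an illustrative sketch (the collection link variable and the coordinates are placeholders, not part of the book's sample code), using DocumentDB's ST_DISTANCE built-in, which returns the distance in meters between two GeoJSON points:

    // Sketch: find devices within 10 km (10,000 m) of a given point.
    // Assumes docDbClient is the DocumentClient used above and
    // netherlandsCollectionLink points to the collection resolved for the Netherlands.
    var nearbyDevices = docDbClient.CreateDocumentQuery<DeviceModel>(
        netherlandsCollectionLink,
        new SqlQuerySpec(
            "SELECT * FROM devices d " +
            "WHERE ST_DISTANCE(d.location, {'type': 'Point', 'coordinates': [4.8951679, 52.3702157]}) < 10000"))
        .ToList();

    foreach (var device in nearbyDevices)
    {
        Console.WriteLine(device.DeviceId);
    }

Note that spatial indexing should be enabled on the location path for such queries to perform well on larger collections. With that aside out of the way, the registration POST method itself is shown next.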
[Route("api/registration")]         [HttpPost]         public async Task<IHttpActionResult> Post([FromBody]DeviceModel value)         {             //add the device to the designated documentDB collection (based on country)             try             { var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", serviceBusNamespace,                     String.Format("{0}/publishers/{1}", "telemetry", value.DeviceId))                     .ToString()                     .Trim('/');                 var sasToken = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(EventHubKeyName,                     EventHubKey, serviceUri, TimeSpan.FromDays(365 * 100)); // hundred years will do                 //this token can be used by the device to send telemetry                 //this token and the eventhub name will be saved with the metadata of the document to be saved to DocumentDB                 value.MetaData.Add("Namespace", serviceBusNamespace);                 value.MetaData.Add("EventHubName", "telemetry");                 value.MetaData.Add("EventHubToken", sasToken);                 var document = await docDbClient.CreateDocumentAsync(_database.SelfLink, value);                 return Created(document.ContentLocation, value);            }             catch (Exception ex)             {                 return InternalServerError(ex);             }         } After this registration call, the right credentials on the Event Hub have been created for this specific device. The device is now able to ingress data to the Event Hub and have consumers like Power BI consume the data and present it. Event Hubs is a highly scalable publish-subscribe event ingestor. It can collect millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Once collected into Event Hubs, you can transform and store the data by using any real-time analytics provider or with batching/storage adapters. At the time of writing, Microsoft announced the release of Azure IoT Suite and IoT Hubs. These solutions offer internet of things capabilities as a service and are well-suited to build our scenario as well. Increasing searching We have seen how to query our documents and retrieve the information we need. For this approach, we need to understand the DocumentDB SQL language. Microsoft has an online offering that enables full-text search called Azure Search service. This feature enables us to perform full-text searches and it also includes search behaviours similar to search engines. We could also benefit from so called type-ahead query suggestions based on the input of a user. Imagine a search box on our IoT Inc. portal that offers free text searching while the user types and search for devices that include any of the search terms on the fly. Azure Search runs on Azure; therefore, it is scalable and can easily be upgraded to offer more search and storage capacity. Azure Search stores all your data inside an index, offering full-text search capabilities on your data. Setting up Azure Search Setting up Azure Search is pretty straightforward and can be done by using the REST API it offers or on the Azure portal. We will set up the Azure Search service through the portal and later on, we will utilize the REST API to start configuring our search service. We set up the Azure Search service through the Azure portal (http://portal.azure.com). Find the Search service and fill out some information. 
In the following screenshot, we can see how we have created the free tier for Azure Search: You can see that we use the Free tier for this scenario and that there are no datasources configured yet. We will do that know by using the REST API. We will use the REST API, since it offers more insight on how the whole concept works. We use Fiddler to create a new datasource inside our search environment. The following screenshot shows how to use Fiddler to create a datasource and add a DocumentDB collection: In the Composer window of Fiddler, you can see we need to POST a payload to the Search service we created earlier. The Api-Key is mandatory and also set the content type to be JSON. Inside the body of the request, the connection information to our DocumentDB environment is need and the collection we want to add (in this case, Netherlands). Now that we have added the collection, it is time to create an Azure Search index. Again, we use Fiddler for this purpose. Since we use the free tier of Azure Search, we can only add five indexes at most. For this scenario, we add an index on ID (device ID), location, and metadata. At the time of writing, Azure Search does not support complex types. Note that the metadata node is represented as a collection of strings. We could check in the portal to see if the creation of the index was successful. Go to the Search blade and select the Search service we have just created. You can check the indexes part to see whether the index was actually created. The next step is creating an indexer. An indexer connects the index with the provided data source. Creating this indexer takes some time. You can check in the portal if the indexing process was successful. We actually find that documents are part of the index now. If your indexer needs to process thousands of documents, it might take some time for the indexing process to finish. You can check the progress of the indexer using the REST API again. https://iotinc.search.windows.net/indexers/deviceindexer/status?api-version=2015-02-28 Using this REST call returns the result of the indexing process and indicates if it is still running and also shows if there are any errors. Errors could be caused by documents that do not have the id property available. The final step involves testing to check whether the indexing works. We will search for a device ID, as shown in the next screenshot: In the Inspector tab, we can check for the results. It actually returns the correct document also containing the location field. The metadata is missing because complex JSON is not supported (yet) at the time of writing. Indexing complex JSON types is not supported yet. It is possible to add SQL queries to the data source. We could explicitly add a SELECT statement to surface the properties of the complex JSON we have like metadata or the Point property. Try adding additional queries to your data source to enable querying complex JSON types. Now that we have created an Azure Search service that indexes our DocumentDB collection(s), we can build a nice query-as-you-type field on our portal. Try this yourself. Enhancing security Microsoft Azure offers a capability to move your secrets away from your application towards Azure Key Vault. Azure Key Vault helps to protect cryptographic keys, secrets, and other information you want to store in a safe place outside your application boundaries (connectionstring are also good candidates). Key Vault can help us to protect the DocumentDB URI and its key. 
DocumentDB has no (in-place) encryption feature at the time of writing, although a lot of people already asked for it to be on the roadmap. Creating and configuring Key Vault Before we can use Key Vault, we need to create and configure it first. The easiest way to achieve this is by using PowerShell cmdlets. Please visit https://msdn.microsoft.com/en-us/mt173057.aspx to read more about PowerShell. The following PowerShell cmdlets demonstrate how to set up and configure a Key Vault: Command Description Get-AzureSubscription This command will prompt you to log in using your Microsoft Account. It returns a list of all Azure subscriptions that are available to you. Select-AzureSubscription -SubscriptionName "Windows Azure MSDN Premium" This tells PowerShell to use this subscription as being subject to our next steps. Switch-AzureMode AzureResourceManager New-AzureResourceGroup –Name 'IoTIncResourceGroup' –Location 'West Europe' This creates a new Azure Resource Group with a name and a location. New-AzureKeyVault -VaultName 'IoTIncKeyVault' -ResourceGroupName 'IoTIncResourceGroup' -Location 'West Europe' This creates a new Key Vault inside the resource group and provide a name and location. $secretvalue = ConvertTo-SecureString '<DOCUMENTDB KEY>' -AsPlainText –Force This creates a security string for my DocumentDB key. $secret = Set-AzureKeyVaultSecret -VaultName 'IoTIncKeyVault' -Name 'DocumentDBKey' -SecretValue $secretvalue This creates a key named DocumentDBKey into the vault and assigns it the secret value we have just received. Set-AzureKeyVaultAccessPolicy -VaultName 'IoTIncKeyVault' -ServicePrincipalName <SPN> -PermissionsToKeys decrypt,sign This configures the application with the Service Principal Name <SPN> to get the appropriate rights to decrypt and sign Set-AzureKeyVaultAccessPolicy -VaultName 'IoTIncKeyVault' -ServicePrincipalName <SPN> -PermissionsToSecrets Get This configures the application with SPN to also be able to get a key. Key Vault must be used together with Azure Active Directory to work. The SPN we need in the steps for powershell is actually is a client ID of an application I have set up in my Azure Active Directory. Please visit https://azure.microsoft.com/nl-nl/documentation/articles/active-directory-integrating-applications/ to see how you can create an application. Make sure to copy the client ID (which is retrievable afterwards) and the key (which is not retrievable afterwards). We use these two pieces of information to take the next step. Using Key Vault from ASP.NET In order to use the Key Vault we have created in the previous section, we need to install some NuGet packages into our solution and/or projects: Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory -Version 2.16.204221202   Install-Package Microsoft.Azure.KeyVault These two packages enable us to use AD and Key Vault from our ASP.NET application. The next step is to add some configuration information to our web.config file: <add key="ClientId" value="<CLIENTID OF THE APP CREATED IN AD" />     <add key="ClientSecret" value="<THE SECRET FROM AZURE AD PORTAL>" />       <!-- SecretUri is the URI for the secret in Azure Key Vault -->     <add key="SecretUri" value="https://iotinckeyvault.vault.azure.net:443/secrets/DocumentDBKey" /> If you deploy the ASP.NET application to Azure, you could even configure these settings from the Azure portal itself, completely removing this from the web.config file. This technique adds an additional ring of security around your application. 
The following code snippet shows how to use AD and Key Vault inside the registration functionality of our scenario:

// No more keys in code or .config files: just an app ID, a secret, and the unique URL to our key (SecretUri).
// When deploying to Azure, we could even skip this by setting the app ID and client secret in the Azure portal.
var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(Utils.GetToken));
var sec = kv.GetSecretAsync(WebConfigurationManager.AppSettings["SecretUri"]).Result.Value;

The Utils.GetToken method is shown next. This method retrieves an access token from AD by supplying the ClientId and the secret. Since we configured Key Vault to allow this application to get secrets, the call to GetSecretAsync() will succeed. The code is as follows:

public async static Task<string> GetToken(string authority, string resource, string scope)
{
    var authContext = new AuthenticationContext(authority);
    ClientCredential clientCred = new ClientCredential(
        WebConfigurationManager.AppSettings["ClientId"],
        WebConfigurationManager.AppSettings["ClientSecret"]);
    AuthenticationResult result = await authContext.AcquireTokenAsync(resource, clientCred);

    if (result == null)
        throw new InvalidOperationException("Failed to obtain the JWT token");

    return result.AccessToken;
}

Instead of storing the DocumentDB key somewhere in code or in the web.config file, it has now been moved to Key Vault. We could do the same with the URI to our DocumentDB account and with other sensitive information as well (for example, storage account keys or connection strings).

Encrypting sensitive data

The documents we created in the previous section contain sensitive data such as namespaces, Event Hub names, and tokens. We could also use Key Vault to encrypt those specific values to enhance our security. In case someone gets hold of a document containing the device information, they are still unable to mimic this device, since the keys are encrypted. Try to use Key Vault to encrypt the sensitive information that is stored in DocumentDB before it is saved there.

Migrating data

This section discusses how to use a tool to migrate data from an existing data source to DocumentDB. For this scenario, we assume that we already have a large datastore containing existing devices and their registration information (Event Hub connection information). In this section, we will see how to migrate an existing data store to our new DocumentDB environment.

We use the DocumentDB Data Migration Tool for this. You can download this tool from the Microsoft Download Center (http://www.microsoft.com/en-us/download/details.aspx?id=46436) or from GitHub if you want to check the code. The tool is intuitive and enables us to migrate from several datasources:

JSON files
MongoDB
SQL Server
CSV files
Azure Table storage
Amazon DynamoDB
HBase
DocumentDB collections

To demonstrate its use, we migrate our existing Netherlands collection to our United Kingdom collection. Start the tool and enter the right connection string to our DocumentDB database. We do this for both the source and target information in the tool. The connection strings you need to provide should look like this:

AccountEndpoint=https://<YOURDOCDBURL>;AccountKey=<ACCOUNTKEY>;Database=<NAMEOFDATABASE>

You can click on the Verify button to make sure these are correct.
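Before filling in the source and target details, it is worth tying the Key Vault snippet shown earlier back to DocumentDB access. The following is a minimal sketch, not part of the book's sample code, of how the secret retrieved from Key Vault could be used to construct the DocumentDB client. The DocDbEndpoint setting, the class name, and the use of the Microsoft.Azure.Documents.Client package are assumptions made for illustration:

using System;
using System.Web.Configuration;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.KeyVault;

public static class DocumentDbFactory
{
    // Builds a DocumentClient using the key stored in Key Vault instead of a key in web.config.
    public static DocumentClient Create()
    {
        var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(Utils.GetToken));
        var authKey = kv.GetSecretAsync(WebConfigurationManager.AppSettings["SecretUri"]).Result.Value;

        // The endpoint URI could be moved to Key Vault as well; here it is read from configuration.
        var endpoint = new Uri(WebConfigurationManager.AppSettings["DocDbEndpoint"]);
        return new DocumentClient(endpoint, authKey);
    }
}

With the client created this way, the DocumentDB account key never appears in code or configuration files.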
In the Source Information field, we provide the Netherlands as the source to pull data from. In the Target Information field, we specify the United Kingdom as the target. In the following screenshot, you can see how these settings are provided in the migration tool for the source information:

The following screenshot shows the settings for the target information:

It is also possible to migrate data to a collection that has not been created yet. The migration tool can do this if you enter a collection name that is not available inside your database. You also need to select the pricing tier. Optionally, setting the partition key could help to distribute your documents based on this key across all the collections you add on this screen.

This information is sufficient to run our example. Go to the Summary tab and verify the information you entered. Press Import to start the migration process. We can verify a successful import on the Import results pane. This example is a simple migration scenario, but the tool is also capable of using complex queries to migrate only those documents that need to be moved or migrated. Try migrating data from an Azure Table storage table to DocumentDB by using this tool.

Summary

In this article, we saw how to integrate DocumentDB with other Microsoft Azure features. We discussed how to set up the Azure Search service and how to create an index on our collection. We also covered how to use the Azure Search feature to enable full-text search on our documents, which could enable users to query while typing. Next, we saw how to add additional security to our scenario by using Key Vault. We also discussed how to create and configure Key Vault by using PowerShell cmdlets, and we saw how to enable our ASP.NET scenario application to make use of the Key Vault .NET SDK. Then, we discussed how to retrieve the sensitive information from Key Vault instead of configuration files. Finally, we saw how to migrate an existing data source to our collection by using the DocumentDB Data Migration Tool.

Resources for Article:

Further resources on this subject:
Microsoft Azure – Developing Web API for Mobile Apps [article]
Introduction to Microsoft Azure Cloud Services [article]
Security in Microsoft Azure [article]


Making 3D Visualizations

Packt
26 Oct 2015
5 min read
Python has become the preferred language of data scientists for data analysis, visualization, and machine learning. It features numerical and mathematical toolkits such as NumPy, SciPy, scikit-learn, Matplotlib, and Pandas, as well as an R-like environment with IPython, all used for data analysis, visualization, and machine learning. In this article by Dimitry Foures and Giuseppe Vettigli, authors of the book Python Data Visualization Cookbook, Second Edition, we will see how visualization in 3D is sometimes effective and sometimes inevitable. In this article, you will learn how 3D bars are created. (For more resources related to this topic, see here.)

Creating 3D bars

Although matplotlib is mainly focused on 2D plotting, there are different extensions that enable us to plot over geographical maps, integrate more with Excel, and plot in 3D. These extensions are called toolkits in the matplotlib world. A toolkit is a collection of specific functions that focuses on one topic, such as plotting in 3D. Popular toolkits are Basemap, GTK Tools, Excel Tools, Natgrid, AxesGrid, and mplot3d. We will explore more of mplot3d in this recipe. The mpl_toolkits.mplot3d toolkit provides some basic 3D plotting. Supported plots are scatter, surf, line, and mesh. Although this is not the best 3D plotting library, it comes with matplotlib, and we are already familiar with its interface.

Getting ready

Basically, we still need to create a figure and add the desired axes to it. The difference is that we specify a 3D projection for the figure, and the axes we add are Axes3D. Now, we can use almost the same functions for plotting. Of course, the difference is in the arguments passed, because we now have three axes to provide data for. For example, the mpl_toolkits.mplot3d.Axes3D.plot function specifies the xs, ys, zs, and zdir arguments. All others are transferred directly to matplotlib.axes.Axes.plot. We will explain these specific arguments:

xs, ys: These are the coordinates for the X and Y axes
zs: These are the value(s) for the Z axis; this can be one value for all points, or one value for each point
zdir: This chooses which dimension will be the z-axis dimension (usually this is zs, but it can be xs or ys)

The mpl_toolkits.mplot3d.art3d module contains the 3D artist code and functions to convert 2D artists into 3D versions, which can be added to an Axes3D. Its rotate_axes function reorders coordinates so that the axes are rotated with the given zdir. The default value is z. Prepending the axis with a '-' does the inverse transform, so zdir can be x, -x, y, -y, z, or -z.

How to do it...

This is the code to demonstrate the plotting concept explained in the preceding section:

import random
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from mpl_toolkits.mplot3d import Axes3D

mpl.rcParams['font.size'] = 10

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

for z in [2011, 2012, 2013, 2014]:
    xs = xrange(1, 13)
    ys = 1000 * np.random.rand(12)
    color = plt.cm.Set2(random.choice(xrange(plt.cm.Set2.N)))
    ax.bar(xs, ys, zs=z, zdir='y', color=color, alpha=0.8)

ax.xaxis.set_major_locator(mpl.ticker.FixedLocator(xs))
ax.yaxis.set_major_locator(mpl.ticker.FixedLocator(ys))
ax.set_xlabel('Month')
ax.set_ylabel('Year')
ax.set_zlabel('Sales Net [usd]')

plt.show()

This code produces the following figure:

How it works...

We had to do the same prep work as in the 2D world. The difference here is that we needed to specify the 3D projection when adding the axes.
Then, we generate random data for a supposed four years of sales (2011–2014). We needed to specify the same Z value for each bar series so that it lines up along the 3D axis. We picked the color randomly from the Set2 color map, and then, for each Z-ordered collection of xs, ys pairs, we rendered a bar series.

There's more...

Other plot types from 2D matplotlib are also available here. For example, scatter() has a similar interface to plot(), but with an added size for the point marker. We are also familiar with contour, contourf, and bar. New types that are available only in 3D are wireframe, surface, and tri-surface plots. For example, the following code plots a tri-surface plot of the popular Pringle function or, more mathematically, a hyperbolic paraboloid:

from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np

n_angles = 36
n_radii = 8

# An array of radii
# Does not include radius r=0, this is to eliminate duplicate points
radii = np.linspace(0.125, 1.0, n_radii)

# An array of angles
angles = np.linspace(0, 2*np.pi, n_angles, endpoint=False)

# Repeat all angles for each radius
angles = np.repeat(angles[..., np.newaxis], n_radii, axis=1)

# Convert polar (radii, angles) coords to cartesian (x, y) coords
# (0, 0) is added here. There are no duplicate points in the (x, y) plane
x = np.append(0, (radii*np.cos(angles)).flatten())
y = np.append(0, (radii*np.sin(angles)).flatten())

# Pringle surface
z = np.sin(-x*y)

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_trisurf(x, y, z, cmap=cm.jet, linewidth=0.2)

plt.show()

The code will give the following output:

Summary

Python Data Visualization Cookbook, Second Edition, is for developers who already know about Python programming in general. If you have heard about data visualization but you don't know where to start, then the book will guide you from the start and help you understand data, data formats, data visualization, and how to use Python to visualize data. Many more visualization techniques are illustrated in the book using a step-by-step, recipe-based approach to data visualization. The topics are explained sequentially as cookbook recipes consisting of a code snippet and the resulting visualization.

Resources for Article:

Further resources on this subject:
Basics of Jupyter Notebook and Python [article]
Asynchronous Programming with Python [article]
Introduction to Data Analysis and Libraries [article]
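To complement the wireframe, surface, and tri-surface plot types mentioned in the There's more section above, here is a small, self-contained sketch of a regular surface plot. It is not taken from the book; it simply plots the same hyperbolic paraboloid on a regular grid using plot_surface, with grid size and color map chosen arbitrarily:

from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np

# Build a regular grid over the (x, y) plane
x = np.linspace(-1.0, 1.0, 50)
y = np.linspace(-1.0, 1.0, 50)
X, Y = np.meshgrid(x, y)

# Same Pringle-like surface as in the tri-surface example
Z = np.sin(-X * Y)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, cmap=cm.jet, rstride=1, cstride=1, linewidth=0.2)

plt.show()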


Search Algorithms for Game Play: Going from A to B

Daan van Berkel
26 Oct 2015
7 min read
In a lot of games, for example tower defense games or other real-time strategy games, enemies need to progress from A, over the playing field, towards B. One game element could be obstructing the path of the enemies so that there is more time to attack. If you are interested in creating this sort of game yourself, you need a clear understanding of how an enemy could navigate her way around the game. In this blog post we are going to discuss an algorithm to determine the shortest path from A to B.

The notion of a graph is used to formalize our thinking. Most importantly, A and B will be vertices of a graph, and we construct a path that follows some of the edges of the graph, starting from A until we reach B. We will allow the edges to be weighted, to signify the difficulty of traversing that particular edge. The algorithm will be described in a platform-independent way. It can be easily translated into various languages and frameworks.

Graphs

One helpful tool in finding the shortest path is graphs. Graphs are a set of vertices, or points, connected with edges, or arcs. You are allowed to go from one vertex to another vertex if they are connected with an edge. Below you can see an example of a graph that is laid out like a hexagonal grid. In this image the circles represent the vertices and the lines represent edges.

Path

In the image above two vertices are special. One is colored red, the other is colored green. We would like to know a shortest path from the red vertex to the green vertex. If we have found a shortest path we will indicate that by highlighting the vertices that we follow along the path.

Algorithm

The following series of images will visualize the algorithm we will use to find a path from the red vertex to the green vertex. It starts out by picking a vertex we will examine closer. Because we are at the start we will examine the red vertex.

We will look at all the neighbors, i.e. vertices that are connected by an edge, of the vertex we are examining. For the neighbors we now know a path from the red vertex to that particular vertex. Because we are not at the green vertex yet, we are going to include the neighbors of the vertex we are examining into the frontier. The frontier is the set of vertices for which we know a path from the red vertex, but which we have not examined yet. In other words, they are candidates to examine next.

Next we will pick a vertex from the frontier and continue the process. We will have something to say about how to pick a vertex from the frontier shortly. For now we will just pick one. From this vertex we will examine the neighbors we have not yet visited, and add those to the frontier.

If we continue this process, we will eventually have visited the green vertex, and we know a shortest path from the red vertex to the green vertex.

Pseudocode

We will write down, in pseudocode, an algorithm that can find a shortest path. We assume we are given a graph G, a start vertex start of G and a finish vertex finish of G. We are interested in a shortest path from start to finish and the following algorithm will provide us with one.

for (var v in G.vertices) {
    v.distance = Number.POSITIVE_INFINITY;
}
start.distance = 0;

var frontier = [start];
var visited = [];

while (frontier.length > 0) {
    var current = pickOneOf(frontier);
    for (var neighbour in current.neighbors()) {
        if (neighbour not in visited) {
            neighbour.distance = Math.min(neighbour.distance, current.distance + 1);
            frontier.push(neighbour);
        }
    }
    visited.push(current);
}

We will now annotate the algorithm.
Initialization

We need to initialize some variables that are needed throughout the algorithm.

for (var v in G.vertices) {
    v.distance = Number.POSITIVE_INFINITY;
}
start.distance = 0;

var frontier = [start];
var visited = [];

We first set the distance of all vertices, besides the start vertex, to ∞. The distance of the start vertex is set to zero. The frontier will be the collection of vertices for which we know a path but which still need to be examined. We initialize it to include the start vertex. Visited will be used to keep track of all vertices that have been examined. Because we still need to examine the start vertex, we leave it empty for now.

Loop

We are going to loop until the frontier is empty.

while (frontier.length > 0) {
    // examine a particular vertex in the frontier
}

Because in the hexagonal grid it is possible to reach every vertex from the start vertex, we will end up knowing a shortest path to every vertex. If we are only interested in the shortest path to the finish vertex, the condition could be !(finish in visited), i.e. continue as long as we have not visited the finish vertex.

Pick Vertex from Frontier

Within the loop we first pick a vertex from the frontier to examine.

var current = pickOneOf(frontier);

This is the heart of the algorithm. Dijkstra, a famous computer scientist, proved that if we pick a vertex of the frontier with the smallest distance, we will end up with a shortest path. Pseudocode for the pickOneOf function could look like:

function pickOneOf(frontier) {
    var best = frontier[0];
    for (var candidate in frontier) {
        if (candidate.distance < best.distance) {
            best = candidate;
        }
    }
    return best;
}

Process Neighbors

The current vertex is a vertex with the smallest distance to the start vertex. So we can now determine the distance to the start vertex for the neighbors of the current vertex. We only need to include vertices that we have not visited yet.

for (var neighbour in current.neighbors()) {
    if (neighbour not in visited) {
        /* update neighbour info */
    }
}

Update Neighbour Info

We can now update the information about the neighbour. For instance, if we have found a shorter path, we want to update the distance. And we want to add the neighbour to the frontier.

neighbour.distance = Math.min(neighbour.distance, current.distance + 1);
frontier.push(neighbour);

Mark current visited

Finally, when we are done examining the current vertex, we add current to the collection of visited vertices.

visited.push(current);

Edge Weights

We have included an image where this distance is shown for each vertex. The story gets interesting when we alter the weights of the edges, i.e. the cost of traversing a particular edge. The algorithm needs only a small change: when we update the neighbour info, we need to use the edge weight instead of the default weight of 1. In the picture below we have altered the weights of the edges, and the algorithm still finds a shortest path. The weight of an edge is indicated by its color. A black edge has weight 1, a blue edge has weight 3, an orange edge has weight 5 and a red edge has weight 10. A runnable version with weighted edges is sketched after the author section below.

Live

Seeing an algorithm in action can help you to understand it. You can try this out live in your browser with the following visualization.

Conclusion

We learned that Dijkstra's algorithm can be used to find a shortest path between two vertices of a graph. This in turn can be used to guide enemies over the playing field.

About the author

Daan van Berkel is an enthusiastic software craftsman with a knack for presenting technical details in a clear and concise manner.
Driven by the desire for understanding complex matters, Daan is always on the lookout for innovative uses of software.
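As a follow-up to the edge-weight modification described in the Edge Weights section, here is a small, self-contained JavaScript sketch of the algorithm with weighted edges. It is not taken from the original post; the graph representation (an adjacency list of { node, weight } entries) is an assumption made for the example:

// Dijkstra's algorithm over an adjacency list: graph[v] = [{ node: 'w', weight: 3 }, ...]
function shortestDistances(graph, start) {
  var distance = {};
  var visited = {};
  var frontier = [start];
  for (var v in graph) {
    distance[v] = Number.POSITIVE_INFINITY;
  }
  distance[start] = 0;

  while (frontier.length > 0) {
    // pick the frontier vertex with the smallest known distance
    var bestIndex = 0;
    for (var i = 1; i < frontier.length; i++) {
      if (distance[frontier[i]] < distance[frontier[bestIndex]]) {
        bestIndex = i;
      }
    }
    var current = frontier.splice(bestIndex, 1)[0];
    if (visited[current]) { continue; }
    visited[current] = true;

    graph[current].forEach(function (edge) {
      if (!visited[edge.node]) {
        // use the edge weight instead of the default weight of 1
        distance[edge.node] = Math.min(distance[edge.node], distance[current] + edge.weight);
        frontier.push(edge.node);
      }
    });
  }
  return distance;
}

// Example: a tiny weighted graph
var graph = {
  A: [{ node: 'B', weight: 1 }, { node: 'C', weight: 5 }],
  B: [{ node: 'A', weight: 1 }, { node: 'C', weight: 3 }],
  C: [{ node: 'A', weight: 5 }, { node: 'B', weight: 3 }]
};
console.log(shortestDistances(graph, 'A')); // { A: 0, B: 1, C: 4 }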


Getting Hands-on with I/O, Redirection Pipes, and Filters

Packt
26 Oct 2015
28 min read
In this article by Sinny Kumari, author of the book Linux Shell Scripting Essentials, we will cover I/O redirection pipes and filters. In day-to-day work, we come across different kinds of files such as text files, source code files from different programming languages (for example, file.sh, file.c, and file.cpp), and so on. While working, we more often perform various operations on files or directories such as searching for a given string or pattern, replacing strings, printing few lines of a file, and so on. Performing these operations is not easy if we have to do it manually. Manual searching for a string or pattern in a directory having thousands of files can take months, and has high chances of making errors. Shell provides many powerful commands to make our work easier, faster, and error-free. Shell commands have the ability to manipulate and filter text from different streams such as standard input, file, and so on. Some of these commands are grep, sed, head, tr, sort, and so on. Shell also comes with a feature of redirecting output from one command to another with the pipe ('|'). Using pipe helps to avoids creation of unnecessary temporary files. One of the best qualities of these commands is that they come along with the man pages. We can directly go to the man page and see what all features they provide by running the man command. Most of the commands have options such as --help to find the help usage and --version to know the version number of the command. This article will cover the following topics in detail: Standard I/O and error streams Redirecting the standard I/O and error streams Pipe and pipelines—connecting commands Regular expressions Filtering output using grep (For more resources related to this topic, see here.) Standard I/O and error streams In shell programming, there are different ways to provide an input (for example, via a keyboard and terminal) and display an output (for example, terminal and file) and error (for example, terminal), if any, during the execution of a command or program. The following examples show the input, output, and error while running the commands: The input from a user by a keyboard and the input obtained by a program via a standard input stream, that is terminal, is taken as follows: $ read -p "Enter your name:" Enter your name:Foo The output printed on the standard output stream, that is terminal, is as follows: $ echo "Linux Shell Scripting" Linux Shell Scripting The error message printed on the standard error stream, that is terminal, is as follows: $ cat hello.txt cat: hello.txt: No such file or directory When a program executes, by default, three files get opened with it which are stdin, stdout, and stderr. The following table provides a short description about them: File descriptor number File name Description 0 stdin This is standard input being read from the terminal 1 stdout This is standard output to the terminal 2 stderr This is standard error to the terminal File descriptors File descriptors are integer numbers representing opened files in an operating system. The unique file descriptor numbers are provided to each opened files. File descriptors' numbers go up from 0. Whenever a new process in Linux is created, then standard input, output, and error files are provided to it along with other needed opened files to process. To know what all open file descriptors are associated with a process, we will consider the following example: Run an application and get its process ID first. 
Consider running bash as an example to get PID of bash: $ pidof bash 2508 2480 2464 2431 1281 We see that multiple bash processes are running. Take one of the bash PID example, 2508, and run the following command: $ ls -l /proc/2508/fd total 0 lrwx------. 1 sinny sinny 64 May 20 00:03 0 -> /dev/pts/5 lrwx------. 1 sinny sinny 64 May 20 00:03 1 -> /dev/pts/5 lrwx------. 1 sinny sinny 64 May 19 23:22 2 -> /dev/pts/5 lrwx------. 1 sinny sinny 64 May 20 00:03 255 -> /dev/pts/5 We see that 0, 1, and 2 opened file descriptors are associated with process bash. Currently, all of them are pointing to /dev/pts/5. pts that is pseudo terminal slave. So, whatever we will do in this bash, input, output, and error related to this PID, output will be written to the /dev/pts/5 file. However, the pts files are pseudo files and contents are in memory, so you won't see anything when you open the file. Redirecting the standard I/O and error streams We have an option to redirect standard input, output, and errors, for example, to a file, another command, intended stream, and so on. Redirection is useful in different ways. For example, I have a bash script whose output and errors are displayed on a standard output—that is, terminal. We can avoid mixing an error and output by redirecting one of them or both to a file. Different operators are used for redirection. The following table shows some of operators used for redirection, along with its description: Operator Description >  This redirects a standard output to a file >>  This appends a standard output to a file <  This redirects a standard input from a file >& This redirects a standard output and error to a file >>& This appends a standard output and error to a file | This redirects an output to another command Redirecting standard output: An output of a program or command can be redirected to a file. Saving an output to a file can be useful when we have to look into the output in the future. A large number of output files for a program that runs with different inputs can be used in studying program output behavior. For example, showing redirecting echo output to output.txt is as follows: $ echo "I am redirecting output to a file" > output.txt $ We can see that no output is displayed on the terminal. This is because output was redirected to output.txt. The operator '>' (greater than) tells the shell to redirect the output to whatever filename mentioned after the operator. In our case, its output.txt: $ cat output.txt I am redirecting output to a file Now, let's add some more output to the output.txt file: $ echo "I am adding another line to file" > output.txt $ cat output.txt I am adding another line to file We noticed that the previous content of the output.txt file got erased and it only has the latest redirected content. To retain the previous content and append the latest redirected output to a file, use the operator '>>': $ echo "Adding one more line" >> output.txt $ cat output.txt I am adding another line to file Adding one more line We can also redirect an output of a program/command to another command in bash using the operator '|' (pipe): $ ls /usr/lib64/ | grep libc.so libc.so libc.so.6 In this example, we gave the output of ls to the grep command using the '|' (pipe) operator, and grep gave the matching search result of the libc.so library: Redirecting standard input Instead of getting an input from a standard input to a command, it can be redirected from a file using the < (less than) operator. 
For example, we want to count the number of words in the output.txt file created from the Redirecting standard output section: $ cat output.txt I am adding another line to file Adding one more line $ wc -w < output.txt 11 We can sort the content of output.txt: $ sort < output.txt # Sorting output.txt on stdout Adding one more line I am adding another line to file We can also give a patch file as an input to the patch command in order to apply a patch.diff in a source code. The command patch is used to apply additional changes made in a file. Additional changes are provided as a diff file. A diff file contains the changes between the original and the modified file by running the diff command. For example, I have a patch file to apply on output.txt: $ cat patch.diff # Content of patch.diff file 2a3 > Testing patch command $ patch output.txt < patch.diff # Applying patch.diff to output.txt $ cat output.txt # Checking output.txt content after applying patch I am adding another line to file Adding one more line Testing patch command Redirecting standard error There is a possibility of getting an error while executing a command/program in bash because of different reasons such as invalid input, insufficient arguments, file not found, bug in program, and so on: $ cd /root # Doing cd to root directory from a normal user bash: cd: /root/: Permission denied Bash prints the error on a terminal saying, permission denied. In general, errors are printed on a terminal so that it's easy for us to know the reason of an error. Printing both the errors and output on the terminal can be annoying because we have to manually look into each line and check whether the program encountered any error: $ cd / ; ls; cat hello.txt; cd /bin/; ls *.{py,sh} We ran a series of commands in the preceding section. First cd to /, ls content of /, cat file hello.txt, cd to /bin and see files matching *.py and *.sh in /bin/. The output will be as follows: bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var cat: hello.txt: No such file or directory alsa-info.sh kmail_clamav.sh sb_bnfilter.py sb_mailsort.py setup-nsssysinit.sh amuFormat.sh kmail_fprot.sh sb_bnserver.pysb_mboxtrain.py struct2osd.sh core_server.py kmail_sav.shsb_chkopts.py sb_notesfilter.py We see that hello.txt doesn't exist in the / directory and because of this there is an error printed on the terminal as well, along with other output. We can redirect the error as follows: $ (cd / ; ls; cat hello.txt; cd /bin/; ls *.{py,sh}) 2> error.txt bin boot dev etc home lib lib64 lost+found media mnt opt proc rootrun sbin srv sys tmp usr var alsa-info.sh kmail_clamav.sh sb_bnfilter.py sb_mailsort.py setup-nsssysinit.sh amuFormat.sh kmail_fprot.sh sb_bnserver.pysb_mboxtrain.py struct2osd.sh core_server.py kmail_sav.shsb_chkopts.py sb_notesfilter.py We can see that the error has been redirected to the error.txt file. To verify, check the error.txt content: $ cat error.txt cat: hello.txt: No such file or directory Multiple redirection: We can redirect stdin, stdout, and stderr together in a command or script or a combination of some of them. The following command redirects both stdout and stder: $ (ls /home/ ;cat hello.txt;) > log.txt 2>&1 Here, stdout is redirected to log.txt and error messages are redirected to log.txt as well. In 2>&1, 2> means redirect an error and &1 means redirect to stdout. In our case, we have already redirected stdout to the log.txt file. 
So, now both the stdout and stderr outputs will be written into log.txt and nothing will be printed on the terminal. To verify, we will check the content of log.txt: $ cat log.txt lost+found sinny cat: hello.txt: No such file or directory The following example shows the stdin, stdout, and stderr redirection: $ cat < ~/.bashrc > out.txt 2> err.txt Here, the .bashrc file present in the home directory acts as an input to the cat command and its output is redirected to the out.txt file. Any error encountered in between is redirected to the err.txt file. The following bash script will explain stdin, stdout, stderr, and their redirection with even more clarity: #!/bin/bash # Filename: redirection.sh # Description: Illustrating standard input, output, error # and redirecting them ps -A -o pid -o command > p_snapshot1.txt echo -n "Running process count at snapshot1: " wc -l < p_snapshot1.txt echo -n "Create a new process with pid = " tail -f /dev/null & echo $! # Creating a new process echo -n "Running process count at snapshot2: " ps -A -o pid -o command > p_snapshot2.txt wc -l < p_snapshot2.txt echo echo "Diff bewteen two snapshot:" diff p_snapshot1.txt p_snapshot2.txt This script saves two snapshots of running all the currently running processes in the system and generates diff. The output after running the process will look somewhat as follows: $ sh redirection.sh Running process count at snapshot1: 246 Create a new process with pid = 23874 Running process count at snapshot2: 247 Diff bewteen two snapshot: 246c246,247 < 23872 ps -A -o pid -o command --- > 23874 tail -f /dev/null > 23875 ps -A -o pid -o command Pipe and pipelines – connecting commands The outputs of the programs are generally saved in files for further use. Sometimes, temporary files are created in order to use an output of a program as an input to another program. We can avoid creating temporary files and feed the output of a program as an input to another program using bash pipe and pipelines. Pipe The pipe denoted by the operator | connects the standard output of a process in the left to the standard input in the right process by inter process communication mechanism. In other words, the | (pipe) connects commands by providing the output of a command as the input to another command. Consider the following example: $ cat /proc/cpuinfo | less Here, the cat command, instead of displaying the content of the /proc/cpuinfo file on stdout, passes its output as an input to the less command. The less command takes the input from cat and displays on the stdout per page. Another example using pipe is as follows: $ ps -aux | wc -l # Showing number of currently running processes in system 254 Pipeline Pipeline is a sequence of programs/commands separated by the operator ' | ' where the output of execution of each command is given as an input to the next command. Each command in a pipeline is executed in a new subshell. The syntax will be as follows: command1 | command2 | command3 … Examples showing pipeline are as follows: $ ls /usr/lib64/*.so | grep libc | wc -l 13 Here, we are first getting a list of files from the /usr/lib64 directory that has the.so extension. The output obtained is passed as an input to the next grep command to look for the libc string. The output is further given to the wc command to count the number of lines. Regular expression Regular expression (also known as regex or regexp) provides a way of specifying a pattern to be matched in a given big chunk of text data. 
It supports a set of characters to specify the pattern. It is widely used for a text search and string manipulation. A lot of shell commands provide an option to specify regex such as grep, sed, find, and so on. The regular expression concept is also used in other programming languages such as C++, Python, Java, Perl, and so on. Libraries are available in different languages to support regular expression's features. Regular expression metacharacters The metacharacters used in regular expressions are explained in the following table: Metacharacters Description * (Asterisk) This matches zero or more occurrences of the previous character + (Plus) This matches one or more occurrences of the previous character ? This matches zero or one occurrence of the previous element . (Dot) This matches any one character ^ This matches the start of the line $ This matches the end of line [... ] This matches any one character within a square bracket [^... ] This matches any one character that is not within a square bracket | (Bar) This matches either the left side or the right side element of | {X} This matches exactly X occurrences of the previous element {X,} This matches X or more occurrences of the previous element {X,Y} This matches X to Y occurrences of the previous element (...) This groups all the elements < This matches the empty string at the beginning of a word > This matches the empty string at the end of a word This disables the special meaning of the next character Character ranges and classes When we look into a human readable file or data, its major content contains alphabets (a to z) and numbers (0-9). While writing regex for matching a pattern consisting of alphabets or numbers, we can make use character ranges or classes. Character ranges We can use character ranges in a regular expression as well. We can specify a range by a pair of characters separated by a hyphen. Any characters that fall in between that range, inclusive, are matched. Character ranges are enclosed inside square brackets. The following table shows some of character ranges: Character range Description [a-z] This matches any single lowercase letter from a to z [A-Z] This matches any single uppercase letter from A to Z [0-9] This matches any single digit from 0 to 9 [a-zA-Z0-9] This matches any single alphabetic or numeric characters [h-k] This matches any single letter from h to k [2-46-8j-lB-M] This matches any single digit from 2 to 4 or 6 to 8 or any letter from j to l or B to M Character classes: Another way of specifying a range of character matches is by using Character classes. It is specified within the square brackets [:class:]. The possible class value is mentioned in the following table: Character Class Description [:alnum:] This matches any single alphabetic or numeric character; for example, [a-zA-Z0-9] [:alpha:] This matches any single alphabetic character; for example, [a-zA-Z] [:digit:] This matches any single digit; for example, [0-9] [:lower:] This matches any single lowercase alphabet; for example, [a-z] [:upper:] This matches any single uppercase alphabet; for example, [A-Z] [:blank:] This matches a space or tab [:graph:] This matches a character in the range of ASCII—for example 33-126—excluding a space character [:print:] This matches a character in the range of ASCII—for example. 
32 to 126, including the space character
[:punct:] This matches any punctuation mark, such as '?', '!', '.', ',', and so on
[:xdigit:] This matches any hexadecimal character; for example, [a-fA-F0-9]
[:cntrl:] This matches any control character

Creating your own regex

In the previous sections on regular expressions, we discussed metacharacters, character ranges, character classes, and their usage. Using these concepts, we can create powerful regex that can be used to filter out text data as per our need. Now, we will create a few regex using the concepts we have learned.

Matching dates in mm-dd-yyyy format

We will consider our valid dates as starting from the UNIX Epoch, that is, 1st January 1970. In this example, we will consider all the dates between the UNIX Epoch and 30th December 2099 as valid dates. An explanation of forming the regex is given in the following subsections.

Matching a valid month:
0[1-9] matches the 01st to 09th month
1[0-2] matches the 10th, 11th, and 12th month
'|' matches either the left or the right expression
Putting it all together, the regex for matching a valid month is 0[1-9]|1[0-2].

Matching a valid day:
0[1-9] matches the 01st to 09th day
[12][0-9] matches the 10th to 29th day
3[0-1] matches the 30th and 31st day
'|' matches either the left or the right expression
0[1-9]|[12][0-9]|3[0-1] matches all the valid days in a date.

Matching the valid year in a date:
19[7-9][0-9] matches years from 1970 to 1999
20[0-9]{2} matches years from 2000 to 2099
'|' matches either the left or the right expression
19[7-9][0-9]|20[0-9]{2} matches all the valid years between 1970 and 2099.

Combining the valid month, day, and year regex to form valid dates:
Our date will be in mm-dd-yyyy format. By putting together the regex formed in the preceding sections for months, days, and years, we get the regex for a valid date:

(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[0-1])-(19[7-9][0-9]|20[0-9]{2})

There is a nice website, http://regexr.com/, where you can also validate regular expressions. The following screenshot shows the matching of valid dates among the given input:

Regex for a valid shell variable

A valid variable name can contain alphanumeric characters and underscores, and the first letter of the variable can't be a digit. Keeping these rules in mind, a valid shell variable regex can be written as follows:

^[_a-zA-Z][_a-zA-Z0-9]*$

Here:
^ (caret) matches the start of a line
[_a-zA-Z] matches _ or any uppercase or lowercase letter
[_a-zA-Z0-9]* matches zero or more occurrences of _, any digit, or any uppercase or lowercase letter
$ (dollar) matches the end of the line

In character class format, we can write this regex as ^[_[:alpha:]][_[:alnum:]]*$. The following screenshot shows valid shell variables matched using the regex formed:

Enclose regular expressions in single quotes (') to avoid shell expansion before the command runs. Use a backslash (\) before a character to escape the special meaning of metacharacters. Metacharacters such as ?, +, {, |, (, and ) belong to extended regex. They lose their special meaning when used in basic regex; to use them there, escape them with a backslash as '\?', '\+', '\{', '\|', '\(', and '\)'.

Filtering an output using grep

One of the most powerful and widely used commands in shell is grep. It searches in an input file and matches lines in which the given pattern is found. By default, all the matched lines are printed on stdout, which is usually the terminal. We can also redirect the matched output to other streams, such as a file. Instead of taking its input from a file, grep can also take the input from the redirected output of the command executed on the left-hand side of '|'.
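To tie the date regex built above to this piped usage of grep, here is a short, hedged example; the sample dates are made up, and the anchors (^ and $) are added so that whole lines are matched:

# Validate a few candidate dates against the mm-dd-yyyy regex built above
printf '%s\n' "01-31-1999" "13-01-2001" "02-29-2098" |
grep -E '^(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[0-1])-(19[7-9][0-9]|20[0-9]{2})$'
# Prints only the lines that match the pattern:
# 01-31-1999
# 02-29-2098

Note that the regex checks only the shape of the date, not calendar rules such as leap years, which is why 02-29-2098 still matches.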
Instead of giving an input from a file, grep can also take the input from the redirected output of the command executed on the left-hand side of '|'. Syntax The syntax of using the grep command is as follows: grep [OPTIONS] PATTERN [FILE...] Here, FILE can be multiple files for a search. If no file is given as an input for a search, it will search the standard input. PATTERN can be any valid regular expression. Put PATTERN within single quotes (') or double quotes (") as per need. For example, use single quotes (') to avoid any bash expansion and double quotes (") for expansion. A lot of OPTIONS are available in grep. Some of the important and widely used options are discussed in the following table.   Option Usage -i This enforces case insensitive match in both pattern and input file(s) -v This displays the non-matching line -o This displays only the matched part in the matching line -f FILE This obtains a pattern from a file, one per line -e PATTERN This specifies multiple search pattern -E This considers pattern as an extended regex (egrp) -r This reads all the files in a directory recursively, excluding resolving of symbolic links unless explicitly specified as an input file -R This reads all the files in a directory recursively and resolving symbolic if any -a This processes binary file as a text file -n This prefixes each matched line along with a line number -q Don't print anything on stdout -s Don't print error messages -c This prints the count of matching lines of each input file -A NUM This prints NUM lines after the actual string match. No effect with the -o option -B NUM This prints NUM lines before the actual string match. No effect with the -o option -C NUM This prints NUM lines after and before the actual string match. No effect with the -o option  Looking for a pattern in a file: A lot of times we have to search for a given string or a pattern in a file. The grep command provides us the capability to do it in a single line. Let's see the following example: The input file for our example will be input1.txt: $ cat input1.txt # Input file for our example This file is a text file to show demonstration of grep command. grep is a very important and powerful command in shell. This file has been used in chapter 2 We will try to get the following information from the input1.txt file using the grep command: Number of lines Line starting with a capital letter Line ending with a period (.) Number of sentences Searching sub-string "sent" Lines that don't have a periodNumber of times the string "file" is used  The following shell script demonstrates how to do the above mentioned tasks: #!/bin/bash #Filename: pattern_search.sh #Description: Searching for a pattern using input1.txt file echo "Number of lines = `grep -c '.*' input1.txt`" echo "Line starting with capital letter:" grep -c ^[A-Z].* input1.txt echo echo "Line ending with full stop (.):" grep '.*.$' input1.txt echo echo -n "Number of sentence = " grep -c '.' input1.txt echo "Strings matching sub-string sent:" grep -o "sent" input1.txt echo echo "Lines not having full stop are:" grep -v '.' input1.txt echo echo -n "Number of times string file used: = " grep -o "file" input1.txt | wc -w  The output after running the pattern_search.sh shell script will be as follows: Number of lines = 4 Line starting with capital letter: 2 Line ending with full stop (.): powerful command in shell. 
Number of sentence = 2 Strings matching sub-string sent: Lines not having full stop are: This file is a text file to show demonstration This file has been used in chapter 2 Number of times string file used: = 3 Looking for a pattern in multiple files The grep command also allow us to search for a pattern in multiple files as an input. To explain this in detail, we will head directly to the following example. The input files, in our case, will be input1.txt and input2.txt. We will reuse the content of the input1.txt file from the previous example. The content of input2.txt is as follows: $ cat input2.txt Another file for demonstrating grep CommaNd usage. It allows us to do CASE Insensitive string test as well. We can also do recursive SEARCH in a directory using -R and -r Options. grep allows to give a regular expression to search for a PATTERN. Some special characters like . * ( ) { } $ ^ ? are used to form regexp. Range of digit can be given to regexp e.g. [3-6], [7-9], [0-9] We will try to get the following information from the input1.txt and input2.txt files using the grep command: Search for the string command Case-insensitive search of the string command Print the line number where the string grep matches Search for punctuation marks Print one line followed by the matching lines while searching for the string important  The following shell script demonstrates how to follow the preceding steps: #!/bin/bash # Filename: multiple_file_search.sh # Description: Demonstrating search in multiple input files echo "This program searches in files input1.txt and input2.txt" echo "Search result for string "command":" grep "command" input1.txt input2.txt echo echo "Case insensitive search of string "command":" # input{1,2}.txt will be expanded by bash to input1.txt input2.txt grep -i "command" input{1,2}.txt echo echo "Search for string "grep" and print matching line too:" grep -n "grep" input{1,2}.txt echo echo "Punctuation marks in files:" grep -n [[:punct:]] input{1,2}.txt echo echo "Next line content whose previous line has string "important":" grep -A 1 'important' input1.txt input2.txt The following screenshot is the output after running the shell script pattern_search.sh. The matched pattern string has been highlighted: A few more grep usages The following subsections will cover a few more usages of the grep command. Searching in a binary file So far, we have seen all the grep examples running on text files. We can also search for a pattern in binary files using grep. For this, we have to tell the grep command to treat a binary file as a text file too. The option -a or –text tells grep to consider a binary file as a test file. We know that the grep command itself is a binary file that executes and gives a search result. One of options in grep is --text. The string --text should be somewhere available in the grep binary file. Let's search for it as follows: $ grep --text '--text' /usr/bin/grep -a, --text equivalent to –binary-files=text We saw that the string --text is found in the search path /usr/bin/grep. The character backslash ('') is used to escape its special meaning. Now, let's search for the -w string in the wc binary. We know that the wc command has an option -w that counts the number of words in an input text. $ grep -a '-w' /usr/bin/wc -w, --words print the word counts Searching in a directory We can also tell grep to search into all files/directories in a directory recursively using the option -R. This avoids the hassle of specifying each file as an input text file to grep. 
For example, we are interested in knowing how many places #include <stdio.h> is used in a standard include directory:

$ grep -R '#include <stdio.h>' /usr/include/ | wc -l
77

This means that the #include <stdio.h> string is found at 77 places in the /usr/include directory. In another example, we want to know how many Python files (with the .py extension) in /usr/lib64/python2.7/ do an "import os". We can check that as follows:

$ grep -R "import os" /usr/lib64/python2.7/*.py | wc -l
93

Excluding files/directories from search

We can also tell the grep command to exclude a particular directory or file from the search. This is useful when we don't want grep to look into a file or directory that has some confidential information. It is also useful when we are sure that searching in a certain directory will be of no use, so excluding it reduces search time.

Suppose there is a source code directory called s0, which uses the git version control system. Now, we are interested in searching for a text or pattern in the source files. In this case, searching in the .git subdirectory will be of no use. We can exclude .git from the search as follows:

$ grep -R --exclude-dir=.git "search_string" s0

Here, we are searching for the search_string string in the s0 directory and telling grep not to search in the .git directory. To exclude files matching a pattern instead of a directory, use the --exclude=GLOB option; the --exclude-from=FILE option reads such exclusion patterns from a file.

Display a filename with a matching pattern

In some use cases, we don't care where the search matched or at how many places it matched in a file. Instead, we are interested in knowing only the filenames where at least one match was found. For example, we may want to save the filenames in which a particular search pattern was found, or redirect them to some other command for further processing. We can achieve this using the -l option:

$ grep -Rl "import os" /usr/lib64/python2.7/*.py > search_result.txt
$ wc -l search_result.txt
79

This example gets the names of the files in which import os is written and saves the result in the search_result.txt file.

Matching an exact word

Exact matching of a word is also possible using the word boundary, that is, \b on both sides of the search pattern. Here, we will reuse the input1.txt file and its content:

$ grep -i --color "\ba\b" input1.txt

The --color option allows colored printing of the matched search result. The "\ba\b" pattern tells grep to look only for the character a standing on its own. In the search results, it won't match an a that appears as a substring inside another word. The following screenshot shows the output:

The tr command can also act as a filter. For example, consider a tr2.txt file with the following content:

This is an input file. It conatins special character like ?, ! etc
&^var is an invalid shll variable.
_var1_ is a valid shell variable

To delete characters other than alphanumeric characters, newlines, and white space, we can run the following command:

$ tr -cd '[:alnum:] \n' < tr2.txt
This is an input file It conatins special character like  etc
var is an invalid shll variable
var1 is a valid shell variable

Summary

After reading this article, you know how to provide an input to commands and print or save their results. You are also familiar with redirecting output and input from one command to another. Now, you can easily search for and replace strings or patterns in a file, and filter out data based on your needs. From this article, we now have good control over transforming and filtering text data.

Resources for Article:

Further resources on this subject:
Linux Shell Scripting [article]
Embedded Linux and Its Elements [article]
Getting started with Cocos2d-x [article]
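As a closing, hedged illustration of combining the filters covered in this article (tr, grep, sort, and uniq) into a single pipeline, the following counts the most frequent words in the input1.txt file used in the earlier examples; the pipeline itself is a sketch rather than an example from the book:

# Count how often each word occurs in input1.txt, most frequent first
tr -s '[:space:]' '\n' < input1.txt |   # one word per line
tr '[:upper:]' '[:lower:]' |            # normalize case
grep -v '^$' |                          # drop empty lines
sort |
uniq -c |
sort -rn |
head -n 5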

Introduction to MapBox

Packt
26 Oct 2015
7 min read
In this article by Bill Kastanakis, author of the book MapBox Cookbook, he has given an introduction to MapBox. Most of the websites we visit everyday us maps in order to display information about locations or point of interests to the user. It's amazing how this technology has evolved over the past decades. In the early days with the introduction of the Internet, maps used to be static images. Users were unable to interact with maps, and they were limited to just displaying static information. Interactive maps were available only to mapping professionals and accessed via very expensive GIS software. Cartographers have used this type of software to create or improve maps, usually for an agency or an organization. Again, if the location information was to be made available to the public, there were only two options: static images or a printed version. (For more resources related to this topic, see here.) Improvements made on Internet technologies opened up several possibilities for interactive content. It was a natural transition for maps to become live, respond to search queries, and allow user interactions (such as panning and changing the zoom level). Mobile devices were just starting to evolve, and a new age of smartphones was just about to begin. It was natural for maps to become even more important to consumers. Interactive maps are now in their pockets. More importantly, they can tell the users location. These maps also have the ability to display a great variety of data. In the age where smartphones and tables have become aware of the location, information has become even more important to companies. They use it to improve user experience. From general purpose websites (such as Google Maps) to more focused apps (such as Four Square and Facebook), maps are now a crucial component in the digital world. The popularity of mapping technologies is increasing over the years. From free open source solutions to commercial services for web and mobile developers and even services specialized for cartographers and visualization professionals, a number of services have become available to developers. Currently, there is an option for developers to choose from a variety of services that will work better on their specific task, and best of all, if you don't have increased traffic requirements, most of them will offer free plans for their consumers. What is MapBox? The issue with most of the solutions available is that they look extremely similar. Observing the most commonly used websites and services that implement a map, you can easily verify that they completely lack personality. Maps have the same colors and are present with the same features, such as roads, buildings, and labels. Currently, displaying road addresses in a specific website doesn't make sense. Customizing maps is a tedious task and is the main reason why it's avoided. What if the map that is provided by a service is not working well with the color theme used in your website or app? MapBox is a service provider that allows users to select a variety of customization options. This is one of the most popular features that has set it apart from competition. The power to fully customize your map in every detail, including the color theme, features you want to present to the user, information displayed, and so on, is indispensable. 
MapBox provides you with tools to fully write CartoCSS, the language behind the MapBox customization, SDKs, and frameworks to integrate their maps into your website with minimal effort and a lot more tools to assist you in your task to provide a unique experience to your users. Data Let's see what MapBox has to offer, and we will begin with three available datasets: MapBox Streets is the core technology behind MapBox street data. It's powered by open street maps and has an extremely vibrant community of 1.5 million cartographers and users, which constantly refine and improve map data in real time, as shown in the following screenshot: MapBox Terrain is composed of datasets fetched from 24 datasets owned by 13 organizations. You will be able to access elevation data, hill shades, and topography lines, as shown in the following screenshot: MapBox Satellite offers high-resolution cloudless datasets with satellite imagery, as shown in the following image: MapBox Editor MapBox Editor is an online editor where you can easily create and customize maps. It's purpose is to easily customize the map color theme by choosing from presets or creating your own styles. Additionally, you can add features, such as Markers, Lines, or define areas using polygons. Maps are also multilingual; currently, there are four different language options to choose from when you work with MapBox Editor. Although adding data manually in MapBox Editor is handy, it also offers the ability to batch import data, and it supports the most commonly used formats. The user interface is strictly visual; no coding skills is needed in order to create, customize, and present a map. It is very ideal if you want to quickly create and share maps. The user interface also supports sharing to all the major platforms, such as WordPress, and embedding in forums or on a website using iFrames. CartoCSS CartoCSS is a powerful open source style sheet language developed by MapBox and is widely supported by several other mapping and visualization platforms. It's extremely similar to CSS, and if you ever used CSS, it will be very easy to adapt. Take a look at the following code: #layer { line-color: #C00; line-width: 1; } #layer::glow { line-color: #0AF; line-opacity: 0.5; line-width: 4; } TileMill TileMill is a free open source desktop editor that you can use to write CartoCSS and fully customize your maps. The customization is done by adding layers of data from various sources and then customizing the layer properties using CartoCSS, a CSS-like style sheet language. When you complete the editing of the map, you can then export the tiles and upload them to your MapBox account in order to use the map on your website. TileMill was used as a standard solution for this type of work, but it uses raster data. This changed recently with the introduction of MapBox Studio, which uses vector data. MapBox Studio MapBox Studio is the new open source toolbox that was created by the MapBox team to customize maps, and the plan is to slowly replace TileMill. The advantage is that it uses vector tiles instead of raster. Vector tiles are superior because they hold infinite detail; they are not dependent on the resolution found in a fixed size image. You can still use CartoCSS to customize the map, and as with TileMill, at any point, you can export and share the map on your website. The API and SDK Accessing MapBox data using various APIs is also very easy. You can use JavaScript, WebGL, or simply access the data using REST service calls. 
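To make the JavaScript option just mentioned concrete, here is a minimal, hedged sketch using the classic Mapbox.js (Leaflet-based) API; the access token, account, and map ID are placeholders, and the exact API surface can differ between SDK versions:

<link href='https://api.mapbox.com/mapbox.js/v2.2.1/mapbox.css' rel='stylesheet' />
<script src='https://api.mapbox.com/mapbox.js/v2.2.1/mapbox.js'></script>
<div id='map' style='width: 400px; height: 300px;'></div>
<script>
  // Placeholder token and map ID: replace them with the values from your MapBox account
  L.mapbox.accessToken = 'pk.your-access-token';
  var map = L.mapbox.map('map', 'youraccount.yourmapid')
    .setView([40.7, -74.0], 12);   // center the map and set the zoom level

  // Add a simple marker, similar to the features you can create in MapBox Editor
  L.marker([40.7, -74.0]).addTo(map);
</script>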
If you are into mobile development, MapBox offers separate SDKs for developing native iOS and Android apps that take advantage of its technologies and customization while maintaining a native look and feel. MapBox also allows you to use your own sources: you can import a custom dataset and overlay the data on MapBox Streets, Terrain, or Satellite. Another noteworthy feature is that you are not limited to fetching data from various sources; you can also query the tile metadata.

Summary

In this article, we learned what MapBox, MapBox Editor, CartoCSS, TileMill, and MapBox Studio are all about.

Resources for Article:

Further resources on this subject:

Constructing and Evaluating Your Design Solution [article]

Designing Site Layouts in Inkscape [article]

Displaying SQL Server Data using a Linq Data Source [article]

PrimeFaces Theme Development: Icons

Packt
26 Oct 2015
21 min read
In this article by Andy Bailey and Sudheer Jonna, the authors of the book PrimeFaces Theme Development, we'll cover icons, which add a lot of value to an application based on the principle that a picture is worth a thousand words. Equally important is the fact that, when well designed, they please the eye and serve as memory joggers for your user. We humans strongly associate symbols with actions. For example, a save button with a disc icon is more evocative. The association becomes even stronger when we use the same icon for the same action in menus and button bars. It is also possible to use icons in place of text labels.

It is important to keep in mind when designing the user interface of your application that the navigational and action elements (such as buttons) should not be so intrusive that the application becomes cluttered with the things that can be done. The user wants to be able to see the information they want to see and use input dialogs to add more. What they don't want is to be distracted by links, lots of link and button text, and glaring visuals.

In this article, we will cover the following topics:

The standard theme icon set
Creating a set of icons of our own
Adding new icons to a theme
Using custom icons in a commandButton component
Using custom icons in a menu component
The FontAwesome icons as an alternative to the ThemeRoller icons

(For more resources related to this topic, see here.)

Introducing the standard theme icon set

jQuery UI provides a big set of standard icons that can be applied by just adding icon class names to HTML elements. The full list of icons is available at the official site, http://api.jqueryui.com/theming/icons/, and in some of the published icon cheat sheets, such as http://www.petefreitag.com/cheatsheets/jqueryui-icons/. The icon class names follow this syntax when added to HTML elements:

.ui-icon-{icon type}-{icon sub description}-{direction}

For example, the following span element will display an icon of a triangle pointing to the south:

<span class="ui-icon ui-icon-triangle-1-s"></span>

Other icons, such as ui-icon-triangle-1-n, ui-icon-triangle-1-e, and ui-icon-triangle-1-w, represent triangles pointing to the north, east, and west respectively. The direction element is optional and is available only for a few icons, such as triangles and arrows. These theme icons are integrated into a number of jQuery UI-based widgets, such as buttons, menus, dialogs, date picker components, and so on.

The aforementioned standard set of icons is available in ThemeRoller as one image sprite instead of a separate image for each icon. That is, ThemeRoller is designed to use the image sprite technology for icons. The different image sprites, which vary in color based on the widget state, are available in the images folder of each downloaded theme.

An image sprite is a collection of images put into a single image. A webpage with many images may take a long time to load and generates multiple server requests. For a high-performance application, this approach reduces the number of server requests and the bandwidth used. It also centralizes the image locations so that all the icons can be found in one place. 
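To make the sprite idea concrete before looking at the PrimeFaces theme itself, here is a small, generic CSS sketch; the file name, icon names, and offsets are illustrative assumptions rather than values from any real theme. Every icon element shares one background image, and each icon class only shifts the visible 16 x 16 window using background-position:

/* one shared sprite image for all icons (hypothetical file name) */
.icon {
  display: inline-block;
  width: 16px;
  height: 16px;
  background-image: url('icons-sprite.png'); /* a single HTTP request */
}

/* each icon class simply moves the 16 x 16 viewport over the sprite */
.icon-save   { background-position: 0 0; }
.icon-print  { background-position: -16px 0; }
.icon-search { background-position: -32px 0; }

This is exactly the mechanism the ThemeRoller themes use, only with the ui-icon prefix and state-specific sprite images.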
The basic image sprite for the PrimeFaces Aristo theme looks like this: The image sprite's look and feel will vary based on the screen area of the widget and its components such as the header and content and widget states such as hover, active, highlight, and error styles. Let us now consider a JSF/PF-based example, where we can add a standard set of icons for UI components such as the commandButton and menu bar. First, we will create a new folder in web pages called chapter6. Then, we will create a new JSF template client called standardThemeIcons.xhtml and add a link to it in the chaptersTemplate.xhtml template file. When adding a submenu, use Chapter 6 for the label name and for the menu item, use Standard Icon Set as its value. In the title section, replace the text title with the respective topic of this article, which is Standard Icons: <ui:define name="title">   Standard Icons </ui:define> In the content section, replace the text content with the code for commandButton and menu components. Let's start with the commandButton components. The set of commandButton components uses the standard theme icon set with the help of the icon attribute, as follows: <h:panelGroup style="margin-left:830px">   <h3 style="margin-top: 0">Buttons</h3>   <p:commandButton value="Edit" icon="ui-icon-pencil"     type="button" />   <p:commandButton value="Bookmark" icon="ui-icon-bookmark"     type="button" />   <p:commandButton value="Next" icon="ui-icon-circle-arrow-e"     type="button" />   <p:commandButton value="Previous" icon="ui-icon-circle-arrow-w"     type="button" /> </h:panelGroup> The generated HTML for the first commandButton that is used to display the standard icon will be as follows: <button id="mainForm:j_idt15" name="mainForm:j_idt15" class="ui-   button ui-widget ui-state-default ui-corner-all ui-button-text-   icon-left" type="button" role="button" aria-disabled="false">   <span class="ui-button-icon-left ui-icon ui-c   ui-icon-     pencil"></span>   <span class="ui-button-text ui-c">Edit</span> </button> The PrimeFaces commandButton renderer appends the icon position CSS class based on the icon position (left or right) to the HTML button element, apart from the icon CSS class in one child span element and text CSS class in another child span element. This way, it displays the icon on commandButton based on the icon position property. By default, the position of the icon is left. Now, we will move on to the menu components. A menu component uses the standard theme icon set with the help of the menu item icon attribute. Add the following code snippets of the menu component to your page: <h3>Menu</h3> <p:menu style="margin-left:500px">   <p:submenu label="File">     <p:menuitem value="New" url="#" icon="ui-icon-plus" />     <p:menuitem value="Delete" url="#" icon="ui-icon-close" />     <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />     <p:menuitem value="Print" url="#" icon="ui-icon-print" />   </p:submenu>   <p:submenu label="Navigations">     <p:menuitem value="Home" url="http://www.primefaces.org"       icon="ui-icon home" />     <p:menuitem value="Admin" url="#" icon="ui-icon-person" />     <p:menuitem value="Contact Us" url="#" icon="ui-icon-       contact" />   </p:submenu> </p:menu> You may have observed from the preceding code snippets that each icon from ThemeRoller starts with ui-icon for consistency. 
Now, run the application and navigate your way to the newly created page, and you should see the standard ThemeRoller icons applied to buttons and menu items, as shown in the following screenshot: For further information, you can use PrimeFaces showcase (http://www.primefaces.org/showcase/), where you can see the default icons used for components, applying standard theme icons with the help of the icon attribute, and so on. Creating a set of icons of our own In this section, we are going to discuss how to create our own icons for the PrimeFaces web application. Instead of using images, you need to use image sprites by considering the impact of application performance. Most of the time, we might be interested in adding custom icons to UI components apart from the regular standard icon set. Generally, in order to create our own custom icons, we need to provide CSS classes with the background-image property, which is referred to the image in the theme images folder. For example, the following commandButton components will use a custom icon: <p:commandButton value="With Icon" icon="disk"/> <p:commandButton icon="disk"/> The disk icon is created by adding the .disk CSS class with the background image property. In order to display the image, you need to provide the correct relative path of the image from the web application, as follows: .disk {   background-image: url('disk.png') !important; } However, as discussed earlier, we are going to use the image sprite technology instead of a separate image for each icon to optimize web performance. Before creating an image sprite, you need to select all the required images and convert those images (PNG, JPG, and so on) to the icon format with a size almost equal to to that of the ThemeRoller icons. In this article, we used the Paint.NET tool to convert images to the ICO format with a size of 16 by 16 pixels. Paint.NET is a free raster graphics editor for Microsoft Windows, and it is developed on the .NET framework. It is a good replacement for the Microsoft Paint program in an editor with support for layers blending, transparency, and plugins. If the ICO format is not available, then you have to add the file type plug-in for the Paint.NET installation directory. So, this is just a two-step process for the conversion: The image (PNG, JPG, and so on) need to be saved as the Icons (*.ico) option from the Save as type dropdown. Then, select 16 by 16 dimensions with the supported bit system (8-bit, 32-bit, and so on). All the PrimeFaces theme icons are designed to have the same dimensions. There are many online and offline tools available that can be used to create an image sprite. I used Instant Sprite, an open source CSS sprite generator tool, to create an image sprite in this article. You can have a look at the official site for this CSS generator tool by visiting http://instantsprite.com/. Let's go through the following step-by-step process to create an image sprite using the Instant Sprite tool: First, either select multiple icons from your computer, or drag and drop icons on to the tool page. In the Thumbnails section, just drag and drop the images to change their order in the sprite. Change the offset (in pixels), direction (horizontal, vertical, and diagonal), and the type (.png or .gif) values in the Options section. In the Sprite section, right-click on the image to save it on your computer. You can also save the image in a new window or as a base64 type. In the Usage section, you will find the generated sprite CSS classes and HTML. 
Once the image is created, you will be able to see the image in the preview section before finalizing the image. Now, let's start creating the image sprite for button bar and menu components, which are going to be used in later sections. First, download or copy the required individual icons on the computer. Then, select all those files and drag and drop them in a particular order, as follows: We can also configure a few options, such as an offset of 10 px for icon padding, direction as horizontal to display them horizontally, and then finally selecting the image as the PNG type: The image sprite is generated in the sprite section, as follows: Right-click on the image to save it on your computer. Now, we have created a custom image sprite from the set of icons. Once the image sprite has been created, change the sprite name to ui-custom-icons and copy the generated CSS styles for later. In the generated HTML, note that each div class is appended with the ui-icon class to display the icon with a width of 16 px and height of 16 px. Adding the new icons to your theme In order to apply the custom icons to your web page, we first need to copy the generated image sprite file and then add the generated CSS classes from the previous section. The following generated sprite file has to be added to the images folder of the primefaces-moodyBlue2 custom theme. Let's name the file ui-custom-icons: After this, copy the generated CSS rules from the previous section. The first CSS class (ui-icon) contains the image sprite to display custom icons using the background URL property and dimensions such as the width and height properties for each icon. But since we are going to add the image reference in widget state style classes, you need to remove the background image URL property from the ui-icon class. Hence, the ui-icon class contains only the width and height dimensions: .ui-icon {   width: 16px;   height: 16px; } Later, modify the icon-specific CSS class names as shown in the following format. Each icon has its own icon name: .ui-icon-{icon name} The following CSS classes are used to refer individual icons with the help of the background-position property. Now after modification, the positioning CSS classes will look like this: .ui-icon-edit { background-position: 0 0; } .ui-icon-bookmark { background-position: -26px 0; } .ui-icon-next { background-position: -52px 0; } .ui-icon-previous { background-position: -78px 0; } .ui-icon-new { background-position: -104px 0; } .ui-icon-delete { background-position: -130px 0; } .ui-icon-refresh { background-position: -156px 0; } .ui-icon-print { background-position: -182px 0; } .ui-icon-home { background-position: -208px 0; } .ui-icon-admin { background-position: -234px 0; } .ui-icon-contactus { background-position: -260px 0; } Apart from the preceding CSS classes, we have to add the component state CSS classes. Widget states such as hover, focus, highlight, active, and error need to refer to different image sprites in order to display the component state behavior for user interactions. For demonstration purposes, we created only one image sprite and used it for all the CSS classes. But in real-time development, the image will vary based on the widget state. 
The following widget states refer to image sprites for different widget states: .ui-icon, .ui-widget-content .ui-icon {   background-image: url("#{resource['primefaces-     moodyblue2:images/ui-custom-icons.png']}"); } .ui-widget-header .ui-icon {   background-image: url("#{resource['primefaces-     moodyblue2:images/ui-custom-icons.png']}"); } .ui-state-default .ui-icon {   background-image: url("#{resource['primefaces-     moodyblue2:images/ui-custom-icons.png']}"); } .ui-state-hover .ui-icon, .ui-state-focus .ui-icon {   background-image: url("#{resource['primefaces-     moodyblue2:images/ui-custom-icons.png']}"); } .ui-state-active .ui-icon {   background-image: url("#{resource['primefaces-     moodyblue2:images/ui-custom-icons.png']}"); } .ui-state-highlight .ui-icon {   background-image: url("#{resource['primefaces-     moodyblue2:images/ui-custom-icons.png']}"); } .ui-state-error .ui-icon, .ui-state-error-text .ui-icon {   background-image: url("#{resource['primefaces-     moodyblue2:images/ui-custom-icons.png']}"); } In the JSF ecosystem, image references in the theme.css file must be converted to an expression that JSF resource loading can understand. So at first, in the preceding CSS classes, all the image URLs are appeared in the following expression: background-image: url("images/ui-custom-icons.png"); The preceding expression, when modified, looks like this: background-image: url("#{resource['primefaces-   moodyblue2:images/ui-custom-icons.png']}");  We need to make sure that the default state classes are commented out in the theme.css (the moodyblue2 theme) file to display the custom icons. By default, custom theme classes (such as the state classes and icon classes available under custom states and images and custom icons positioning) are commented out in the source code of the GitHub project. So, we need to uncomment these sections and comment out the default theme classes (such as the state classes and icon classes available under states and images and positioning). This means that the default or custom style classes only need to be available in the theme.css file. (OR) You can see all these changes in moodyblue3 theme as well. The custom icons appeared in Custom Icons screen by just changing the current theme to moodyblue3. Using custom icons in the commandButton components After applying the new icons to the theme, you are ready to use them on the PrimeFaces components. In this section, we will add custom icons to command buttons. Let's add a link named Custom Icons to the chaptersTemplate.xhtml file. The title of this page is also named Custom Icons. The following code snippets show how custom icons are added to command buttons using the icon attribute: <h3 style="margin-top: 0">Buttons</h3> <p:commandButton value="Edit" icon="ui-icon-edit" type="button" /> <p:commandButton value="Bookmark" icon="ui-icon-bookmark"   type="button" /> <p:commandButton value="Next" icon="ui-icon-next" type="button" /> <p:commandButton value="Previous" icon="ui-icon-previous"   type="button" /> Now, run the application and navigate to the newly created page. You should see the custom icons applied to the command buttons, as shown in the following screenshot: The commandButton component also supports the iconpos attribute if you wish to display the icon either to the left or right side. The default value is left. Using custom icons in a menu component In this section, we are going to add custom icons to a menu component. The menuitem tag supports the icon attribute to attach a custom icon. 
The following code snippets show how custom icons are added to the menu component: <h3>Menu</h3> <p:menu style="margin-left:500px">   <p:submenu label="File">     <p:menuitem value="New" url="#" icon="ui-icon-new" />     <p:menuitem value="Delete" url="#" icon="ui-icon-delete" />     <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />     <p:menuitem value="Print" url="#" icon="ui-icon-print" />   </p:submenu>   <p:submenu label="Navigations">     <p:menuitem value="Home" url="http://www.primefaces.org"       icon="ui-icon-home" />     <p:menuitem value="Admin" url="#" icon="ui-icon-admin" />     <p:menuitem value="Contact Us" url="#" icon="ui-icon-       contactus" />   </p:submenu> </p:menu> Now, run the application and navigate to the newly created page. You will see the custom icons applied to the menu component, as shown in the following screenshot: Thus, you can apply custom icons on a PrimeFaces component that supports the icon attribute. The FontAwesome icons as an alternative to the ThemeRoller icons In addition to the default ThemeRoller icon set, the PrimeFaces team provided and supported a set of alternative icons named the FontAwesome iconic font and CSS framework. Originally, it was designed for the Twitter Bootstrap frontend framework. Currently, it works well with all frameworks. The official site for the FontAwesome toolkit is http://fortawesome.github.io/Font-Awesome/. The features of FontAwesome that make it a powerful iconic font and CSS toolkit are as follows: One font, 519 icons: In a single collection, FontAwesome is a pictographic language of web-related actions No JavaScript required: It has minimum compatibility issues because FontAwesome doesn't required JavaScript Infinite scalability: SVG (short for Scalable Vector Graphics) icons look awesome in any size Free to use: It is completely free and can be used for commercial usage CSS control: It's easy to style the icon color, size, shadow, and so on Perfect on retina displays: It looks gorgeous on high resolution displays It can be easily integrated with all frameworks Desktop-friendly Compatible with screen readers FontAwesome is an extension to Bootstrap by providing various icons based on scalable vector graphics. This FontAwesome feature is available from the PrimeFaces 5.2 release onwards. These icons can be customized in terms of size, color, drop and shadow and so on with the power of CSS. The full list of icons is available at both the official site of FontAwesome (http://fortawesome.github.io/Font-Awesome/icons/) as well as the PrimeFaces showcase (http://www.primefaces.org/showcase/ui/misc/fa.xhtml). In order to enable this feature, we have to set primefaces.FONT_AWESOME context param in web.xml to true, as follows: <context-param>   <param-name>primefaces.FONT_AWESOME</param-name>   <param-value>true</param-value> </context-param> The usage is as simple as using the standard ThemeRoller icons. PrimeFaces components such as buttons or menu items provide an icon attribute, which accepts an icon from the FontAwesome icon set. Remember that the icons should be prefixed by fa in a component. The general syntax of the FontAwesome icons will be as follows: fa fa-[name]-[shape]-[o]-[direction] Here, [name] is the name of the icon, [shape] is the optional shape of the icon's background (either circle or square), [o] is the optional outlined version of the icon, and [direction] is the direction in which certain icons point. 
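As an illustration of that naming pattern, the pieces compose as shown below; the icon names come from the FontAwesome 4.x set, and these buttons are only examples rather than part of the article's sample pages:

<!-- [name] only -->
<p:commandButton value="Home" icon="fa fa-home" type="button" />

<!-- [name] + [direction] -->
<p:commandButton value="Next" icon="fa fa-arrow-right" type="button" />

<!-- [name] + [o], the outlined variant -->
<p:commandButton value="Gallery" icon="fa fa-picture-o" type="button" />

<!-- [name] + [shape] + [o] + [direction] -->
<p:commandButton value="Forward" icon="fa fa-arrow-circle-o-right" type="button" />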
Now, we first create a new navigation link named FontAwesome under chapter6 inside the chapterTemplate.xhtml template file. Then, we create a JSF template client called fontawesome.xhtml, where it explains the FontAwesome feature with the help of buttons and menu. This page has been added as a menu item for the top-level menu bar. In the content section, replace the text content with the following code snippets. The following set of buttons displays the FontAwesome icons with the help of the icon attribute. You may have observed that the fa-fw style class used to set icons at a fixed width. This is useful when variable widths throw off alignment: <h3 style="margin-top: 0">Buttons</h3> <p:commandButton value="Edit" icon="fa fa-fw fa-edit"   type="button" /> <p:commandButton value="Bookmark" icon="fa fa-fw fa-bookmark"   type="button" /> <p:commandButton value="Next" icon="fa fa-fw fa-arrow-right"   type="button" /> <p:commandButton value="Previous" icon="fa fa-fw fa-arrow-  left"   type="button" /> After this, apply the FontAwesome icons to navigation lists, such as the menu component, to display the icons just to the left of the component text content, as follows: <h3>Menu</h3> <p:menu style="margin-left:500px">   <p:submenu label="File">     <p:menuitem value="New" url="#" icon="fa fa-plus" />     <p:menuitem value="Delete" url="#" icon="fa fa-close" />     <p:menuitem value="Refresh" url="#" icon="fa fa-refresh" />     <p:menuitem value="Print" url="#" icon="fa fa-print" />   </p:submenu>   <p:submenu label="Navigations">     <p:menuitem value="Home" url="http://www.primefaces.org"       icon="fa fa-home" />     <p:menuitem value="Admin" url="#" icon="fa fa-user" />     <p:menuitem value="Contact Us" url="#" icon="fa fa-       picture-o" />   </p:submenu> </p:menu> Now, run the application and navigate to the newly created page. You should see the FontAwesome icons applied to buttons and menu items, as shown in the following screenshot: Note that the 40 shiny new icons of FontAwesome are available only in the PrimeFaces Elite 5.2.2 release and the community PrimeFaces 5.3 release because PrimeFaces was upgraded to FontAwesome 4.3 version since its 5.2.2 release. Summary In this article, we explored the standard theme icon set and how to use it on various PrimeFaces components. We also learned how to create our own set of icons in the form of the image sprite technology. We saw how to create image sprites using open source online tools and add them on a PrimeFaces theme. Finally, we had a look at the FontAwesome CSS framework, which was introduced as an alternative to the standard ThemeRoller icons. To ensure best practice, we learned how to use icons on commandButton and menu components. Now that you've come to the end of this article, you should be comfortable using web icons for PrimeFaces components in different ways. Resources for Article: Further resources on this subject: Introducing Primefaces [article] Setting Up Primefaces [article] Components Of Primefaces Extensions [article]

Guidelines for Creating Responsive Forms

Packt
23 Oct 2015
12 min read
In this article by Chelsea Myers, the author of the book, Responsive Web Design Patterns, covers the guidelines for creating responsive forms. Online forms are already modular. Because of this, they aren't hard to scale down for smaller screens. The little boxes and labels can naturally shift around between different screen sizes since they are all individual elements. However, form elements are naturally tiny and very close together. Small elements that you are supposed to click and fill in, whether on a desktop or mobile device, pose obstacles for the user. If you developed a form for your website, you more than likely want people to fill it out and submit it. Maybe the form is a survey, a sign up for a newsletter, or a contact form. Regardless of the type of form, online forms have a purpose; get people to fill them out! Getting people to do this can be difficult at any screen size. But when users are accessing your site through a tiny screen, they face even more challenges. As designers and developers, it is our job to make this process as easy and accessible as possible. Here are some guidelines to follow when creating a responsive form: Give all inputs breathing room. Use proper values for input's type attribute. Increase the hit states for all your inputs. Stack radio inputs and checkboxes on small screens. Together, we will go over each of these guidelines and see how to apply them. (For more resources related to this topic, see here.) The responsive form pattern Before we get started, let's look at the markup for the form we will be using. We want to include a sample of the different input options we can have. Our form will be very basic and requires simple information from the users such as their name, e-mail, age, favorite color, and favorite animal. HTML: <form> <!—- text input --> <label class="form-title" for="name">Name:</label> <input type="text" name="name" id="name" /> <!—- email input --> <label class="form-title" for="email">Email:</label> <input type="email" name="email" id="email" /> <!—- radio boxes --> <label class="form-title">Favorite Color</label> <input type="radio" name="radio" id="red" value="Red" /> <label>Red</label> <input type="radio" name="radio" id="blue" value="Blue" /><label>Blue</label> <input type="radio" name="radio" id="green" value="Green" /><label>Green</label> <!—- checkboxes --> <label class="form-title" for="checkbox">Favorite Animal</label> <input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label> <input type="checkbox" name="checkbox" id="cat" value="Cat" /><label>Cat</label> <input type="checkbox" name="checkbox" id="other" value="Other" /><label>Other</label> <!—- drop down selection --> <label class="form-title" for="select">Age:</label> <select name="select" id="select"> <option value="age-group-1">1-17</option> <option value="age-group-2">18-50</option> <option value="age-group-3">&gt;50</option> </select> <!—- textarea--> <label class="form-title" for="textarea">Tell us more:</label> <textarea cols="50" rows="8" name="textarea" id="textarea"></textarea> <!—- submit button --> <input type="submit" value="Submit" /> </form> With no styles applied, our form looks like the following screenshot: Several of the form elements are next to each other, making the form hard to read and almost impossible to fill out. Everything seems tiny and squished together. We can do better than this. We want our forms to be legible and easy to fill. Let's go through the guidelines and make this eyesore of a form more approachable. 
#1 Give all inputs breathing room In the preceding screenshot, we can't see when one form element ends and the other begins. They are showing up as inline, and therefore displaying on the same line. We don't want this though. We want to give all our form elements their own line to live on and not share any space to the right of each type of element. To do this, we add display: block to all our inputs, selects, and text areas. We also apply display:block to our form labels using the class .form-title. We will be going more into why the titles have their own class for the fourth guideline. CSS: input[type="text"], input[type="email"], textarea, select { display: block; margin-bottom: 10px; } .form-title { display:block; font-weight: bold; } As mentioned, we are applying display:block to text and e-mail inputs. We are also applying it to textarea and select elements. Just having our form elements display on their own line is not enough. We also give everything a margin-bottom element of 10px to give the elements some breathing room between one another. Next, we apply display:block to all the form titles and make them bold to add more visual separation. #2 Use proper values for input's type attribute Technically, if you are collecting a password from a user, you are just asking for text. E-mail, search queries, and even phone numbers are just text too. So, why would we use anything other than <input type="text"…/>? You may not notice the difference on your desktop computer between these form elements, but the change is the biggest on mobile devices. To show you, we have two screenshots of what the keyboard looks like on an iPhone while filling out the text input and the e-mail input: In the left image, we are focused on the text input for entering your name. The keyboard here is normal and nothing special. In the right image, we are focused on the e-mail input and can see the difference on the keyboard. As the red arrow points out, the @ key and the . key are now present when typing in the e-mail input. We need both of those to enter in a valid e-mail, so the device brings up a special keyboard with those characters. We are not doing anything special other than making sure the input has type="email" for this to happen. This works because type="email" is a new HTML5 attribute. HTML5 will also validate that the text entered is a correct email format (which used to be done with JavaScript). Here are some other HTML5 type attribute values from the W3C's third HTML 5.1 Editor's Draft (http://www.w3.org/html/wg/drafts/html/master/semantics.html#attr-input-type-keywords): color date datetime email month number range search tel time url week #3 Increase the hit states for all your inputs It would be really frustrating for the user if they could not easily select an option or click a text input to enter information. Making users struggle isn't going to increase your chance of getting them to actually complete the form. The form elements are naturally very small and not large enough for our fingers to easily tap. Because of this, we should increase the size of our form inputs. Having form inputs to be at least 44 x 44 px large is a standard right now in our industry. This is not a random number either. Apple suggests this size to be the minimum in their iOS Human Interface Guidelines, as seen in the following quote: "Make it easy for people to interact with content and controls by giving each interactive element ample spacing. Give tappable controls a hit target of about 44 x 44 points." 
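Here are two of the values listed above in action, as a brief sketch; these fields are examples only and are not part of the form built in this article. The type="tel" input prompts a telephone keypad on most mobile devices, and type="number" prompts a numeric keypad while letting the browser constrain the range:

<label class="form-title" for="phone">Phone:</label>
<input type="tel" name="phone" id="phone" />

<label class="form-title" for="guests">Guests:</label>
<input type="number" name="guests" id="guests" min="1" max="10" />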
As you can see, this does not apply to only form elements. Apple's suggest is for all clickable items. Now, this number may change along with our devices' resolutions in the future. Maybe it will go up or down depending on the size and precision of our future technology. For now though, it is a good place to start. We need to make sure that our inputs are big enough to tap with a finger. In the future, you can always test your form inputs on a touchscreen to make sure they are large enough. For our form, we can apply this minimum size by increasing the height and/or padding of our inputs. CSS: input[type="text"], input[type="email"], textarea, select { display: block; margin-bottom: 10px; font-size: 1em; padding:5px; min-height: 2.75em; width: 100%; max-width: 300px; } The first two styles are from the first guideline. After this, we are increasing the font-size attribute of the inputs, giving the inputs more padding, and setting a min-height attribute for each input. Finally, we are making the inputs wider by setting the width to 100%, but also applying a max-width attribute so the inputs do not get too unnecessarily wide. We want to increase the size of our submit button as well. We definitely don't want our users to miss clicking this: input[type="submit"] { min-height: 3em; padding: 0 2.75em; font-size: 1em; border:none; background: mediumseagreen; color: white; } Here, we are also giving the submit button a min-height attribute, some padding, and increasing the font-size attribute. We are striping the browser's native border style on the button with border:none. We also want to make this button very obvious, so we apply a mediumseagreen color as the background and white text color. If you view the form so far in the browser, or look at the image, you will see all the form elements are bigger now except for the radio inputs and checkboxes. Those elements are still squished together. To make our radio and checkboxes bigger in our example, we will make the option text bigger. Doesn't it make sense that if you want to select red as your favorite color, you would be able to click on the word "red" too, and not just the box next to the word? In the HTML for the radio inputs and the checkboxes, we have markup that looks like this: <input type="radio" name="radio" id="red" value="Red" /><label>Red</label> <input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label> To make the option text clickable, all we have to do is set the for attribute on the label to match the id attribute of the input. We will wrap the radio and checkbox inputs inside of their labels so that we can easily stack them for guideline #4. We will also give the labels a class of choice to help style them. <label class="choice" for="red"><input type="radio" name="radio" id="red" value="Red" />Red</label> <label class="choice" for="dog"><input type="checkbox" name="checkbox" id="dog" value="Dog" />Dog</label> Now, the option text and the actual input are both clickable. After doing this, we can apply some more styles to make selecting a radio or checkbox option even easier: label input { margin-left: 10px; } .choice { margin-right: 15px; padding: 5px 0; } .choice + .form-title { margin-top: 10px; } With label input, we are giving the input and the label text a little more space between each other. Then, using the .choice class, we are spreading out each option with margin-right: 15px and making the hit states bigger with padding: 5px 0. 
Finally, with .choice + .form-title, we are giving any .form-title element that comes after an element with a class of .choice more breathing room. This is going back to the responsive form guideline #1. There is only one last thing we need to do. On small screens, we want to stack the radio and checkbox inputs. On large screens, we want to keep them inline. To do this, we will add display:block to the .choice class. We will then be using a media query to change it back: @media screen and (min-width: 600px){ .choice { display: inline; } } With each input on its own line for smaller screens, they are easier to select. But we don't need to take up all that vertical space on wider screens. With this, our form is done. You can see our finished form, as shown in the following screenshot: Much better, wouldn't you say? No longer are all the inputs tiny and mushed together. The form is easy to read, tap, and begin entering in information. Filling in forms is not considered a fun thing to do, especially on a tiny screen with big thumbs. But there are ways in which we can make the experience easier and a little more visually pleasing. Summary A classic user experience challenge is to design a form that encourages completion. When it comes to fact, figures, and forms, it can be hard to retain the user's attention. This does not mean it is impossible. Having a responsive website does make styling tables and forms a little more complex. But what is the alternative? Nonresponsive websites make you pinch and zoom endlessly to fill out a form or view a table. Having a responsive website gives you the opportunity to make this task easier. It takes a little more code, but in the end, your users will greatly benefit from it. With this article, we have wrapped up guidelines for creating responsive forms. Resources for Article: Further resources on this subject: Securing and Authenticating Web API [article] Understanding CRM Extendibility Architecture [article] CSS3 – Selectors and nth Rules [article]

Writing a 3D space rail shooter in Three.js, Part 3

Martin Naumann
23 Oct 2015
7 min read
In the course of this three part series, you will learn how to write a simple 3D space shooter game with Three.js. The game will introduce the basic concepts of a Three.js application, how to write modular code and the core principles of a game, such as camera, player motion and collision detection. In Part 1 we set up our package and created the world of our game. In Part 2, we added the spaceship and the asteroids for our game. In this final Part 3 of the series, we will set the collision detection, add weapons to our craft and add a way to score and manage our game health as well. Collisions make things go boom Okay, now we'll need to set up collision detection and shooting. Let's start with collision detection! We will be using a technique called hitbox, where we'll create bounding boxes for the asteroids and the spaceship and check for intersections. Luckily, Three.js has a THREE.Box3 class to help us with this. The additions to the Player module: var Player = function(parent) { var loader = newObjMtlLoader(), self = this this.loaded = false this.hitbox = newTHREE.Box3() this.update = function() { if(!spaceship) return this.hitbox.setFromObject(spaceship) } This adds the hitbox and an update method that updates the hitbox by using the spaceship object to get dimensions and position for the box. Now we'll adjust the Asteroid module to do the same: var Asteroid = function(rockType) { var mesh = newTHREE.Object3D(), self = this // Speed of motion and rotation mesh.velocity = Math.random() * 2 + 2 mesh.vRotation = newTHREE.Vector3(Math.random(), Math.random(), Math.random()) this.hitbox = newTHREE.Box3() and tweak the update method: this.update = function(z) { mesh.position.z += mesh.velocity mesh.rotation.x += mesh.vRotation.x * 0.02; mesh.rotation.y += mesh.vRotation.y * 0.02; mesh.rotation.z += mesh.vRotation.z * 0.02; if(mesh.children.length > 0) this.hitbox.setFromObject(mesh.children[0]) if(mesh.position.z > z) { this.reset(z) } } You may have noticed the reset method that isn't implemented yet. It'll come in handy later - so let's make that method: this.reset = function(z) { mesh.velocity = Math.random() * 2 + 2 mesh.position.set(-50 + Math.random() * 100, -50 + Math.random() * 100, z - 1500 - Math.random() * 1500) } This method allows us to quickly push an asteroid back into action whenever we need to. On to the render loop: function render() { cam.position.z -= 1 tunnel.update(cam.position.z) player.update() for(var i=0;i<NUM_ASTEROIDS;i++) { if(!asteroids[i].loaded) continue asteroids[i].update(cam.position.z) if(player.loaded && player.hitbox.isIntersectionBox(asteroids[i].hitbox)) { asteroids[i].reset(cam.position.z) } } } So for each asteroid that is loaded, we're checking if the hitbox of our player is intersecting (i.e. colliding) with the hitbox of the asteroid. If so, we'll reset (i.e. push into the vortex ahead of us) the asteroid, based on the camera offset. Pew! Pew! Pew! Now on to get us some weaponry! 
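Before wiring up the weapons, it can help to see the hitbox test in isolation. The following standalone sketch is not part of the game modules and uses made-up coordinates; it simply shows how THREE.Box3 intersection checks behave (more recent Three.js releases expose the same check as intersectsBox):

var THREE = require('three')

// two boxes that overlap, and one that sits far away
var a = new THREE.Box3(new THREE.Vector3(0, 0, 0), new THREE.Vector3(10, 10, 10))
var b = new THREE.Box3(new THREE.Vector3(5, 5, 5), new THREE.Vector3(15, 15, 15))
var c = new THREE.Box3(new THREE.Vector3(20, 20, 20), new THREE.Vector3(30, 30, 30))

console.log(a.isIntersectionBox(b)) // true  -> collision, reset the asteroid
console.log(a.isIntersectionBox(c)) // false -> no hit, keep flying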
Let's create a Shot module: var THREE = require('three') var shotMtl = newTHREE.MeshBasicMaterial({ color: 0xff0000, transparent: true, opacity: 0.5 }) var Shot = function(initialPos) { this.mesh = newTHREE.Mesh( newTHREE.SphereGeometry(3, 16, 16), shotMtl ) this.mesh.position.copy(initialPos) this.getMesh = function() { returnthis.mesh } this.update = function(z) { this.mesh.position.z -= 5 if(Math.abs(this.mesh.position.z - z) > 1000) { returnfalse deletethis.mesh } returntrue } returnthis } module.exports = Shot In this module we're creating a translucent, red sphere, spawned at the initial position given to the constructor function. The update method is a bit different from those we've seen so far as it returns either true (still alive) or false (dead, remove now) based on the position. Once the shot is too far from the camera, it gets cleaned up. Now back to our main.js: var shots = [] functionrender() { cam.position.z -= 1 tunnel.update(cam.position.z) player.update() for(var i=0; i<shots.length; i++) { if(!shots[i].update(cam.position.z)) { World.getScene().remove(shots[i].getMesh()) shots.splice(i, 1) } } This snippet is adding in a loop over all the shots and updates them, removing them if needed. But we also have to check for collisions with the asteroids: for(var i=0;i<NUM_ASTEROIDS;i++) { if(!asteroids[i].loaded) continue asteroids[i].update(cam.position.z) if(player.loaded && player.hitbox.isIntersectionBox(asteroids[i].hitbox)) { asteroids[i].reset(cam.position.z) } for(var j=0; j<shots.length; j++) { if(asteroids[i].hitbox.isIntersectionBox(shots[j].hitbox)) { asteroids[i].reset(cam.position.z) World.getScene().remove(shots[j].getMesh()) shots.splice(j, 1) break } } } Last but not least we need some code to take keyboard input to fire the shots: window.addEventListener('keyup', function(e) { switch(e.keyCode) { case32: // Space var shipPosition = cam.position.clone() shipPosition.sub(newTHREE.Vector3(0, 25, 100)) var shot = newShot(shipPosition) shots.push(shot) World.add(shot.getMesh()) break } }) This code - when the spacebar key is pressed - is adding a new shot to the array, which will then be updated in the render loop. Move it, move it! Cool, but while we're at the keyboard handler, let's make things moving a bit more! window.addEventListener('keydown', function(e) { if(e.keyCode == 37) { cam.position.x -= 5 } elseif(e.keyCode == 39) { cam.position.x += 5 } if(e.keyCode == 38) { cam.position.y += 5 } elseif(e.keyCode == 40) { cam.position.y -= 5 } }) This code uses the arrow keys to move the camera around. Finishing touches Now the last bits come into play: Score and health management as well. Start with defining the two variables in main.js: var score = 0, health = 100 and change these values where appropriate: if(player.loaded && player.hitbox.isIntersectionBox(asteroids[i].hitbox)) { asteroids[i].reset(cam.position.z) health -= 20 document.getElementById('health').textContent = health if(health < 1) { World.pause() alert('Game over! You scored ' + score + ' points') window.location.reload() } } This decreases the health by 20 points whenever the spaceship hits an asteroid and shows a "Game over" box and reloads the game afterwards. for(var j=0; j<shots.length; j++) { if(asteroids[i].hitbox.isIntersectionBox(shots[j].hitbox)) { score += 10 document.getElementById("score").textContent = score asteroids[i].reset(cam.position.z) World.getScene().remove(shots[j].getMesh()) shots.splice(j, 1) break } } This increases the score by 10 whenever a shot hits an asteroid. 
You may have noticed the two document.getElementById calls that will not work just yet. Those are for two UI elements that we'll add to the index.html to show the player the current health and score situation: <body> <div id="bar"> Health: <span id="health">100</span>% &nbsp;&nbsp; Score: <span id="score">0</span> </div> <script src="app.js"></script> </body> And throw in some CSS, too: @import url(http://fonts.googleapis.com/css?family=Orbitron); #bar{ font-family: Orbitron, sans-serif; position:absolute; left:0; right:0; height:1.5em; background:black; color:white; line-height:1.5em; } Wrap up With all 3 Parts of this series, we now with the help of Three.js have a basic 3D game running in the browser. There's a bunch of improvements to be made - the controls, mobile input compatibility and performance, but the basic concepts are in place. Now have fun playing! About the author Martin Naumann is an open source contributor and web evangelist by heart from Zurich with a decade of experience from the trenches of software engineering in multiple fields. He works as a software engineer at Archilogic in front and backend. He devotes his time to moving the web forward, fixing problems, building applications and systems and breaking things for fun & profit. Martin believes in the web platform and is working with bleeding edge technologies that will allow the web to prosper.

Internet-Connected Smart Water Meter

Packt
23 Oct 2015
15 min read
In this article by Pradeeka Seneviratne author of the book, Internet of Things with Arduino Blueprints, explains that for many years and even now, water meter readings have been collected manually. To do this, a person has to visit the location where the water meter is installed. In this article, you will learn how to make a smart water meter with an LCD screen that has the ability to connect to the internet and serve meter readings to the consumer through the Internet. In this article, you shall do the following: Learn about water flow sensors and its basic operation Learn how to mount and plumb a water flow meter on and into the pipeline Read and count the water flow sensor pulses Calculate the water flow rate and volume Learn about LCD displays and connecting with Arduino Convert a water flow meter to a simple web server and serve meter readings through the Internet (For more resources related to this topic, see here.) Prerequisites An Arduino UNO R3 board (http://store.arduino.cc/product/A000066) Arduino Ethernet Shield R3 (https://www.adafruit.com/products/201) A liquid flow sensor (http://www.futurlec.com/FLOW25L0.shtml) A Hitachi HD44780 DRIVER compatible LCD Screen (16 x 2) (https://www.sparkfun.com/products/709) A 10K ohm resistor A 10K ohm potentiometer (https://www.sparkfun.com/products/9806) Few Jumper wires with male and female headers (https://www.sparkfun.com/products/9140) A breadboard (https://www.sparkfun.com/products/12002) Water flow sensors The heart of a water flow sensor consists of a Hall effect sensor (https://en.wikipedia.org/wiki/Hall_effect_sensor) that outputs pulses for magnetic field changes. Inside the housing, there is a small pinwheel with a permanent magnet attached to it. When the water flows through the housing, the pinwheel begins to spin, and the magnet attached to it passes very close to the Hall effect sensor in every cycle. The Hall effect sensor is covered with a separate plastic housing to protect it from the water. The result generates an electric pulse that transitions from low voltage to high voltage, or high voltage to low voltage, depending on the attached permanent magnet's polarity. The resulting pulse can be read and counted using the Arduino. For this project, we will use a Liquid Flow sensor from Futurlec (http://www.futurlec.com/FLOW25L0.shtml). The following image shows the external view of a Liquid Flow Sensor: Liquid flow sensor – the flow direction is marked with an arrow The following image shows the inside view of the liquid flow sensor. You can see a pinwheel that is located inside the housing: Pinwheel attached inside the water flow sensor Wiring the water flow sensor with Arduino The water flow sensor that we are using with this project has three wires, which are the following: Red (or it may be a different color) wire, which indicates the Positive terminal Black (or it may be a different color) wire, which indicates the Negative terminal Brown (or it may be a different color) wire, which indicates the DATA terminal All three wire ends are connected to a JST connector. Always refer to the datasheet of the product for wiring specifications before connecting them with the microcontroller and the power source. When you use jumper wires with male and female headers, do the following: Connect positive terminal of the water flow sensor to Arduino 5V. Connect negative terminal of the water flow sensor to Arduino GND. Connect DATA terminal of the water flow sensor to Arduino digital pin 2. 
Water flow sensor connected with Arduino Ethernet Shield using three wires You can directly power the water flow sensor using Arduino since most residential type water flow sensors operate under 5V and consume a very low amount of current. Read the product manual for more information about the supply voltage and supply current range to save your Arduino from high current consumption by the water flow sensor. If your water flow sensor requires a supply current of more than 200mA or a supply voltage of more than 5v to function correctly, then use a separate power source with it. The following image illustrates jumper wires with male and female headers: Jumper wires with male and female headers Reading pulses The water flow sensor produces and outputs digital pulses that denote the amount of water flowing through it. These pulses can be detected and counted using the Arduino board. Let's assume the water flow sensor that we are using for this project will generate approximately 450 pulses per liter (most probably, this value can be found in the product datasheet). So 1 pulse approximately equals to [1000 ml/450 pulses] 2.22 ml. These values can be different depending on the speed of the water flow and the mounting polarity of the water flow sensor. Arduino can read digital pulses generating by the water flow sensor through the DATA line. Rising edge and falling edge There are two type of pulses, as listed here:. Positive-going pulse: In an idle state, the logic level is normally LOW. It goes HIGH state, stays there for some time, and comes back to the LOW state. Negative-going pulse: In an idle state, the logic level is normally HIGH. It goes LOW state, stays LOW state for time, and comes back to the HIGH state. The rising and falling edges of a pulse are vertical. The transition from LOW state to HIGH state is called rising edge and the transition from HIGH state to LOW state is called falling edge. Representation of Rising edge and Falling edge in digital signal You can capture digital pulses using either the rising edge or the falling edge. In this project, we will use the rising edge. Reading and counting pulses with Arduino In the previous step, you attached the water flow sensor to Arduino UNO. The generated pulse can be read by Arduino digital pin 2 and the interrupt 0 is attached to it. The following Arduino sketch will count the number of pulses per second and display it on the Arduino Serial Monitor: Open a new Arduino IDE and copy the sketch named B04844_03_01.ino. Change the following pin number assignment if you have attached your water flow sensor to a different Arduino pin: int pin = 2; Verify and upload the sketch on the Arduino board: int pin = 2; //Water flow sensor attached to digital pin 2 volatile unsigned int pulse; const int pulses_per_litre = 450; void setup() { Serial.begin(9600); pinMode(pin, INPUT); attachInterrupt(0, count_pulse, RISING); } void loop() { pulse = 0; interrupts(); delay(1000); noInterrupts(); Serial.print("Pulses per second: "); Serial.println(pulse); } void count_pulse() { pulse++; } Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second will print on the Arduino Serial Monitor for each loop, as shown in the following screenshot: Pulses per second in each loop The attachInterrupt() function is responsible for handling the count_pulse() function. 
When the interrupts() function is called, the count_pulse() function will start to collect the pulses generated by the liquid flow sensor. This will continue for 1000 milliseconds, and then the noInterrupts() function is called to stop the operation of the count_pulse() function. The pulse count is then assigned to the pulse variable and printed on the serial monitor. This will repeat inside the loop() function until you press the reset button or disconnect the Arduino from the power.

Calculating the water flow rate

The water flow rate is the amount of water flowing at a given point of time and can be expressed in gallons per second or liters per second. The number of pulses generated per liter of water flowing through the sensor can be found in the water flow sensor's specification sheet. Let's say there are m pulses per liter of water. You can also count the number of pulses generated by the sensor per second: let's say there are n pulses per second. The water flow rate R can then be expressed as:

R = n / m (liters per second)

Also, you can calculate the water flow rate in liters per minute using the following formula:

R = (n / m) * 60 (liters per minute)

For example, if your water flow sensor generates 450 pulses for one liter of water flowing through it, and you get 10 pulses in the first second, then the water flow rate is 10/450 = 0.022 liters per second, or 0.022 * 1000 = 22 milliliters per second.

The following steps explain how to calculate the water flow rate using a simple Arduino sketch:

Open a new Arduino IDE and copy the sketch named B04844_03_02.ino.

Verify and upload the sketch on the Arduino board. The following code block will calculate the water flow rate in milliliters per second:

Serial.print("Water flow rate: ");
Serial.print(pulse * 1000/pulses_per_litre);
Serial.println(" milliliters per second");

Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second and the water flow rate in milliliters per second will print on the Arduino Serial Monitor for each loop, as shown in the following screenshot:

Pulses per second and water flow rate in each loop

Calculating the water flow volume

The water flow volume can be calculated by summing up the product of the flow rate and the time interval:

Volume = ∑ Flow Rate * Time_Interval

The following Arduino sketch will calculate and output the total water volume since the device startup:

Open a new Arduino IDE and copy the sketch named B04844_03_03.ino. The water flow volume can be calculated using the following code block:

volume = volume + flow_rate * 0.1; //Time Interval is 0.1 second
Serial.print("Volume: ");
Serial.print(volume);
Serial.println(" milliliters");

Verify and upload the sketch on the Arduino board.

Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second, the water flow rate in milliliters per second, and the total volume of water in milliliters will be printed on the Arduino Serial Monitor for each loop, as shown in the following screenshot:

Pulses per second, water flow rate, and volume sum in each loop

To accurately measure water flow rate and volume, the water flow sensor needs to be carefully calibrated. The Hall effect sensor inside the housing is not a precision sensor, and the pulse rate does vary a bit depending on the flow rate, fluid pressure, and sensor orientation. 
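One simple way to act on that calibration note is to pour a known volume (for example, exactly one liter) through the sensor, note the total pulse count the sketch reports, and use that measured figure instead of the datasheet value. The snippet below is only a sketch of this approach; the numbers and the helper function are illustrative assumptions and are not part of the book's sketches:

// Hypothetical calibration: pour exactly 1 liter through the sensor and
// note the total pulse count printed on the serial monitor, then put it here.
const int measured_pulses_per_litre = 465;  // example value; yours will differ

float flow_rate_ml_per_second(unsigned int pulses_per_second)
{
  // milliliters per second = pulses * (1000 ml / pulses per liter)
  return pulses_per_second * 1000.0 / measured_pulses_per_litre;
}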
Adding an LCD screen to the water meter You can add an LCD screen to your newly built water meter to display readings, rather than displaying them on the Arduino serial monitor. You can then disconnect your water meter from the computer after uploading the sketch on to your Arduino. Using a Hitachi HD44780 driver compatible LCD screen and Arduino Liquid Crystal library, you can easily integrate it with your water meter. Typically, this type of LCD screen has 16 interface connectors. The display has two rows and 16 columns, so each row can display up to 16 characters. The following image represents the top view of a Hitachi HD44760 driver compatible LCD screen. Note that the 16-pin header is soldered to the PCB to easily connect it with a breadboard. Hitachi HD44780 driver compatible LCD screen (16 x 2)—Top View The following image represents the bottom view of the LCD screen. Again, you can see the soldered 16-pin header. Hitachi HD44780 driver compatible LCD screen (16x2)—Bottom View Wire your LCD screen with Arduino as shown in the next diagram. Use the 10k potentiometer to control the contrast of the LCD screen. Now, perform the following steps to connect your LCD screen with your Arduino: LCD RS pin (pin number 4 from left) to Arduino digital pin 8. LCD ENABLE pin (pin number 6 from left) to Arduino digital pin 7. LCD READ/WRITE pin (pin number 5 from left) to Arduino GND. LCD DB4 pin (pin number 11 from left) to Arduino digital pin 6. LCD DB5 pin (pin number 12 from left) to Arduino digital pin 5. LCD DB6 pin (pin number 13 from left) to Arduino digital pin 4. LCD DB7 pin (pin number 14 from left) to Arduino digital pin 3. Wire a 10K pot between Arduino +5V and GND, and wire its wiper (center pin) to LCD screen V0 pin (pin number 3 from left). LCD GND pin (pin number 1 from left) to Arduino GND. LCD +5V pin (pin number 2 from left) to Arduino 5V pin. LCD Backlight Power pin (pin number 15 from left) to Arduino 5V pin. LCD Backlight GND pin (pin number 16 from left) to Arduino GND. Fritzing representation of the circuit Open a new Arduino IDE and copy the sketch named B04844_03_04.ino. First initialize the Liquid Crystal library using following line: #include <LiquidCrystal.h> To create a new LCD object with following parameters, the syntax is LiquidCrystal lcd (RS, ENABLE, DB4, DB5, DB6, DB7): LiquidCrystal lcd(8, 7, 6, 5, 4, 3); Then initialize number of rows and columns in the LCD. Syntax is lcd.begin(number_of_columns, number_of_rows): lcd.begin(16, 2); You can set the starting location to print a text on the LCD screen using following function, syntax is lcd.setCursor(column, row): lcd.setCursor(7, 1); The column and row numbers are 0 index based and the following line will start to print a text in the intersection of the 8th column and 2nd row. Then, use the lcd.print() function to print some text on the LCD screen: lcd.print(" ml/s"); Verify and upload the sketch on the Arduino board. Blow some air through the water flow sensor using your mouth. You can see some information on the LCD screen such as pulses per second, water flow rate, and total water volume from the beginning of the time:  LCD screen output Converting your water meter to a web server In the previous steps, you learned how to display your water flow sensor's readings and calculate water flow rate and total volume on the Arduino serial monitor. In this step, you will learn how to integrate a simple web server to your water flow sensor and remotely read your water flow sensor's readings. 
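If you want to confirm the LCD wiring and contrast before running the full B04844_03_04.ino sketch, a stripped-down test like the following is enough. This is an illustrative sketch (not from the book's code bundle) with the flow-sensor code left out, using only the LiquidCrystal calls described above:

#include <LiquidCrystal.h>

// same pin mapping as above: RS, ENABLE, DB4, DB5, DB6, DB7
LiquidCrystal lcd(8, 7, 6, 5, 4, 3);

void setup()
{
  lcd.begin(16, 2);          // 16 columns, 2 rows
  lcd.print("Water Meter");  // prints on the first row, starting at column 0
}

void loop()
{
  lcd.setCursor(0, 1);         // first column of the second row
  lcd.print(millis() / 1000);  // seconds since startup, just to see it update
  lcd.print(" s   ");
  delay(1000);
}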
Converting your water meter to a web server

In the previous steps, you learned how to display your water flow sensor's readings and calculate the water flow rate and total volume on the Arduino serial monitor. In this step, you will learn how to integrate a simple web server with your water flow sensor and read your water flow sensor's readings remotely.

You can make an Arduino web server with the Arduino WiFi Shield or the Arduino Ethernet Shield. The following steps explain how to convert the Arduino water flow meter to a web server with the Arduino WiFi shield:

Remove all the wires you have connected to your Arduino in the previous sections of this article.
Stack the Arduino WiFi shield on the Arduino board using wire wrap headers. Make sure the Arduino WiFi shield is properly seated on the Arduino board.
Now, reconnect the wires from the water flow sensor to the WiFi shield. Use the same pin numbers as in the previous steps.
Connect the 9V DC power supply to the Arduino board.
Connect your Arduino to your PC using the USB cable and upload the next sketch. Once the upload is complete, remove the USB cable from the Arduino.
Open a new Arduino IDE and copy the sketch named B04844_03_05.ino.
Change the following two lines according to your WiFi network settings, as shown here:
char ssid[] = "MyHomeWiFi";
char pass[] = "secretPassword";
Verify and upload the sketch to the Arduino board.
Blow air through the water flow sensor using your mouth, or better, connect the water flow sensor to a water pipeline to see the actual operation with water.
Open your web browser, type the WiFi shield's IP address assigned by your network, and hit the Enter key:
http://192.168.1.177
You can see your water flow sensor's pulses per second, flow rate, and total volume on the web page. The page refreshes every 5 seconds to display updated information.

You can add an LCD screen to the Arduino WiFi shield as discussed in the previous step. However, remember that you can't use some of the pins on the WiFi shield because they are reserved for SD (pin 4), SS (pin 10), and SPI (pins 11, 12, and 13). We have not included the circuit and source code here in order to keep the Arduino sketch simple.
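The book's sketch B04844_03_05.ino is not reproduced here, but if you want a rough idea of how the web server part can work, the following minimal sketch uses the standard Arduino WiFi library to join the network and answer each HTTP request with the current readings, asking the browser to refresh every 5 seconds. The variable names, page markup, and the assumption that the pulse-counting code from earlier fills in pulse, flow_rate, and volume are all illustrative, not the book's code.

#include <SPI.h>
#include <WiFi.h>

char ssid[] = "MyHomeWiFi";       // your network SSID
char pass[] = "secretPassword";   // your network password

WiFiServer server(80);            // listen for HTTP requests on port 80

// Updated elsewhere by the pulse-counting code shown earlier (assumed names).
volatile unsigned long pulse = 0;
float flow_rate = 0.0;            // milliliters per second
float volume = 0.0;               // milliliters

void setup() {
  Serial.begin(9600);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(5000);                  // keep retrying until we join the network
  }
  server.begin();
  Serial.print("Server started at ");
  Serial.println(WiFi.localIP());
}

void loop() {
  // ... pulse counting and flow rate / volume calculation goes here ...

  WiFiClient client = server.available();
  if (client) {
    // Minimal response: ignore the request contents and send the readings.
    client.println("HTTP/1.1 200 OK");
    client.println("Content-Type: text/html");
    client.println("Refresh: 5");          // ask the browser to reload every 5 s
    client.println("Connection: close");
    client.println();
    client.print("<html><body><h1>Water meter</h1>");
    client.print("<p>Pulses per second: "); client.print(pulse); client.print("</p>");
    client.print("<p>Flow rate: "); client.print(flow_rate); client.print(" ml/s</p>");
    client.print("<p>Volume: "); client.print(volume); client.print(" ml</p>");
    client.println("</body></html>");
    delay(10);
    client.stop();
  }
}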
A little bit about plumbing

Typically, the direction of the water flow is indicated by an arrow mark on top of the water flow meter's enclosure. Also, you can mount the water flow meter either horizontally or vertically, according to its specifications. Some water flow meters can be mounted both horizontally and vertically.

You can install your water flow meter in a half-inch pipeline using normal BSP pipe connectors. The outer diameter of the connector is 0.78 inches and the inner thread size is half an inch. The water flow meter has threaded ends on both sides. Connect the threaded side of the PVC connectors to both ends of the water flow meter. Use thread seal tape to seal the connection, and then connect the other ends to an existing half-inch pipeline using PVC pipe glue or solvent cement. Make sure that you connect the water flow meter to the pipeline in the correct direction; see the arrow mark on top of the water flow meter for the flow direction.

BSP pipeline connector made of PVC

Securing the connection between the water flow meter and the BSP pipe connector using thread seal tape and PVC solvent cement. Image taken from https://www.flickr.com/photos/ttrimm/7355734996/

Summary

In this article, you gained hands-on experience and knowledge about water flow sensors and counting pulses while calculating and displaying them. Finally, you made a simple web server to allow users to read the water meter through the Internet. You can apply this to any type of liquid, but make sure to select the correct flow sensor, because some liquids react chemically with the material that the sensor is made of. You can Google and find which flow sensors support your preferred liquid type.

Resources for Article: Further resources on this subject: The Arduino Mobile Robot [article] Arduino Development [article] Getting Started with Arduino [article]

Welcome to the Land of BludBorne

Packt
23 Oct 2015
12 min read
In this article by Patrick Hoey, the author of Mastering LibGDX Game Development, we will jump into creating the world of BludBourne (that's our game!). We will first learn some concepts and tools related to creating tile based maps and then we will look into starting with BludBorne! We will cover the following topics in this article: Creating and editing tile based maps Implementing the starter classes for BludBourne (For more resources related to this topic, see here.) Creating and editing tile based maps For the BludBourne project map locations, we will be using tilesets, which are terrain and decoration sprites in the shape of squares. These are easy to work with since LibGDX supports tile-based maps with its core library. The easiest method to create these types of maps is to use a tile-based editor. There are many different types of tilemap editors, but there are two primary ones that are used with LibGDX because they have built in support: Tiled: This is a free and actively maintained tile-based editor. I have used this editor for the BludBourne project. Download the latest version from http://www.mapeditor.org/download.html. Tide: This is a free tile-based editor built using Microsoft XNA libraries. The targeted platforms are Windows, Xbox 360, and Windows Phone 7. Download the latest version from http://tide.codeplex.com/releases. For the BludBourne project, we will be using Tiled. The following figure is a screenshot from one of the editing sessions when creating the maps for our game:    The following is a quick guide for how we can use Tiled for this project: Map View (1): The map view is the part of the Tiled editor where you display and edit your individual maps. Numerous maps can be loaded at once, using a tab approach, so that you can switch between them quickly. There is a zoom feature available for this part of Tiled in the lower right hand corner, and can be easily customized depending on your workflow. The maps are provided in the project directory (under coreassetsmaps), but when you wish to create your own maps, you can simply go to File | New. In the New Map dialog box, first set the Tile size dimensions, which, for our project, will be a width of 16 pixels and a height of 16 pixels. The other setting is Map size which represents the size of your map in unit size, using the tile size dimensions as your unit scale. An example would be creating a map that is 100 units by 100 units, and if our tiles have a dimension of 16 pixels by 16 pixels then this would give is a map size of 1600 pixels by 1600 pixels. Layers (2): This represents the different layers of the currently loaded map. You can think of creating a tile map like painting a scene, where you paint the background first and build up the various elements until you get to the foreground. Background_Layer: This tile layer represents the first layer created for the tilemap. This will be the layer to create the ground elements, such as grass, dirt paths, water, and stone walkways. Nothing else will be shown below this layer. Ground_Layer: This tile layer will be the second later created for the tilemap. This layer will be buildings built on top of the ground, or other structures like mountains, trees, and villages. The primary reason is convey a feeling of depth to the map, as well as the fact that structural tiles such as walls have a transparency (alpha channel) so that they look like they belong on the ground where they are being created. 
Decoration_Layer: This third tile layer will contain elements meant to decorate the landscape in order to remove repetition and make more interesting scenes. These elements include rocks, patches of weeds, flowers, and even skulls. MAP_COLLISION_LAYER: This fourth layer is a special layer designated as an object layer. This layer does not contain tiles, but will have objects, or shapes. This is the layer that you will configure to create areas in the map that the player character and non-player characters cannot traverse, such as walls of buildings, mountain terrain, ocean areas, and decorations such as fountains. MAP_SPAWNS_LAYER: This fifth layer is another special object layer designated only for player and non-playable character spawns, such as people in the towns. These spawns will represent the various starting locations where these characters will first be rendered on the map. MAP_PORTAL_LAYER: This sixth layer is the last object layer designated for triggering events in order to move from one map into another. These will be locations where the player character walks over, triggering an event which activates the transition to another map. An example would be in the village map, when the player walks outside of the village map, they will find themselves on the larger world map. Tilesets (3): This area of Tiled represents all of the tilesets you will work with for the current map. Each tileset, or spritesheet, will get its own tab in this interface, making it easy to move between them. Adding a new tileset is as easy as clicking the New icon in the Tilesets area, and loading the tileset image in the New Tileset dialog. Tiled will also partition out the tilemap into the individual tiles after you configure the tile dimensions in this dialog. Properties (4): This area of Tiled represents the different additional properties that you can set for the currently selected map element, such as a tile or object. An example of where these properties can be helpful is when we create a portal object on the portal layer. We can create a property defining the name of this portal object that represents the map to load. So, when we walk over a small tile that looks like a town in the world overview map, and trigger the portal event, we know that the map to load is TOWN because the name property on this portal object is TOWN. After reviewing a very brief description of how we can use the Tiled editor for BludBourne, the following screenshots show the three maps that we will be using for this project. The first screenshot is of the TOWN map which will be where our hero will discover clues from the villagers, obtain quests, and buy armor and weapons. The town has shops, an inn, as well as a few small homes of local villagers:    The next screenshot is of the TOP_WORLD map which will be the location where our hero will battle enemies, find clues throughout the land, and eventually make way to the evil antagonist held up in his castle. The hero can see how the pestilence of evil has started to spread across the lands and lay ruin upon the only harvestable fields left:    Finally, we make our way to the CASTLE_OF_DOOM map, which will be where our hero, once leveled enough, will battle the evil antagonist held up in the throne room of his own castle. 
Here, the hero will find many high level enemies, as well as high valued items for trade:     Implementing the starter classes for BludBourne Now that we have created the maps for the different locations of BludBourne, we can now begin to develop the initial pieces of our source code project in order to load these maps, and move around in our world. The following diagram represents a high level view of all the relevant classes that we will be creating:   This class diagram is meant to show not only all the classes we will be reviewing in this article, but also the relationships that these classes share so that we are not developing them in a vacuum. The main entry point for our game (and the only platform specific class) is DesktopLauncher, which will instantiate BludBourne and add it along with some configuration information to the LibGDX application lifecycle. BludBourne will derive from Game to minimize the lifecycle implementation needed by the ApplicationListener interface. BludBourne will maintain all the screens for the game. MainGameScreen will be the primary gameplay screen that displays the different maps and player character moving around in them. MainGameScreen will also create the MapManager, Entity, and PlayerController. MapManager provides helper methods for managing the different maps and map layers. Entity will represent the primary class for our player character in the game. PlayerController implements InputProcessor and will be the class that controls the players input and controls on the screen. Finally, we have some asset manager helper methods in the Utility class used throughout the project. DesktopLauncher The first class that we will need to modify is DesktopLauncher, which the gdx-setup tool generated: package com.packtpub.libgdx.bludbourne.desktop; import com.badlogic.gdx.Application; import com.badlogic.gdx.Gdx; import com.badlogic.gdx.backends.lwjgl.LwjglApplication; import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration; import com.packtpub.libgdx.bludbourne.BludBourne; The Application class is responsible for setting up a window, handling resize events, rendering to the surfaces, and managing the application during its lifetime. Specifically, Application will provide the modules for dealing with graphics, audio, input and file I/O handling, logging facilities, memory footprint information, and hooks for extension libraries. The Gdx class is an environment class that holds static instances of Application, Graphics, Audio, Input, Files, and Net modules as a convenience for access throughout the game. The LwjglApplication class is the backend implementation of the Application interface for the desktop. The backend package that LibGDX uses for the desktop is called LWJGL. This implementation for the desktop will provide cross-platform access to native APIs for OpenGL. This interface becomes the entry point that the platform OS uses to load your game. 
The LwjglApplicationConfiguration class provides a single point of reference for all the properties associated with your game on the desktop: public class DesktopLauncher { public static void main (String[] arg) { LwjglApplicationConfiguration config = new LwjglApplicationConfiguration(); config.title = "BludBourne"; config.useGL30 = false; config.width = 800; config.height = 600; Application app = new LwjglApplication(new BludBourne(), config); Gdx.app = app; //Gdx.app.setLogLevel(Application.LOG_INFO); Gdx.app.setLogLevel(Application.LOG_DEBUG); //Gdx.app.setLogLevel(Application.LOG_ERROR); //Gdx.app.setLogLevel(Application.LOG_NONE); } } The config object is an instance of the LwjglApplicationConfiguration class where we can set top level game configuration properties, such as the title to display on the display window, as well as display window dimensions. The useGL30 property is set to false, so that we use the much more stable and mature implementation of OpenGL ES, version 2.0. The LwjglApplicationConfiguration properties object, as well as our starter class instance, BludBourne, are then passed to the backend implementation of the Application class, and an object reference is then stored in the Gdx class. Finally, we will set the logging level for the game. There are four values for the logging levels which represent various degrees of granularity for application level messages output to standard out. LOG_NONE is a logging level where no messages are output. LOG_ERROR will only display error messages. LOG_INFO will display all messages that are not debug level messages. Finally, LOG_DEBUG is a logging level that displays all messages. BludBourne The next class to review is BludBourne. The class diagram for BludBourne shows the attributes and method signatures for our implementation: The import packages for BludBourne are as follows: package com.packtpub.libgdx.bludbourne; import com.packtpub.libgdx.bludbourne.screens.MainGameScreen; import com.badlogic.gdx.Game; The Game class is an abstract base class which wraps the ApplicationListener interface and delegates the implementation of this interface to the Screen class. This provides a convenience for setting the game up with different screens, including ones for a main menu, options, gameplay, and cutscenes. The MainGameScreen is the primary gameplay screen that the player will see as they move their hero around in the game world: public class BludBourne extends Game { public static final MainGameScreen _mainGameScreen = new MainGameScreen(); @Override public void create(){ setScreen(_mainGameScreen); } @Override public void dispose(){ _mainGameScreen.dispose(); } } The gdx-setup tool generated our starter class BludBourne. This is the first place where we begin to set up our game lifecycle. An instance of BludBourne is passed to the backend constructor of LwjglApplication in DesktopLauncher which is how we get hooks into the lifecycle of LibGDX. BludBourne will contain all of the screens used throughout the game, but for now we are only concerned with the primary gameplay screen, MainGameScreen. We must override the create() method so that we can set the initial screen for when BludBourne is initialized in the game lifecycle. The setScreen() method will check to see if a screen is already currently active. If the current screen is already active, then it will be hidden, and the screen that was passed into the method will be shown. In the future, we will use this method to start the game with a main menu screen. 
We should also override dispose() since BludBourne owns the screen object references. We need to make sure that we dispose of the objects appropriately when we are exiting the game. Summary In this article, we first learned about tile based maps and how to create them with the Tiled editor. We then learned about the high level architecture of the classes we will have to create and implemented starter classes which allowed us to hook into the LibGDX application lifecycle. Have a look at Mastering LibGDX Game Development to learn about textures, TMX formatted tile maps, and how to manage them with the asset manager. Also included is how the orthographic camera works within our game, and how to display the map within the render loop. You can learn to implement a map manager that deals with collision layers, spawn points, and a portal system which allows us to transition between different locations seamlessly. Lastly, you can learn to implement a player character with animation cycles and input handling for moving around the game map. Resources for Article: Further resources on this subject: Finding Your Way [article] Getting to Know LibGDX [article] Replacing 2D Sprites with 3D Models [article]
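Although the book implements map handling inside its MapManager class, the following stand-alone Java fragment is a minimal illustration of how the Tiled layers described earlier in this article (for example, MAP_COLLISION_LAYER) can be loaded and read with LibGDX's TmxMapLoader. The map path, the helper name, and the rectangle handling are assumptions made for the example, not the book's actual code, and the loader needs a running LibGDX application so that Gdx.files is available.

import com.badlogic.gdx.maps.MapLayer;
import com.badlogic.gdx.maps.MapObject;
import com.badlogic.gdx.maps.objects.RectangleMapObject;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.math.Rectangle;
import com.badlogic.gdx.utils.Array;

public class MapLoadingExample {
    private static final String COLLISION_LAYER = "MAP_COLLISION_LAYER";

    public static Array<Rectangle> loadCollisionRectangles(String mapPath) {
        // Load the TMX file exported from Tiled (the path is an assumption).
        TiledMap map = new TmxMapLoader().load(mapPath);

        Array<Rectangle> collisionAreas = new Array<Rectangle>();
        MapLayer layer = map.getLayers().get(COLLISION_LAYER);
        if (layer == null) {
            return collisionAreas;   // the map has no collision layer
        }

        // Each shape drawn on the collision object layer in Tiled becomes a MapObject.
        for (MapObject object : layer.getObjects()) {
            if (object instanceof RectangleMapObject) {
                collisionAreas.add(((RectangleMapObject) object).getRectangle());
            }
        }
        return collisionAreas;       // remember to dispose() the map when done with it
    }
}

In the book's project this kind of lookup lives inside MapManager rather than a static helper, but the layer names and the Tiled object-layer workflow are the same.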

Build a Remote Control Car with Zigbee Part 2

Bill Pretty
22 Oct 2015
5 min read
In Part 1 we talked about some of the hardware that I used to create my Zigbee (XBee) controlled RC vehicles. In this part we will see how to use the software that comes for free from Digi.com.

Figure 1 XCTU Boot Screen

When you start XCTU, you will always see the screen shown above. The first thing we have to do is to find out which serial port is connected to the Zigbee module. There are a number of ways to communicate serially with the module. The hardware that I would recommend is shown below and is available from Sparkfun Electronics.

Figure 2 XBee Explorer USB

This adapter has a built-in FTDI USB-to-serial chip, which will install itself as a serial port on your system.

Figure 3 Select Serial Port

In this case we are using COM 9 – USB Serial Port. The next thing we have to do is to select the baud rate and other parameters: basically 9600 baud, 8 bits, no parity, and no flow control.

Figure 4 Port Parameters

Now that we have the serial port configured, we can search for XBee modules to configure. The XBee module is powered by the USB port of your host computer, so you should see some lights flash when you first plug in the adapter.

Setting Up the XBee Controller

Figure 5 Controller Setup

The figure above shows the first setup portion for the controller (the box with the joystick). The important thing to note is the DL, or Destination Address Low Byte; this is the address of the XBee module in the robot. This is the module where the inputs to this XBee will be "mirrored" on the outputs of the robot's XBee. But first we have to set them up as inputs on the controller.

Figure 6 I/O Settings

There are a few important things to be aware of in the figure above. First of all, D0 – D4, D6, and D7 are configured as inputs. But the most important thing is the "DIO Change Detect" value. This value acts like an interrupt mask. The "7E" value tells the XBee to only scan the inputs we set up for a change. These inputs must be pulled to a steady state, either high or low, or spurious commands will be sent to the robot.

Setting Up the XBee Receiver

Figure 7 XBee Receiver Setup

This is the setup screen for the robot. My robot's name in this case is "Gaucho". As you can see, the corresponding I/O pins are configured as outputs on this XBee module. Also note that the lower address byte of the MAC Address is "40BF1EF1". This is the DL byte that we have to enter into the controller setup. We are using a "star" network configuration, so the controller can only talk to one robot at a time. If you want to control several robots, you will have to change this part of the address to talk to a specific robot.

The XBee module is capable of driving an LED directly, so if you buy an adapter like the one below, you can test the controller before you install the XBee receiver in your robot. This is something I HIGHLY recommend.

Figure 8 XBee Breakout Board

Summary

In this part of the blog, I showed you how to set up the XBee modules using the free software from the folks at Digi. So at this point you have most of the information you need to start modifying or building your own XBee controlled robot. In part three of this article we will take a look at a very large robot that I have built and christened "Gaucho", because it began as a child's ride-on electric car (called Gaucho)!

Figure 9 Gaucho

About the Author

Bill began his career in electronics in the early 80's with a small telecom startup company that would eventually become a large multinational. He left there to pursue a career in commercial aviation in Canada's north.
From there he joined the Ontario Center for Microelectronics, a provincially funded research and development center. Bill left there for a career in the military as a civilian contractor at what was then called Defense Research Establishment Ottawa. That began a career which was to span the next 25 years, and continues today. Over the years Bill has acquired extensive knowledge in the field of technical security and started his own company in 2010. That company is called William Pretty Security Inc. and provides support in the form of research and development, to various law enforcement and private security agencies. Bill has published and presented a number of white papers on the subject of technical security. Bill was also a guest presenter for a number of years at the Western Canada Technical Conference, a law enforcement only conference held every year in western Canada. A selection of these papers is available for download from his web site. www.williamprettysecurity.com If you’re interested in building more of your own projects, then be sure to check out Bill’s titles available now in both print and eBook format! If you’re new to working with microcontrollers be sure to pick up Getting Started with Electronic Projects to start creating a whole host of great projects you can do in a single weekend with LM555, ZigBee, and BeagleBone components! If you’re looking for something more advanced to tinker with, then Bill’s other title - Building a Home Security System with BeagleBone – is perfect for hobbyists looking to make a bigger project!

Securing and Authenticating Web API

Packt
21 Oct 2015
9 min read
In this article by Rajesh Gunasundaram, author of ASP.NET Web API Security Essentials, we will cover how to secure a Web API using forms authentication and Windows authentication. You will also get to learn the advantages and disadvantages of using forms and Windows authentication in the Web API. In this article, we will cover the following topics:

The working of forms authentication
Implementing forms authentication in the Web API
Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism
Configuring Windows authentication
Enabling Windows authentication in Katana
Discussing Hawk authentication

(For more resources related to this topic, see here.)

The working of forms authentication

In forms authentication, the user credentials are submitted to the server using HTML forms. This can be used in the ASP.NET Web API only if it is consumed from a web application. Forms authentication is built on ASP.NET and uses the ASP.NET membership provider to manage user accounts. Forms authentication requires a browser client to pass the user credentials to the server. It sends the user credentials in the request and uses HTTP cookies for the authentication.

Let's list out the process of forms authentication step by step:

The browser tries to access a restricted action that requires an authenticated request.
If the browser sends an unauthenticated request, then the server responds with an HTTP status 302 Found and triggers the URL redirection to the login page.
To send the authenticated request, the user enters the username and password and submits the form.
If the credentials are valid, the server responds with an HTTP 302 status code that initiates the browser to redirect the page to the originally requested URI with the authentication cookie in the response.
Any request from the browser will now include the authentication cookie and the server will grant access to any restricted resource.

The following image illustrates the workflow of forms authentication:

Fig 1 – Illustrates the workflow of forms authentication

Implementing forms authentication in the Web API

To send the credentials to the server, we need an HTML form to submit. Let's use the HTML form or view of an ASP.NET MVC application. The steps to implement forms authentication in an ASP.NET MVC application are as follows:

Create New Project from the Start page in Visual Studio.
Select the Visual C# Installed Template named Web.
Choose ASP.NET Web Application from the middle panel.
Name the project Chapter06.FormsAuthentication and click OK.

Fig 2 – We have named the ASP.NET Web Application as Chapter06.FormsAuthentication

Select the MVC template in the New ASP.NET Project dialog.
Tick Web API under Add folders and core references and press OK, leaving Authentication set to Individual User Accounts.
Fig 3 – Select MVC template and check Web API in add folders and core references

In the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

Add a Web API controller named ContactsController with the following code snippet:

namespace Chapter06.FormsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

As you can see in the preceding code, we decorated the Get() action in ContactsController with the [Authorize] attribute. So, this Web API action can only be accessed by an authenticated request. An unauthenticated request to this action will make the browser redirect to the login page and enable the user to either register or log in. Once logged in, any request that tries to access this action will be allowed, as it is authenticated. This is because the browser automatically sends the session cookie along with the request, and forms authentication uses this cookie to authenticate the request.

It is very important to secure the website using SSL, as forms authentication sends unencrypted credentials.

Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism

First, let's see the advantages of Windows authentication. Windows authentication is built into Internet Information Services (IIS). It doesn't send the user credentials along with the request. This authentication mechanism is best suited for intranet applications and doesn't need the user to enter their credentials.

However, with all these advantages, there are a few disadvantages in the Windows authentication mechanism. It requires Kerberos, which works based on tickets, or NTLM, both Microsoft security protocols that must be supported by the client. The client's PC must be in an Active Directory domain. Windows authentication is not suitable for Internet applications, as the client may not necessarily be on the same domain.

Configuring Windows authentication

Let's implement Windows authentication in an ASP.NET MVC application, as follows:

Create New Project from the Start page in Visual Studio.
Select the Visual C# Installed Template named Web.
Choose ASP.NET Web Application from the middle panel.
Name the project Chapter06.WindowsAuthentication and click OK.

Fig 4 – We have named the ASP.NET Web Application as Chapter06.WindowsAuthentication

Change the Authentication mode to Windows Authentication.

Fig 5 – Select Windows Authentication in Change Authentication window

Select the MVC template in the New ASP.NET Project dialog.
Tick Web API under Add folders and core references and click OK.
Fig 6 – Select MVC template and check Web API in add folders and core references

Under the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

Add a Web API controller named ContactsController with the following code:

namespace Chapter06.FormsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

The Get() action in ContactsController is decorated with the [Authorize] attribute. However, in Windows authentication, any request is considered an authenticated request if the client is on the same domain. So, no explicit login process is required to send an authenticated request to call the Get() action. Note that Windows authentication is configured in the Web.config file:

<system.web>
    <authentication mode="Windows" />
</system.web>

Enabling Windows authentication in Katana

The following steps will create a console application and enable Windows authentication in Katana:

Create New Project from the Start page in Visual Studio.
Select the Visual C# Installed Template named Windows Desktop.
Select Console Application from the middle panel.
Name the project Chapter06.WindowsAuthenticationKatana and click OK:

Fig 7 – We have named the Console Application as Chapter06.WindowsAuthenticationKatana

Install the NuGet package named Microsoft.Owin.SelfHost from the NuGet Package Manager:

Fig 8 – Install NuGet Package named Microsoft.Owin.SelfHost

Add a Startup class with the following code snippet:

namespace Chapter06.WindowsAuthenticationKatana
{
    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
            listener.AuthenticationSchemes = AuthenticationSchemes.IntegratedWindowsAuthentication;
            app.Run(context =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("Hello Packt Readers!");
            });
        }
    }
}

Add the following code to the Main function in Program.cs:

using (WebApp.Start<Startup>("http://localhost:8001"))
{
    Console.WriteLine("Press any Key to quit Web App.");
    Console.ReadKey();
}

Now run the application and open http://localhost:8001/ in the browser:

Fig 8 – Open the Web App in a browser

If you capture the request using Fiddler, you will notice an Authorization Negotiate entry in the header of the request. Try calling http://localhost:8001/ in Fiddler and you will get a 401 Unauthorized response with WWW-Authenticate headers indicating that the server uses the Negotiate protocol, which relies on either Kerberos or NTLM, as follows:

HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/8.0
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Tue, 01 Sep 2015 19:35:51 IST
Content-Length: 6062
Proxy-Support: Session-Based-Authentication
Discussing Hawk authentication

Hawk authentication is a message authentication code-based HTTP authentication scheme that facilitates the partial cryptographic verification of HTTP messages. Hawk authentication requires a symmetric key to be shared between the client and server. Instead of sending the username and password to the server in order to authenticate the request, Hawk authentication uses these credentials to generate a message authentication code, which is passed to the server in the request for authentication.

Hawk authentication is mainly implemented in scenarios where you need to pass the username and password over an unsecured layer and no SSL is implemented on the server. In such cases, Hawk authentication protects the username and password and passes the message authentication code instead. For example, if you are building a small product where you control both the server and the client, and implementing SSL is too expensive for such a small project, then Hawk is a good option to secure the communication between your server and client. (A generic illustration of computing such a message authentication code appears at the end of this article.)

Summary

Voila! We just secured our Web API using forms- and Windows-based authentication. In this article, you learned about how forms authentication works and how it is implemented in the Web API. You also learned about configuring Windows authentication and got to know the advantages and disadvantages of using Windows authentication. Then you learned about implementing the Windows authentication mechanism in Katana. Finally, we had an introduction to Hawk authentication and the scenarios in which it is used.

Resources for Article: Further resources on this subject: Working with ASP.NET Web API [article] Creating an Application using ASP.NET MVC, AngularJS and ServiceStack [article] Enhancements to ASP.NET [article]
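Picking up the Hawk discussion above: the real Hawk scheme follows a precise specification (a normalized request string, a nonce, and a timestamp), and the snippet below is not a Hawk implementation. It is only a minimal, generic C# sketch of the underlying idea, deriving a message authentication code from a shared symmetric key and the request data instead of sending the credentials themselves; the key and message values are placeholders.

using System;
using System.Security.Cryptography;
using System.Text;

class MacExample
{
    static void Main()
    {
        // Shared symmetric key known to both client and server (placeholder value).
        byte[] sharedKey = Encoding.UTF8.GetBytes("my-shared-secret-key");

        // Data describing the request; Hawk itself uses a strictly defined
        // normalized string with method, URI, host, port, timestamp, and nonce.
        string requestData = "GET\n/api/Contacts\nexample.com\n1441115751";

        using (var hmac = new HMACSHA256(sharedKey))
        {
            byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(requestData));
            string macHeaderValue = Convert.ToBase64String(mac);

            // The client would send this value in an Authorization header, and the
            // server, holding the same key, recomputes and compares it.
            Console.WriteLine("mac=\"" + macHeaderValue + "\"");
        }
    }
}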

Monitoring and Troubleshooting Networking

Packt
21 Oct 2015
21 min read
This article by Muhammad Zeeshan Munir, author of the book VMware vSphere Troubleshooting, includes troubleshooting vSphere virtual distributed switches, vSphere standard virtual switches, vLANs, uplinks, DNS, and routing, which is one of the core issues a seasonal system engineer has to deal with on a daily basis. This article will cover all these topics and give you hands-on step-by-step instructions to manage and monitor your network resources. The following topics will be covered in this article: Different network troubleshooting commands VLANs troubleshooting Verification of physical trunks and VLAN configuration Testing of VM connectivity VMkernel interface troubleshooting Configuration command (Vicfg-vmknic and esxcli network ip interface) Use of Direct Console User Interface (DCUI) to verify configuration (For more resources related to this topic, see here.) Network troubleshooting commands Some of the commands that can be used for networking troubleshooting include net-dvs, Esxcli network, vicfg-route, vicfg-vmknic, vicfg-dns, vicfg-nics, and vicfg-vswitch. You can use the net-dvs command to troubleshoot VMware distributed dvSwitches. The command shows all the information regarding the VMware distributed dvSwtich configuration. The net-dvs command reads the information from the /etc/vmware/dvsdata.db file and displays all the data in the console. A vSphere host keeps updating its dvsdata.db file every five minutes. Connect to a vSphere host using PuTTY. Enter your user name and password when prompted. Type the following command in the CLI: net-dvs You will see something similar to the following screenshot: In the preceding screenshot, you can see that the first line represents the UUID of a VMware distributed switch. The second line shows the maximum number of ports a distributed switch can have. The line com.vmware.common.alias = dvswitch-Network-Pools represents the name of a distributed switch. The next line com.vmware.common.uplinkPorts: dvUplink1 to dvUplinkn shows the uplink ports a distributed switch has. The distributed switch MTU is set to 1,600 and you can see the information about CDP just below it. CDP information can be useful to troubleshoot connectivity issues. You can see com.vmware.common.respools.list listing networking resource pools, while com.vmware.common.host.uplinkPorts shows the ports numbers assigned to uplink ports. Further details about these uplink ports are explained as follows for each uplink port by their port number. You can also see the port statistics as displayed in the following screenshot. When you perform troubleshooting, these statistics can help you to check the behavior of the distributed switch and the ports. From these statistics, you can diagnose if the data packets are going in and out. As you can see in the following screenshot, all the metrics regarding packet drops are zero. If you find in your troubleshooting that the packets are being dropped, you can easily start finding the root cause of the problem: Unfortunately, the net-dvs command is very poorly documented, and usually, it is hard to find useful references. Moreover, it is not supported by VMware. However, you can use it with –h switch to display more options. Repairing a dvsdata.db file Sometimes, the dvsdata.db file of a vSphere host becomes corrupted and you face different types of distributed switch errors, for example, unable to create proxy DVS. In this case, when you try to run the net-dvs command on a vSphere host, it will fail with an error as well. 
As I have mentioned earlier, the net-dvs command reads data from the /etc/vmware/dvsdata.db file—it fails because it is unable to read data from the file. The possible cause for the corruption of the dvsdata.db file could be network outage; or when a vSphere host is disconnected from vCenter and deleted, it might have the information in its cache. You can resolve this issue by restoring the dvsdata.db file by following these steps: Through PuTTY, connect to a functioning vSphere host in your infrastructure. Copy the dvsdata.db file from the vSphere host. The file can be found in /etc/vmware/dvsdata.db. Transfer the copied dvsdata.db file to the corrupted vSphere host and overwrite it. Restart your vSphere host. Once the vSphere host is up and running, use PuTTY to connect to it. Run the net-dvs command. The command should be executed successfully this time without any errors. ESXCLI network The esxcli network command is a longtime friend of the system administrator and the support staff for troubleshooting network related issues. The esxcli network command will be used to examine different network configurations and to troubleshoot problems. You can type esxcli network to quickly see a help reference and the different options that can be used with the command. Let's walk through some useful esxcli network troubleshooting commands. Type the following command into your vSphere CLI to list all the virtual machines and the networks they are on. You can see that the command returned World ID, virtual machine name, number of ports, and the network: esxcli network vm list World ID  Name  Num Ports  Networks --------  ---------------------------------------------------  ---------  --------------- 14323012  cluster08_(5fa21117-18f7-427c-84d1-c63922199e05)          1  dvportgroup-372 Now use the World ID of a virtual machine returned by the last command to list all the ports the virtual machine is currently using. 
You can see the virtual switch name, MAC address of the NIC, IP address, and uplink port ID: esxcli network vm port list -w 14323012 Port ID: 50331662 vSwitch: dvSwitch-Network-Pools Portgroup: dvportgroup-372 DVPort ID: 1063 MAC Address: 00:50:56:01:00:7e IP Address: 0.0.0.0 Team Uplink: all(2) Uplink Port ID: 0 Active Filters: Type the following command in the CLI to list the statistics of the virtual switch—you need to replace the port ID as returned by the last command after –p flag: esxcli network port stats get -p 50331662 Packet statistics for port 50331662 Packets received: 10787391024 Packets sent: 7661812086 Bytes received: 3048720170788 Bytes sent: 154147668506 Broadcast packets received: 17831672 Broadcast packets sent: 309404 Multicast packets received: 656 Multicast packets sent: 52 Unicast packets received: 10769558696 Unicast packets sent: 7661502630 Receive packets dropped: 92865923 Transmit packets dropped: 0 Type the following command to list complete information about the network card of the virtual machine: esxcli network nic stats get -n vmnic0 NIC statistics for vmnic0 Packets received: 2969343419 Packets sent: 155331621 Bytes received: 2264469102098 Bytes sent: 46007679331 Receive packets dropped: 0 Transmit packets dropped: 0 Total receive errors: 78507 Receive length errors: 0 Receive over errors: 22 Receive CRC errors: 0 Receive frame errors: 0 Receive FIFO errors: 78485 Receive missed errors: 0 Total transmit errors: 0 Transmit aborted errors: 0 Transmit carrier errors: 0 Transmit FIFO errors: 0 Transmit heartbeat errors: 0 Transmit window errors: 0 A complete reference of the ESXCli Network command can be found here at https://goo.gl/9OMbVU. All the vicfg-* commands are very helpful and easy to use. I will encourage you to learn in order to make your life easier. Here are some of the vicfg-* commands relevant to network troubleshooting: vicfg-route: We will use this command for how to add or remove IP routes and how to create and delete default IP Gateways. vicfg-vmknic: We will use this command to perform different operations on VMkernel NICs for vSphere hosts. vicfg-dns: This command will be used to manipulate DNS information. vicfg-nics: We will use this command to manipulate vSphere Physical NICs. vicfg-vswitch: We will use this command to to create, delete, and modify vswitch information. Troubleshooting uplinks We will use the vicfg-nics command to manage physical network adapters of vSphere hosts. The vicfg-nics command can also be used to set up the speed, VMkernel name for the uplink adapters, duplex setting, driver information, and link state information of the NIC. Connect to your vMA appliance console and set up the target vSphere host: vifptarget --set crimv3esx001.linxsol.com List all the network cards available in the vSphere host. See the following screenshot for the output: vicfg-nics –l You can see that my vSphere host has five network cards from vmnic0 to vmnic5. You are able to see the PCI and driver information. The link state for the all the network cards is up. You can also see two types of network card speeds: 1000 Mbs and 9000 Mbs. There is also a card name in the Description field, MTU, and the Mac address for the network cards. You can set up a network card to auto-negotiate as follows: vicfg-nics --auto vimnic0 Now let's set the speed of vmnic0 to 1000 and its duplex settings to full: vicfg-nics --duplex full --speed 1000 vmnic0 Troubleshooting virtual switches The last command we will discuss in this article is vicfg-vswitch. 
The vicfg-vswitch command is a very powerful command that can be used to manipulate the day-to-day operations of a virtual switch. I will show you how to create and configure port groups and virtual switches. Set up a vSphere host in the vMA appliance in which you want to get information about virtual switches: vifptarget --set crimv3esx001.linxsol.com Type the following command to list all the information about the switches the vSphere host has. You can see the command output in the screenshot that follows: vicfg-vswitch -l You can see that the vSphere host has one virtual switch and two virtual NICs carrying traffic for the management network and for the vMotion. The virtual switch has 128 ports, and 7 of them are in used state. There are two uplinks to the switch with MTU set to 1500, while two VLANS are being used: one for the management network and one for the vMotion traffic. You can also see three distributed switches named OpenStack, dvSwitch-External-Networks, and dvSwitch-Network-Pools. Prefixing dv with the distributed switch name is a command practice, and it can help you to easily recognize a distributed switch. I will go through adding a new virtual switch: vicfg-vswitch --add vSwitch002 This creates a virtual switch with 128 ports and MTU of 1500. You can use the --mtu flag to specify a different MTU. Now add an uplink adapter vnic02 to the newly created virtual switch vSwitch002: vicfg-vswitch --link vmnic0 vSwitch002 To add a port group to the virtual switch, use the following command: vicfg-vswitch --add-pg portgroup002 vSwitch002 Now add an uplink adapter to the port group: vicfg-vswitch --add-pg-uplink vmnic0 --pg portgroup002 vSwitch002 We have discussed all the commands to create a virtual switch and its port groups and to add uplinks. Now we will see how to delete and edit the configuration of a virtual switch. An uplink NIC from the port group can be deleted using –N flag. Remove vmnic0 from the portgroup002: vicfg-vswitch --del-pg-uplink vmnic0 --pg portgroup002 vSwitch002 You can delete the recently created port group as follows: vicfg-vswitch --del-pg portgroup002 vSwitch002 To delete a switch, you first need to remove an uplink adapter from the virtual switch. You need to use the –U flag, which unlinks the uplink from the switch: vicfg-vswitch --unlink vmnic0 vSwitch002 You can delete a virtual switch using the –d flag. Here is how you do it: vicfg-vswitch --delete vSwitch002 You can check the Cisco Discovery Protocol (CDP) settings by using the --get-cdp flag with the vicfg-vswitch command. The following command resulted in putting the CDP in the Listen state, which indicates that the vSphere host is configured to receive CDP information from the physical switch: vi-admin@vma:~[crimv3esx001.linxsol.com]> vicfg-vswitch --get-cdp vSwitch0 listen You can configure CDP options for the vSphere host to down, listen, or advertise. In the Listen mode, the vSphere host tries to discover and publish this information received from a Cisco switch port, though the information of the vSwitch cannot be seen by the Cisco device. In the Advertise mode, the vSphere host doesn't discover and publish the information about the Cisco switch; instead, it publishes information about its vSwitch to the Cisco switch device. vicfg-vswitch --set-cdp both vSwitch0 Troubleshooting VLANs Virtual LANS or VLANs are used to separate the physical switching segment into different logical switching segments in order to segregate the broadcast domains. 
VLANs not only provide network segmentation but also provide us a method of effective network management. It also increases the overall network security, and nowadays, it is very commonly used in infrastructure. If not set up correctly, it can lead your vSphere host to no connectivity, and you can face some very common problems where you are unable to ping or resolve the host names anymore. Some common errors are exposed, such as Destination host unreachable and Connection failed. A Private VLAN (PVLAN) is an extended version of VLAN that divides logical broadcast domain into further segments and forms private groups. PVLANs are divided into primary and secondary PVLANs. Primary PVLAN is the VLAN distributed into smaller segments that are called primary. These then host all the secondary PVLANs within them. Secondary PVLANs live within primary VLANS, and individual secondary VLANs are recognized by VLAN IDs linked to them. Just like their ancestor VLANs, the packets that travel within secondary VLANS are tagged with their associated IDs. Then, the physical switch recognizes if the packets are tagged as isolated, community, or promiscuous. As network troubleshooting involves taking care of many different aspects, one aspect you will come across in the troubleshooting cycle is actually troubleshooting VLANS. vSphere Enterprise Plus licensing is a requirement to connect a host using a virtual distributed switch and VLANs. You can see the three different network segments in the following screenshot. VLAN A connects all the virtual machines on different vSphere hosts; VLAN B is responsible for carrying out management network traffic; and VLAN C is responsible for carrying out vMotion-related traffic. In order to create PVLANs on your vSphere host, you also need the support of a physical switch: For detailed information about the vSphere network, refer to the VMware official networking guide for vSphere 5.5 at http://goo.gl/SYySFL. Verifying physical trunks and VLAN configuration The first and most important step to troubleshooting your VLAN problem is to look into the VLAN configuration of your vSphere host. You should always start by verifying it. Let's walk through how to verify the network configuration of the management network and VLAN configuration from the vSphere client: Open and log in to your vSphere client. Click on the vSphere host you are trying to troubleshoot. Click on the Configuration menu and choose Networking and then Properties of the switch you are troubleshooting. Choose the network you are troubleshooting from the list, and click on Edit. This will open a new window. Verify the VLAN ID for Management Network. Match the ID of the VLAN provided by your network administrator. Verifying VLAN configuration from CLI Following are the steps for verifying VLAN configuration from CLI: Log in to vSphere CLI. 
Type the following command in the console: esxcfg-vswitch -l Alternatively, in the vMA appliance, type the vicfg-vswitch command—the output is similar for both commands: vicfg-vswitch –l The output of the excfg-vswitch –l command is as follows: Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks vSwitch0         128         7           128               1500    vmnic3,vmnic2   PortGroup Name        VLAN ID  Used Ports  Uplinks   vMotion               2231     1           vmnic3,vmnic2   Management Network    2230     1           vmnic3,vmnic2  ---Omitted output--- The output of the vicfg-vswitch –l command is as follows: Switch Name     Num Ports       Used Ports      Configured Ports    MTU     Uplinks vSwitch0        128             7               128                 1500    vmnic2,vmnic3    PortGroup Name                VLAN ID   Used Ports      Uplinks    vMotion                       2231      1               vmnic2,vmnic3    Management Network            2230      1               vmnic3,vmnic2 --Omitted output--- Match it with your network configuration. If the VLAN ID is incorrect or missing, you can add or edit it using the following command from the vSphere CLI: esxcfg-vswitch –v 2233 –p "Management Network" vSwitch0 To add or edit the VLAN ID from the vMA appliance, use the following command: vicfg-vswitch --vlan 2233 --pg "Management Network" vSwitch0 Verifying VLANs from PowerCLI Verifying information about VLANs from the PowerCLI is fairly simple. Type the following command into the console after connecting with vCenter using Connect-VIServer: Get-VirtualPortGroup –VMHost crimv3esx001.linxsol.com | select Name, VirtualSwitch VLanID Name                                           VirtualSwitch                                  VlanId ----                                                -------------                                     ----- vMotion                                        vSwitch0                                      2231 Management Network                 vSwitch0                                       2233 Verifying PVLANs and secondary PVLANs When you have configured PVLANs or secondary PVLANs in your vSphere infrastructure, you may arrive at a situation where you need to troubleshoot them. This topic will provide you some tips to obtain and view information about PVLANs and secondary PVLANs, as follows: Log in to the vSphere client and click on Networking. Select a distributed switch and right-click on it. From the menu, choose Edit Settings and click on it. This will open the Distributed Switch Settings window. Click on the third tab named Private VLAN. In the section on the left named Primary private VLAN ID, verify the VLAN ID provided by your network engineer. You can verify the VLAN ID of the secondary PVLAN in the next section on the right. Testing virtual machine connectivity Whenever you are troubleshooting, virtual-machine-to-virtual-machine testing is very important. It helps you to isolate the problem domain to a smaller scope. When performing virtual-machine-to-virtual-machine testing, you should always move virtual machines to a single vSphere host. You can then start troubleshooting the network using basic commands, such as ping. If ping works, you are ready to test it further and move the virtual machines to other hosts, and if it still doesn't work, it most likely is a configuration problem of a physical switch or is likely to be a mismatched physical trunk configuration. 
The most common problem in this scenario is a problematic physical switch configuration.

Troubleshooting VMkernel interfaces

In this section, we will see how to troubleshoot VMkernel interfaces:

Confirm VLAN tagging
Ping to check connectivity
vicfg-vmknic
esxcli network ip interface for local configuration
esxcli network ip interface list
Add or remove
Set
esxcli network ip interface ipv4 get

You should know how to use these commands to test whether everything is working; a consolidated example of the esxcli commands appears at the end of this article. You should be able to ping to ensure that connectivity exists. We will use the vicfg-vmknic command to configure vSphere VMkernel NICs. Let's create a new VMkernel NIC in a vSphere host using the following steps:

Log in to your VMware vSphere CLI.
Type the following command to create a new VMkernel NIC:
vicfg-vmknic -h crimv3esx001.linxsol.com --add --ip 10.2.0.10 -n 255.255.255.0 'portgroup01'
You can enable vMotion using the vicfg-vmknic command as follows: vicfg-vmknic --enable-vmotion. You will not be able to enable vMotion from ESXCLI. vMotion provides migration of your virtual machines with zero downtime.
You can delete an existing VMkernel NIC as follows:
vicfg-vmknic -h crimv3esx001.linxsol.com --delete 'portgroup01'
Now check which VMkernel NICs are available in the system by typing the following command:
vicfg-vmknic -l

Verifying configuration from DCUI

When you successfully install vSphere, the first yellow screen that you see is called the vSphere DCUI. The DCUI is a frontend management system that helps perform some basic system administration tasks. It also offers the best way to troubleshoot some problems that may be difficult to troubleshoot through vMA, vCLI, or PowerCLI. Further, it is very useful when your host becomes unresponsive in vCenter or is not accessible from any of the management tools. Some useful tasks that can be performed using the DCUI are as follows:

Configuring the Lockdown mode
Checking connectivity of the Management Network by ping
Configuring and restarting network settings
Restarting management agents
Viewing logs
Resetting the vSphere configuration
Changing the root password

Verifying network connectivity from DCUI

The vSphere host automatically assigns the first network card available to the system for the management network. Moreover, the default installation of the vSphere host does not let you set up VLAN tags until the VMkernel has been loaded. Verifying network connectivity from the DCUI is important but easy. To do so, follow these steps:

Press F2 and enter your root user name and password. Click OK.
Use the cursor keys to go down to the Test Management Network option.
Click Enter, and you will see a new screen. Here you can enter up to three IP addresses and a host name to be resolved.
You can also type your gateway address on this screen to see if you are able to reach your gateway. In the host name field, you can enter your DNS server name to test if the name resolves successfully.
Press Esc to get back and Esc again to log off from the vSphere DCUI.

Verifying management network from DCUI

You can also verify the settings of your management network from the DCUI.

Press F2 and enter your root user name and password. Click OK.
Use the cursor keys to go down to the Configure Management Network option and click Enter.
Click Enter again after selecting the first option, Network Adapters.
On the next screen, you will see a list of all the network adapters your system has.
It will show you the Device Name, Hardware Type, Label, Mac Address of the network card, and the status as Connected or Disconnected. From the given network cards, you can select or deselect any of the network card by pressing the space Bar on your keyboard. Press Esc to get back and Esc again to log off from the vSphere DCUI. As you can see in the preceding screenshot, you can also configure the IP address and DNS settings for your vSphere host. You can also use DCUI to configure VLANs and DNS Suffix for your vSphere host. Summary In this article, for troubleshooting, we took a deep dive into the troubleshooting commands and some of the monitoring tools to monitor network performance. The various platforms to execute different commands help you to isolate your troubleshooting techniques. For example, for troubleshooting a single vSphere host, you may like to use esxcli, but for a bunch of vSphere hosts you would like to automate scripting tasks from PowerCLI or from a vMA appliance. Resources for Article: Further resources on this subject: UPGRADING VMWARE VIRTUAL INFRASTRUCTURE SETUPS [article] VMWARE VREALIZE OPERATIONS PERFORMANCE AND CAPACITY MANAGEMENT [article] WORKING WITH VIRTUAL MACHINES [article]
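As mentioned in the Troubleshooting VMkernel interfaces section above, here is a small consolidated example of the esxcli commands named there, run in a vSphere host shell. This is only an illustrative sketch: the interface name vmk1, the port group name, and the addresses are placeholders, and you can confirm the exact options available on your build by appending --help to each namespace.

# List all VMkernel interfaces and the port groups they are attached to
esxcli network ip interface list

# Show the IPv4 configuration (address, netmask, type) of the VMkernel interfaces
esxcli network ip interface ipv4 get

# Add a new VMkernel interface on an existing port group (names are placeholders)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=portgroup01

# Give the new interface a static IPv4 address
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.2.0.10 --netmask=255.255.255.0 --type=static

# Remove the interface again when it is no longer needed
esxcli network ip interface remove --interface-name=vmk1

# Test connectivity from a specific VMkernel interface
vmkping -I vmk1 10.2.0.1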