Search Algorithms for Game Play: Going from A to B

Daan van Berkel
26 Oct 2015
7 min read
In a lot of games, for example tower defense games or other real-time strategy games, enemies need to progress from A, over the playing field, towards B. One game element could be obstructing the path of the enemies so that there is more time to attack. If you are interested in creating this sort of game yourself, you need a clear understanding of how an enemy could navigate its way around the game. In this blog post we are going to discuss an algorithm to determine the shortest path from A to B.

The notion of a graph is used to formalize our thinking. Most importantly, A and B will be vertices of a graph, and we construct a path that follows some of the edges of the graph, starting from A until we reach B. We will allow the edges to be weighted, to signify the difficulty of traversing that particular edge. The algorithm will be described in a platform-independent way, so it can easily be translated into various languages and frameworks.

Graphs

One helpful tool in finding the shortest path is the graph. A graph is a set of vertices, or points, connected by edges, or arcs. You are allowed to go from one vertex to another if they are connected by an edge. Below you can see an example of a graph that is laid out like a hexagonal grid. In this image the circles represent the vertices and the lines represent the edges.

Path

In the image above, two vertices are special: one is colored red, the other green. We would like to know a shortest path from the red vertex to the green vertex. Once we have found a shortest path, we will indicate it by highlighting the vertices we follow along the path.

Algorithm

The following series of images visualizes the algorithm we will use to find a path from the red vertex to the green vertex. It starts by picking a vertex to examine more closely. Because we are at the start, we examine the red vertex. We look at all the neighbors of the vertex we are examining, that is, the vertices connected to it by an edge. For each neighbor we now know a path from the red vertex to that particular vertex. Because we are not at the green vertex yet, we include the neighbors of the vertex we are examining in the frontier. The frontier is the set of vertices for which we know a path from the red vertex but which we have not examined yet; in other words, they are the candidates to examine next.

Next, we pick a vertex from the frontier and continue the process. We will have something to say about how to pick a vertex from the frontier shortly; for now we just pick one. From this vertex we examine its neighbors that we have not yet visited, and add those to the frontier. If we continue this process, we will eventually have visited the green vertex, and we will know a shortest path from the red vertex to the green vertex.

Pseudocode

We will write down, in pseudocode, an algorithm that can find a shortest path. We assume we are given a graph G, a start vertex start of G, and a finish vertex finish of G. We are interested in a shortest path from start to finish, and the following algorithm will provide us with one.

for (var v of G.vertices) {
    v.distance = Number.POSITIVE_INFINITY;
}
start.distance = 0;
var frontier = [start];
var visited = [];
while (frontier.length > 0) {
    var current = pickOneOf(frontier);
    frontier.splice(frontier.indexOf(current), 1); // remove the picked vertex from the frontier
    for (var neighbour of current.neighbors()) {
        if (!visited.includes(neighbour)) {
            neighbour.distance = Math.min(neighbour.distance, current.distance + 1);
            frontier.push(neighbour);
        }
    }
    visited.push(current);
}

We will now annotate the algorithm.

Initialization

We need to initialize some variables that are needed throughout the algorithm.

for (var v of G.vertices) {
    v.distance = Number.POSITIVE_INFINITY;
}
start.distance = 0;
var frontier = [start];
var visited = [];

We first set the distance of all vertices, besides the start vertex, to ∞; the distance of the start vertex is set to zero. The frontier will be the collection of vertices for which we know a path but which still need to be examined; we initialize it to include the start vertex. Visited will be used to keep track of all the vertices that have been examined. Because we still need to examine the start vertex, we leave it empty for now.

Loop

We are going to loop until the frontier is empty.

while (frontier.length > 0) {
    // examine a particular vertex in the frontier
}

Because it is possible in the hexagonal grid to reach every vertex from the start vertex, we will end up knowing a shortest path to every vertex. If we are only interested in the shortest path to the finish vertex, the condition could be !visited.includes(finish), that is, continue as long as we have not visited the finish vertex.

Pick Vertex from Frontier

Within the loop we first pick a vertex from the frontier to examine, and remove it from the frontier.

var current = pickOneOf(frontier);

This is the heart of the algorithm. Dijkstra, a famous computer scientist, proved that if we pick a vertex of the frontier with the smallest distance, we will end up with a shortest path. Pseudocode for the pickOneOf function could look like:

function pickOneOf(frontier) {
    var best = frontier[0];
    for (var candidate of frontier) {
        if (candidate.distance < best.distance) {
            best = candidate;
        }
    }
    return best;
}

Process Neighbors

The current vertex is a vertex with the smallest distance to the start vertex, so we can now determine the distance to the start vertex for each neighbor of the current vertex. We only need to include vertices that we have not visited yet.

for (var neighbour of current.neighbors()) {
    if (!visited.includes(neighbour)) {
        /* update neighbour info */
    }
}

Update Neighbour Info

We can now update the information about the neighbour: if we have found a shorter path, we update the distance, and we add the neighbour to the frontier.

neighbour.distance = Math.min(neighbour.distance, current.distance + 1);
frontier.push(neighbour);

Mark current visited

Finally, when we are done examining the current vertex, we add current to the collection of visited vertices.

visited.push(current);

Edge Weights

We have included an image where this distance is shown for each vertex. The story gets interesting when we alter the weights of the edges, that is, the cost of traveling over a particular edge. The algorithm needs only a small change: when we update the neighbour info, we need to use the weight of the edge instead of the default weight of 1. In the picture below we have altered the weights of the edges, and the algorithm still finds a shortest path. The weight of each edge is indicated by its color: a black edge has weight 1, a blue edge weight 3, an orange edge weight 5, and a red edge weight 10.
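The weighted update, as a minimal sketch: the edges() accessor and the edge.other(...) helper are assumptions made for illustration (the original pseudocode only names neighbors()), but the relaxation line is exactly the change described above:

for (var edge of current.edges()) {          // assumed accessor: the edges incident to current
    var neighbour = edge.other(current);     // assumed helper: the vertex at the other end of edge
    if (!visited.includes(neighbour)) {
        // relax using the edge's weight instead of the default weight of 1
        neighbour.distance = Math.min(neighbour.distance, current.distance + edge.weight);
        frontier.push(neighbour);
    }
}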
Live

Seeing an algorithm in action can help you understand it. You can try this out live in your browser with the following visualization.

Conclusion

We learned that Dijkstra's algorithm can be used to find a shortest path between two vertices of a graph. This, in turn, can be used to guide enemies over the playing field.

About the author

Daan van Berkel is an enthusiastic software craftsman with a knack for presenting technical details in a clear and concise manner.
Driven by the desire for understanding complex matters, Daan is always on the lookout for innovative uses of software.

Getting Hands-on with I/O, Redirection Pipes, and Filters

Packt
26 Oct 2015
28 min read
In this article by Sinny Kumari, author of the book Linux Shell Scripting Essentials, we will cover I/O redirection, pipes, and filters. In day-to-day work, we come across different kinds of files, such as text files and source code files from different programming languages (for example, file.sh, file.c, and file.cpp). While working, we often perform various operations on files or directories, such as searching for a given string or pattern, replacing strings, or printing a few lines of a file. Performing these operations is not easy if we have to do it manually: manually searching for a string or pattern in a directory containing thousands of files can take months and has a high chance of error.

The shell provides many powerful commands to make our work easier, faster, and error-free. Shell commands can manipulate and filter text from different streams, such as standard input or a file; some of these commands are grep, sed, head, tr, and sort. The shell also comes with a feature for redirecting the output of one command to another with the pipe ('|'); using a pipe helps avoid the creation of unnecessary temporary files. One of the best qualities of these commands is that they come with man pages: we can go directly to the man page and see what features they provide by running the man command. Most commands also have options such as --help to print usage information and --version to print the version number.

This article will cover the following topics in detail:

- Standard I/O and error streams
- Redirecting the standard I/O and error streams
- Pipe and pipelines – connecting commands
- Regular expressions
- Filtering output using grep

Standard I/O and error streams

In shell programming, there are different ways to provide input (for example, via a keyboard or terminal) and to display output (for example, on the terminal or to a file) and any error (for example, on the terminal) during the execution of a command or program. The following examples show the input, output, and error while running commands.

Input from the user via the keyboard, read by a program from the standard input stream (the terminal):

$ read -p "Enter your name:"
Enter your name:Foo

Output printed on the standard output stream (the terminal):

$ echo "Linux Shell Scripting"
Linux Shell Scripting

An error message printed on the standard error stream (the terminal):

$ cat hello.txt
cat: hello.txt: No such file or directory

When a program executes, by default, three files are opened with it: stdin, stdout, and stderr. The following table provides a short description of them:

File descriptor number  File name  Description
0                       stdin      Standard input, read from the terminal
1                       stdout     Standard output, written to the terminal
2                       stderr     Standard error, written to the terminal

File descriptors

File descriptors are integer numbers representing opened files in an operating system. A unique file descriptor number is assigned to each opened file, counting up from 0. Whenever a new process is created in Linux, standard input, output, and error files are provided to it, along with any other opened files it needs. To see which open file descriptors are associated with a process, consider the following example: run an application and get its process ID first.
Consider running bash as an example. To get the PIDs of bash:

$ pidof bash
2508 2480 2464 2431 1281

We see that multiple bash processes are running. Take one of the bash PIDs, for example 2508, and run the following command:

$ ls -l /proc/2508/fd
total 0
lrwx------. 1 sinny sinny 64 May 20 00:03 0 -> /dev/pts/5
lrwx------. 1 sinny sinny 64 May 20 00:03 1 -> /dev/pts/5
lrwx------. 1 sinny sinny 64 May 19 23:22 2 -> /dev/pts/5
lrwx------. 1 sinny sinny 64 May 20 00:03 255 -> /dev/pts/5

We see that the open file descriptors 0, 1, and 2 are associated with this bash process, and currently all of them point to /dev/pts/5 (pts stands for pseudo-terminal slave). So, whatever input, output, and error this bash process produces will be written to /dev/pts/5. However, pts files are pseudo files whose contents live in memory, so you won't see anything when you open the file.

Redirecting the standard I/O and error streams

We have the option to redirect standard input, output, and errors: for example, to a file, to another command, or to another stream. Redirection is useful in different ways. For example, suppose we have a bash script whose output and errors are both displayed on standard output, that is, the terminal; we can avoid mixing errors with output by redirecting one or both of them to a file. Different operators are used for redirection. The following table shows some of the operators used for redirection, along with their descriptions:

Operator  Description
>         Redirects standard output to a file
>>        Appends standard output to a file
<         Redirects standard input from a file
>&        Redirects standard output and error to a file
>>&       Appends standard output and error to a file
|         Redirects an output to another command
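As a quick sketch of the combined form from the table (the filename here is made up; in bash, cmd >& file is equivalent to cmd > file 2>&1):

$ ls /root /tmp >& both.txt   # the listing of /tmp and any permission error for /root both land in both.txt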
Redirecting standard output

The output of a program or command can be redirected to a file. Saving output to a file can be useful when we have to look at the output in the future; a large set of output files for a program run with different inputs can be used to study the program's output behavior. For example, redirecting echo output to output.txt looks as follows:

$ echo "I am redirecting output to a file" > output.txt
$

We can see that no output is displayed on the terminal. This is because the output was redirected to output.txt. The '>' (greater than) operator tells the shell to redirect the output to whatever filename is mentioned after the operator; in our case, it is output.txt:

$ cat output.txt
I am redirecting output to a file

Now, let's add some more output to the output.txt file:

$ echo "I am adding another line to file" > output.txt
$ cat output.txt
I am adding another line to file

We notice that the previous content of the output.txt file got erased and only the latest redirected content remains. To retain the previous content and append the latest redirected output to the file, use the '>>' operator:

$ echo "Adding one more line" >> output.txt
$ cat output.txt
I am adding another line to file
Adding one more line

We can also redirect the output of a program/command to another command in bash using the '|' (pipe) operator:

$ ls /usr/lib64/ | grep libc.so
libc.so
libc.so.6

In this example, we gave the output of ls to the grep command using the '|' (pipe) operator, and grep printed the matching search results for the libc.so library.

Redirecting standard input

Instead of a command getting its input from standard input, the input can be redirected from a file using the < (less than) operator. For example, suppose we want to count the number of words in the output.txt file created in the Redirecting standard output section:

$ cat output.txt
I am adding another line to file
Adding one more line
$ wc -w < output.txt
11

We can sort the content of output.txt:

$ sort < output.txt   # Sorting output.txt on stdout
Adding one more line
I am adding another line to file

We can also give a patch file as input to the patch command in order to apply the changes in a diff to a source file. The patch command is used to apply additional changes made to a file; the additional changes are provided as a diff file, which contains the differences between the original and the modified file as produced by the diff command. For example, suppose we have a patch file to apply to output.txt:

$ cat patch.diff   # Content of patch.diff file
2a3
> Testing patch command
$ patch output.txt < patch.diff   # Applying patch.diff to output.txt
$ cat output.txt   # Checking output.txt content after applying patch
I am adding another line to file
Adding one more line
Testing patch command

Redirecting standard error

There is a possibility of getting an error while executing a command/program in bash for different reasons, such as invalid input, insufficient arguments, a missing file, or a bug in the program:

$ cd /root   # Doing cd to root directory from a normal user
bash: cd: /root/: Permission denied

Bash prints the error on the terminal, saying permission denied. In general, errors are printed on the terminal so that it's easy for us to know the reason for an error. Printing both errors and output on the terminal can be annoying, because we have to look at each line manually and check whether the program encountered any error:

$ cd / ; ls; cat hello.txt; cd /bin/; ls *.{py,sh}

Here we run a series of commands: first cd to /, ls the contents of /, cat the file hello.txt, cd to /bin, and list the files matching *.py and *.sh in /bin/. The output will be as follows:

bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
cat: hello.txt: No such file or directory
alsa-info.sh kmail_clamav.sh sb_bnfilter.py sb_mailsort.py setup-nsssysinit.sh
amuFormat.sh kmail_fprot.sh sb_bnserver.py sb_mboxtrain.py struct2osd.sh
core_server.py kmail_sav.sh sb_chkopts.py sb_notesfilter.py

We see that hello.txt doesn't exist in the / directory, and because of this an error is printed on the terminal along with the other output. We can redirect the error as follows:

$ (cd / ; ls; cat hello.txt; cd /bin/; ls *.{py,sh}) 2> error.txt
bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
alsa-info.sh kmail_clamav.sh sb_bnfilter.py sb_mailsort.py setup-nsssysinit.sh
amuFormat.sh kmail_fprot.sh sb_bnserver.py sb_mboxtrain.py struct2osd.sh
core_server.py kmail_sav.sh sb_chkopts.py sb_notesfilter.py

We can see that the error has been redirected to the error.txt file. To verify, check the content of error.txt:

$ cat error.txt
cat: hello.txt: No such file or directory

Multiple redirection

We can redirect stdin, stdout, and stderr together, or any combination of them, in a command or script. The following command redirects both stdout and stderr:

$ (ls /home/ ; cat hello.txt;) > log.txt 2>&1

Here, stdout is redirected to log.txt, and error messages are redirected to log.txt as well. In 2>&1, 2> means redirect the error, and &1 means redirect it to stdout. In our case, we have already redirected stdout to the log.txt file.
So, now both the stdout and stderr outputs will be written to log.txt, and nothing will be printed on the terminal. To verify, we check the content of log.txt:

$ cat log.txt
lost+found
sinny
cat: hello.txt: No such file or directory

The following example shows stdin, stdout, and stderr redirection together:

$ cat < ~/.bashrc > out.txt 2> err.txt

Here, the .bashrc file in the home directory acts as input to the cat command, and its output is redirected to the out.txt file. Any error encountered in between is redirected to the err.txt file. The following bash script explains stdin, stdout, stderr, and their redirection with even more clarity:

#!/bin/bash
# Filename: redirection.sh
# Description: Illustrating standard input, output, error
# and redirecting them

ps -A -o pid -o command > p_snapshot1.txt
echo -n "Running process count at snapshot1: "
wc -l < p_snapshot1.txt
echo -n "Create a new process with pid = "
tail -f /dev/null & echo $!   # Creating a new process
echo -n "Running process count at snapshot2: "
ps -A -o pid -o command > p_snapshot2.txt
wc -l < p_snapshot2.txt
echo
echo "Diff between two snapshots:"
diff p_snapshot1.txt p_snapshot2.txt

This script saves two snapshots of all the processes currently running in the system and generates a diff. The output after running the script will look somewhat as follows:

$ sh redirection.sh
Running process count at snapshot1: 246
Create a new process with pid = 23874
Running process count at snapshot2: 247

Diff between two snapshots:
246c246,247
< 23872 ps -A -o pid -o command
---
> 23874 tail -f /dev/null
> 23875 ps -A -o pid -o command

Pipe and pipelines – connecting commands

The outputs of programs are generally saved in files for further use. Sometimes, temporary files are created in order to use the output of one program as the input to another. We can avoid creating temporary files and feed the output of one program directly as the input to another using bash pipes and pipelines.

Pipe

The pipe, denoted by the operator |, connects the standard output of the process on its left to the standard input of the process on its right through an inter-process communication mechanism. In other words, | (pipe) connects commands by providing the output of one command as the input to another. Consider the following example:

$ cat /proc/cpuinfo | less

Here, the cat command, instead of displaying the content of the /proc/cpuinfo file on stdout, passes its output as input to the less command. The less command takes the input from cat and displays it on stdout one page at a time. Another example using a pipe is as follows:

$ ps -aux | wc -l   # Showing number of currently running processes in system
254

Pipeline

A pipeline is a sequence of programs/commands separated by the operator | where the output of each command is given as input to the next command. Each command in a pipeline is executed in a new subshell. The syntax is as follows:

command1 | command2 | command3 ...

An example of a pipeline is as follows:

$ ls /usr/lib64/*.so | grep libc | wc -l
13

Here, we first get the list of files in the /usr/lib64 directory that have the .so extension. The output obtained is passed as input to the next command, grep, to look for the string libc; that output is in turn given to the wc command to count the number of lines.

Regular expression

A regular expression (also known as regex or regexp) provides a way of specifying a pattern to be matched in a big chunk of text data.
It supports a set of characters to specify the pattern and is widely used for text search and string manipulation. A lot of shell commands provide an option to specify a regex, such as grep, sed, and find. The regular expression concept is also used in other programming languages, such as C++, Python, Java, and Perl, and libraries are available in different languages to support regular expression features.

Regular expression metacharacters

The metacharacters used in regular expressions are explained in the following table:

Metacharacter  Description
* (Asterisk)   Matches zero or more occurrences of the previous character
+ (Plus)       Matches one or more occurrences of the previous character
?              Matches zero or one occurrence of the previous element
. (Dot)        Matches any one character
^              Matches the start of the line
$              Matches the end of the line
[...]          Matches any one character within the square brackets
[^...]         Matches any one character that is not within the square brackets
| (Bar)        Matches either the left-side or the right-side element of |
{X}            Matches exactly X occurrences of the previous element
{X,}           Matches X or more occurrences of the previous element
{X,Y}          Matches X to Y occurrences of the previous element
(...)          Groups all the elements
\<             Matches the empty string at the beginning of a word
\>             Matches the empty string at the end of a word
\              Disables the special meaning of the next character

Character ranges and classes

When we look into a human-readable file or data, the major part of its content consists of alphabetic characters (a to z) and numbers (0-9). While writing a regex to match a pattern consisting of alphabetic or numeric characters, we can make use of character ranges or classes.

Character ranges

We can use character ranges in a regular expression as well. We specify a range as a pair of characters separated by a hyphen; any character that falls within that range, inclusive, is matched. Character ranges are enclosed inside square brackets. The following table shows some character ranges:

Character range  Description
[a-z]            Matches any single lowercase letter from a to z
[A-Z]            Matches any single uppercase letter from A to Z
[0-9]            Matches any single digit from 0 to 9
[a-zA-Z0-9]      Matches any single alphabetic or numeric character
[h-k]            Matches any single letter from h to k
[2-46-8j-lB-M]   Matches any single digit from 2 to 4 or 6 to 8, or any letter from j to l or B to M

Character classes

Another way of specifying a range of characters to match is by using character classes. A class is specified within square brackets as [:class:]. The possible class values are listed in the following table:

Character class  Description
[:alnum:]        Matches any single alphabetic or numeric character; for example, [a-zA-Z0-9]
[:alpha:]        Matches any single alphabetic character; for example, [a-zA-Z]
[:digit:]        Matches any single digit; for example, [0-9]
[:lower:]        Matches any single lowercase letter; for example, [a-z]
[:upper:]        Matches any single uppercase letter; for example, [A-Z]
[:blank:]        Matches a space or tab
[:graph:]        Matches a character in the ASCII range 33-126, excluding the space character
[:print:]        Matches a character in the ASCII range 32-126, including the space character
[:punct:]        Matches any punctuation mark, such as '?', '!', '.', ','
[:xdigit:]       Matches any hexadecimal character; for example, [a-fA-F0-9]
[:cntrl:]        Matches any control character

Creating your own regex

In the previous sections on regular expressions, we discussed metacharacters, character ranges, character classes, and their usage. Using these concepts, we can create powerful regexes that filter out text data as per our needs. Now, we will create a few regexes using the concepts we have learned.

Matching dates in mm-dd-yyyy format

We will consider valid dates as starting from the UNIX Epoch, that is, 1st January 1970. In this example, we will consider all dates between the UNIX Epoch and 31st December 2099 as valid. An explanation of how to form the regex is given in the following subsections.

Matching a valid month:

- 0[1-9] matches the 01st to 09th months
- 1[0-2] matches the 10th, 11th, and 12th months
- '|' matches either the left or the right expression

Putting it all together, the regex matching a valid month is 0[1-9]|1[0-2].

Matching a valid day:

- 0[1-9] matches the 01st to 09th days
- [12][0-9] matches the 10th to 29th days
- 3[0-1] matches the 30th and 31st days

So 0[1-9]|[12][0-9]|3[0-1] matches all the valid days of a date.

Matching a valid year:

- 19[7-9][0-9] matches the years from 1970 to 1999
- 20[0-9]{2} matches the years from 2000 to 2099

So 19[7-9][0-9]|20[0-9]{2} matches all the valid years between 1970 and 2099.

Combining the valid month, day, and year regexes to form a valid date

Our date will be in mm-dd-yyyy format. By putting together the regexes formed in the preceding sections for months, days, and years, we get the regex for a valid date:

(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[0-1])-(19[7-9][0-9]|20[0-9]{2})

There is a nice website, http://regexr.com/, where you can also validate regular expressions.
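To see the date pattern in action, here is a quick test; a minimal sketch assuming GNU grep, with made-up sample strings (-E enables extended regex so the grouping and alternation work unescaped):

$ printf '12-31-1999\n02-30-2105\n07-04-1976\n' | grep -E '^(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[0-1])-(19[7-9][0-9]|20[0-9]{2})$'
12-31-1999
07-04-1976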
Regex for a valid shell variable

A valid variable name can contain alphanumeric characters and underscores, and the first character of the variable can't be a digit. Keeping these rules in mind, a regex for a valid shell variable can be written as follows:

^[_a-zA-Z][_a-zA-Z0-9]*$

Here, ^ (caret) matches the start of the line; [_a-zA-Z] matches _ or any uppercase or lowercase letter; [_a-zA-Z0-9]* matches zero or more occurrences of _, any digit, or any uppercase or lowercase letter; and $ (dollar) matches the end of the line. In character class format, we can write the regex as ^[_[:alpha:]][_[:alnum:]]*$.

Enclose a regular expression in single quotes (') to avoid shell expansion, and use a backslash (\) before a character to escape the special meaning of metacharacters. Metacharacters such as ?, +, {, |, (, and ) belong to extended regex; they lose their special meaning when used in basic regex. To use them there, prefix them with a backslash: '\?', '\+', '\{', '\|', '\(', and '\)'.
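As a quick check, a minimal sketch (the sample variable names are made up for illustration) using the character class form of the pattern, enclosed in single quotes as recommended above:

$ printf '_var1_\n2cool\nmy_VAR\n' | grep '^[_[:alpha:]][_[:alnum:]]*$'
_var1_
my_VAR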
Filtering an output using grep

One of the most powerful and widely used commands in the shell is grep. It searches an input file and matches lines in which the given pattern is found. By default, all the matched lines are printed on stdout, which is usually the terminal; we can also redirect the matched output to other streams, such as a file. Instead of being given input from a file, grep can also take its input from the redirected output of the command executed on the left-hand side of '|'.

Syntax

The syntax of the grep command is as follows:

grep [OPTIONS] PATTERN [FILE...]

Here, FILE can be multiple files for a search; if no file is given as input, grep searches the standard input. PATTERN can be any valid regular expression. Put PATTERN within single quotes (') or double quotes (") as needed: use single quotes (') to avoid any bash expansion and double quotes (") to allow expansion. A lot of OPTIONS are available in grep; some of the important and widely used ones are discussed in the following table:

Option      Usage
-i          Enforces a case-insensitive match in both the pattern and the input file(s)
-v          Displays the non-matching lines
-o          Displays only the matched part of a matching line
-f FILE     Obtains patterns from a file, one per line
-e PATTERN  Specifies multiple search patterns
-E          Treats the pattern as an extended regex (egrep)
-r          Reads all files in a directory recursively, without resolving symbolic links unless they are explicitly specified as input files
-R          Reads all files in a directory recursively, resolving symbolic links if any
-a          Processes a binary file as if it were a text file
-n          Prefixes each matched line with its line number
-q          Doesn't print anything on stdout
-s          Doesn't print error messages
-c          Prints the count of matching lines for each input file
-A NUM      Prints NUM lines after the actual match; no effect with the -o option
-B NUM      Prints NUM lines before the actual match; no effect with the -o option
-C NUM      Prints NUM lines after and before the actual match; no effect with the -o option

Looking for a pattern in a file

A lot of the time we have to search for a given string or pattern in a file, and the grep command gives us the ability to do it in a single line. Let's look at the following example. The input file for our example will be input1.txt:

$ cat input1.txt   # Input file for our example
This file is a text file to show demonstration
of grep command. grep is a very important and
powerful command in shell.
This file has been used in chapter 2
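Before writing a full script, here is a quick sketch of two options from the table, run against this same file, showing the difference between counting matching lines (-c) and listing every match (-o, combined here with -n for line numbers):

$ grep -c 'file' input1.txt
2
$ grep -n -o 'file' input1.txt
1:file
1:file
4:file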
We will try to get the following information from the input1.txt file using the grep command:

- The number of lines
- The lines starting with a capital letter
- The lines ending with a period (.)
- The number of sentences
- The occurrences of the substring "sent"
- The lines that don't have a period
- The number of times the string "file" is used

The following shell script demonstrates how to do the above-mentioned tasks:

#!/bin/bash
# Filename: pattern_search.sh
# Description: Searching for a pattern using input1.txt file

echo "Number of lines = `grep -c '.*' input1.txt`"
echo "Line starting with capital letter:"
grep -c '^[A-Z].*' input1.txt
echo
echo "Line ending with full stop (.):"
grep '.*\.$' input1.txt
echo
echo -n "Number of sentence = "
grep -c '\.' input1.txt
echo "Strings matching sub-string sent:"
grep -o "sent" input1.txt
echo
echo "Lines not having full stop are:"
grep -v '\.' input1.txt
echo
echo -n "Number of times string file used: = "
grep -o "file" input1.txt | wc -w

The output after running the pattern_search.sh shell script will be as follows:

Number of lines = 4
Line starting with capital letter:
2

Line ending with full stop (.):
powerful command in shell.

Number of sentence = 2
Strings matching sub-string sent:

Lines not having full stop are:
This file is a text file to show demonstration
This file has been used in chapter 2

Number of times string file used: = 3

Looking for a pattern in multiple files

The grep command also allows us to search for a pattern in multiple input files. To explain this in detail, we will head directly to the following example. The input files, in our case, will be input1.txt and input2.txt. We will reuse the content of the input1.txt file from the previous example. The content of input2.txt is as follows:

$ cat input2.txt
Another file for demonstrating grep CommaNd usage.
It allows us to do CASE Insensitive string test as well.
We can also do recursive SEARCH in a directory using -R and -r Options.
grep allows to give a regular expression to search for a PATTERN.
Some special characters like . * ( ) { } $ ^ ? are used to form regexp.
Range of digit can be given to regexp e.g. [3-6], [7-9], [0-9]

We will try to get the following information from the input1.txt and input2.txt files using the grep command:

- Search for the string command
- Do a case-insensitive search for the string command
- Print the line numbers where the string grep matches
- Search for punctuation marks
- Print one line of context after the matching line while searching for the string important

The following shell script demonstrates these tasks:

#!/bin/bash
# Filename: multiple_file_search.sh
# Description: Demonstrating search in multiple input files

echo "This program searches in files input1.txt and input2.txt"
echo "Search result for string \"command\":"
grep "command" input1.txt input2.txt
echo
echo "Case insensitive search of string \"command\":"
# input{1,2}.txt will be expanded by bash to input1.txt input2.txt
grep -i "command" input{1,2}.txt
echo
echo "Search for string \"grep\" and print matching lines with line numbers:"
grep -n "grep" input{1,2}.txt
echo
echo "Punctuation marks in files:"
grep -n '[[:punct:]]' input{1,2}.txt
echo
echo "Next line content whose previous line has string \"important\":"
grep -A 1 'important' input1.txt input2.txt

In the output after running the multiple_file_search.sh shell script, the matched pattern strings are highlighted.

A few more grep usages

The following subsections cover a few more usages of the grep command.

Searching in a binary file

So far, we have seen all the grep examples running on text files. We can also search for a pattern in binary files using grep. For this, we have to tell the grep command to treat a binary file as a text file too: the -a or --text option tells grep to consider a binary file as a text file. We know that the grep command itself is a binary file, and one of the options in grep is --text, so the string --text should be available somewhere in the grep binary file. Let's search for it as follows:

$ grep --text '\-\-text' /usr/bin/grep
-a, --text                equivalent to --binary-files=text

We see that the string --text is found in /usr/bin/grep. The backslash character ('\') is used to escape the special meaning of the leading hyphens. Now, let's search for the string -w in the wc binary. We know that the wc command has a -w option that counts the number of words in an input text:

$ grep -a '\-w' /usr/bin/wc
-w, --words               print the word counts

Searching in a directory

We can also tell grep to search in all the files/directories of a directory recursively, using the -R option. This avoids the hassle of specifying each file as an input text file to grep.
For example, suppose we are interested in knowing in how many places #include <stdio.h> is used in the standard include directory:

$ grep -R '#include <stdio.h>' /usr/include/ | wc -l
77

This means that the #include <stdio.h> string is found in 77 places in the /usr/include directory. In another example, we want to know how many times "import os" occurs in the Python files (the .py extension) in /usr/lib64/python2.7/. We can check that as follows:

$ grep -R "import os" /usr/lib64/python2.7/*.py | wc -l
93

Excluding files/directories from a search

We can also tell the grep command to exclude a particular directory or file from a search. This is useful when we don't want grep to look into a file or directory containing confidential information, or when we are sure that searching a certain directory will be of no use, so excluding it reduces the search time. Suppose there is a source code directory called s0, which uses git version control. Now, we are interested in searching for a text or pattern in the source files. In this case, searching in the .git subdirectory is of no use. We can exclude .git from the search as follows:

$ grep -R --exclude-dir=.git "search_string" s0

Here, we are searching for the string search_string in the s0 directory and telling grep not to search in the .git directory. To exclude files matching a pattern rather than a directory, use the --exclude=GLOB option, or read exclusion patterns from a file with the --exclude-from=FILE option.

Displaying the filename with a matching pattern

In some use cases, we don't care where the search matched or in how many places it matched within a file. Instead, we are only interested in knowing the filenames in which at least one match was found. For example, we may want to save the names of the files in which a particular search pattern is found, or redirect them to some other command for further processing. We can achieve this using the -l option:

$ grep -Rl "import os" /usr/lib64/python2.7/*.py > search_result.txt
$ wc -l search_result.txt
79 search_result.txt

This example gets the names of the files in which import os is written and saves the result in the file search_result.txt.

Matching an exact word

Exact matching of a word is also possible using a word boundary, that is, \b on both sides of the search pattern. Here, we will reuse the input1.txt file and its content:

$ grep -i --color "\ba\b" input1.txt

The --color option enables colored printing of the matched search result. The "\ba\b" pattern tells grep to look only for the character a standing alone; in the search results, it won't match an a that is present as a substring inside a longer string.

To delete characters other than alphanumeric characters, newlines, and white space from a file, we can use the tr command. Given a file tr2.txt with the following content:

This is an input file. It conatins special character like ?, ! etc
&^var is an invalid shll variable.
_var1_ is a valid shell variable

we can run the following command:

$ tr -cd '[:alnum:] \n' < tr2.txt
This is an input file It conatins special character like  etc
var is an invalid shll variable
var1 is a valid shell variable

Summary

After reading this article, you know how to provide input to commands and print or save their results. You are also familiar with redirecting the output and input of one command to another. Now you can easily search for and replace strings or patterns in a file and filter data based on your needs. From this article, we now have good control over transforming and filtering text data.

Introduction to MapBox

Packt
26 Oct 2015
7 min read
In this article by Bill Kastanakis, author of the book MapBox Cookbook, we are given an introduction to MapBox. Most of the websites we visit every day use maps in order to display information about locations or points of interest to the user. It's amazing how this technology has evolved over the past decades. In the early days of the Internet, maps used to be static images: users were unable to interact with maps, and maps were limited to displaying static information. Interactive maps were available only to mapping professionals and accessed via very expensive GIS software. Cartographers used this type of software to create or improve maps, usually for an agency or an organization. If the location information was to be made available to the public, there were only two options: static images or a printed version.

Improvements in Internet technologies opened up several possibilities for interactive content. It was a natural transition for maps to become live, respond to search queries, and allow user interactions such as panning and changing the zoom level. Mobile devices were just starting to evolve, and a new age of smartphones was about to begin. It was natural for maps to become even more important to consumers: interactive maps are now in their pockets and, more importantly, can tell the user's location.

These maps also have the ability to display a great variety of data. In an age where smartphones and tablets have become aware of location, this information has become even more important to companies, which use it to improve the user experience. From general-purpose websites (such as Google Maps) to more focused apps (such as Foursquare and Facebook), maps are now a crucial component of the digital world. The popularity of mapping technologies keeps increasing: from free open source solutions, to commercial services for web and mobile developers, to services specialized for cartographers and visualization professionals, a number of options have become available. Developers can choose the service that works best for their specific task, and best of all, if you don't have high traffic requirements, most of them offer free plans.

What is MapBox?

The issue with most of the available solutions is that they look extremely similar. Observing the most commonly used websites and services that implement a map, you can easily verify that they completely lack personality: the maps have the same colors and present the same features, such as roads, buildings, and labels. Displaying road addresses on a particular website often doesn't even make sense. Customizing maps is a tedious task, and that is the main reason it's avoided. What if the map provided by a service doesn't work well with the color theme used on your website or in your app?

MapBox is a service provider that gives users a wide variety of customization options. This is one of its most popular features and has set it apart from the competition. The power to fully customize your map in every detail, including the color theme, the features you want to present to the user, the information displayed, and so on, is indispensable.
MapBox provides you with tools to write CartoCSS, the language behind MapBox customization; SDKs and frameworks to integrate its maps into your website with minimal effort; and many more tools to assist you in the task of providing a unique experience to your users.

Data

Let's see what MapBox has to offer, beginning with the three available datasets:

MapBox Streets is the core technology behind MapBox street data. It's powered by OpenStreetMap and has an extremely vibrant community of 1.5 million cartographers and users, who constantly refine and improve the map data in real time.

MapBox Terrain is composed of data fetched from 24 datasets owned by 13 organizations. You will be able to access elevation data, hillshades, and topography lines.

MapBox Satellite offers high-resolution cloudless datasets with satellite imagery.

MapBox Editor

MapBox Editor is an online editor where you can easily create and customize maps. Its purpose is to make it easy to customize the map color theme, either by choosing from presets or by creating your own styles. Additionally, you can add features such as markers and lines, or define areas using polygons. Maps are also multilingual; currently, there are four different language options to choose from when you work with MapBox Editor. Although adding data manually in MapBox Editor is handy, it also offers the ability to batch import data, and it supports the most commonly used formats. The user interface is strictly visual; no coding skills are needed to create, customize, and present a map. It is ideal if you want to quickly create and share maps, and it supports sharing to all the major platforms, such as WordPress, and embedding in forums or on a website using iframes.

CartoCSS

CartoCSS is a powerful open source stylesheet language developed by MapBox and widely supported by several other mapping and visualization platforms. It's extremely similar to CSS, and if you have ever used CSS, it will be very easy to adapt to. Take a look at the following code:

#layer {
  line-color: #C00;
  line-width: 1;
}

#layer::glow {
  line-color: #0AF;
  line-opacity: 0.5;
  line-width: 4;
}

TileMill

TileMill is a free open source desktop editor that you can use to write CartoCSS and fully customize your maps. The customization is done by adding layers of data from various sources and then customizing the layer properties using CartoCSS. When you complete the editing of the map, you can export the tiles and upload them to your MapBox account in order to use the map on your website. TileMill used to be the standard solution for this type of work, but it uses raster data. This changed recently with the introduction of MapBox Studio, which uses vector data.

MapBox Studio

MapBox Studio is the new open source toolbox created by the MapBox team to customize maps, and the plan is for it to slowly replace TileMill. The advantage is that it uses vector tiles instead of raster tiles. Vector tiles are superior because they hold infinite detail; they are not limited by the resolution of a fixed-size image. You can still use CartoCSS to customize the map and, as with TileMill, at any point you can export and share the map on your website.

The API and SDK

Accessing MapBox data using the various APIs is also very easy. You can use JavaScript, WebGL, or simply access the data using REST service calls.
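For example, a minimal sketch of the JavaScript route using the Mapbox.js library of that era; the access token and the 'map' div id are placeholders, and 'mapbox.streets' names the Streets dataset described above:

L.mapbox.accessToken = 'pk.your-access-token';   // placeholder token from your MapBox account
var map = L.mapbox.map('map', 'mapbox.streets') // 'map' is the id of a div on the page
    .setView([40.7, -74.0], 12);                // center the map and set the zoom level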
If you are into mobile development, MapBox offers separate SDKs for developing native apps for iOS and Android that take advantage of its technologies and customization while maintaining a native look and feel. MapBox also allows you to use your own sources: you can import a custom dataset and overlay the data on MapBox Streets, Terrain, or Satellite. Another noteworthy feature is that you are not limited to fetching data from various sources; you can also query the tile metadata.

Summary

In this article, we learned what MapBox, MapBox Editor, CartoCSS, TileMill, and MapBox Studio are all about.

PrimeFaces Theme Development: Icons

Packt
26 Oct 2015
21 min read
In this article by Andy Bailey and Sudheer Jonna, the authors of the book PrimeFaces Theme Development, we'll cover icons, which add a lot of value to an application based on the principle that a picture is worth a thousand words. Equally important is the fact that, when well designed, they please the eye and serve as memory joggers for your user. We humans strongly associate symbols with actions: for example, a save button with a disc icon is more evocative. The association becomes even stronger when we use the same icon for the same action in menus and button bars. It is also possible to use icons in place of text labels. It is important to keep in mind when designing the user interface of your application that the navigational and action elements (such as buttons) should not be so intrusive that the application becomes cluttered with the things that can be done. The user wants to be able to see the information they want to see and use input dialogs to add more. What they don't want is to be distracted by links, lots of link and button text, and glaring visuals.

In this article, we will cover the following topics:

- The standard theme icon set
- Creating a set of icons of our own
- Adding new icons to a theme
- Using custom icons in a commandButton component
- Using custom icons in a menu component
- The FontAwesome icons as an alternative to the ThemeRoller icons

Introducing the standard theme icon set

jQuery UI provides a big set of standard icons that can be applied by just adding icon class names to HTML elements. The full list of icons is available at its official site, at http://api.jqueryui.com/theming/icons/, and in some published icon cheat sheets, such as http://www.petefreitag.com/cheatsheets/jqueryui-icons/. The icon class names follow this syntax:

.ui-icon-{icon type}-{icon sub description}-{direction}

For example, the following span element will display an icon of a triangle pointing to the south:

<span class="ui-icon ui-icon-triangle-1-s"></span>

Other icons, such as ui-icon-triangle-1-n, ui-icon-triangle-1-e, and ui-icon-triangle-1-w, represent icons of triangles pointing to the north, east, and west respectively. The direction element is optional, and it is available only for a few icons, such as the triangle and the arrow. These theme icons are integrated into a number of jQuery UI-based widgets, such as buttons, menus, dialogs, and date picker components.

The aforementioned standard set of icons is available in ThemeRoller as one image sprite instead of a separate image for each icon; that is, ThemeRoller is designed to use image sprite technology for icons. The different image sprites, which vary in color based on the widget state, are available in the images folder of each downloaded theme. An image sprite is a collection of images put into a single image. A webpage with many images may take a long time to load and generates multiple server requests; for a high-performance application, sprites reduce the number of server requests and the bandwidth used. Sprites also centralize the image locations so that all the icons can be found in one place.
The basic image sprite for the PrimeFaces Aristo theme collects all of these icons into a single image. The sprite's look and feel varies based on the screen area of the widget and its components, such as the header and content, and on widget states such as hover, active, highlight, and error.

Let us now consider a JSF/PrimeFaces-based example, where we add icons from the standard set to UI components such as commandButton and the menu bar. First, we create a new folder in web pages called chapter6. Then, we create a new JSF template client called standardThemeIcons.xhtml and add a link to it in the chaptersTemplate.xhtml template file. When adding a submenu, use Chapter 6 for the label name, and for the menu item, use Standard Icon Set as its value. In the title section, replace the text title with the respective topic of this article, which is Standard Icons:

<ui:define name="title">
  Standard Icons
</ui:define>

In the content section, replace the text content with the code for the commandButton and menu components. Let's start with the commandButton components. The following set of commandButton components uses the standard theme icon set with the help of the icon attribute:

<h:panelGroup style="margin-left:830px">
  <h3 style="margin-top: 0">Buttons</h3>
  <p:commandButton value="Edit" icon="ui-icon-pencil" type="button" />
  <p:commandButton value="Bookmark" icon="ui-icon-bookmark" type="button" />
  <p:commandButton value="Next" icon="ui-icon-circle-arrow-e" type="button" />
  <p:commandButton value="Previous" icon="ui-icon-circle-arrow-w" type="button" />
</h:panelGroup>

The generated HTML for the first commandButton, which displays the standard icon, will be as follows:

<button id="mainForm:j_idt15" name="mainForm:j_idt15"
    class="ui-button ui-widget ui-state-default ui-corner-all ui-button-text-icon-left"
    type="button" role="button" aria-disabled="false">
  <span class="ui-button-icon-left ui-icon ui-c ui-icon-pencil"></span>
  <span class="ui-button-text ui-c">Edit</span>
</button>

The PrimeFaces commandButton renderer appends an icon position CSS class, based on the icon position (left or right), to the HTML button element, along with an icon CSS class in one child span element and a text CSS class in another child span element. This way, the icon is displayed on the commandButton according to the icon position property; by default, the position of the icon is left.

Now, we will move on to the menu component. A menu component uses the standard theme icon set with the help of the menu item's icon attribute. Add the following code snippet for the menu component to your page:

<h3>Menu</h3>
<p:menu style="margin-left:500px">
  <p:submenu label="File">
    <p:menuitem value="New" url="#" icon="ui-icon-plus" />
    <p:menuitem value="Delete" url="#" icon="ui-icon-close" />
    <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />
    <p:menuitem value="Print" url="#" icon="ui-icon-print" />
  </p:submenu>
  <p:submenu label="Navigations">
    <p:menuitem value="Home" url="http://www.primefaces.org" icon="ui-icon-home" />
    <p:menuitem value="Admin" url="#" icon="ui-icon-person" />
    <p:menuitem value="Contact Us" url="#" icon="ui-icon-contact" />
  </p:submenu>
</p:menu>

You may have observed from the preceding code snippets that each icon name from ThemeRoller starts with ui-icon, for consistency.
Now, run the application and navigate to the newly created page; you should see the standard ThemeRoller icons applied to the buttons and menu items. For further information, you can use the PrimeFaces showcase (http://www.primefaces.org/showcase/), where you can see the default icons used for components, how standard theme icons are applied with the help of the icon attribute, and so on.

Creating a set of icons of our own

In this section, we are going to discuss how to create our own icons for a PrimeFaces web application. Instead of using individual images, you should use image sprites, considering their impact on application performance. We are often interested in adding custom icons to UI components, apart from the regular standard icon set. Generally, in order to create our own custom icons, we need to provide CSS classes with the background-image property referring to an image in the theme's images folder. For example, the following commandButton components will use a custom icon:

<p:commandButton value="With Icon" icon="disk"/>
<p:commandButton icon="disk"/>

The disk icon is created by adding a .disk CSS class with the background-image property. In order to display the image, you need to provide the correct relative path to the image from the web application, as follows:

.disk {
  background-image: url('disk.png') !important;
}

However, as discussed earlier, we are going to use image sprite technology instead of a separate image for each icon, to optimize web performance. Before creating an image sprite, you need to select all the required images and convert them (PNG, JPG, and so on) to the icon format, with a size almost equal to that of the ThemeRoller icons. In this article, we used the Paint.NET tool to convert images to the ICO format with a size of 16 by 16 pixels. Paint.NET is a free raster graphics editor for Microsoft Windows, developed on the .NET framework. It is a good replacement for the Microsoft Paint program, with support for layer blending, transparency, and plugins. If the ICO format is not available, you have to add the file type plugin to the Paint.NET installation directory. The conversion is just a two-step process:

1. Save the image (PNG, JPG, and so on) with the Icons (*.ico) option from the Save as type dropdown.
2. Select 16 by 16 dimensions with the supported bit depth (8-bit, 32-bit, and so on).

All the PrimeFaces theme icons are designed to have the same dimensions. There are many online and offline tools available for creating an image sprite. In this article, we used Instant Sprite, an open source CSS sprite generator; you can have a look at this tool's official site by visiting http://instantsprite.com/. Let's go through the step-by-step process of creating an image sprite using the Instant Sprite tool:

1. First, either select multiple icons from your computer, or drag and drop icons onto the tool page.
2. In the Thumbnails section, drag and drop the images to change their order in the sprite.
3. Change the offset (in pixels), direction (horizontal, vertical, or diagonal), and type (.png or .gif) values in the Options section.
4. In the Sprite section, right-click on the image to save it on your computer. You can also open the image in a new window or save it as a base64 type.
5. In the Usage section, you will find the generated sprite CSS classes and HTML.
Once the image sprite is generated, you will be able to see it in the preview section before finalizing it. Now, let's start creating the image sprite for the button bar and menu components, which are going to be used in later sections:

1. First, download or copy the required individual icons onto the computer.
2. Then, select all those files and drag and drop them in a particular order, as follows:
3. We can also configure a few options, such as an offset of 10 px for icon padding, a horizontal direction to lay the icons out in a row, and finally the PNG image type:
4. The image sprite is generated in the Sprite section, as follows:
5. Right-click on the image to save it on your computer.

Now, we have created a custom image sprite from the set of icons. Once the image sprite has been created, change the sprite name to ui-custom-icons and copy the generated CSS styles for later. In the generated HTML, note that each div class is appended with the ui-icon class to display the icon with a width of 16 px and a height of 16 px.

Adding the new icons to your theme

In order to apply the custom icons to your web page, we first need to copy the generated image sprite file and then add the CSS classes generated in the previous section. The following generated sprite file has to be added to the images folder of the primefaces-moodyblue2 custom theme. Let's name the file ui-custom-icons:

After this, copy the generated CSS rules from the previous section. The first CSS class (ui-icon) contains the image sprite, referenced through the background image URL property, plus dimensions such as the width and height of each icon. But since we are going to add the image reference in the widget state style classes, you need to remove the background image URL property from the ui-icon class. Hence, the ui-icon class contains only the width and height dimensions:

.ui-icon {
  width: 16px;
  height: 16px;
}

Later, modify the icon-specific CSS class names as shown in the following format, where each icon has its own icon name:

.ui-icon-{icon name}

The following CSS classes are used to refer to individual icons with the help of the background-position property. After modification, the positioning CSS classes will look like this:

.ui-icon-edit { background-position: 0 0; }
.ui-icon-bookmark { background-position: -26px 0; }
.ui-icon-next { background-position: -52px 0; }
.ui-icon-previous { background-position: -78px 0; }
.ui-icon-new { background-position: -104px 0; }
.ui-icon-delete { background-position: -130px 0; }
.ui-icon-refresh { background-position: -156px 0; }
.ui-icon-print { background-position: -182px 0; }
.ui-icon-home { background-position: -208px 0; }
.ui-icon-admin { background-position: -234px 0; }
.ui-icon-contactus { background-position: -260px 0; }

Apart from the preceding CSS classes, we have to add the component state CSS classes. Widget states such as hover, focus, highlight, active, and error need to refer to different image sprites in order to display the component state behavior for user interactions. For demonstration purposes, we created only one image sprite and used it for all the CSS classes. But in real-world development, the image will vary based on the widget state.
The following widget state classes refer to image sprites for the different widget states:

.ui-icon, .ui-widget-content .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-widget-header .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-default .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-hover .ui-icon, .ui-state-focus .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-active .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-highlight .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-error .ui-icon, .ui-state-error-text .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}

In the JSF ecosystem, image references in the theme.css file must be converted to an expression that JSF resource loading can understand. Originally, all the image URLs in the preceding CSS classes appear in the following form:

background-image: url("images/ui-custom-icons.png");

After modification, the expression looks like this:

background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");

We need to make sure that the default state classes are commented out in the theme.css file (of the moodyblue2 theme) to display the custom icons. By default, the custom theme classes (the state classes and icon classes available under "custom states and images" and "custom icons positioning") are commented out in the source code of the GitHub project. So, we need to uncomment these sections and comment out the default theme classes (the state classes and icon classes available under "states and images" and "positioning"). That is, only the default style classes or only the custom style classes should be active in the theme.css file at a time. Alternatively, you can see all these changes in the moodyblue3 theme; the custom icons appear on the Custom Icons screen when you simply change the current theme to moodyblue3.

Using custom icons in the commandButton components

After applying the new icons to the theme, you are ready to use them on the PrimeFaces components. In this section, we will add custom icons to command buttons. Let's add a link named Custom Icons to the chaptersTemplate.xhtml file. The title of this page is also Custom Icons. The following code snippets show how custom icons are added to command buttons using the icon attribute:

<h3 style="margin-top: 0">Buttons</h3>
<p:commandButton value="Edit" icon="ui-icon-edit" type="button" />
<p:commandButton value="Bookmark" icon="ui-icon-bookmark" type="button" />
<p:commandButton value="Next" icon="ui-icon-next" type="button" />
<p:commandButton value="Previous" icon="ui-icon-previous" type="button" />

Now, run the application and navigate to the newly created page. You should see the custom icons applied to the command buttons, as shown in the following screenshot:

The commandButton component also supports the iconPos attribute if you wish to display the icon on either the left or the right side. The default value is left.

Using custom icons in a menu component

In this section, we are going to add custom icons to a menu component. The menuitem tag supports the icon attribute to attach a custom icon.
The following code snippets show how custom icons are added to the menu component:

<h3>Menu</h3>
<p:menu style="margin-left:500px">
  <p:submenu label="File">
    <p:menuitem value="New" url="#" icon="ui-icon-new" />
    <p:menuitem value="Delete" url="#" icon="ui-icon-delete" />
    <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />
    <p:menuitem value="Print" url="#" icon="ui-icon-print" />
  </p:submenu>
  <p:submenu label="Navigations">
    <p:menuitem value="Home" url="http://www.primefaces.org" icon="ui-icon-home" />
    <p:menuitem value="Admin" url="#" icon="ui-icon-admin" />
    <p:menuitem value="Contact Us" url="#" icon="ui-icon-contactus" />
  </p:submenu>
</p:menu>

Now, run the application and navigate to the newly created page. You will see the custom icons applied to the menu component, as shown in the following screenshot:

Thus, you can apply custom icons to any PrimeFaces component that supports the icon attribute.

The FontAwesome icons as an alternative to the ThemeRoller icons

In addition to the default ThemeRoller icon set, the PrimeFaces team provides and supports an alternative icon set: the FontAwesome iconic font and CSS framework. Originally, it was designed for the Twitter Bootstrap frontend framework; currently, it works well with all frameworks. The official site for the FontAwesome toolkit is http://fortawesome.github.io/Font-Awesome/.

The features that make FontAwesome a powerful iconic font and CSS toolkit are as follows:

- One font, 519 icons: In a single collection, FontAwesome is a pictographic language of web-related actions
- No JavaScript required: It has minimal compatibility issues because FontAwesome doesn't require JavaScript
- Infinite scalability: SVG (short for Scalable Vector Graphics) icons look awesome at any size
- Free to use: It is completely free, including for commercial use
- CSS control: It's easy to style the icon color, size, shadow, and so on
- Perfect on retina displays: It looks gorgeous on high-resolution displays
- It can be easily integrated with all frameworks
- Desktop-friendly
- Compatible with screen readers

FontAwesome extends Bootstrap by providing various icons based on scalable vector graphics. This feature is available from the PrimeFaces 5.2 release onwards. The icons can be customized in terms of size, color, drop shadow, and so on with the power of CSS. The full list of icons is available at both the official FontAwesome site (http://fortawesome.github.io/Font-Awesome/icons/) and the PrimeFaces showcase (http://www.primefaces.org/showcase/ui/misc/fa.xhtml). In order to enable this feature, we have to set the primefaces.FONT_AWESOME context parameter in web.xml to true, as follows:

<context-param>
  <param-name>primefaces.FONT_AWESOME</param-name>
  <param-value>true</param-value>
</context-param>

The usage is as simple as using the standard ThemeRoller icons. PrimeFaces components such as buttons or menu items provide an icon attribute, which accepts an icon from the FontAwesome icon set. Remember that the icons should be prefixed by fa in a component. The general syntax of the FontAwesome icons is as follows:

fa fa-[name]-[shape]-[o]-[direction]

Here, [name] is the name of the icon, [shape] is the optional shape of the icon's background (either circle or square), [o] is the optional outlined version of the icon, and [direction] is the direction in which certain icons point.
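For example, putting those pieces together, fa-arrow-circle-o-left combines the arrow name, the circle background shape, the outlined variant (o), and the left direction. A quick sketch of its use on a button:

<p:commandButton value="Back" icon="fa fa-arrow-circle-o-left" type="button" />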
Now, we first create a new navigation link named FontAwesome under chapter6 inside the chaptersTemplate.xhtml template file. Then, we create a JSF template client called fontawesome.xhtml, which demonstrates the FontAwesome feature with the help of buttons and a menu. This page has been added as a menu item for the top-level menu bar. In the content section, replace the text content with the following code snippets.

The following set of buttons displays the FontAwesome icons with the help of the icon attribute. Note that the fa-fw style class is used to set the icons at a fixed width; this is useful when variable widths throw off alignment:

<h3 style="margin-top: 0">Buttons</h3>
<p:commandButton value="Edit" icon="fa fa-fw fa-edit" type="button" />
<p:commandButton value="Bookmark" icon="fa fa-fw fa-bookmark" type="button" />
<p:commandButton value="Next" icon="fa fa-fw fa-arrow-right" type="button" />
<p:commandButton value="Previous" icon="fa fa-fw fa-arrow-left" type="button" />

After this, apply the FontAwesome icons to navigation lists, such as the menu component, to display the icons just to the left of the component's text content, as follows:

<h3>Menu</h3>
<p:menu style="margin-left:500px">
  <p:submenu label="File">
    <p:menuitem value="New" url="#" icon="fa fa-plus" />
    <p:menuitem value="Delete" url="#" icon="fa fa-close" />
    <p:menuitem value="Refresh" url="#" icon="fa fa-refresh" />
    <p:menuitem value="Print" url="#" icon="fa fa-print" />
  </p:submenu>
  <p:submenu label="Navigations">
    <p:menuitem value="Home" url="http://www.primefaces.org" icon="fa fa-home" />
    <p:menuitem value="Admin" url="#" icon="fa fa-user" />
    <p:menuitem value="Contact Us" url="#" icon="fa fa-picture-o" />
  </p:submenu>
</p:menu>

Now, run the application and navigate to the newly created page. You should see the FontAwesome icons applied to the buttons and menu items, as shown in the following screenshot:

Note that the 40 new icons of FontAwesome 4.3 are available only from the PrimeFaces Elite 5.2.2 release and the community PrimeFaces 5.3 release onwards, because PrimeFaces was upgraded to FontAwesome 4.3 in its 5.2.2 release.

Summary

In this article, we explored the standard theme icon set and how to use it on various PrimeFaces components. We also learned how to create our own set of icons using the image sprite technique. We saw how to create image sprites using open source online tools and add them to a PrimeFaces theme. Finally, we had a look at the FontAwesome CSS framework, which was introduced as an alternative to the standard ThemeRoller icons. To illustrate best practice, we learned how to use icons on the commandButton and menu components. Now that you've come to the end of this article, you should be comfortable using web icons for PrimeFaces components in different ways.

Resources for Article:

Further resources on this subject:
Introducing Primefaces [article]
Setting Up Primefaces [article]
Components Of Primefaces Extensions [article]
Welcome to the Land of BludBorne

Packt
23 Oct 2015
In this article by Patrick Hoey, the author of Mastering LibGDX Game Development, we will jump into creating the world of BludBourne (that's our game!). We will first learn some concepts and tools related to creating tile-based maps, and then we will look into getting started with BludBourne. We will cover the following topics in this article:

- Creating and editing tile-based maps
- Implementing the starter classes for BludBourne

Creating and editing tile based maps

For the BludBourne project map locations, we will be using tilesets, which are terrain and decoration sprites in the shape of squares. These are easy to work with, since LibGDX supports tile-based maps in its core library. The easiest way to create these types of maps is to use a tile-based editor. There are many different types of tilemap editors, but two primary ones are used with LibGDX because they have built-in support:

- Tiled: This is a free and actively maintained tile-based editor. I have used this editor for the BludBourne project. Download the latest version from http://www.mapeditor.org/download.html.
- Tide: This is a free tile-based editor built using Microsoft XNA libraries. The targeted platforms are Windows, Xbox 360, and Windows Phone 7. Download the latest version from http://tide.codeplex.com/releases.

For the BludBourne project, we will be using Tiled. The following figure is a screenshot from one of the editing sessions when creating the maps for our game:

The following is a quick guide to how we use Tiled for this project:

Map View (1): The map view is the part of the Tiled editor where you display and edit your individual maps. Numerous maps can be loaded at once, using a tab approach, so that you can switch between them quickly. A zoom feature is available for this part of Tiled in the lower right-hand corner, and it can be easily adjusted depending on your workflow. The maps are provided in the project directory (under core/assets/maps), but when you wish to create your own maps, you can simply go to File | New. In the New Map dialog box, first set the Tile size dimensions, which, for our project, will be a width of 16 pixels and a height of 16 pixels. The other setting is Map size, which represents the size of your map in unit size, using the tile size dimensions as your unit scale. For example, if we create a map that is 100 units by 100 units, and our tiles have dimensions of 16 pixels by 16 pixels, then this gives us a map size of 1600 pixels by 1600 pixels.

Layers (2): This represents the different layers of the currently loaded map. You can think of creating a tile map like painting a scene, where you paint the background first and build up the various elements until you get to the foreground.

Background_Layer: This tile layer is the first layer created for the tilemap. This is the layer for ground elements, such as grass, dirt paths, water, and stone walkways. Nothing else will be shown below this layer.

Ground_Layer: This tile layer is the second layer created for the tilemap. This layer holds structures built on top of the ground, such as buildings, mountains, trees, and villages. The primary reason is to convey a feeling of depth on the map, as well as the fact that structural tiles such as walls have transparency (an alpha channel) so that they look like they belong on the ground where they are placed.
Decoration_Layer: This third tile layer contains elements meant to decorate the landscape in order to remove repetition and make more interesting scenes. These elements include rocks, patches of weeds, flowers, and even skulls.

MAP_COLLISION_LAYER: This fourth layer is a special layer designated as an object layer. This layer does not contain tiles, but objects, or shapes. This is the layer that you will configure to create areas of the map that the player character and non-player characters cannot traverse, such as the walls of buildings, mountain terrain, ocean areas, and decorations such as fountains.

MAP_SPAWNS_LAYER: This fifth layer is another special object layer, designated only for player and non-playable character spawns, such as people in the towns. These spawns represent the various starting locations where these characters will first be rendered on the map.

MAP_PORTAL_LAYER: This sixth layer is the last object layer, designated for triggering events in order to move from one map into another. These are locations that the player character walks over, triggering an event that activates the transition to another map. For example, when the player walks beyond the edge of the village map, they will find themselves on the larger world map.

Tilesets (3): This area of Tiled represents all of the tilesets you will work with for the current map. Each tileset, or spritesheet, gets its own tab in this interface, making it easy to move between them. Adding a new tileset is as easy as clicking the New icon in the Tilesets area and loading the tileset image in the New Tileset dialog. Tiled will also partition the tileset image into individual tiles after you configure the tile dimensions in this dialog.

Properties (4): This area of Tiled represents the additional properties that you can set for the currently selected map element, such as a tile or an object. An example of where these properties can be helpful is when we create a portal object on the portal layer. We can create a property defining the name of this portal object that represents the map to load. So, when we walk over a small tile that looks like a town on the world overview map and trigger the portal event, we know that the map to load is TOWN, because the name property on this portal object is TOWN.

After this very brief description of how we can use the Tiled editor for BludBourne, the following screenshots show the three maps that we will be using for this project. The first screenshot is of the TOWN map, which is where our hero will discover clues from the villagers, obtain quests, and buy armor and weapons. The town has shops, an inn, as well as a few small homes of local villagers:

The next screenshot is of the TOP_WORLD map, which is where our hero will battle enemies, find clues throughout the land, and eventually make his way to the evil antagonist holed up in his castle. The hero can see how the pestilence of evil has started to spread across the lands and lay ruin upon the only harvestable fields left:

Finally, we make our way to the CASTLE_OF_DOOM map, which is where our hero, once leveled enough, will battle the evil antagonist holed up in the throne room of his own castle.
Here, the hero will find many high-level enemies, as well as high-value items for trade:

Implementing the starter classes for BludBourne

Now that we have created the maps for the different locations of BludBourne, we can begin to develop the initial pieces of our source code project in order to load these maps and move around in our world. The following diagram represents a high-level view of all the relevant classes that we will be creating:

This class diagram is meant to show not only all the classes we will be reviewing in this article, but also the relationships that these classes share, so that we are not developing them in a vacuum. The main entry point for our game (and the only platform-specific class) is DesktopLauncher, which will instantiate BludBourne and add it, along with some configuration information, to the LibGDX application lifecycle. BludBourne will derive from Game to minimize the lifecycle implementation needed by the ApplicationListener interface. BludBourne will maintain all the screens for the game. MainGameScreen will be the primary gameplay screen that displays the different maps and the player character moving around in them. MainGameScreen will also create the MapManager, Entity, and PlayerController. MapManager provides helper methods for managing the different maps and map layers. Entity will represent the primary class for our player character in the game. PlayerController implements InputProcessor and will be the class that handles the player's input and controls on the screen. Finally, we have some asset manager helper methods in the Utility class used throughout the project.

DesktopLauncher

The first class that we will need to modify is DesktopLauncher, which the gdx-setup tool generated:

package com.packtpub.libgdx.bludbourne.desktop;

import com.badlogic.gdx.Application;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;
import com.packtpub.libgdx.bludbourne.BludBourne;

The Application class is responsible for setting up a window, handling resize events, rendering to the surfaces, and managing the application during its lifetime. Specifically, Application will provide the modules for dealing with graphics, audio, input and file I/O handling, logging facilities, memory footprint information, and hooks for extension libraries. The Gdx class is an environment class that holds static instances of the Application, Graphics, Audio, Input, Files, and Net modules as a convenience for access throughout the game. The LwjglApplication class is the backend implementation of the Application interface for the desktop. The backend package that LibGDX uses for the desktop is called LWJGL. This implementation provides cross-platform access to native APIs for OpenGL. This interface becomes the entry point that the platform OS uses to load your game.
The LwjglApplicationConfiguration class provides a single point of reference for all the properties associated with your game on the desktop:

public class DesktopLauncher {
	public static void main (String[] arg) {
		LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
		config.title = "BludBourne";
		config.useGL30 = false;
		config.width = 800;
		config.height = 600;

		Application app = new LwjglApplication(new BludBourne(), config);
		Gdx.app = app;
		//Gdx.app.setLogLevel(Application.LOG_INFO);
		Gdx.app.setLogLevel(Application.LOG_DEBUG);
		//Gdx.app.setLogLevel(Application.LOG_ERROR);
		//Gdx.app.setLogLevel(Application.LOG_NONE);
	}
}

The config object is an instance of the LwjglApplicationConfiguration class where we can set top-level game configuration properties, such as the title to display on the display window, as well as display window dimensions. The useGL30 property is set to false, so that we use the much more stable and mature implementation of OpenGL ES, version 2.0. The LwjglApplicationConfiguration properties object, as well as our starter class instance, BludBourne, are then passed to the backend implementation of the Application class, and an object reference is then stored in the Gdx class. Finally, we will set the logging level for the game. There are four values for the logging levels, which represent various degrees of granularity for application-level messages output to standard out. LOG_NONE is a logging level where no messages are output. LOG_ERROR will only display error messages. LOG_INFO will display all messages that are not debug level messages. Finally, LOG_DEBUG is a logging level that displays all messages.

BludBourne

The next class to review is BludBourne. The class diagram for BludBourne shows the attributes and method signatures for our implementation. The import packages for BludBourne are as follows:

package com.packtpub.libgdx.bludbourne;

import com.packtpub.libgdx.bludbourne.screens.MainGameScreen;
import com.badlogic.gdx.Game;

The Game class is an abstract base class which wraps the ApplicationListener interface and delegates the implementation of this interface to the Screen class. This provides a convenience for setting the game up with different screens, including ones for a main menu, options, gameplay, and cutscenes. The MainGameScreen is the primary gameplay screen that the player will see as they move their hero around in the game world:

public class BludBourne extends Game {
	public static final MainGameScreen _mainGameScreen = new MainGameScreen();

	@Override
	public void create(){
		setScreen(_mainGameScreen);
	}

	@Override
	public void dispose(){
		_mainGameScreen.dispose();
	}
}

The gdx-setup tool generated our starter class BludBourne. This is the first place where we begin to set up our game lifecycle. An instance of BludBourne is passed to the backend constructor of LwjglApplication in DesktopLauncher, which is how we get hooks into the lifecycle of LibGDX. BludBourne will contain all of the screens used throughout the game, but for now we are only concerned with the primary gameplay screen, MainGameScreen. We must override the create() method so that we can set the initial screen for when BludBourne is initialized in the game lifecycle. The setScreen() method will check to see if a screen is already currently active. If the current screen is already active, then it will be hidden, and the screen that was passed into the method will be shown. In the future, we will use this method to start the game with a main menu screen.
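As a rough sketch of how that future change might look (MainMenuScreen is hypothetical here, since only MainGameScreen exists in the project so far), a menu screen could hand control over to the gameplay screen like this:

import com.badlogic.gdx.ScreenAdapter;

// Hypothetical sketch; not part of the current project
public class MainMenuScreen extends ScreenAdapter {
	private final BludBourne _game;

	public MainMenuScreen(BludBourne game) {
		_game = game;
	}

	// Would be called when the player chooses "Play" on the menu
	private void startGame() {
		_game.setScreen(BludBourne._mainGameScreen);
	}
}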
We should also override dispose(), since BludBourne owns the screen object references. We need to make sure that we dispose of the objects appropriately when we are exiting the game.

Summary

In this article, we first learned about tile-based maps and how to create them with the Tiled editor. We then learned about the high-level architecture of the classes we will have to create, and implemented the starter classes that allowed us to hook into the LibGDX application lifecycle. Have a look at Mastering LibGDX Game Development to learn about textures, TMX-formatted tile maps, and how to manage them with the asset manager. Also included is how the orthographic camera works within our game, and how to display the map within the render loop. You can learn to implement a map manager that deals with collision layers, spawn points, and a portal system that allows us to transition between different locations seamlessly. Lastly, you can learn to implement a player character with animation cycles and input handling for moving around the game map.

Resources for Article:

Further resources on this subject:
Finding Your Way [article]
Getting to Know LibGDX [article]
Replacing 2D Sprites with 3D Models [article]
Writing a 3D space rail shooter in Three.js, Part 3

Martin Naumann
23 Oct 2015
In the course of this three-part series, you will learn how to write a simple 3D space shooter game with Three.js. The game will introduce the basic concepts of a Three.js application, how to write modular code, and the core principles of a game, such as the camera, player motion, and collision detection. In Part 1 we set up our package and created the world of our game. In Part 2, we added the spaceship and the asteroids for our game. In this final Part 3 of the series, we will set up collision detection, add weapons to our craft, and add a way to manage our score and health as well.

Collisions make things go boom

Okay, now we'll need to set up collision detection and shooting. Let's start with collision detection! We will be using a technique called hitboxes, where we'll create bounding boxes for the asteroids and the spaceship and check for intersections. Luckily, Three.js has a THREE.Box3 class to help us with this. The additions to the Player module:

var Player = function(parent) {
  var loader = new ObjMtlLoader(),
      self = this

  this.loaded = false
  this.hitbox = new THREE.Box3()

  this.update = function() {
    if(!spaceship) return
    this.hitbox.setFromObject(spaceship)
  }

This adds the hitbox and an update method that updates the hitbox by using the spaceship object to get the dimensions and position for the box. Now we'll adjust the Asteroid module to do the same:

var Asteroid = function(rockType) {
  var mesh = new THREE.Object3D(),
      self = this

  // Speed of motion and rotation
  mesh.velocity = Math.random() * 2 + 2
  mesh.vRotation = new THREE.Vector3(Math.random(), Math.random(), Math.random())

  this.hitbox = new THREE.Box3()

and tweak the update method:

this.update = function(z) {
  mesh.position.z += mesh.velocity
  mesh.rotation.x += mesh.vRotation.x * 0.02;
  mesh.rotation.y += mesh.vRotation.y * 0.02;
  mesh.rotation.z += mesh.vRotation.z * 0.02;

  if(mesh.children.length > 0)
    this.hitbox.setFromObject(mesh.children[0])

  if(mesh.position.z > z) {
    this.reset(z)
  }
}

You may have noticed the reset method that isn't implemented yet. It'll come in handy later, so let's write that method:

this.reset = function(z) {
  mesh.velocity = Math.random() * 2 + 2
  mesh.position.set(-50 + Math.random() * 100, -50 + Math.random() * 100, z - 1500 - Math.random() * 1500)
}

This method allows us to quickly push an asteroid back into action whenever we need to. On to the render loop:

function render() {
  cam.position.z -= 1
  tunnel.update(cam.position.z)
  player.update()

  for(var i=0; i<NUM_ASTEROIDS; i++) {
    if(!asteroids[i].loaded) continue
    asteroids[i].update(cam.position.z)

    if(player.loaded && player.hitbox.isIntersectionBox(asteroids[i].hitbox)) {
      asteroids[i].reset(cam.position.z)
    }
  }
}

So, for each asteroid that is loaded, we're checking whether the hitbox of our player is intersecting (that is, colliding) with the hitbox of the asteroid. If so, we'll reset the asteroid (that is, push it back into the vortex ahead of us), based on the camera offset.

Pew! Pew! Pew!

Now on to get us some weaponry!
Let's create a Shot module:

var THREE = require('three')

var shotMtl = new THREE.MeshBasicMaterial({
  color: 0xff0000,
  transparent: true,
  opacity: 0.5
})

var Shot = function(initialPos) {
  this.mesh = new THREE.Mesh(
    new THREE.SphereGeometry(3, 16, 16),
    shotMtl
  )
  this.mesh.position.copy(initialPos)
  this.hitbox = new THREE.Box3()

  this.getMesh = function() {
    return this.mesh
  }

  this.update = function(z) {
    this.mesh.position.z -= 5
    this.hitbox.setFromObject(this.mesh)
    if(Math.abs(this.mesh.position.z - z) > 1000) {
      return false
    }
    return true
  }

  return this
}

module.exports = Shot

In this module we're creating a translucent red sphere, spawned at the initial position given to the constructor function. We also give each shot a hitbox of its own, kept in sync as the shot moves, since the collision check below will need it. The update method is a bit different from those we've seen so far, as it returns either true (still alive) or false (dead, remove now) based on the position. Once the shot is too far from the camera, it gets cleaned up.

Now back to our main.js:

var shots = []

function render() {
  cam.position.z -= 1
  tunnel.update(cam.position.z)
  player.update()

  for(var i=0; i<shots.length; i++) {
    if(!shots[i].update(cam.position.z)) {
      World.getScene().remove(shots[i].getMesh())
      shots.splice(i, 1)
    }
  }

This snippet adds a loop over all the shots that updates them, removing them if needed. But we also have to check for collisions with the asteroids:

for(var i=0; i<NUM_ASTEROIDS; i++) {
  if(!asteroids[i].loaded) continue
  asteroids[i].update(cam.position.z)

  if(player.loaded && player.hitbox.isIntersectionBox(asteroids[i].hitbox)) {
    asteroids[i].reset(cam.position.z)
  }

  for(var j=0; j<shots.length; j++) {
    if(asteroids[i].hitbox.isIntersectionBox(shots[j].hitbox)) {
      asteroids[i].reset(cam.position.z)
      World.getScene().remove(shots[j].getMesh())
      shots.splice(j, 1)
      break
    }
  }
}

Last but not least, we need some code to take keyboard input to fire the shots:

window.addEventListener('keyup', function(e) {
  switch(e.keyCode) {
    case 32: // Space
      var shipPosition = cam.position.clone()
      shipPosition.sub(new THREE.Vector3(0, 25, 100))
      var shot = new Shot(shipPosition)
      shots.push(shot)
      World.add(shot.getMesh())
      break
  }
})

When the spacebar key is released, this code adds a new shot to the array, which will then be updated in the render loop.

Move it, move it!

Cool, but while we're at the keyboard handler, let's make things move a bit more!

window.addEventListener('keydown', function(e) {
  if(e.keyCode == 37) {
    cam.position.x -= 5
  } else if(e.keyCode == 39) {
    cam.position.x += 5
  }
  if(e.keyCode == 38) {
    cam.position.y += 5
  } else if(e.keyCode == 40) {
    cam.position.y -= 5
  }
})

This code uses the arrow keys to move the camera around.

Finishing touches

Now the last bits come into play: score and health management as well. Start by defining the two variables in main.js:

var score = 0, health = 100

and change these values where appropriate:

if(player.loaded && player.hitbox.isIntersectionBox(asteroids[i].hitbox)) {
  asteroids[i].reset(cam.position.z)
  health -= 20
  document.getElementById('health').textContent = health
  if(health < 1) {
    World.pause()
    alert('Game over! You scored ' + score + ' points')
    window.location.reload()
  }
}

This decreases the health by 20 points whenever the spaceship hits an asteroid; once the health drops below 1, it shows a "Game over" box and reloads the game afterwards.

for(var j=0; j<shots.length; j++) {
  if(asteroids[i].hitbox.isIntersectionBox(shots[j].hitbox)) {
    score += 10
    document.getElementById("score").textContent = score
    asteroids[i].reset(cam.position.z)
    World.getScene().remove(shots[j].getMesh())
    shots.splice(j, 1)
    break
  }
}

This increases the score by 10 whenever a shot hits an asteroid.
You may have noticed the two document.getElementById calls that will not work just yet. Those are for two UI elements that we'll add to index.html to show the player the current health and score:

<body>
  <div id="bar">
    Health: <span id="health">100</span>% &nbsp;&nbsp;
    Score: <span id="score">0</span>
  </div>
  <script src="app.js"></script>
</body>

And throw in some CSS, too:

@import url(http://fonts.googleapis.com/css?family=Orbitron);

#bar {
  font-family: Orbitron, sans-serif;
  position: absolute;
  left: 0;
  right: 0;
  height: 1.5em;
  background: black;
  color: white;
  line-height: 1.5em;
}

Wrap up

With all three parts of this series complete, we now have a basic 3D game running in the browser with the help of Three.js. There's a bunch of improvements to be made, such as the controls, mobile input compatibility, and performance, but the basic concepts are in place. Now have fun playing!

About the author

Martin Naumann is an open source contributor and web evangelist by heart from Zurich with a decade of experience from the trenches of software engineering in multiple fields. He works as a software engineer at Archilogic, on both the frontend and backend. He devotes his time to moving the web forward, fixing problems, building applications and systems, and breaking things for fun & profit. Martin believes in the web platform and is working with bleeding-edge technologies that will allow the web to prosper.
Guidelines for Creating Responsive Forms

Packt
23 Oct 2015
In this article by Chelsea Myers, the author of the book Responsive Web Design Patterns, we cover the guidelines for creating responsive forms. Online forms are already modular. Because of this, they aren't hard to scale down for smaller screens. The little boxes and labels can naturally shift around between different screen sizes, since they are all individual elements. However, form elements are naturally tiny and sit very close together. Small elements that you are supposed to click and fill in, whether on a desktop or mobile device, pose obstacles for the user. If you developed a form for your website, you more than likely want people to fill it out and submit it. Maybe the form is a survey, a sign-up for a newsletter, or a contact form. Regardless of the type of form, online forms have a purpose: get people to fill them out! Getting people to do this can be difficult at any screen size. But when users are accessing your site through a tiny screen, they face even more challenges. As designers and developers, it is our job to make this process as easy and accessible as possible. Here are some guidelines to follow when creating a responsive form:

- Give all inputs breathing room.
- Use proper values for the input's type attribute.
- Increase the hit states for all your inputs.
- Stack radio inputs and checkboxes on small screens.

Together, we will go over each of these guidelines and see how to apply them.

The responsive form pattern

Before we get started, let's look at the markup for the form we will be using. We want to include a sample of the different input options we can have. Our form will be very basic, and it requires simple information from the users, such as their name, e-mail, age, favorite color, and favorite animal.

HTML:

<form>
  <!-- text input -->
  <label class="form-title" for="name">Name:</label>
  <input type="text" name="name" id="name" />

  <!-- email input -->
  <label class="form-title" for="email">Email:</label>
  <input type="email" name="email" id="email" />

  <!-- radio boxes -->
  <label class="form-title">Favorite Color</label>
  <input type="radio" name="radio" id="red" value="Red" /><label>Red</label>
  <input type="radio" name="radio" id="blue" value="Blue" /><label>Blue</label>
  <input type="radio" name="radio" id="green" value="Green" /><label>Green</label>

  <!-- checkboxes -->
  <label class="form-title" for="checkbox">Favorite Animal</label>
  <input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label>
  <input type="checkbox" name="checkbox" id="cat" value="Cat" /><label>Cat</label>
  <input type="checkbox" name="checkbox" id="other" value="Other" /><label>Other</label>

  <!-- drop down selection -->
  <label class="form-title" for="select">Age:</label>
  <select name="select" id="select">
    <option value="age-group-1">1-17</option>
    <option value="age-group-2">18-50</option>
    <option value="age-group-3">&gt;50</option>
  </select>

  <!-- textarea -->
  <label class="form-title" for="textarea">Tell us more:</label>
  <textarea cols="50" rows="8" name="textarea" id="textarea"></textarea>

  <!-- submit button -->
  <input type="submit" value="Submit" />
</form>

With no styles applied, our form looks like the following screenshot:

Several of the form elements sit right next to each other, making the form hard to read and almost impossible to fill out. Everything seems tiny and squished together. We can do better than this. We want our forms to be legible and easy to fill out. Let's go through the guidelines and make this eyesore of a form more approachable.
#1 Give all inputs breathing room

In the preceding screenshot, we can't see where one form element ends and the next begins. They are displayed inline, and therefore appear on the same line. We don't want this, though. We want to give each of our form elements its own line to live on, with no shared space to the right of each element. To do this, we add display: block to all our inputs, selects, and textareas. We also apply display: block to our form labels using the .form-title class. We will go more into why the titles have their own class in the fourth guideline.

CSS:

input[type="text"], input[type="email"], textarea, select {
  display: block;
  margin-bottom: 10px;
}

.form-title {
  display: block;
  font-weight: bold;
}

As mentioned, we are applying display: block to the text and e-mail inputs. We are also applying it to the textarea and select elements. Just having our form elements display on their own lines is not enough. We also give everything a margin-bottom of 10px to provide some breathing room between the elements. Next, we apply display: block to all the form titles and make them bold to add more visual separation.

#2 Use proper values for input's type attribute

Technically, if you are collecting a password from a user, you are just asking for text. E-mail addresses, search queries, and even phone numbers are just text too. So, why would we use anything other than <input type="text" …/>? You may not notice the difference between these form elements on your desktop computer, but the change is biggest on mobile devices. To show you, we have two screenshots of what the keyboard looks like on an iPhone while filling out the text input and the e-mail input:

In the left image, we are focused on the text input for entering your name. The keyboard here is normal, and nothing special. In the right image, we are focused on the e-mail input and can see the difference on the keyboard. As the red arrow points out, the @ key and the . key are now present when typing in the e-mail input. We need both of those to enter a valid e-mail address, so the device brings up a special keyboard with those characters. We are not doing anything special other than making sure the input has type="email" for this to happen. This works because email is a new HTML5 value for the type attribute. HTML5 will also validate that the text entered is in a correct e-mail format (which used to be done with JavaScript). Here are some other HTML5 type attribute values from the W3C's third HTML 5.1 Editor's Draft (http://www.w3.org/html/wg/drafts/html/master/semantics.html#attr-input-type-keywords):

- color
- date
- datetime
- email
- month
- number
- range
- search
- tel
- time
- url
- week

#3 Increase the hit states for all your inputs

It would be really frustrating for users if they could not easily select an option or click a text input to enter information. Making users struggle isn't going to increase your chances of getting them to actually complete the form. Form elements are naturally very small and not large enough for our fingers to tap easily. Because of this, we should increase the size of our form inputs. Making form inputs at least 44 x 44 px is a current standard in our industry. This is not a random number, either. Apple suggests this size as the minimum in their iOS Human Interface Guidelines, as seen in the following quote:

"Make it easy for people to interact with content and controls by giving each interactive element ample spacing. Give tappable controls a hit target of about 44 x 44 points."
As you can see, this does not apply only to form elements. Apple's suggestion covers all clickable items. Now, this number may change along with our devices' resolutions in the future. Maybe it will go up or down depending on the size and precision of our future technology. For now, though, it is a good place to start. We need to make sure that our inputs are big enough to tap with a finger. You can always test your form inputs on a touchscreen to make sure they are large enough. For our form, we can apply this minimum size by increasing the height and/or padding of our inputs.

CSS:

input[type="text"], input[type="email"], textarea, select {
  display: block;
  margin-bottom: 10px;
  font-size: 1em;
  padding: 5px;
  min-height: 2.75em;
  width: 100%;
  max-width: 300px;
}

The first two styles are from the first guideline. After this, we are increasing the font-size attribute of the inputs, giving the inputs more padding, and setting a min-height attribute for each input. Finally, we are making the inputs wider by setting the width to 100%, but also applying a max-width attribute so that the inputs do not get unnecessarily wide. We want to increase the size of our submit button as well. We definitely don't want our users to miss clicking this:

input[type="submit"] {
  min-height: 3em;
  padding: 0 2.75em;
  font-size: 1em;
  border: none;
  background: mediumseagreen;
  color: white;
}

Here, we are also giving the submit button a min-height attribute, some padding, and an increased font-size attribute. We are stripping the browser's native border style from the button with border: none. We also want to make this button very obvious, so we apply a mediumseagreen background color and a white text color.

If you view the form so far in the browser, or look at the image, you will see that all the form elements are bigger now, except for the radio inputs and checkboxes. Those elements are still squished together. To make our radios and checkboxes bigger in our example, we will make the option text bigger. Doesn't it make sense that if you want to select red as your favorite color, you should be able to click on the word "red" too, and not just the box next to the word?

In the HTML for the radio inputs and the checkboxes, we have markup that looks like this:

<input type="radio" name="radio" id="red" value="Red" /><label>Red</label>
<input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label>

To make the option text clickable, all we have to do is set the for attribute on the label to match the id attribute of the input. We will wrap the radio and checkbox inputs inside their labels so that we can easily stack them for guideline #4. We will also give the labels a class of choice to help style them:

<label class="choice" for="red"><input type="radio" name="radio" id="red" value="Red" />Red</label>
<label class="choice" for="dog"><input type="checkbox" name="checkbox" id="dog" value="Dog" />Dog</label>

Now, the option text and the actual input are both clickable. After doing this, we can apply some more styles to make selecting a radio or checkbox option even easier:

label input {
  margin-left: 10px;
}

.choice {
  margin-right: 15px;
  padding: 5px 0;
}

.choice + .form-title {
  margin-top: 10px;
}

With label input, we are giving the input and the label text a little more space between each other. Then, using the .choice class, we are spreading out each option with margin-right: 15px and making the hit states bigger with padding: 5px 0.
Finally, with .choice + .form-title, we are giving any .form-title element that comes after an element with a class of .choice more breathing room, going back to responsive form guideline #1. There is only one last thing we need to do. On small screens, we want to stack the radio and checkbox inputs. On large screens, we want to keep them inline. To do this, we will add display: block to the .choice class. We will then use a media query to change it back:

@media screen and (min-width: 600px) {
  .choice {
    display: inline;
  }
}

With each input on its own line for smaller screens, they are easier to select. But we don't need to take up all that vertical space on wider screens. With this, our form is done. You can see our finished form, as shown in the following screenshot:

Much better, wouldn't you say? No longer are all the inputs tiny and mushed together. The form is easy to read, tap, and start entering information into. Filling in forms is not considered a fun thing to do, especially on a tiny screen with big thumbs. But there are ways in which we can make the experience easier and a little more visually pleasing.

Summary

A classic user experience challenge is to design a form that encourages completion. When it comes to facts, figures, and forms, it can be hard to retain the user's attention. This does not mean it is impossible. Having a responsive website does make styling tables and forms a little more complex. But what is the alternative? Nonresponsive websites make you pinch and zoom endlessly to fill out a form or view a table. Having a responsive website gives you the opportunity to make this task easier. It takes a little more code, but in the end, your users will greatly benefit from it. With this article, we have wrapped up the guidelines for creating responsive forms.

Resources for Article:

Further resources on this subject:
Securing and Authenticating Web API [article]
Understanding CRM Extendibility Architecture [article]
CSS3 – Selectors and nth Rules [article]
Internet-Connected Smart Water Meter

Packt
23 Oct 2015
In this article by Pradeeka Seneviratne, the author of the book Internet of Things with Arduino Blueprints, we note that for many years, and even now, water meter readings have been collected manually. To do this, a person has to visit the location where the water meter is installed. In this article, you will learn how to make a smart water meter with an LCD screen that can connect to the Internet and serve meter readings to the consumer. In this article, you will do the following:

- Learn about water flow sensors and their basic operation
- Learn how to mount and plumb a water flow meter on and into the pipeline
- Read and count the water flow sensor pulses
- Calculate the water flow rate and volume
- Learn about LCD displays and connecting them with Arduino
- Convert a water flow meter to a simple web server and serve meter readings through the Internet

Prerequisites

- An Arduino UNO R3 board (http://store.arduino.cc/product/A000066)
- Arduino Ethernet Shield R3 (https://www.adafruit.com/products/201)
- A liquid flow sensor (http://www.futurlec.com/FLOW25L0.shtml)
- A Hitachi HD44780 driver-compatible LCD screen (16 x 2) (https://www.sparkfun.com/products/709)
- A 10K ohm resistor
- A 10K ohm potentiometer (https://www.sparkfun.com/products/9806)
- A few jumper wires with male and female headers (https://www.sparkfun.com/products/9140)
- A breadboard (https://www.sparkfun.com/products/12002)

Water flow sensors

The heart of a water flow sensor is a Hall effect sensor (https://en.wikipedia.org/wiki/Hall_effect_sensor) that outputs pulses as the magnetic field changes. Inside the housing, there is a small pinwheel with a permanent magnet attached to it. When water flows through the housing, the pinwheel begins to spin, and the magnet attached to it passes very close to the Hall effect sensor in every cycle. The Hall effect sensor is covered with a separate plastic housing to protect it from the water. The result is an electric pulse that transitions from low voltage to high voltage, or from high voltage to low voltage, depending on the attached permanent magnet's polarity. The resulting pulses can be read and counted using the Arduino. For this project, we will use a liquid flow sensor from Futurlec (http://www.futurlec.com/FLOW25L0.shtml). The following image shows the external view of a liquid flow sensor:

Liquid flow sensor – the flow direction is marked with an arrow

The following image shows the inside view of the liquid flow sensor. You can see the pinwheel located inside the housing:

Pinwheel attached inside the water flow sensor

Wiring the water flow sensor with Arduino

The water flow sensor that we are using with this project has three wires, which are the following:

- Red (or it may be a different color) wire, which indicates the Positive terminal
- Black (or it may be a different color) wire, which indicates the Negative terminal
- Brown (or it may be a different color) wire, which indicates the DATA terminal

All three wire ends are connected to a JST connector. Always refer to the datasheet of the product for wiring specifications before connecting it with the microcontroller and the power source. When you use jumper wires with male and female headers, do the following:

1. Connect the positive terminal of the water flow sensor to Arduino 5V.
2. Connect the negative terminal of the water flow sensor to Arduino GND.
3. Connect the DATA terminal of the water flow sensor to Arduino digital pin 2.
Water flow sensor connected with Arduino Ethernet Shield using three wires You can directly power the water flow sensor using Arduino since most residential type water flow sensors operate under 5V and consume a very low amount of current. Read the product manual for more information about the supply voltage and supply current range to save your Arduino from high current consumption by the water flow sensor. If your water flow sensor requires a supply current of more than 200mA or a supply voltage of more than 5v to function correctly, then use a separate power source with it. The following image illustrates jumper wires with male and female headers: Jumper wires with male and female headers Reading pulses The water flow sensor produces and outputs digital pulses that denote the amount of water flowing through it. These pulses can be detected and counted using the Arduino board. Let's assume the water flow sensor that we are using for this project will generate approximately 450 pulses per liter (most probably, this value can be found in the product datasheet). So 1 pulse approximately equals to [1000 ml/450 pulses] 2.22 ml. These values can be different depending on the speed of the water flow and the mounting polarity of the water flow sensor. Arduino can read digital pulses generating by the water flow sensor through the DATA line. Rising edge and falling edge There are two type of pulses, as listed here:. Positive-going pulse: In an idle state, the logic level is normally LOW. It goes HIGH state, stays there for some time, and comes back to the LOW state. Negative-going pulse: In an idle state, the logic level is normally HIGH. It goes LOW state, stays LOW state for time, and comes back to the HIGH state. The rising and falling edges of a pulse are vertical. The transition from LOW state to HIGH state is called rising edge and the transition from HIGH state to LOW state is called falling edge. Representation of Rising edge and Falling edge in digital signal You can capture digital pulses using either the rising edge or the falling edge. In this project, we will use the rising edge. Reading and counting pulses with Arduino In the previous step, you attached the water flow sensor to Arduino UNO. The generated pulse can be read by Arduino digital pin 2 and the interrupt 0 is attached to it. The following Arduino sketch will count the number of pulses per second and display it on the Arduino Serial Monitor: Open a new Arduino IDE and copy the sketch named B04844_03_01.ino. Change the following pin number assignment if you have attached your water flow sensor to a different Arduino pin: int pin = 2; Verify and upload the sketch on the Arduino board: int pin = 2; //Water flow sensor attached to digital pin 2 volatile unsigned int pulse; const int pulses_per_litre = 450; void setup() { Serial.begin(9600); pinMode(pin, INPUT); attachInterrupt(0, count_pulse, RISING); } void loop() { pulse = 0; interrupts(); delay(1000); noInterrupts(); Serial.print("Pulses per second: "); Serial.println(pulse); } void count_pulse() { pulse++; } Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second will print on the Arduino Serial Monitor for each loop, as shown in the following screenshot: Pulses per second in each loop The attachInterrupt() function is responsible for handling the count_pulse() function. 
When the interrupts() function is called, the count_pulse() function will start to collect the pulses generated by the liquid flow sensor. This continues for 1000 milliseconds, and then the noInterrupts() function is called to stop the operation of the count_pulse() function. The accumulated count held in the pulse variable is then printed on the serial monitor. This will repeat again and again inside the loop() function until you press the reset button or disconnect the Arduino from the power.

Calculating the water flow rate

The water flow rate is the amount of water flowing through the sensor at a given point in time, and it can be expressed in gallons per second or liters per second. The number of pulses generated per liter of water flowing through the sensor can be found in the water flow sensor's specification sheet; let's say there are m pulses per liter. You can also count the number of pulses generated by the sensor per second; let's say there are n pulses per second. The water flow rate R can then be expressed as:

R = n / m liters per second

Also, you can calculate the water flow rate in liters per minute using the following formula:

R = (n / m) × 60 liters per minute

For example, if your water flow sensor generates 450 pulses for one liter of water flowing through it, and you get 10 pulses for the first second, then the elapsed water flow rate is 10/450 = 0.022 liters per second, or 0.022 × 1000 = 22 milliliters per second. The following steps explain how to calculate the water flow rate using a simple Arduino sketch:

1. Open a new Arduino IDE and copy the sketch named B04844_03_02.ino.
2. Verify and upload the sketch on the Arduino board. The following code block will calculate the water flow rate in milliliters per second:

Serial.print("Water flow rate: ");
Serial.print(pulse * 1000 / pulses_per_litre);
Serial.println(" milliliters per second");

3. Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second and the water flow rate in milliliters per second will be printed on the Arduino Serial Monitor for each loop, as shown in the following screenshot:

Pulses per second and water flow rate in each loop

Calculating the water flow volume

The water flow volume can be calculated by summing up the products of the flow rate and the time interval:

Volume = ∑ Flow Rate × Time_Interval

The following Arduino sketch will calculate and output the total water volume since device startup:

1. Open a new Arduino IDE and copy the sketch named B04844_03_03.ino. The water flow volume can be calculated using the following code block:

volume = volume + flow_rate * 0.1; //Time interval is 0.1 second
Serial.print("Volume: ");
Serial.print(volume);
Serial.println(" milliliters");

2. Verify and upload the sketch on the Arduino board.
3. Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second, the water flow rate in milliliters per second, and the total volume of water in milliliters will be printed on the Arduino Serial Monitor for each loop, as shown in the following screenshot:

Pulses per second, water flow rate, and sum of volume in each loop

To accurately measure the water flow rate and volume, the water flow sensor needs to be carefully calibrated. The Hall effect sensor inside the housing is not a precision sensor, and the pulse rate does vary a bit depending on the flow rate, fluid pressure, and sensor orientation.
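One simple way to calibrate (a sketch, assuming you can run a measured volume of water through the sensor) is to count the total pulses while exactly one liter flows through, and then use that count in place of the datasheet's 450 pulses per liter:

// Hypothetical calibration sketch: pour a measured 1 liter through the
// sensor and note the final printed count; use it as pulses_per_litre.
volatile unsigned long total_pulses = 0;

void count_pulse() {
  total_pulses++;
}

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT);
  attachInterrupt(0, count_pulse, RISING); // interrupt 0 is on digital pin 2
}

void loop() {
  Serial.print("Total pulses so far: ");
  Serial.println(total_pulses);
  delay(1000);
}

Repeating the run a few times and averaging the counts gives a more trustworthy figure.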
Adding an LCD screen to the water meter

You can add an LCD screen to your newly built water meter to display readings, rather than displaying them on the Arduino serial monitor. You can then disconnect your water meter from the computer after uploading the sketch to your Arduino.

Using a Hitachi HD44780 driver-compatible LCD screen and the Arduino LiquidCrystal library, you can easily integrate it with your water meter. Typically, this type of LCD screen has 16 interface connectors. The display has two rows and 16 columns, so each row can display up to 16 characters.

The following image represents the top view of a Hitachi HD44780 driver-compatible LCD screen. Note that the 16-pin header is soldered to the PCB to easily connect it with a breadboard.

Hitachi HD44780 driver compatible LCD screen (16 x 2)—Top View

The following image represents the bottom view of the LCD screen. Again, you can see the soldered 16-pin header.

Hitachi HD44780 driver compatible LCD screen (16x2)—Bottom View

Wire your LCD screen to the Arduino as shown in the next diagram. Use the 10k potentiometer to control the contrast of the LCD screen. Now, perform the following steps to connect your LCD screen to your Arduino:

LCD RS pin (pin number 4 from left) to Arduino digital pin 8.
LCD ENABLE pin (pin number 6 from left) to Arduino digital pin 7.
LCD READ/WRITE pin (pin number 5 from left) to Arduino GND.
LCD DB4 pin (pin number 11 from left) to Arduino digital pin 6.
LCD DB5 pin (pin number 12 from left) to Arduino digital pin 5.
LCD DB6 pin (pin number 13 from left) to Arduino digital pin 4.
LCD DB7 pin (pin number 14 from left) to Arduino digital pin 3.
Wire a 10K pot between Arduino +5V and GND, and wire its wiper (center pin) to the LCD screen V0 pin (pin number 3 from left).
LCD GND pin (pin number 1 from left) to Arduino GND.
LCD +5V pin (pin number 2 from left) to Arduino 5V pin.
LCD Backlight Power pin (pin number 15 from left) to Arduino 5V pin.
LCD Backlight GND pin (pin number 16 from left) to Arduino GND.

Fritzing representation of the circuit

Open a new Arduino IDE and copy the sketch named B04844_03_04.ino. First, initialize the LiquidCrystal library using the following line:

#include <LiquidCrystal.h>

To create a new LCD object, the syntax is LiquidCrystal lcd(RS, ENABLE, DB4, DB5, DB6, DB7):

LiquidCrystal lcd(8, 7, 6, 5, 4, 3);

Then initialize the number of columns and rows of the LCD. The syntax is lcd.begin(number_of_columns, number_of_rows):

lcd.begin(16, 2);

You can set the starting location at which to print text on the LCD screen using the following function; the syntax is lcd.setCursor(column, row):

lcd.setCursor(7, 1);

The column and row numbers are zero-indexed, so the preceding line will start printing text at the intersection of the 8th column and the 2nd row. Then, use the lcd.print() function to print some text on the LCD screen:

lcd.print(" ml/s");

Verify and upload the sketch on the Arduino board. Blow some air through the water flow sensor using your mouth. You can see information on the LCD screen such as pulses per second, water flow rate, and the total water volume since startup:

LCD screen output
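As a reference, here is a minimal sketch, under the same pin assumptions as the wiring list above, that prints a flow-rate reading on the second row of the display. The flowRate variable is a placeholder standing in for the value your own sketch computes from pulse counts:

#include <LiquidCrystal.h>

// RS, ENABLE, DB4, DB5, DB6, DB7 -- matches the wiring described above
LiquidCrystal lcd(8, 7, 6, 5, 4, 3);

void setup() {
  lcd.begin(16, 2);          // 16 columns, 2 rows
  lcd.print("Water meter");  // static label on the first row
}

void loop() {
  float flowRate = 22.0;     // placeholder; compute this from pulse counts
  lcd.setCursor(0, 1);       // column 0, second row
  lcd.print(flowRate);
  lcd.print(" ml/s ");       // trailing space clears leftover characters
  delay(1000);
}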
Converting your water meter to a web server

In the previous steps, you learned how to display your water flow sensor's readings and calculate the water flow rate and total volume on the Arduino serial monitor. In this step, you will learn how to integrate a simple web server with your water flow sensor and read its measurements remotely.

You can make an Arduino web server with the Arduino WiFi Shield or the Arduino Ethernet Shield. The following steps explain how to convert the Arduino water flow meter to a web server with the Arduino WiFi shield:

Remove all the wires you connected to your Arduino in the previous sections of this article.
Stack the Arduino WiFi shield on the Arduino board using wire wrap headers. Make sure the Arduino WiFi shield is properly seated on the Arduino board.
Now, reconnect the wires from the water flow sensor to the WiFi shield. Use the same pin numbers as in the previous steps.
Connect the 9V DC power supply to the Arduino board.
Connect your Arduino to your PC using the USB cable and upload the next sketch. Once the upload is complete, remove the USB cable from the Arduino.
Open a new Arduino IDE and copy the sketch named B04844_03_05.ino. Change the following two lines according to your WiFi network settings, as shown here:

char ssid[] = "MyHomeWiFi";
char pass[] = "secretPassword";

Verify and upload the sketch on the Arduino board.
Blow air through the water flow sensor using your mouth, or, better, connect the water flow sensor to a water pipeline to see the actual operation with water.
Open your web browser, type the WiFi shield's IP address assigned by your network, and hit the Enter key:

http://192.168.1.177

You can see your water flow sensor's pulses per second, flow rate, and total volume on the web page. The page refreshes every 5 seconds to display updated information.

You can add an LCD screen to the Arduino WiFi shield as discussed in the previous step. However, remember that you can't use some of the pins on the WiFi shield because they are reserved for SD (pin 4), SS (pin 10), and SPI (pins 11, 12, and 13). We have not included the circuit and source code here in order to keep the Arduino sketch simple.

A little bit about plumbing

Typically, the direction of the water flow is indicated by an arrow mark on top of the water flow meter's enclosure. Also, you can mount the water flow meter either horizontally or vertically, according to its specifications. Some water flow meters can be mounted both horizontally and vertically.

You can install your water flow meter on a half-inch pipeline using normal BSP pipe connectors. The outer diameter of the connector is 0.78" and the inner thread size is half an inch.

The water flow meter has threaded ends on both sides. Connect the threaded sides of the PVC connectors to both ends of the water flow meter. Use thread seal tape to seal the connections, and then connect the other ends to an existing half-inch pipeline using PVC pipe glue or solvent cement. Make sure that you connect the water flow meter to the pipeline in the correct direction. See the arrow mark on top of the water flow meter for the flow direction.

BSP pipeline connector made of PVC

Securing the connection between the water flow meter and the BSP pipe connector using thread seal tape and PVC solvent cement. Image taken from https://www.flickr.com/photos/ttrimm/7355734996/

Summary

In this article, you gained hands-on experience and knowledge about water flow sensors and counting pulses while calculating and displaying them. Finally, you made a simple web server to allow users to read the water meter through the Internet. You can apply this to any type of liquid, but make sure to select the correct flow sensor, because some liquids react chemically with the material the sensor is made of. You can search online to find which flow sensors support your preferred liquid type.
Resources for Article: Further resources on this subject: The Arduino Mobile Robot [article] Arduino Development [article] Getting Started with Arduino [article]

Build a Remote Control Car with Zigbee Part 2

Bill Pretty
22 Oct 2015
5 min read
In Part 1, we talked about some of the hardware that I used to create my Zigbee (XBee) controlled RC vehicles. In this part, we will see how to use the software that comes for free from Digi.com.

Figure 1 XCTU Boot Screen

When you start XCTU, you will always see the screen shown above. The first thing we have to do is find out which serial port is connected to the Zigbee module. There are a number of ways to communicate serially with the module. The hardware that I would recommend is shown below and is available from SparkFun Electronics.

Figure 2 XBee Explorer USB

This adapter has a built-in FTDI USB adapter, which will install itself as a serial port on your system.

Figure 3 Select Serial Port

In this case we are using Com 9 – USB Serial Port. The next thing we have to do is select the baud rate and other parameters: 9600 baud, 8 data bits, no parity, and no flow control.

Figure 4 Port Parameters

Now that we have the serial port configured, we can search for XBee modules to configure. The XBee module is powered by the USB port of your host computer, so you should see some lights flash when you first plug in the adapter.

Setting Up the XBee Controller

Figure 5 Controller Setup

The figure above shows the first setup portion for the controller (the box with the joystick). The important thing to note is DL, the Destination Address Low byte; this is the address of the XBee module in the robot. That is the module where the inputs to this XBee will be "mirrored" on the outputs of the robot's XBee. But first we have to set them up as inputs on the controller.

Figure 6 I/O Settings

There are a few important things to be aware of in the figure above. First of all, inputs D0 – D4 and D6 and D7 are configured as inputs. But the most important thing is the "DIO Change Detect" value. This value acts like an interrupt mask. The "7E" value tells the XBee to scan only the inputs we set up for a change. These inputs must be pulled to a steady state, either high or low, or the controller will send spurious commands to the robot.

Setting Up the XBee Receiver

Figure 7 XBee Receiver Setup

This is the setup screen for the robot. My robot's name in this case is "Gaucho". As you can see, the corresponding I/O pins are configured as outputs on this XBee module. Also note that the lower address byte of the MAC Address is "40BF1EF1". This is the DL byte that we have to enter into the controller setup.

We are using a "star" network configuration, so the controller can only talk to one robot at a time. If you want to control several robots, you will have to change this part of the address to talk to a specific robot.

The XBee module is capable of driving an LED directly, so if you buy an adapter like the one below, you can test the controller before you install the XBee receiver in your robot. This is something I HIGHLY recommend.

Figure 8 XBee Breakout Board

Summary

In this part of the blog, I showed you how to set up the XBee modules using the free software from the folks at Digi. So at this point you have most of the information you need to start modifying or building your own XBee controlled robot. In part three of this article, we will take a look at a very large robot that I have built and christened "Gaucho", because it began as a child's ride-on electric car (called Gaucho)!

Figure 9 Gaucho

About the Author

Bill began his career in electronics in the early 80's with a small telecom startup company that would eventually become a large multinational. He left there to pursue a career in commercial aviation in Canada's north.
From there he joined the Ontario Center for Microelectronics, a provincially funded research and development center. Bill left there for a career in the military as a civilian contractor at what was then called Defense Research Establishment Ottawa. That began a career which was to span the next 25 years, and continues today.

Over the years Bill has acquired extensive knowledge in the field of technical security and started his own company in 2010. That company, William Pretty Security Inc., provides support in the form of research and development to various law enforcement and private security agencies. Bill has published and presented a number of white papers on the subject of technical security. Bill was also a guest presenter for a number of years at the Western Canada Technical Conference, a law-enforcement-only conference held every year in western Canada. A selection of these papers is available for download from his website, www.williamprettysecurity.com.

If you're interested in building more of your own projects, then be sure to check out Bill's titles, available now in both print and eBook format! If you're new to working with microcontrollers, be sure to pick up Getting Started with Electronic Projects to start creating a whole host of great projects you can do in a single weekend with LM555, ZigBee, and BeagleBone components! If you're looking for something more advanced to tinker with, then Bill's other title – Building a Home Security System with BeagleBone – is perfect for hobbyists looking to make a bigger project!

Configuring Brokers

Packt
21 Oct 2015
18 min read
In this article by Saurabh Minni, author of Apache Kafka Cookbook, we will cover the following topics:

Configuring basic settings
Configuring threads and performance
Configuring log settings
Configuring replica settings
Configuring the ZooKeeper settings
Configuring other miscellaneous parameters

(For more resources related to this topic, see here.)

This article explains the configuration of a Kafka broker. Before we get started with Kafka, it is critical to configure it to suit us best. The best part about Kafka is that it's highly configurable. Although most of the time you will be good to go with the default settings, when dealing with scale and performance you might want to tune the configuration to suit your application best.

Configuring basic settings

Let's configure the basic settings for your Apache Kafka broker.

Getting ready

I assume you already have Kafka installed. Make a copy of the server.properties file from the config folder. Now, let's get cracking with your favorite editor.

How to do it...

Open your server.properties file:

The first configuration that you need to change is broker.id:

broker.id=0

Next, give a host name to your machine:

host.name=localhost

You also need to set the port number to listen on:

port=9092

Lastly, set the directory for data persistence:

log.dirs=/disk1/kafka-logs

How it works…

With these basic configuration parameters in place, your Kafka broker is ready to be set up. All you need to do is pass this new configuration file as a parameter when you start the broker. Some of the important configurations used in the configuration file are explained here:

broker.id: This should be a nonnegative integer ID. It should be unique within a cluster, as it is used, for all intents and purposes, as the name of the broker. It also allows the broker to be moved to a different host and/or port without additional changes on the consumer's side. Its default value is 0.

host.name: Its default value is null. If it's not specified, Kafka will bind to all the interfaces in the system. If it's specified, it will bind only to that particular address. If you want clients to connect only via a particular interface, it is a good idea to specify the host name.

port: This defines the port number that the Kafka broker will listen on to accept client connections.

log.dirs: This tells the broker the directory where it should store files for the persistence of messages. You can specify multiple directories here as comma-separated locations. The default value for this is /tmp/kafka-logs.

There's more…

Kafka also lets you specify two more parameters, which are very interesting:

advertised.host.name: This is the hostname that is given out to producers, consumers, and other brokers to connect to. Usually, this is the same as host.name, and you need not specify it.

advertised.port: This specifies the port that other producers, consumers, and brokers need to connect to. If not specified, it uses the one mentioned in the port configuration parameter.

The real use case of the preceding parameters is when you make use of bridged connections, where your internal host.name and port number might be different from the ones that external parties need to connect to.
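Putting the basic settings together, a minimal server.properties for a single broker might look like the following sketch; the hostname and data directories are illustrative values, not ones mandated by Kafka:

# Hypothetical minimal broker configuration
broker.id=0                                   # unique, nonnegative ID within the cluster
host.name=kafka01.example.com                 # bind only to this interface
port=9092                                     # port for client connections
log.dirs=/disk1/kafka-logs,/disk2/kafka-logs  # comma-separated data directories

You would then start the broker by passing this file as a parameter, for example with bin/kafka-server-start.sh config/server.properties.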
Configuring threads and performance

These are settings you need not modify when simply using Kafka. However, when you want to extract every last bit of performance from your machines, they come in handy.

Getting ready

You are all set with your broker properties file; open it in your favorite editor.

How to do it...

Open your server.properties file.

Change message.max.bytes:
message.max.bytes=1000000

Set the number of network threads:
num.network.threads=3

Set the number of IO threads:
num.io.threads=8

Set the number of threads that perform background processing:
background.threads=10

Set the maximum number of requests to be queued up:
queued.max.requests=500

Set the send socket buffer size:
socket.send.buffer.bytes=102400

Set the receive socket buffer size:
socket.receive.buffer.bytes=102400

Set the maximum request size:
socket.request.max.bytes=104857600

Set the number of partitions:
num.partitions=1

How it works…

You might need to experiment a little to arrive at the optimal values of these network and performance configurations for your application. Here are some explanations for them:

message.max.bytes: This sets the maximum size of a message that the server can receive. It should be set to prevent any producer from inadvertently sending extra-large messages and swamping the consumers. The default value is 1000000.

num.network.threads: This sets the number of threads running to handle network requests. If you have a very high volume of incoming requests, you need to change this value; otherwise, you are good to go in most use cases. The default value is 3.

num.io.threads: This sets the number of threads that are spawned for IO operations. It should be set to at least the number of disks present. The default value is 8.

background.threads: This sets the number of threads that run various background jobs, which include deleting old log files. The default value is 10, and you might not need to change it.

queued.max.requests: This sets the size of the queue that holds pending messages while others are processed by the IO threads. If the queue is full, the network threads will stop accepting any more messages. If you have erratic loads in your application, set this to a value at which requests are not throttled.

socket.send.buffer.bytes: This sets the SO_SNDBUFF buffer size, which is used for socket connections.

socket.receive.buffer.bytes: This sets the SO_RCVBUFF buffer size, which is used for socket connections.

socket.request.max.bytes: This sets the maximum request size the server will accept. It should be smaller than the Java heap size that you have set.

num.partitions: This sets the default number of partitions for any topic you create without explicitly specifying a partition count.
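As an illustration only, a broker with, say, eight data disks and a bursty producer workload might be tuned along the following lines; the numbers below are assumptions to adapt to your own measurements, not recommendations from the cookbook:

# Hypothetical tuning for an 8-disk broker with bursty traffic
num.io.threads=8                 # at least one IO thread per data disk
num.network.threads=5            # raised to absorb request spikes
queued.max.requests=1000         # deeper queue so bursts are not throttled
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
message.max.bytes=1000000        # keep oversized messages out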
There's more

You might also need to configure your Java installation for maximum performance. This includes settings for the heap, socket sizes, and so on.

Configuring log settings

Log settings are perhaps the most important configurations you will need to change based on your system requirements.

Getting ready

Just open the server.properties file in your favorite editor.

How to do it...

Open your server.properties file. Here are the default values:

Change the log.segment.bytes value:
log.segment.bytes=1073741824

Set the log.roll.{ms,hours} value:
log.roll.{ms,hours}=168 hours

Set the log.cleanup.policy value:
log.cleanup.policy=delete

Set the log.retention.{ms,minutes,hours} value:
log.retention.{ms,minutes,hours}=168 hours

Set the log.retention.bytes value:
log.retention.bytes=-1

Set the log.retention.check.interval.ms value:
log.retention.check.interval.ms=300000

Set the log.cleaner.enable value:
log.cleaner.enable=false

Set the log.cleaner.threads value:
log.cleaner.threads=1

Set the log.cleaner.backoff.ms value:
log.cleaner.backoff.ms=15000

Set the log.index.size.max.bytes value:
log.index.size.max.bytes=10485760

Set the log.index.interval.bytes value:
log.index.interval.bytes=4096

Set the log.flush.interval.messages value:
log.flush.interval.messages=Long.MaxValue

Set the log.flush.interval.ms value:
log.flush.interval.ms=Long.MaxValue

How it works…

Here is an explanation of the log settings:

log.segment.bytes: This defines the maximum segment size in bytes. Once a segment reaches that size, a new segment file is created. A topic is stored as a bunch of segment files in a directory. This can also be set on a per-topic basis. Its default value is 1 GB.

log.roll.{ms,hours}: This sets the time period after which a new segment file is created, even if the current one has not reached the size limit. This setting can also be set on a per-topic basis. Its default value is 7 days.

log.cleanup.policy: The value for this can be either delete or compact. With the delete option set, log segments are deleted periodically when they reach their time threshold or size limit. If the compact option is set, log compaction is used to clean up obsolete records. This setting can be set on a per-topic basis.

log.retention.{ms,minutes,hours}: This sets the amount of time for which log segments are retained. It can be set on a per-topic basis. The default value is 7 days.

log.retention.bytes: This sets the maximum size in bytes of the log per partition before segments are deleted. It can be set on a per-topic basis. Segments are deleted when either the log time limit or the size limit is reached.

log.retention.check.interval.ms: This sets the time interval at which logs are checked for deletion to meet retention policies. The default value is 5 minutes.

log.cleaner.enable: For log compaction to be enabled, this has to be set to true.

log.cleaner.threads: This sets the number of threads that work to clean logs for compaction.

log.cleaner.backoff.ms: This defines the interval at which the log cleaner checks whether any logs need cleaning.

log.index.size.max.bytes: This setting sets the maximum size allowed for the offset index of each log segment. It can be set on a per-topic basis as well.

log.index.interval.bytes: This defines the byte interval at which a new entry is added to the offset index. For each fetch request, the broker performs a linear scan over a particular number of bytes to find the correct position in the log at which to begin and end a fetch. Setting this to a larger value means larger index files (and a bit more memory usage) but less scanning.

log.flush.interval.messages: This is the number of messages that are kept in memory before they are flushed to disk. Although this does not guarantee durability, it gives finer control.

log.flush.interval.ms: This sets the time interval at which messages are flushed to disk.
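Since many of these settings can be overridden per topic, here is a sketch of how such an override might look with the kafka-topics.sh tool that ships with Kafka; the topic name and values are our own examples, so check the flag and config names against your Kafka version:

# Hypothetical per-topic overrides for a high-volume topic
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic click-events \
    --config retention.ms=86400000 \
    --config segment.bytes=536870912

Topic-level overrides like these take precedence over the broker-wide defaults in server.properties.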
There's more

Some other settings are listed at http://kafka.apache.org/documentation.html#brokerconfigs.

See also

More on log compaction is available at http://kafka.apache.org/documentation.html#compaction.

Configuring replica settings

You will also want to set up replicas for reliability purposes. Let's see some of the important settings you need to handle for replication to work best for you.

Getting ready

Open the server.properties file in your favorite editor.

How to do it...

Open your server.properties file. Here are the default values for the settings:

Set the default.replication.factor value:
default.replication.factor=1

Set the replica.lag.time.max.ms value:
replica.lag.time.max.ms=10000

Set the replica.lag.max.messages value:
replica.lag.max.messages=4000

Set the replica.fetch.max.bytes value:
replica.fetch.max.bytes=1048576

Set the replica.fetch.wait.max.ms value:
replica.fetch.wait.max.ms=500

Set the num.replica.fetchers value:
num.replica.fetchers=1

Set the replica.high.watermark.checkpoint.interval.ms value:
replica.high.watermark.checkpoint.interval.ms=5000

Set the fetch.purgatory.purge.interval.requests value:
fetch.purgatory.purge.interval.requests=1000

Set the producer.purgatory.purge.interval.requests value:
producer.purgatory.purge.interval.requests=1000

Set the replica.socket.timeout.ms value:
replica.socket.timeout.ms=30000

Set the replica.socket.receive.buffer.bytes value:
replica.socket.receive.buffer.bytes=65536

How it works…

Here is an explanation of the preceding settings:

default.replication.factor: This sets the default replication factor for automatically created topics.

replica.lag.time.max.ms: If a follower does not send any fetch requests to the leader within this time period, the follower is removed from the in-sync replicas and treated as dead.

replica.lag.max.messages: This is the maximum number of messages a follower can lag behind the leader by before it is considered dead and no longer in sync.

replica.fetch.max.bytes: This sets the maximum number of bytes of data that a follower will fetch in a single request from its leader.

replica.fetch.wait.max.ms: This sets the maximum amount of time for the leader to respond to a replica's fetch request.

num.replica.fetchers: This specifies the number of threads used to replicate messages from the leader. Increasing the number of threads increases the IO rate to a degree.

replica.high.watermark.checkpoint.interval.ms: This specifies the frequency with which each replica saves its high watermark to disk for recovery.

fetch.purgatory.purge.interval.requests: This sets the fetch request purgatory's purge interval. The purgatory is the place where fetch requests are kept on hold until they can be serviced.

producer.purgatory.purge.interval.requests: This sets the producer request purgatory's purge interval. The purgatory is the place where producer requests are kept on hold until they have been serviced.
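For a hypothetical three-broker cluster where durability matters more than raw throughput, the replication-related lines might be adjusted as follows; again, these numbers are illustrative assumptions rather than values from the book:

# Hypothetical settings for a 3-broker, durability-focused cluster
default.replication.factor=3    # every auto-created topic gets 3 copies
replica.lag.time.max.ms=10000   # drop silent followers from the ISR after 10 s
replica.lag.max.messages=4000   # drop followers more than 4000 messages behind
num.replica.fetchers=2          # extra fetcher threads to keep followers caught up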
There's more

Some other settings are listed at http://kafka.apache.org/documentation.html#brokerconfigs.

Configuring the ZooKeeper settings

ZooKeeper is used in Kafka for cluster management and to maintain the details of topics.

Getting ready

Just open the server.properties file in your favorite editor.

How to do it…

Open your server.properties file. Here are the default values for the settings:

Set the zookeeper.connect property:
zookeeper.connect=127.0.0.1:2181,192.168.0.32:2181

Set the zookeeper.session.timeout.ms property:
zookeeper.session.timeout.ms=6000

Set the zookeeper.connection.timeout.ms property:
zookeeper.connection.timeout.ms=6000

Set the zookeeper.sync.time.ms property:
zookeeper.sync.time.ms=2000

How it works…

Here is an explanation of these settings:

zookeeper.connect: This is where you specify the ZooKeeper connection string, in the hostname:port form. You can use comma-separated values to specify multiple ZooKeeper nodes. This ensures the reliability and continuity of the Kafka cluster even in the event of a ZooKeeper node going down. ZooKeeper allows you to use a chroot path to make particular Kafka data available only under that path; this enables the same ZooKeeper cluster to support multiple Kafka clusters. Here is how to specify a connection string in this case:

host1:port1,host2:port2,host3:port3/chroot/path

The preceding statement puts all the cluster data under the /chroot/path path. This path must be created prior to starting the Kafka cluster, and all users must use the same string.

zookeeper.session.timeout.ms: This specifies the time within which, if a heartbeat from a server is not received, the server is considered dead. The value must be chosen carefully: if the interval is too long, a dead server will not be detected in time, which leads to issues; if it is too short, a live server might be considered dead.

zookeeper.connection.timeout.ms: This specifies the maximum time that a client waits to establish a connection.

zookeeper.sync.time.ms: This specifies the time period by which a ZooKeeper follower can be behind its leader.
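To make the chroot idea concrete, here is a sketch of how two Kafka clusters might share one ZooKeeper ensemble; the hostnames and paths are invented for illustration:

# Cluster A's server.properties
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka/clusterA

# Cluster B's server.properties -- same ensemble, different chroot
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka/clusterB

Each cluster's metadata lives under its own path, so the brokers never see each other's state. Remember that the chroot paths must exist in ZooKeeper before the brokers are started.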
See also

The ZooKeeper management details from the Kafka perspective are highlighted at http://kafka.apache.org/documentation.html#zk. You can find ZooKeeper at https://zookeeper.apache.org/.

Configuring other miscellaneous parameters

Besides the configurations mentioned previously, there are some other configurations that also need to be set.

Getting ready

Open the server.properties file in your favorite editor. We will look at the default values of the properties in the following section.

How to do it...

Set the auto.create.topics.enable property:
auto.create.topics.enable=true

Set the controlled.shutdown.enable property:
controlled.shutdown.enable=true

Set the controlled.shutdown.max.retries property:
controlled.shutdown.max.retries=3

Set the controlled.shutdown.retry.backoff.ms property:
controlled.shutdown.retry.backoff.ms=5000

Set the auto.leader.rebalance.enable property:
auto.leader.rebalance.enable=true

Set the leader.imbalance.per.broker.percentage property:
leader.imbalance.per.broker.percentage=10

Set the leader.imbalance.check.interval.seconds property:
leader.imbalance.check.interval.seconds=300

Set the offset.metadata.max.bytes property:
offset.metadata.max.bytes=4096

Set the max.connections.per.ip property:
max.connections.per.ip=Int.MaxValue

Set the connections.max.idle.ms property:
connections.max.idle.ms=600000

Set the unclean.leader.election.enable property:
unclean.leader.election.enable=true

Set the offsets.topic.num.partitions property:
offsets.topic.num.partitions=50

Set the offsets.topic.retention.minutes property:
offsets.topic.retention.minutes=1440

Set the offsets.retention.check.interval.ms property:
offsets.retention.check.interval.ms=600000

Set the offsets.topic.replication.factor property:
offsets.topic.replication.factor=3

Set the offsets.topic.segment.bytes property:
offsets.topic.segment.bytes=104857600

Set the offsets.load.buffer.size property:
offsets.load.buffer.size=5242880

Set the offsets.commit.required.acks property:
offsets.commit.required.acks=-1

Set the offsets.commit.timeout.ms property:
offsets.commit.timeout.ms=5000

How it works…

An explanation of the settings is as follows:

auto.create.topics.enable: Setting this to true makes sure that, if you fetch metadata for or produce messages to a nonexistent topic, the topic is created automatically. Ideally, in a production environment, you should set this value to false.

controlled.shutdown.enable: This is set to true to make sure that, when shutdown is called on the broker, if it is the leader of any partitions, it gracefully moves their leadership to another broker before shutting down. This increases the overall availability of the system.

controlled.shutdown.max.retries: This sets the maximum number of retries the broker makes to perform a controlled shutdown before performing an unclean one.

controlled.shutdown.retry.backoff.ms: This sets the backoff time between controlled shutdown retries.

auto.leader.rebalance.enable: If this is set to true, the broker will automatically try to balance partition leadership among the brokers by periodically returning leadership to the preferred replica of each partition, if it is available.

leader.imbalance.per.broker.percentage: This sets the percentage of leader imbalance allowed per broker. The cluster will rebalance leadership if this ratio goes above the set value.

leader.imbalance.check.interval.seconds: This defines the time period for checking leader imbalance.

offset.metadata.max.bytes: This defines the maximum amount of metadata a client is allowed to store along with its offset.

max.connections.per.ip: This sets the maximum number of connections that the broker accepts from a given IP address.

connections.max.idle.ms: This sets the maximum time a socket connection can remain idle before the broker closes it.

unclean.leader.election.enable: This is set to true to allow replicas that are not in the in-sync replica (ISR) set to become the leader. This can lead to data loss.
It is, though, the last resort for keeping the cluster available.

offsets.topic.num.partitions: This sets the number of partitions for the offset commit topic. This cannot be changed after deployment, so it is suggested that you set it to a higher limit. The default value is 50.

offsets.topic.retention.minutes: Offsets older than this retention period will be marked for deletion. Actual deletion occurs when the log cleaner runs compaction on the offsets topic.

offsets.retention.check.interval.ms: This sets the time interval for checking for stale offsets.

offsets.topic.replication.factor: This sets the replication factor for the offset commit topic. The higher the value, the higher the availability. If, at the time of creation of the offsets topic, the number of brokers is lower than the replication factor, the number of replicas created will equal the number of brokers.

offsets.topic.segment.bytes: This sets the segment size for the offsets topic. Keeping this relatively low leads to faster log compaction and faster loading.

offsets.load.buffer.size: This sets the buffer size to be used for reading offset segments into the offset manager's cache.

offsets.commit.required.acks: This sets the number of acknowledgements required before an offset commit can be accepted.

offsets.commit.timeout.ms: This sets the time after which an offset commit fails if it has not been acknowledged by the required number of replicas.

See also

There are more broker configurations available. Read more about them at http://kafka.apache.org/documentation.html#brokerconfigs.

Summary

In this article, we discussed setting basic configurations for the Kafka broker and configuring and managing threads, performance, logs, and replicas. We also discussed the ZooKeeper settings that are used for cluster management and some miscellaneous parameter settings.

Resources for Article:

Further resources on this subject: Writing Consumers [article] Introducing Kafka [article] Testing With Groovy [article]

Monitoring and Troubleshooting Networking

Packt
21 Oct 2015
21 min read
This article by Muhammad Zeeshan Munir, author of the book VMware vSphere Troubleshooting, covers troubleshooting vSphere distributed switches, vSphere standard virtual switches, VLANs, uplinks, DNS, and routing, which are among the core issues a seasoned system engineer has to deal with on a daily basis. This article will cover all these topics and give you hands-on, step-by-step instructions to manage and monitor your network resources. The following topics will be covered in this article:

Different network troubleshooting commands
VLAN troubleshooting
Verification of physical trunks and VLAN configuration
Testing of VM connectivity
VMkernel interface troubleshooting
Configuration commands (vicfg-vmknic and esxcli network ip interface)
Use of the Direct Console User Interface (DCUI) to verify configuration

(For more resources related to this topic, see here.)

Network troubleshooting commands

Some of the commands that can be used for network troubleshooting include net-dvs, esxcli network, vicfg-route, vicfg-vmknic, vicfg-dns, vicfg-nics, and vicfg-vswitch.

You can use the net-dvs command to troubleshoot VMware distributed vSwitches. The command shows all the information regarding the VMware distributed vSwitch configuration. The net-dvs command reads the information from the /etc/vmware/dvsdata.db file and displays all the data in the console. A vSphere host updates its dvsdata.db file every five minutes.

Connect to a vSphere host using PuTTY. Enter your user name and password when prompted. Type the following command in the CLI:

net-dvs

You will see something similar to the following screenshot:

In the preceding screenshot, you can see that the first line represents the UUID of the VMware distributed switch. The second line shows the maximum number of ports a distributed switch can have. The line com.vmware.common.alias = dvswitch-Network-Pools represents the name of the distributed switch. The next line, com.vmware.common.uplinkPorts: dvUplink1 to dvUplinkn, shows the uplink ports the distributed switch has. The distributed switch MTU is set to 1,600, and you can see the information about CDP just below it. CDP information can be useful for troubleshooting connectivity issues. You can see com.vmware.common.respools.list listing the networking resource pools, while com.vmware.common.host.uplinkPorts shows the port numbers assigned to the uplink ports. Further details about these uplink ports follow, listed by port number.

You can also see the port statistics, as displayed in the following screenshot. When you perform troubleshooting, these statistics can help you check the behavior of the distributed switch and its ports. From these statistics, you can diagnose whether data packets are going in and out. As you can see in the following screenshot, all the metrics regarding packet drops are zero. If you find in your troubleshooting that packets are being dropped, you can start looking for the root cause of the problem:

Unfortunately, the net-dvs command is very poorly documented, and it is usually hard to find useful references. Moreover, it is not supported by VMware. However, you can use it with the –h switch to display more options.

Repairing a dvsdata.db file

Sometimes, the dvsdata.db file of a vSphere host becomes corrupted and you face different types of distributed switch errors, for example, being unable to create a proxy DVS. In this case, when you try to run the net-dvs command on the vSphere host, it fails with an error as well. As mentioned earlier, the net-dvs command reads data from the /etc/vmware/dvsdata.db file, and it fails because it is unable to read that file. A possible cause for the corruption of the dvsdata.db file is a network outage, or a vSphere host that was disconnected from vCenter and deleted while stale information remained in its cache. You can resolve the issue by restoring the dvsdata.db file with the following steps (a sketch of such a session follows this list):

Through PuTTY, connect to a functioning vSphere host in your infrastructure.
Copy the dvsdata.db file from that vSphere host. The file can be found at /etc/vmware/dvsdata.db.
Transfer the copied dvsdata.db file to the corrupted vSphere host and overwrite its own copy.
Restart the vSphere host.
Once the vSphere host is up and running, use PuTTY to connect to it.
Run the net-dvs command. The command should now execute successfully, without any errors.
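The repair session might look roughly like the following shell sketch; the hostnames are placeholders, and SSH must be enabled on both hosts:

# On the healthy host (hostnames are illustrative):
scp /etc/vmware/dvsdata.db root@broken-esx.example.com:/etc/vmware/dvsdata.db

# On the repaired host, after it has been restarted:
net-dvs   # should now print the distributed switch configuration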
As I have mentioned earlier, the net-dvs command reads data from the /etc/vmware/dvsdata.db file—it fails because it is unable to read data from the file. The possible cause for the corruption of the dvsdata.db file could be network outage; or when a vSphere host is disconnected from vCenter and deleted, it might have the information in its cache. You can resolve this issue by restoring the dvsdata.db file by following these steps: Through PuTTY, connect to a functioning vSphere host in your infrastructure. Copy the dvsdata.db file from the vSphere host. The file can be found in /etc/vmware/dvsdata.db. Transfer the copied dvsdata.db file to the corrupted vSphere host and overwrite it. Restart your vSphere host. Once the vSphere host is up and running, use PuTTY to connect to it. Run the net-dvs command. The command should be executed successfully this time without any errors. ESXCLI network The esxcli network command is a longtime friend of the system administrator and the support staff for troubleshooting network related issues. The esxcli network command will be used to examine different network configurations and to troubleshoot problems. You can type esxcli network to quickly see a help reference and the different options that can be used with the command. Let's walk through some useful esxcli network troubleshooting commands. Type the following command into your vSphere CLI to list all the virtual machines and the networks they are on. You can see that the command returned World ID, virtual machine name, number of ports, and the network: esxcli network vm list World ID  Name  Num Ports  Networks --------  ---------------------------------------------------  ---------  --------------- 14323012  cluster08_(5fa21117-18f7-427c-84d1-c63922199e05)          1  dvportgroup-372 Now use the World ID of a virtual machine returned by the last command to list all the ports the virtual machine is currently using. 
You can see the virtual switch name, the MAC address of the NIC, the IP address, and the uplink port ID:

esxcli network vm port list -w 14323012

Port ID: 50331662
vSwitch: dvSwitch-Network-Pools
Portgroup: dvportgroup-372
DVPort ID: 1063
MAC Address: 00:50:56:01:00:7e
IP Address: 0.0.0.0
Team Uplink: all(2)
Uplink Port ID: 0
Active Filters:

Type the following command in the CLI to list the statistics of the port; replace the port ID after the –p flag with the one returned by the last command:

esxcli network port stats get -p 50331662

Packet statistics for port 50331662
Packets received: 10787391024
Packets sent: 7661812086
Bytes received: 3048720170788
Bytes sent: 154147668506
Broadcast packets received: 17831672
Broadcast packets sent: 309404
Multicast packets received: 656
Multicast packets sent: 52
Unicast packets received: 10769558696
Unicast packets sent: 7661502630
Receive packets dropped: 92865923
Transmit packets dropped: 0

Type the following command to list complete statistics for a physical network card:

esxcli network nic stats get -n vmnic0

NIC statistics for vmnic0
Packets received: 2969343419
Packets sent: 155331621
Bytes received: 2264469102098
Bytes sent: 46007679331
Receive packets dropped: 0
Transmit packets dropped: 0
Total receive errors: 78507
Receive length errors: 0
Receive over errors: 22
Receive CRC errors: 0
Receive frame errors: 0
Receive FIFO errors: 78485
Receive missed errors: 0
Total transmit errors: 0
Transmit aborted errors: 0
Transmit carrier errors: 0
Transmit FIFO errors: 0
Transmit heartbeat errors: 0
Transmit window errors: 0

A complete reference for the esxcli network command can be found at https://goo.gl/9OMbVU.

All the vicfg-* commands are very helpful and easy to use, and I encourage you to learn them to make your life easier. Here are some of the vicfg-* commands relevant to network troubleshooting:

vicfg-route: We will use this command to add or remove IP routes and to create and delete default IP gateways.
vicfg-vmknic: We will use this command to perform different operations on the VMkernel NICs of vSphere hosts.
vicfg-dns: This command will be used to manipulate DNS information.
vicfg-nics: We will use this command to manipulate vSphere physical NICs.
vicfg-vswitch: We will use this command to create, delete, and modify vSwitch information.

Troubleshooting uplinks

We will use the vicfg-nics command to manage the physical network adapters of vSphere hosts. The vicfg-nics command can also be used to set the speed and duplex of the uplink adapters and to view driver and link state information for each NIC.

Connect to your vMA appliance console and set the target vSphere host:

vifptarget --set crimv3esx001.linxsol.com

List all the network cards available in the vSphere host. See the following screenshot for the output:

vicfg-nics -l

You can see that my vSphere host has five network cards, from vmnic0 to vmnic5, along with the PCI and driver information. The link state for all the network cards is up. You can also see two network card speeds: 1000 Mbps and 9000 Mbps. There is also a card name in the Description field, the MTU, and the MAC address of each network card.

You can set a network card to auto-negotiate as follows:

vicfg-nics --auto vmnic0

Now let's set the speed of vmnic0 to 1000 and its duplex setting to full:

vicfg-nics --duplex full --speed 1000 vmnic0
The vicfg-vswitch command is a very powerful command that can be used for the day-to-day management of virtual switches. I will show you how to create and configure port groups and virtual switches.

Set the target vSphere host in the vMA appliance for which you want to get information about virtual switches:

vifptarget --set crimv3esx001.linxsol.com

Type the following command to list all the information about the switches the vSphere host has. You can see the command output in the screenshot that follows:

vicfg-vswitch -l

You can see that the vSphere host has one virtual switch and two virtual NICs carrying traffic for the management network and for vMotion. The virtual switch has 128 ports, 7 of which are in use. There are two uplinks to the switch, with the MTU set to 1500, and two VLANs are being used: one for the management network and one for the vMotion traffic. You can also see three distributed switches named OpenStack, dvSwitch-External-Networks, and dvSwitch-Network-Pools. Prefixing a distributed switch name with dv is a common practice, and it helps you easily recognize a distributed switch.

I will go through adding a new virtual switch:

vicfg-vswitch --add vSwitch002

This creates a virtual switch with 128 ports and an MTU of 1500. You can use the --mtu flag to specify a different MTU. Now add an uplink adapter, vmnic0, to the newly created virtual switch vSwitch002:

vicfg-vswitch --link vmnic0 vSwitch002

To add a port group to the virtual switch, use the following command:

vicfg-vswitch --add-pg portgroup002 vSwitch002

Now add an uplink adapter to the port group:

vicfg-vswitch --add-pg-uplink vmnic0 --pg portgroup002 vSwitch002

We have discussed all the commands needed to create a virtual switch, create its port groups, and add uplinks. Now we will see how to delete and edit the configuration of a virtual switch. An uplink NIC can be removed from a port group using the –N flag. Remove vmnic0 from portgroup002:

vicfg-vswitch --del-pg-uplink vmnic0 --pg portgroup002 vSwitch002

You can delete the recently created port group as follows:

vicfg-vswitch --del-pg portgroup002 vSwitch002

To delete a switch, you first need to remove the uplink adapter from the virtual switch. You need to use the –U flag, which unlinks the uplink from the switch:

vicfg-vswitch --unlink vmnic0 vSwitch002

You can delete a virtual switch using the –d flag. Here is how you do it:

vicfg-vswitch --delete vSwitch002

You can check the Cisco Discovery Protocol (CDP) settings by using the --get-cdp flag with the vicfg-vswitch command. The following output shows the CDP in the listen state, which indicates that the vSphere host is configured to receive CDP information from the physical switch:

vi-admin@vma:~[crimv3esx001.linxsol.com]> vicfg-vswitch --get-cdp vSwitch0
listen

You can configure the CDP mode for a vSwitch as down, listen, advertise, or both. In the listen mode, the vSphere host discovers and displays information received from the Cisco switch port, though information about the vSwitch cannot be seen by the Cisco device. In the advertise mode, the vSphere host doesn't discover and display information about the Cisco switch; instead, it publishes information about its vSwitch to the Cisco switch device:

vicfg-vswitch --set-cdp both vSwitch0

Troubleshooting VLANs

Virtual LANs (VLANs) are used to divide a physical switching segment into different logical switching segments in order to segregate broadcast domains.
VLANs not only provide network segmentation but also give us a method of effective network management. They also increase overall network security, and nowadays they are very commonly used in infrastructure. If not set up correctly, however, they can leave your vSphere host with no connectivity, and you can face some very common problems where you are unable to ping hosts or resolve host names anymore. Typical errors include Destination host unreachable and Connection failed.

A Private VLAN (PVLAN) is an extended version of a VLAN that divides a logical broadcast domain into further segments, forming private groups. PVLANs are divided into primary and secondary PVLANs. The primary PVLAN is the VLAN that is divided into smaller segments; it hosts all the secondary PVLANs within it. Secondary PVLANs live within a primary PVLAN, and each secondary PVLAN is recognized by the VLAN ID linked to it. Just like their ancestor VLANs, packets that travel within secondary PVLANs are tagged with their associated IDs. The physical switch then recognizes whether the packets are tagged as isolated, community, or promiscuous.

As network troubleshooting involves taking care of many different aspects, one aspect you will come across in the troubleshooting cycle is troubleshooting VLANs. vSphere Enterprise Plus licensing is required to connect a host to a virtual distributed switch and use VLANs with it.

You can see the three different network segments in the following screenshot. VLAN A connects all the virtual machines on different vSphere hosts; VLAN B carries the management network traffic; and VLAN C carries the vMotion-related traffic. In order to create PVLANs on your vSphere hosts, you also need the support of a physical switch:

For detailed information about vSphere networking, refer to the official VMware networking guide for vSphere 5.5 at http://goo.gl/SYySFL.

Verifying physical trunks and VLAN configuration

The first and most important step in troubleshooting a VLAN problem is to look into the VLAN configuration of your vSphere host. You should always start by verifying it. Let's walk through how to verify the network configuration of the management network and the VLAN configuration from the vSphere client:

Open and log in to your vSphere client.
Click on the vSphere host you are trying to troubleshoot.
Click on the Configuration menu, choose Networking, and then open the Properties of the switch you are troubleshooting.
Choose the network you are troubleshooting from the list and click on Edit. This will open a new window.
Verify the VLAN ID for Management Network. Match it against the VLAN ID provided by your network administrator.

Verifying VLAN configuration from CLI

Following are the steps for verifying the VLAN configuration from the CLI:

Log in to the vSphere CLI.
Type the following command in the console:

esxcfg-vswitch -l

Alternatively, in the vMA appliance, type the vicfg-vswitch command; the output is similar for both commands:

vicfg-vswitch -l

The output of the esxcfg-vswitch -l command is as follows:

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         7           128               1500    vmnic3,vmnic2

PortGroup Name        VLAN ID  Used Ports  Uplinks
vMotion               2231     1           vmnic3,vmnic2
Management Network    2230     1           vmnic3,vmnic2
---Omitted output---

The output of the vicfg-vswitch -l command is as follows:

Switch Name     Num Ports       Used Ports      Configured Ports    MTU     Uplinks
vSwitch0        128             7               128                 1500    vmnic2,vmnic3

PortGroup Name                VLAN ID   Used Ports      Uplinks
vMotion                       2231      1               vmnic2,vmnic3
Management Network            2230      1               vmnic3,vmnic2
---Omitted output---

Match the output against your network configuration. If a VLAN ID is incorrect or missing, you can add or edit it using the following command from the vSphere CLI:

esxcfg-vswitch -v 2233 -p "Management Network" vSwitch0

To add or edit the VLAN ID from the vMA appliance, use the following command:

vicfg-vswitch --vlan 2233 --pg "Management Network" vSwitch0

Verifying VLANs from PowerCLI

Verifying information about VLANs from PowerCLI is fairly simple. Type the following command into the console after connecting to vCenter using Connect-VIServer:

Get-VirtualPortGroup -VMHost crimv3esx001.linxsol.com | Select Name, VirtualSwitch, VlanId

Name                  VirtualSwitch    VlanId
----                  -------------    ------
vMotion               vSwitch0         2231
Management Network    vSwitch0         2233

Verifying PVLANs and secondary PVLANs

When you have configured PVLANs or secondary PVLANs in your vSphere infrastructure, you may arrive at a situation where you need to troubleshoot them. The following steps show how to obtain and view information about PVLANs and secondary PVLANs:

Log in to the vSphere client and click on Networking.
Select a distributed switch, right-click on it, and choose Edit Settings from the menu. This will open the Distributed Switch Settings window.
Click on the third tab, named Private VLAN.
In the section on the left, named Primary private VLAN ID, verify the VLAN ID provided by your network engineer. You can verify the VLAN ID of the secondary PVLAN in the next section, on the right.

Testing virtual machine connectivity

Whenever you are troubleshooting, virtual-machine-to-virtual-machine testing is very important. It helps you isolate the problem domain to a smaller scope. When performing virtual-machine-to-virtual-machine testing, you should always move the virtual machines to a single vSphere host. You can then start troubleshooting the network using basic commands such as ping. If ping works, you are ready to test further and move the virtual machines to other hosts; if it still doesn't work, the cause is most likely a physical switch configuration problem or a mismatched physical trunk configuration.
The most common culprit in this scenario is a problematic physical switch configuration.

Troubleshooting VMkernel interfaces

In this section, we will see how to troubleshoot VMkernel interfaces using the following checks and commands:

Confirm VLAN tagging.
Ping to check connectivity.
vicfg-vmknic.
esxcli network ip interface for local configuration.
esxcli network ip interface list.
esxcli network ip interface add / remove / set.
esxcli network ip interface ipv4 get.

You should know how to use these commands to test whether everything is working; for example, you should be able to ping to ensure that connectivity exists. We will use the vicfg-vmknic command to configure vSphere VMkernel NICs. Let's create a new VMkernel NIC on a vSphere host using the following steps:

Log in to your VMware vSphere CLI.
Type the following command to create a new VMkernel NIC:

vicfg-vmknic -h crimv3esx001.linxsol.com --add --ip 10.2.0.10 -n 255.255.255.0 'portgroup01'

You can enable vMotion using the vicfg-vmknic command as follows: vicfg-vmknic --enable-vmotion. You will not be able to enable vMotion from ESXCLI. vMotion enables the migration of your virtual machines with zero downtime.

You can delete an existing VMkernel NIC as follows:

vicfg-vmknic -h crimv3esx001.linxsol.com --delete 'portgroup01'

Now check which VMkernel NICs are available in the system by typing the following command:

vicfg-vmknic -l

Verifying configuration from DCUI

When you successfully install vSphere, the first yellow screen that you see is called the vSphere DCUI. The DCUI is a frontend management system that helps you perform some basic system administration tasks. It also offers the best way to troubleshoot some problems that may be difficult to troubleshoot through vMA, vCLI, or PowerCLI. Further, it is very useful when your host becomes unresponsive in vCenter or is not accessible from any of the management tools. Some useful tasks that can be performed using the DCUI are as follows:

Configuring the Lockdown mode
Checking connectivity of the management network by ping
Configuring and restarting network settings
Restarting management agents
Viewing logs
Resetting the vSphere configuration
Changing the root password

Verifying network connectivity from DCUI

The vSphere host automatically assigns the first network card available in the system to the management network. Moreover, the default installation of a vSphere host does not let you set up VLAN tags until the VMkernel has been loaded. Verifying network connectivity from the DCUI is important but easy. To do so, follow these steps:

Press F2 and enter your root user name and password. Press OK.
Use the cursor keys to go down to the Test Management Network option.
Press Enter, and you will see a new screen. Here you can enter up to three IP addresses and a host name to be resolved. You can also type your gateway address on this screen to see whether you are able to reach your gateway. In the host name field, you can enter your DNS server name to test whether the name resolves successfully.
Press Esc to go back, and Esc again to log off from the vSphere DCUI.

Verifying the management network from DCUI

You can also verify the settings of your management network from the DCUI:

Press F2 and enter your root user name and password. Press OK.
Use the cursor keys to go down to the Configure Management Network option and press Enter.
Press Enter again after selecting the first option, Network Adapters.
On the next screen, you will see a list of all the network adapters your system has.
The list shows the Device Name, Hardware Type, Label, and MAC Address of each network card, and its status as Connected or Disconnected. You can select or deselect any of the network cards by pressing the Space bar. Press Esc to go back, and Esc again to log off from the vSphere DCUI.

As you can see in the preceding screenshot, you can also configure the IP address and DNS settings for your vSphere host. You can also use the DCUI to configure VLANs and the DNS suffix for your vSphere host.

Summary

In this article, we took a deep dive into the troubleshooting commands and some of the tools used to monitor network performance. The various platforms from which you can execute commands let you adapt your troubleshooting approach: for example, for troubleshooting a single vSphere host you may like to use esxcli, but for a group of vSphere hosts you would want to automate scripted tasks from PowerCLI or from a vMA appliance.

Resources for Article:

Further resources on this subject: Upgrading VMware Virtual Infrastructure Setups [article] VMware vRealize Operations Performance and Capacity Management [article] Working with Virtual Machines [article]

Securing and Authenticating Web API

Packt
21 Oct 2015
9 min read
In this article by Rajesh Gunasundaram, author of ASP.NET Web API Security Essentials, we will cover how to secure a Web API using forms authentication and Windows authentication. You will also get to learn the advantages and disadvantages of using forms and Windows authentication in the Web API. In this article, we will cover the following topics:

The working of forms authentication
Implementing forms authentication in the Web API
Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism
Configuring Windows authentication
Enabling Windows authentication in Katana
Discussing Hawk authentication

(For more resources related to this topic, see here.)

The working of forms authentication

In forms authentication, the user credentials are submitted to the server using HTML forms. This can be used in the ASP.NET Web API only if it is consumed from a web application. Forms authentication is built on ASP.NET and uses the ASP.NET membership provider to manage user accounts. It requires a browser client to pass the user credentials to the server; it sends the user credentials in the request and uses HTTP cookies for the authentication. Let's list the process of forms authentication step by step:

1. The browser tries to access a restricted action that requires an authenticated request.
2. If the browser sends an unauthenticated request, the server responds with an HTTP status 302 Found and triggers a URL redirection to the login page.
3. To send an authenticated request, the user enters the username and password and submits the form.
4. If the credentials are valid, the server responds with an HTTP 302 status code that makes the browser redirect to the originally requested URI, with the authentication cookie in the response.
5. Any request from the browser will now include the authentication cookie, and the server will grant access to any restricted resource.

The following image illustrates the workflow of forms authentication:

Fig 1 – Illustrates the workflow of forms authentication

Implementing forms authentication in the Web API

To send the credentials to the server, we need an HTML form to submit them. Let's use the HTML form or view of an ASP.NET MVC application. The steps to implement forms authentication in an ASP.NET MVC application are as follows:

1. Create a New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Name the project Chapter06.FormsAuthentication and click OK.

Fig 2 – We have named the ASP.NET Web Application as Chapter06.FormsAuthentication

5. Select the MVC template in the New ASP.NET Project dialog.
6. Tick Web API under Add folders and core references and press OK, leaving Authentication set to Individual User Accounts.
Fig 3 – Select MVC template and check Web API in add folders and core references

7. In the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

8. Add a Web API controller named ContactsController with the following code snippet:

namespace Chapter06.FormsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

As you can see in the preceding code, we decorated the Get() action in ContactsController with the [Authorize] attribute. So, this Web API action can only be accessed by an authenticated request. An unauthenticated request to this action will make the browser redirect to the login page and enable the user to either register or log in. Once the user is logged in, any request that tries to access this action will be allowed, as it is authenticated. This is because the browser automatically sends the session cookie along with the request, and forms authentication uses this cookie to authenticate the request. It is very important to secure the website using SSL, as forms authentication sends unencrypted credentials.

Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism

First, let's see the advantages of Windows authentication. Windows authentication is built into Internet Information Services (IIS). It doesn't send the user credentials along with the request. This authentication mechanism is best suited to intranet applications and doesn't need a user to enter their credentials. However, with all these advantages, there are a few disadvantages to the Windows authentication mechanism. It requires Kerberos, which works based on tickets, or NTLM; these are Microsoft security protocols that must be supported by the client. The client's PC must be in an Active Directory domain. Windows authentication is not suitable for Internet applications, as the client may not necessarily be on the same domain.

Configuring Windows authentication

Let's implement Windows authentication in an ASP.NET MVC application, as follows:

1. Create a New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Name the project Chapter06.WindowsAuthentication and click OK.

Fig 4 – We have named the ASP.NET Web Application as Chapter06.WindowsAuthentication

5. Change the Authentication mode to Windows Authentication.

Fig 5 – Select Windows Authentication in Change Authentication window

6. Select the MVC template in the New ASP.NET Project dialog.
7. Tick Web API under Add folders and core references and click OK.
Fig 6 – Select MVC template and check Web API in add folders and core references

8. Under the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

9. Add a Web API controller named ContactsController with the following code:

namespace Chapter06.FormsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

The Get() action in ContactsController is decorated with the [Authorize] attribute. However, in Windows authentication, any request is considered an authenticated request if the client is on the same domain. So no explicit login process is required to send an authenticated request that calls the Get() action. Note that Windows authentication is configured in the Web.config file:

<system.web>
  <authentication mode="Windows" />
</system.web>

Enabling Windows authentication in Katana

The following steps will create a console application and enable Windows authentication in Katana:

1. Create a New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Windows Desktop.
3. Select Console Application from the middle panel.
4. Name the project Chapter06.WindowsAuthenticationKatana and click OK:

Fig 7 – We have named the Console Application as Chapter06.WindowsAuthenticationKatana

5. Install the NuGet package named Microsoft.Owin.SelfHost from the NuGet Package Manager:

Fig 8 – Install NuGet Package named Microsoft.Owin.SelfHost

6. Add a Startup class with the following code snippet (the using directives for System.Net and Owin are included so that HttpListener, AuthenticationSchemes, and IAppBuilder resolve):

using System.Net;
using Owin;

namespace Chapter06.WindowsAuthenticationKatana
{
    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
            listener.AuthenticationSchemes = AuthenticationSchemes.IntegratedWindowsAuthentication;
            app.Run(context =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("Hello Packt Readers!");
            });
        }
    }
}

7. Add the following code to the Main function in Program.cs:

using (WebApp.Start<Startup>("http://localhost:8001"))
{
    Console.WriteLine("Press any Key to quit Web App.");
    Console.ReadKey();
}

8. Now run the application and open http://localhost:8001/ in the browser:

Fig 8 – Open the Web App in a browser

If you capture the request using Fiddler, you will notice an Authorization Negotiate entry in the header of the request. Try calling http://localhost:8001/ in Fiddler and you will get a 401 Unauthorized response with WWW-Authenticate headers, indicating that the server attaches a Negotiate protocol that consumes either Kerberos or NTLM, as follows:

HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/8.0
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Tue, 01 Sep 2015 19:35:51 IST
Content-Length: 6062
Proxy-Support: Session-Based-Authentication

Discussing Hawk authentication

Hawk authentication is a message authentication code-based HTTP authentication scheme that facilitates the partial cryptographic verification of HTTP messages.
Hawk authentication requires a symmetric key to be shared between the client and server. Instead of sending the username and password to the server in order to authenticate the request, Hawk authentication uses these credentials to generate a message authentication code, which is passed to the server in the request for authentication. Hawk authentication is mainly implemented in scenarios where you need to pass the username and password over an unsecured layer and no SSL is implemented on the server. In such cases, Hawk authentication protects the username and password and passes the message authentication code instead. For example, if you are building a small product where you control both the server and the client, and implementing SSL is too expensive for such a small project, then Hawk is the best option to secure the communication between your server and client.

Summary

Voila! We just secured our Web API using forms-based and Windows-based authentication. In this article, you learned how forms authentication works and how it is implemented in the Web API. You also learned about configuring Windows authentication and got to know the advantages and disadvantages of using Windows authentication. Then you learned about implementing the Windows authentication mechanism in Katana. Finally, we had an introduction to Hawk authentication and the scenarios in which to use it.

Resources for Article:

Further resources on this subject:

Working with ASP.NET Web API [article]
Creating an Application using ASP.NET MVC, AngularJS and ServiceStack [article]
Enhancements to ASP.NET [article]

How to build a game using Phaser

Mika Turunen
21 Oct 2015
9 min read
Let's take a look at writing a simple Breakout (Arkanoid for some of us) clone with Phaser. To keep it as simple as possible, I've created a separate GitHub repository for the code used in this post. I'm going to assume you have some experience in JavaScript. You should also give Phaser's official website a visit and see what the commotion is about.

Setup

To have everything in working order, download Node.js from https://nodejs.org/ and have it working on your command prompt, meaning commands like node and npm are recognized. If you are having difficulties, or would just like to explore the different options for creating a no-hassle HTTP server for Phaser projects, you can always look at Phaser's official getting started guide on HTTP servers.

Project structure

Create a barkanoid directory on your local machine and extract or clone the files from the GitHub repository into the directory. You should see the following project structure:

barkanoid
|
|----js
|----|----barkanoid.js
|----assets
|----|----background.jpg
|----|----ball.png
|----|----paddle.png
|----|----tile0.png
|----|----tile1.png
|----|----tile2.png
|----|----tile3.png
|----|----tile4.png
|----|----tile5.png
|----index.html
|----package.json

The assets directory is for all game-related assets such as graphics, sounds and the like. The js directory is for all the JavaScript files, and since we are keeping it as simple as possible for the sake of this post, it's only one .js file. index.html is the actual game canvas. package.json is the file that tells the Node package manager (npm) what to install when we use it.

Installing dependencies

There are a few dependencies that we first need to take care of, such as Phaser itself and the HTTP server we are going to serve our files from. Luckily for us, Node.js makes this super simple, and with the project from GitHub you can simply run the following command in the barkanoid directory:

npm install

It might take a while, depending on your Internet connection. All dependencies should now be installed.

Programming time

Phaser requires at least one HTML file to act as the starting canvas for our game, so let's go ahead and create it. Save index.html into the root of the barkanoid directory for easier access.

index.html

<!doctype html>
<html>
<head>
    <meta charset="UTF-8"/>
    <title>Barkanoid Example</title>
    <script src="/node_modules/phaser/dist/phaser.min.js"></script>
    <script src="/js/barkanoid.js"></script>
</head>
<body>
    <div id="barkanoid"></div>
</body>
</html>

Notice the HTML element's attribute id="barkanoid": that div element is the container where Phaser will inject the game canvas. It can be called anything really, but it's important to know what the id of the element is so we can actually tell Phaser about it.

Let's continue with the js/barkanoid.js file. Create the Phaser game object and set it up with the HTML div element with the id "barkanoid".

// Create the game object itself
var game = new Phaser.Game(
    800, 600,          // 800 x 600 resolution
    Phaser.AUTO,       // Allow Phaser to determine Canvas or WebGL
    "barkanoid",       // The HTML element ID we will connect Phaser to
    {                  // Functions (callbacks) for Phaser to call in
        preload: phaserPreload,   // different states of its execution
        create: phaserCreate,
        update: phaserUpdate
    }
);

You can attach callbacks for Phaser's preload, create, update and render. For this project, we only need preload, create and update.

Preload function:
/**
 * Preload callback. Used to load all assets into Phaser.
 */
function phaserPreload() {
    // Loading the background as an image
    game.load.image("background", "/assets/background.jpg");

    // Loading the tiles
    game.load.image("tile0", "/assets/tile0.png");
    game.load.image("tile1", "/assets/tile1.png");
    game.load.image("tile2", "/assets/tile2.png");
    game.load.image("tile3", "/assets/tile3.png");
    game.load.image("tile4", "/assets/tile4.png");
    game.load.image("tile5", "/assets/tile5.png");

    // Loading the paddle and the ball
    game.load.image("paddle", "/assets/paddle.png");
    game.load.image("ball", "/assets/ball.png");
}

This is nothing too fancy. I am keeping it as simple as possible and just loading a set of images into Phaser with game.load.image, giving each a simple alias and the location of the file. The following is the phaserCreate function. Don't get scared: it's actually quite simple, even though a bit lengthy compared to the preload one. We'll walk through it in three steps.

/**
 * Create callback. Used to create all game related objects, set states and
 * other pre-game running details.
 */
function phaserCreate() {
    game.physics.startSystem(Phaser.Physics.ARCADE);

    // All walls collide except the bottom
    game.physics.arcade.checkCollision.down = false;

    // Using the in-game name to fetch the loaded asset for the background object
    background = game.add.tileSprite(0, 0, 800, 600, "background");

This simply tells Phaser that Arcade-style physics are enabled and that we do not want to check for collisions on the bottom of the screen, and it creates a simple background from the background image.

// Continuing from the first part ...

    // Creating a tile group
    tiles = game.add.group();
    tiles.enableBody = true;
    tiles.physicsBodyType = Phaser.Physics.ARCADE;

    // Creating N tiles into the tile group
    for (var y = 0; y < 4; y++) {
        for (var x = 0; x < 15; x++) {
            // Randomizing the tile sprite we load for the tile
            var randomTileNumber = Math.floor(Math.random() * 6);
            var tile = tiles.create(120 + (x * 36), 100 + (y * 52), "tile" + randomTileNumber);
            tile.body.bounce.set(1);
            tile.body.immovable = true;
        }
    }

Next we create a group for the tiles with game.add.group. A group can hold many different things, but here we have a group of game objects for easier collision manipulation. The tile colors get randomized every time the game starts. We create four rows of 15 columns of tiles.

// Continuing from the second part ...

    // Set up the player -- paddle
    paddle = game.add.sprite(game.world.centerX, 500, "paddle");
    paddle.anchor.setTo(0.5, 0.5);
    game.physics.enable(paddle, Phaser.Physics.ARCADE);
    paddle.body.collideWorldBounds = true;
    paddle.body.bounce.set(1);
    paddle.body.immovable = true;

    // Create the ball
    ball = game.add.sprite(game.world.centerX, paddle.y - 16, "ball");
    ball.anchor.set(0.5);
    ball.checkWorldBounds = true;
    game.physics.enable(ball, Phaser.Physics.ARCADE);
    ball.body.collideWorldBounds = true;
    ball.body.bounce.set(1);

    // When it goes out of bounds we'll call the function 'death'
    ball.events.onOutOfBounds.add(helpers.death, this);

    // Set up the score and intro texts
    scoreText = game.add.text(32, 550, "score: 0", defaultTextOptions);
    livesText = game.add.text(680, 550, "lives: 3", defaultTextOptions);
    introText = game.add.text(game.world.centerX, 400, "- click to start -", boldTextOptions);
    introText.anchor.setTo(0.5, 0.5);
    game.input.onDown.add(helpers.release, this);
}

This creates the player, the ball and some informative text elements. And last but not least, the common update function Phaser calls every update cycle.
This is where you can handle updating different objects, their states and the other rocket-sciency parts one might have in a game.

/**
 * Phaser engine's update loop that gets called on every update.
 */
function phaserUpdate() {
    paddle.x = game.input.x;

    // Making sure the player does not move out of bounds
    if (paddle.x < 24) {
        paddle.x = 24;
    } else if (paddle.x > game.width - 24) {
        paddle.x = game.width - 24;
    }

    if (ballOnPaddle) {
        // Setting the ball on the paddle when the player has it
        ball.body.x = paddle.x;
    } else {
        // Check collisions; the callback gets called when the first object collides with the second
        game.physics.arcade.collide(ball, paddle, helpers.ballCollideWithPaddle, null, this);
        game.physics.arcade.collide(ball, tiles, helpers.ballCollideWithTile, null, this);
    }
}

You probably noticed that we are calling functions and using objects that were never declared anywhere, like defaultTextOptions and helpers.release. All the helper functions are defined after the callbacks for Phaser.

// A few game-related variables (note that scoreText is declared here too)
var ball, paddle, tiles, scoreText, livesText, introText, background;
var ballOnPaddle = true;
var lives = 3;
var score = 0;
var defaultTextOptions = { font: "20px Arial", align: "left", fill: "#ffffff" };
var boldTextOptions = { font: "40px Arial", fill: "#ffffff", align: "center" };

/**
 * Set of helper functions.
 */
var helpers = {
    /**
     * Releases the ball from the paddle.
     */
    release: function() {
        if (ballOnPaddle) {
            ballOnPaddle = false;
            ball.body.velocity.y = -300;
            ball.body.velocity.x = -75;
            introText.visible = false;
        }
    },

    /**
     * Ball went out of bounds.
     */
    death: function() {
        lives--;
        livesText.text = "lives: " + lives;
        if (lives === 0) {
            helpers.gameOver();
        } else {
            ballOnPaddle = true;
            ball.reset(paddle.body.x + 16, paddle.y - 16);
        }
    },

    /**
     * Game over, all lives lost.
     */
    gameOver: function() {
        ball.body.velocity.setTo(0, 0);
        introText.text = "Game Over!";
        introText.visible = true;
    },

    /**
     * Callback for when the ball collides with tiles.
     */
    ballCollideWithTile: function(ball, tile) {
        tile.kill();
        score += 10;
        scoreText.text = "score: " + score;

        // Are there any tiles left?
        if (tiles.countLiving() <= 0) {
            // New level start
            score += 1000;
            scoreText.text = "score: " + score;
            introText.text = "- Next Level -";

            // Attach the ball to the player's paddle
            ballOnPaddle = true;
            ball.body.velocity.set(0);
            ball.x = paddle.x + 16;
            ball.y = paddle.y - 16;

            // Tell the tiles to revive
            tiles.callAll("revive");
        }
    },

    /**
     * Callback for when the ball collides with the player's paddle.
     */
    ballCollideWithPaddle: function(ball, paddle) {
        var diff = 0;

        // Super simplistic bounce physics for the ball movement
        if (ball.x < paddle.x) {
            // Ball is on the left-hand side
            diff = paddle.x - ball.x;
            ball.body.velocity.x = (-10 * diff);
        } else if (ball.x > paddle.x) {
            // Ball is on the right-hand side
            diff = ball.x - paddle.x;
            ball.body.velocity.x = (10 * diff);
        } else {
            // Ball is perfectly in the middle
            // Add a little random X to stop it bouncing straight up!
            ball.body.velocity.x = 2 + Math.random() * 8;
        }
    }
};

Most of the helper functions are pretty self-explanatory, and there is a decent amount of comments around them, so they should be easy to understand.

Time to play the game

After about 200 lines or so of code and setting everything up, you should be ready to run npm start in the barkanoid directory to start the game. Enjoy the Barkanoid game you just created. Play a round or two and start customizing it as much as you want. Have fun!
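If you want an idea for a first customization, here is a small, hypothetical tweak (the 1.05 factor is an arbitrary choice of mine, not something from the original repository): make the ball speed up a little on every paddle hit by adding one line at the end of helpers.ballCollideWithPaddle.

// Hypothetical difficulty tweak: accelerate the ball by 5% on each paddle hit.
// velocity.y keeps its sign here, so it works regardless of the ball's direction.
ball.body.velocity.y *= 1.05;

Tiny experiments like this are a low-risk way to get familiar with the Phaser arcade physics body API before attempting bigger features such as power-ups or extra levels.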
About the author

Mika Turunen is a software professional hailing from frozen, cold Finland. He spends a good part of his day playing with emerging web and cloud technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development and games in general. When he's not playing with technology, he is spending time with his two cats and growing his beard.
Nginx service

Packt
20 Oct 2015
15 min read
In this article by Clement Nedelcu, author of the book Nginx HTTP Server - Third Edition, we discuss the stages that follow a successful build and installation of Nginx. The default location for the output files is /usr/local/nginx. (For more resources related to this topic, see here.)

Daemons and services

The next step is obviously to execute Nginx. However, before doing so, it's important to understand the nature of this application. There are two types of computer applications: those that require immediate user input, and thus run in the foreground, and those that do not, and thus run in the background. Nginx is of the latter type, often referred to as a daemon. Daemon names usually come with a trailing d, and a couple of examples can be mentioned here: httpd, the HTTP server daemon (the name given to Apache under several Linux distributions); named, the name server daemon; or crond, the task scheduler. As you will notice, though, this is not the case for Nginx. When started from the command line, a daemon immediately returns the prompt and, in most cases, does not even bother outputting data to the terminal. Consequently, when starting Nginx you will not see any text appear on the screen and the prompt will return immediately. While this might seem startling, it is on the contrary a good sign. It means the daemon was started correctly and the configuration did not contain any errors.

User and group

It is of the utmost importance to understand the process architecture of Nginx, and particularly the users and groups its various processes run under. A very common source of trouble when setting up Nginx is invalid file access permissions: due to a user or group misconfiguration, you often end up getting 403 Forbidden HTTP errors because Nginx cannot access the requested files. There are two levels of processes, with possibly different permission sets:

The Nginx master process: This should be started as root. In most Unix-like systems, processes started with the root account are allowed to open TCP sockets on any port, whereas other users can only open listening sockets on a port above 1024. If you do not start Nginx as root, standard ports such as 80 or 443 will not be accessible. Note that the user directive, which allows you to specify a different user and group for the worker processes, will not be taken into consideration for the master process.

The Nginx worker processes: These are automatically spawned by the master process under the account you specified in the configuration file with the user directive. The configuration setting takes precedence over the configuration switch you may have specified at compile time. If you did not specify any of those, the worker processes will be started as user nobody and group nobody (or nogroup, depending on your OS).

Nginx command-line switches

The Nginx binary accepts command-line arguments to perform various operations, among which is controlling the background processes. To get the full list of commands, you may invoke the help screen using the following commands:

[alex@example.com ~]$ cd /usr/local/nginx/sbin
[alex@example.com sbin]$ ./nginx -h

The next few sections will describe the purpose of these switches. Some allow you to control the daemon; some let you perform various operations on the application configuration.

Starting and stopping the daemon

You can start Nginx by running the Nginx binary without any switches.
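As a quick sketch, assuming the default prefix used in this article and a shell account that can elevate to root (required for binding standard ports such as 80), the first start looks like this:

[alex@example.com ~]$ cd /usr/local/nginx/sbin
[alex@example.com sbin]$ sudo ./nginx

No output and an immediate return of the prompt means the daemon is up; you can confirm with ps aux | grep nginx, which should show one master process and at least one worker process.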
If the daemon is already running, a message will show up indicating that a socket is already listening on the specified port:

[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[…]
[emerg]: still could not bind().

Beyond this point, you may control the daemon by stopping it, restarting it, or simply reloading its configuration. Controlling is done by sending signals to the process using the nginx -s command:

nginx -s stop: Stops the daemon immediately (using the TERM signal)
nginx -s quit: Stops the daemon gracefully (using the QUIT signal)
nginx -s reopen: Reopens the log files
nginx -s reload: Reloads the configuration

Note that when starting the daemon, stopping it, or performing any of the preceding operations, the configuration file is first parsed and verified. If the configuration is invalid, whatever command you have submitted will fail, even when trying to stop the daemon. In other words, in some cases you will not be able to even stop Nginx if the configuration file is invalid. An alternate way to terminate the process, in desperate cases only, is to use the kill or killall commands with root privileges:

[root@example.com ~]# killall nginx

Testing the configuration

As you can imagine, testing the validity of your configuration will become crucial if you constantly tweak your server setup. The slightest mistake in any of the configuration files can result in a loss of control over the service: you will then be unable to stop it via regular init control commands, and obviously, it will refuse to start again. Consequently, the following command will be useful to you on many occasions; it allows you to check the syntax, validity, and integrity of your configuration:

[alex@example.com ~]$ /usr/local/nginx/sbin/nginx -t

The -t switch stands for test configuration. Nginx will parse the configuration anew and let you know whether it is valid or not. A valid configuration file does not necessarily mean Nginx will start, though, as there might be additional problems such as socket issues, invalid paths, or incorrect access permissions. Obviously, manipulating your configuration files while your server is in production is a dangerous thing to do and should be avoided when possible. The best practice, in this case, is to place your new configuration into a separate temporary file and run the test on that file. Nginx makes this possible by offering the -c switch:

[alex@example.com sbin]$ ./nginx -t -c /home/alex/test.conf

This command will parse /home/alex/test.conf and make sure it is a valid Nginx configuration file. When you are done, after making sure that your new file is valid, proceed to replacing your current configuration file and reload the server configuration:

[alex@example.com sbin]$ cp -i /home/alex/test.conf /usr/local/nginx/conf/nginx.conf
cp: overwrite 'nginx.conf'? yes
[alex@example.com sbin]$ ./nginx -s reload

Other switches

Another switch that might come in handy in many situations is -V. Not only does it tell you the current Nginx build version, but more importantly it also reminds you of the arguments that you used during the configuration step; in other words, the command switches that you passed to the configure script before compilation.

[alex@example.com sbin]$ ./nginx -V
nginx version: nginx/1.8.0 (Ubuntu)
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04)
TLS SNI support enabled
configure arguments: --with-http_ssl_module

In this case, Nginx was configured with the --with-http_ssl_module switch only. Why is this so important?
Well, if you ever try to use a module that was not included with the configure script during the precompilation process, the directive enabling the module will result in a configuration error. Your first reaction will be to wonder where the syntax error comes from. Your second reaction will be to wonder whether you even built the module in the first place! Running nginx -V will answer this question. Additionally, the -g option lets you specify additional configuration directives in case they were not included in the configuration file (note that the terminating semicolon belongs inside the quotes):

[alex@example.com sbin]$ ./nginx -g "timer_resolution 200ms;"

Adding Nginx as a system service

In this section, we will create a script that will transform the Nginx daemon into an actual system service. This will have two main outcomes: the daemon will be controllable using standard commands, and, more importantly, it will automatically be launched on system startup and stopped on system shutdown.

System V scripts

Most Linux-based operating systems to date use a System V-style init daemon. In other words, their startup process is managed by a daemon called init, which functions in a way that is inherited from the old System V Unix-based operating system. This daemon functions on the principle of runlevels, which represent the state of the computer. Here is a table representing the various runlevels and their signification:

Runlevel  State
0         System is halted
1         Single-user mode (rescue mode)
2         Multiuser mode, without NFS support
3         Full multiuser mode
4         Not used
5         Graphical interface mode
6         System reboot

You can manually initiate a runlevel transition: use the telinit 0 command to shut down your computer or telinit 6 to reboot it. For each runlevel transition, a set of services is executed. This is the key concept to understand here: when your computer is stopped, its runlevel is 0. When you turn it on, there will be a transition from runlevel 0 to the default startup runlevel. The default startup runlevel is defined by your own system configuration (in the /etc/inittab file), and the default value depends on the distribution you are using: Debian and Ubuntu use runlevel 2, Red Hat and Fedora use runlevel 3 or 5, CentOS and Gentoo use runlevel 3, and so on; the list is long. So, in summary, when you start your computer running CentOS, it operates a transition from runlevel 0 to runlevel 3. That transition consists of starting all services that are scheduled for runlevel 3. The question is how to schedule a service to be started at a specific runlevel. For each runlevel, there is a directory containing scripts to be executed. If you enter these directories (rc0.d, rc1.d, through rc6.d), you will not find actual files, but rather symbolic links referring to scripts located in the init.d directory. Service startup scripts will indeed be placed in init.d, and links will be created by tools placing them in the proper directories.

About init scripts

An init script, also known as a service startup script or even a sysv script, is a shell script respecting a certain standard. The script controls a daemon application by responding to commands such as start, stop, and others, which are triggered at two levels. First, when the computer starts, if the service is scheduled to be started for the system runlevel, the init daemon will run the script with the start argument.
The other possibility is for you to manually execute the script by calling it from the shell:

[root@example.com ~]# service httpd start

Or, if your system does not come with the service command:

[root@example.com ~]# /etc/init.d/httpd start

The script must accept at least the start, stop, restart, force-reload, and status commands, as they will be used by the system to respectively start up, shut down, restart, forcefully reload the service, or inquire about its status. However, to enlarge your field of action as a system administrator, it is often interesting to provide further options, such as a reload argument to reload the service configuration or a try-restart argument to stop and start the service again. Note that since service httpd start and /etc/init.d/httpd start essentially do the same thing, with the exception that the second command will work on all operating systems, we will make no further mention of the service command and will exclusively use the /etc/init.d/ method.

Init script for Debian-based distributions

We will thus create a shell script to start and stop our Nginx daemon and also to restart and reload it. The purpose here is not to discuss Linux shell script programming, so we will merely provide the source code of an existing init script, along with some comments to help you understand it. Due to differences in the format of init scripts from one distribution to another, we will discover two separate scripts here. The first one is meant for Debian-based distributions such as Debian, Ubuntu, Knoppix, and so forth. First, create a file called nginx with the text editor of your choice, and save it in the /etc/init.d/ directory (on some systems, /etc/init.d/ is actually a symbolic link to /etc/rc.d/init.d/). In the file you just created, insert the script provided in the code bundle supplied with this book. Make sure that you change the paths to make them correspond to your actual setup. You will need root permissions to save the script into the init.d directory. The complete init script for Debian-based distributions can be found in the code bundle.

Init script for Red Hat–based distributions

Due to the system tools, shell programming functions, and specific formatting that it requires, the preceding script is only compatible with Debian-based distributions. If your server is operated by a Red Hat–based distribution such as CentOS, Fedora, and many more, you will need an entirely different script. The complete init script for Red Hat–based distributions can be found in the code bundle.

Installing the script

Placing the file in the init.d directory does not complete our work. There are additional steps that will be required to enable the service. First, make the script executable. So far, it is only a piece of text that the system refuses to run. Granting executable permissions on the script is done with the chmod command:

[root@example.com ~]# chmod +x /etc/init.d/nginx

Note that if you created the file as the root user, you will need to be logged in as root to change the file permissions. At this point, you should already be able to start the service using service nginx start or /etc/init.d/nginx start, as well as stop, restart, or reload the service. The last step will be to make it so that the script is automatically started at the proper runlevels. Unfortunately, doing this entirely depends on what operating system you are using.
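Whichever family you are on, the dispatch skeleton of such an init script is broadly similar; only the distribution-specific helper functions and the enabling step differ. Here is a deliberately minimal, illustrative sketch of my own (not the complete script from the code bundle, which adds PID handling, status checks, and distribution helpers):

#!/bin/sh
# Minimal illustrative skeleton of an Nginx init script.
NGINX=/usr/local/nginx/sbin/nginx

case "$1" in
  start)   $NGINX ;;
  stop)    $NGINX -s quit ;;
  reload)  $NGINX -s reload ;;
  restart) $NGINX -s quit; sleep 1; $NGINX ;;
  *)       echo "Usage: $0 {start|stop|reload|restart}"; exit 1 ;;
esac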
We will cover the two most popular families: Debian, Ubuntu, or other Debian-based distributions, and Red Hat/Fedora/CentOS or other Red Hat–derived systems.

Debian-based distributions

For Debian-based distributions, a simple command will enable the init script for the system runlevel:

[root@example.com ~]# update-rc.d -f nginx defaults

This command will create links in the default system runlevel folders. For the reboot and shutdown runlevels, the script will be executed with the stop argument; for all other runlevels, the script will be executed with start. You can now restart your system and see your Nginx service being launched during the boot sequence.

Red Hat–based distributions

For the Red Hat–based systems family, the command differs, but you get an additional tool to manage system startup. Adding the service can be done via the following command:

[root@example.com ~]# chkconfig nginx on

Once that is done, you can verify the runlevels for the service:

[root@example.com ~]# chkconfig --list nginx
nginx 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Another tool that will be useful to you for managing system services is ntsysv. It lists all services scheduled to be executed on system startup and allows you to enable or disable them at will. ntsysv requires root privileges to be executed. Note that prior to using ntsysv, you must first run the chkconfig nginx on command, otherwise Nginx will not appear in the list of services.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed to you directly.

NGINX Plus

Since mid-2013, NGINX, Inc., the company behind the Nginx project, also offers a paid subscription called NGINX Plus. The announcement came as a surprise for the open source community, but several companies quickly jumped on the bandwagon and reported amazing improvements in terms of performance and scalability after using NGINX Plus:

"NGINX, Inc., the high performance web company, today announced the availability of NGINX Plus, a fully-supported version of the popular NGINX open source software complete with advanced features and offered with professional services. The product is developed and supported by the core engineering team at Nginx Inc., and is available immediately on a subscription basis. As business requirements continue to evolve rapidly, such as the shift to mobile and the explosion of dynamic content on the Web, CIOs are continuously looking for opportunities to increase application performance and development agility, while reducing dependencies on their infrastructure. NGINX Plus provides a flexible, scalable, uniformly applicable solution that was purpose built for these modern, distributed application architectures."

Considering the pricing plans ($1,500 per year per instance) and the additional features made available, this platform is clearly aimed at large corporations looking to integrate Nginx into their global architecture seamlessly and effortlessly. Professional support from the Nginx team is included, and discounts can be offered for multiple-instance subscriptions. This book covers the open source version of Nginx only and does not detail the advanced functionality offered by NGINX Plus. For more information about the paid subscription, take a look at http://www.nginx.com.
Summary

From this point on, Nginx is installed on your server and automatically starts with the system. Your web server is functional, though it does not yet fulfill the most basic role: serving a website. The first step towards hosting a website will be to prepare a suitable configuration file.

Resources for Article:

Further resources on this subject:

Getting Started with Nginx [article]
Fine-tune the NGINX Configuration [article]
Nginx proxy module [article]

Understanding Text Search and Hierarchies in SAP HANA

Packt
20 Oct 2015
9 min read
In this article by Vinay Singh, author of the book Real Time Analytics with SAP HANA, we cover Full Text Search and hierarchies in SAP HANA, and how to create and use them in our data models. After completing this article, you should be able to:

Create and use Full Text Search
Create hierarchies: level and parent-child hierarchies

(For more resources related to this topic, see here.)

Creating and using Full Text Search

Before we proceed with the creation and use of Full Text Search, let's quickly go through the basic terms associated with it. They are as follows:

Text Analysis: This is the process of analyzing unstructured text, extracting relevant information, and then transforming this information into structured information that can be leveraged in different ways. SAP HANA ships analysis rules for many industries in many languages, which provide additional possibilities to analyze strings or large text columns.

Full Text Search: This capability of HANA helps to significantly speed up searches within large amounts of text data. The primary function of Full Text Search is to optimize linguistic searches.

Fuzzy Search: This functionality enables you to find strings that match a pattern approximately (rather than exactly). It's a fault-tolerant search, meaning that a query returns records even if the search term contains additional or missing characters, or even spelling mistakes. It is an alternative to a non-fault-tolerant SQL statement.

The score() function: When using contains() in the where clause of a select statement, the score() function can be used to retrieve the score. This is a numeric value between 0.0 and 1.0. The score defines the similarity between the user input and the records returned by the search. A score of 0.0 means that there is no similarity. The higher the score, the more similar a record is to the search input. (A short query sketch follows the capabilities list below.)

Some of the applied uses of fuzzy search could be:

Fault-tolerant checks for duplicate records. This helps to prevent duplicate entries in a system by searching for similar entries.
Fault-tolerant search in text columns: for example, search documents on diode and find all documents that contain the term "triode".
Fault-tolerant search in structured database content: search for rhyming words, for example coffee krispy biscuit, and find toffee crisp biscuits (the standard example given by SAP).

Let's see the use cases for text search:

Combining structured and unstructured data
Medicine and healthcare
Patents
Brand monitoring and the buying patterns of consumers
Real-time analytics on a large volume of data
Data from social media
Finance data
Sales optimization
Monitoring and production planning

The results of text analysis are stored in a table and can therefore be leveraged in all the HANA-supported scenarios:

Standard analytics: Create analytical views and calculation views on top. For example, companies mentioned in news articles over time.
Data mining and predictive analysis: Using R or Predictive Analysis Library (PAL) functions. For example, clustering, time series analysis, and so on.
Search-based applications: Create a search model and build a search UI with the HANA Info Access (InA) toolkit for HTML5. Text analysis results can be used to navigate and filter search results. For example, a people finder, or a search UI for internal documents.

The capabilities of HANA Full Text Search and text analysis are as follows:

Native full text search
Database text analysis
The graphical modeling of search models
Info Access toolkit for HTML5 UIs
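To make the contains() and score() description above concrete, here is a minimal query sketch; the table and column names are placeholders of mine, and a full worked example on a real table follows later in this article:

SELECT score() AS similarity, product_name
FROM products
WHERE CONTAINS(product_name, 'tofee crisp', FUZZY(0.8))
ORDER BY similarity DESC;

A record such as "toffee crisp biscuit" would come back with a high score despite the misspelled search term, which is exactly the fault tolerance described above.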
The benefits of full text search are as follows:

Extract unstructured content with no additional cost
Combine structured and unstructured information for unified information access
Less data duplication and transfer
Harness the benefits of the InA (Info Access) toolkit for HTML5 applications

The following data types are supported by fuzzy search:

Short text
Text
VARCHAR
NVARCHAR
Date
Data with a full text index

Enabling the search option

Before we can use the search option in any attribute or analytical view, we will need to enable this functionality in the SAP HANA Studio Preferences, as shown in the following screenshot:

We are now well prepared to move ahead with the creation and use of Full Text Search. Let's do this step by step:

1. Create the schema and set it as the current schema:

CREATE SCHEMA DEMO; -- It would already be present from our previous exercises.
SET SCHEMA DEMO;    -- Set the schema name

2. Create a column table including FUZZY SEARCH indexed columns:

DROP TABLE DEMO.searchtbl_FUZZY;
CREATE COLUMN TABLE DEMO.searchtbl_FUZZY (
    CUST_NAME   TEXT FUZZY SEARCH INDEX ON,
    CUST_COUNTY TEXT FUZZY SEARCH INDEX ON,
    CUST_DEPT   TEXT FUZZY SEARCH INDEX ON
);

3. Prepare the fuzzy search logic (SQL):

Search for customers in the counties that contain the word 'MAIN':

SELECT score() AS score, *
FROM searchtbl_FUZZY
WHERE CONTAINS(cust_county, 'MAIN');

Search for customers in the counties that contain the word 'West', with a fuzzy parameter of 0.3:

SELECT score() AS score, *
FROM searchtbl_FUZZY
WHERE CONTAINS(cust_county, 'West', FUZZY(0.3));

Perform a fuzzy search for customers working in a department whose name includes the word 'Department':

SELECT highlighted(cust_dept), score() AS score, *
FROM searchtbl_FUZZY
WHERE CONTAINS(cust_dept, 'Department', FUZZY(0.5));

Fuzzy search across all the columns, looking for the word 'Customer':

SELECT score() AS score, *
FROM searchtbl_FUZZY
WHERE CONTAINS(*, 'Customer', FUZZY(0.5));

Creating hierarchies

Hierarchies are created to maintain data in a structured format, such as maintaining customer or employee data based on their roles, or splitting the data based on geographies. Hierarchical data is very useful for organizational purposes during decision making. Two types of hierarchies can be created in SAP HANA:

The level hierarchy
The parent-child hierarchy

Hierarchies are initially created in an attribute view and can later be combined in an analytic view or calculation view for consumption in a report, as per business requirements. Let's create both types of hierarchies in attribute views.

Creating a level hierarchy

Each level represents a position in the hierarchy. For example, a time dimension might have a hierarchy that represents data at the month, quarter, and year levels. Each level above the base level contains aggregate values for the levels below it. To create one:

1. Create a new attribute view, or use an existing one (for your own practice, I would suggest that you create a new one). Use the SNWD_PD EPM sample tables.
2. In the output view, mark the following as output:
3. In the semantic node of the view, create a new hierarchy as shown in the following screenshot and fill in the details:
4. Save and activate the view. The hierarchy is now ready to be used in an analytical view.
5. Add the client and node key again as output of the attribute view that you just created, that is, AT_LEVEL_HIERARCY_DEMO, as we will use these two fields when we create an analytical view. It should look like the following screenshot.
6. Add the attribute view created in the preceding step and the SNWD_SO_I table to the data foundation:
7. Join client to client, and the product GUID to the node key:
8. Save and activate.
9. Open Microsoft Excel (Start | All Programs | Microsoft Office | Microsoft Excel 2010), then go to the Data tab | From Other Sources | From Data Connection Wizard.
10. You will get a new popup for the Data Connection Wizard; choose Other/Advanced | SAP HANA MDX Provider:
11. You will be asked to provide the connection details. Fill in the details and test the connection (these are the same details that you used while adding the system to SAP HANA Studio).
12. The Data Connection Wizard will now ask you to choose the analytical view (choose the one that you just created in the preceding step):
13. The preceding steps will take you to an Excel sheet, and you will see data as per the choices you make in the PivotTable field list:

Creating a parent-child hierarchy

The parent-child hierarchy is a simple, two-level hierarchy where the child element has an attribute containing the parent element. Two columns define the hierarchical relationships among the members of the dimension. The first column, called the member key column, identifies each dimension member. The other column, called the parent column, identifies the parent of each dimension member. The parent attribute determines the name of each level in the parent-child hierarchy and determines whether the data for parent members should be displayed.

Let's create a parent-child hierarchy using the following steps:

1. Create an attribute view.
2. Create a table that holds the parent-child information. The following is the sample code and the insert statements:

CREATE COLUMN TABLE "DEMO"."CCTR_HIE"(
    "CC_CHILD" NVARCHAR(4),
    "CC_PARENT" NVARCHAR(4));

insert into "DEMO"."CCTR_HIE" values('','');
insert into "DEMO"."CCTR_HIE" values('C11','c1');
insert into "DEMO"."CCTR_HIE" values('C12','c1');
insert into "DEMO"."CCTR_HIE" values('C13','c1');
insert into "DEMO"."CCTR_HIE" values('C14','c2');
insert into "DEMO"."CCTR_HIE" values('C21','c2');
insert into "DEMO"."CCTR_HIE" values('C22','c2');
insert into "DEMO"."CCTR_HIE" values('C31','c3');
insert into "DEMO"."CCTR_HIE" values('C1','c');
insert into "DEMO"."CCTR_HIE" values('C2','c');
insert into "DEMO"."CCTR_HIE" values('C3','c');

3. Put the preceding table into the data foundation of the attribute view.
4. Make CC_CHILD the key attribute.
5. Now let's create a new hierarchy as shown in the following screenshot:
6. Save and activate the hierarchy.
7. Create a new analytical view, and add the HIE_PARENT_CHILD_DEMO view and the CCTR_COST table to the data foundation.
8. Join CCTR to CC_CHILD with a many-to-one relationship.
9. Make sure that, in the semantic node, COST is set as a measure.
10. Save and activate the analytical view.
11. Preview the data.

As per the business need, we can use either of the two hierarchies along with an attribute view or analytical view.

Summary

In this article, we took a deep dive into Full Text Search, fuzzy logic, and hierarchy concepts. We learned how to create and use text search and fuzzy logic. The parent-child and level hierarchies were discussed in detail, with a hands-on approach to both.

Resources for Article:

Further resources on this subject:

Sabermetrics with Apache Spark [article]
Meeting SAP Lumira [article]
Achieving High-Availability on AWS Cloud [article]