Open Source Intelligence

Packt
30 Dec 2014
30 min read
This article is written by Douglas Berdeaux, the author of Penetration Testing with Perl. Open source intelligence (OSINT) refers to intelligence gathering from open and public sources. These sources include search engines, the client target's web-accessible software or sites, social media sites and forums, Internet routing and naming authorities, public information sites, and more. Done properly and thoroughly, OSINT can strengthen social engineering and remote exploitation attacks on our client target as we search for ways to gain access to their systems and buildings during a penetration test.

What's covered

In this article, we will cover how to gather the following information using Perl:

- E-mail addresses for our client target, using search engines and social media sites
- Networking, hosting, routing, and system data for our client target, using online resources and simple networking utilities

To gather this data, we rely heavily on the LWP::UserAgent Perl module, and we will discover how to use this module over a secure sockets layer (SSL/TLS, that is, HTTPS) connection. In addition to this, we will learn about a few new Perl modules:

- Net::Whois::Raw
- Net::DNS::Dig
- Net::DNS
- Net::Traceroute
- XML::LibXML

Google dorks

Before we use Google for intelligence gathering, we should briefly touch upon Google dorks, which we can use to refine and filter our Google searches. A Google dork is a string of special syntax that we pass to Google's request handler using the q= parameter. A dork can comprise operators and keywords separated by a colon, and concatenated strings using a plus symbol (+) as a delimiter.
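As a quick illustration of the syntax just described, a dork can be packed into the q= parameter by swapping spaces for plus symbols and percent-encoding the colons. The domain and the exact encoding choices below are illustrative assumptions (Google also accepts literal colons):

```shell
# Assemble a hypothetical dork and encode it for Google's q= parameter.
# Spaces become '+'; ':' is percent-encoded as %3A (a conservative choice).
dork='site:example.com filetype:pdf'
q=$(printf '%s' "$dork" | sed -e 's/ /+/g' -e 's/:/%3A/g')
echo "https://www.google.com/search?q=$q"
# prints https://www.google.com/search?q=site%3Aexample.com+filetype%3Apdf
```

The same encoded shape appears in every search URL built by the Perl programs in this article.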
Here is a list of simple Google dorks that we can use to narrow our Google searches:

- intitle:<string> searches for pages whose HTML title tags contain <string>
- filetype:<ext> searches for files that have the extension <ext>
- site:<domain> narrows the search to results located on the <domain> target servers
- inurl:<string> returns results that contain <string> in their URL
- -<word> negates the word following the minus symbol (-) in a search filter
- link:<page> searches for pages that contain HTML HREF links to <page>

This is just a small list; a complete guide to Google search operators can be found on Google's support pages. A list of well-known Google dorks exploited for information gathering can be found in the Google Hacking Database at http://www.exploit-db.com/google-dorks/.

E-mail address gathering

Gathering e-mail addresses from our target can be a rather hard task, and it can also mean gathering the usernames used within the target's domain, remote management systems, databases, workstations, web applications, and much more. As we can imagine, a username is 50 percent of the intrusion when harvesting target credentials; the other 50 percent is the password. So how do we gather e-mail addresses from a target? There are several methods; the first we will look at is simply using search engines to crawl the web for anything useful, including forum posts, social media, e-mail lists for support, web pages and mailto links, and anything else that was cached or found by ever-spidering search engines.

Using Google for e-mail address gathering

Automating queries to search engines is usually best left to application programming interfaces (APIs).
We might be able to query the search engine via a simple GET request, but this leaves plenty of room for error, and the search engine can temporarily block our IP address or force us to prove our humanness with a CAPTCHA image if it suspects we are using a bot. Unfortunately, Google only offers a paid version of their general search API. They do offer an API for custom search, but it is restricted to specified domains. We want to be as thorough as possible and, time permitting, search as much of the web as we can when gathering intelligence. So let's go back to our LWP::UserAgent Perl module and make a simple request to Google, searching for any e-mail addresses and URLs from a given domain. The URLs are useful because they can be spidered later from within our application if we feel inclined to extend the reach of our automated OSINT. In the following examples, we want to impersonate a browser as much as possible so as not to raise flags at Google through automation. We accomplish this by using the LWP::UserAgent Perl module and spoofing a valid Firefox user agent:

```perl
#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $usage = "Usage ./email_google.pl <domain>";
my $target = shift or die $usage;
my $ua = LWP::UserAgent->new;
my %emails = (); # unique
my $url = 'https://www.google.com/search?num=100&start=0&hl=en&meta=&q=%40%22'.$target.'%22';
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
$ua->show_progress(1); # display progress bar
my $res = $ua->get($url);
if($res->is_success){
  my @urls = split(/url\?q=/,$res->as_string);
  foreach my $gUrl (@urls){ # Google URLs
    next if($gUrl =~ m/(webcache\.googleusercontent)/i or not $gUrl =~ m/^http/);
    $gUrl =~ s/&amp;sa=U.*//;
    print $gUrl,"\n";
  }
  my @emails = $res->as_string =~ m/[a-z0-9_.-]+@/ig;
  foreach my $email (@emails){
    if(not exists $emails{$email}){
      print "Possible Email Match: ",$email,$target,"\n";
      $emails{$email} = 1; # hashes are faster
    }
  }
}else{
  die $res->status_line;
}
```

The LWP::UserAgent module used in the previous code is not new to us. We did, however, add SSL support using the LWP::Protocol::https module. Our URL object $url is a simple Google search URL that anyone would browse to with a normal browser. The num= value sets how many results Google returns on a single page, which we have set to 100. To act like a browser, we also need to set the user agent with the agent() method, which we do as a Mozilla browser. After this, we set a timeout and a Boolean to show a simple progress bar. The rest is just simple Perl string manipulation and pattern matching. We use the regular expression url\?q= to split the string returned by the as_string method of the $res object. Then, for each URL string, we use the substitution s/&amp;sa=U.*//, to remove the excess analytics garbage that Google appends. Finally, we parse out all e-mail addresses found using the same method with a different regexp. We stuff all matches into the @emails array and loop over them, displaying them on our screen if they don't already exist in the %emails Perl hash. Let's run this program against the weaknetlabs.com domain and analyze the output:

```
root@wnld960:~# perl email_google.pl weaknetlabs.com
** GET https://www.google.com/search?num=100&start=0&hl=en&meta=&q=%40%22weaknetlabs.com%22 ==> 200 OK (1s)
http://weaknetlabs.com/
http://weaknetlabs.com/main/%3Fpage_id%3D479
...
http://www.securitytube.net/video/2039
Possible Email Match: Douglas@weaknetlabs.com
Possible Email Match: weaknetlabs@weaknetlabs.com
root@wnld960:~#
```

This is the (trimmed) output when we run an automated Google search for e-mail addresses from weaknetlabs.com.

Using social media for e-mail address gathering

Now, let's turn our attention to using social media sites such as Google+, LinkedIn, and Facebook to try to gather e-mail addresses using Perl.
Social media sites can sometimes reflect information about an employee's attitude towards their employer, their status within the company, their position, e-mail addresses, and more. All of this information is considered OSINT and can be useful when advancing our attacks.

Google+

We can also search plus.google.com for contact information from users belonging to our target. The following is the URL-encoded Google dork we will use to search Google+ profiles for an employee of our target:

```
intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com
```

The URL-encoded symbols are as follows:

- %3A: a colon, that is, :
- %2B: a plus symbol, that is, +

The plus symbol (+) is a special component of Google dorks, as we mentioned in the previous section. The intitle keyword tells Google to display results whose HTML <title> tag contains the text About - Google+. Then, we add the string (in quotation marks) "Works at " (notice the trailing space), followed by the target name as the string object $target. The site keyword tells the Google search engine to only display results from the plus.google.com site.
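To sanity-check the encoding, we can percent-decode the dork back into plain text in the shell. The company name Example is a placeholder; note that plus signs must be turned back into spaces before %2B is decoded into a literal plus, or the two steps would clobber each other:

```shell
# Percent-decode the Google+ dork by hand: '+' -> space first, then the
# two escapes used above (%3A -> ':', %2B -> '+'). Order matters.
dork='intitle%3A"About+-+Google%2B"+"Works+at+Example"+site%3Aplus.google.com'
printf '%s\n' "$dork" | sed -e 's/+/ /g' -e 's/%3A/:/g' -e 's/%2B/+/g'
# prints intitle:"About - Google+" "Works at Example" site:plus.google.com
```

The decoded form is exactly what a person would type into Google's search box.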
Let's implement this in our Perl program and see what results are returned for Google employees:

```perl
#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $ua = LWP::UserAgent->new;
my $usage = "Usage ./googleplus.pl <target name>";
my $target = shift or die $usage;
$target =~ s/\s/+/g;
my $gUrl = 'https://www.google.com/search?safe=off&noj=1&sclient=psy-ab&q=intitle%3A"About+-+Google%2B"+"Works+at+'
 .$target.'"+site%3Aplus.google.com&oq=intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com';
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
my $res = $ua->get($gUrl);
if($res->is_success){
  foreach my $string (split(/url\?q=/,$res->as_string)){
    next if($string =~ m/(webcache\.googleusercontent)/i or not $string =~ m/^http/);
    $string =~ s/&amp;sa=U.*//;
    print $string,"\n";
  }
}else{
  die $res->status_line;
}
```

This Perl program is quite similar to our last search program. Now, let's run it to find possible Google employees. Since a target client company can have spaces in its name, we accommodate them by encoding them for Google as plus symbols:

```
root@wnld960:~# perl googleplus.pl google
https://plus.google.com/%2BPaulWilcox/about
https://plus.google.com/%2BNatalieVillalobos/about
...
https://plus.google.com/%2BAndrewGerrand/about
root@wnld960:~#
```

The preceding (trimmed) output proves that our Perl script works, as we can confirm by browsing to the returned results. These two Google search scripts provided us with some great information quickly. Let's move on to another example, this time using not Google but LinkedIn, a social media site for professionals.

LinkedIn

LinkedIn can provide us with contact information and the IT skill levels of our client target during a penetration test. Here, we will focus on the contact information. By now, we should feel very comfortable making any type of web request using LWP::UserAgent and parsing its output for intelligence data.
In fact, this LinkedIn example should be a breeze. The trick is fine-tuning our filters and regular expressions to get only relevant data. Let's dive right into the code and then analyze some sample output:

```perl
#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $ua = LWP::UserAgent->new;
my $usage = "Usage ./googlepluslinkedin.pl <target name>";
my $target = shift or die $usage;
my $gUrl = 'https://www.google.com/search?q=site:linkedin.com+%22at+'.$target.'%22';
my %lTargets = (); # unique
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
my $google = getUrl($gUrl); # one and ONLY call to Google
foreach my $title ($google =~ m/\shref="\/url\?.*">[a-z0-9_. -]+\s?.b.at $target..b.\s-\sLinked/ig){
  my $lRurl = $title;
  $title =~ s/.*">([^<]+).*/$1/;
  $lRurl =~ s/.*url\?.*q=(.*)&amp;sa.*/$1/;
  print $title,"-> ".$lRurl."\n";
  my @ln = split(/\15?\12/,getUrl($lRurl)); # split on CRLF or LF
  foreach(@ln){
    if(m/title="/i){
      my $link = $_;
      $link =~ s/.*href="([^"]+)".*/$1/;
      next if exists $lTargets{$link};
      $lTargets{$link} = 1;
      my $name = $_;
      $name =~ s/.*title="([^"]+)".*/$1/;
      print "\t",$name," : ",$link,"\n";
    }
  }
}
sub getUrl{
  sleep 1; # pause...
  my $res = $ua->get(shift);
  if($res->is_success){
    return $res->as_string;
  }else{
    die $res->status_line;
  }
}
```

The preceding Perl program makes a single query to Google to find all possible positions at the target; for each position found, it queries LinkedIn to find employees of the target in that position. The regular expressions used were finely crafted after inspecting the HTML returned by simple queries to both Google and LinkedIn. This is a great example of how we can spider off from our initial Google results to gather even more intelligence using Perl automation.
Let's take a look at some sample output from this program when used against Walmart.com:

```
root@wnld960:~# perl linkedIn.pl Walmart
Buyer : http://www.linkedin.com/title/buyer/at-walmart/
        Jason Kloster : http://www.linkedin.com/in/jasonkloster
        Rajiv Ahirwal : http://www.linkedin.com/in/rajivahirwal
...
Store manager : http://www.linkedin.com/title/store%2Bmanager/at-walmart/
        Benjamin Hunt 13k+ (LION) #1 Connected Leader at Walmart : http://www.linkedin.com/in/benjaminhunt01
...
Shift manager : http://www.linkedin.com/title/shift%2Bmanager/at-walmart/
        Frank Burns : http://www.linkedin.com/pub/frank-burns/24/83b/285
...
Assistant store manager : http://www.linkedin.com/title/assistant%2Bstore%2Bmanager/at-walmart/
        John Cole : http://www.linkedin.com/pub/john-cole/67/392/b39
        Crystal Herrera : http://www.linkedin.com/pub/crystal-herrera/92/74a/97b
root@wnld960:~#
```

The preceding (trimmed) output provided some great insight into employee positions, and even real employees holding those positions at the target, with a single call to one script. All of this is publicly available information and we are not directly attacking Walmart or its employees; we are just using this as an example of intelligence-gathering techniques during a penetration test using Perl programming. This information can further be used for reporting, and we can even extend this data into other areas of research. For instance, we can easily follow the LinkedIn links with LWP::UserAgent and pull even more data from the publicly available LinkedIn profiles. This data, when compared to Google+ profile data and simple Google searches, should help in providing a background to create a more believable pretext for social engineering. Now, let's see if we can use Google to search more social media websites for information on our client target.
Facebook

We can easily argue that Facebook is one of the largest social networking sites around at the time of writing this book. Facebook can return a large amount of data about a person, and we don't even have to go to the site to get it! We can extend our reach into the Web using the employee names gathered by our previous code, by searching Google with the site:facebook.com parameter and exactly the same syntax as in the first example of the Using Google for e-mail address gathering section. The following are a few simple Google dorks that can possibly reveal information about our client target:

- site:facebook.com "manager at target"
- site:facebook.com "ceo at target"
- site:facebook.com "owner of target"
- site:facebook.com "experience at target"

These searches can return customer and employee criticism that can be used for a wide array of penetration-testing purposes, including social engineering pretexting. We can narrow our focus even further by adding other keywords and strings from our previously gathered intelligence, such as city names, company names, and more. Just about anything returned can be compiled into a unique wordlist for password cracking, and contrasted with known data using Digital Credential Analysis (DCA).

Domain Name Services

Domain Name Services (DNS) translate hostnames into IP addresses, so that we can use easy-to-remember alphanumeric addresses instead of numeric ones for websites or services. It makes our lives a lot easier to type a URL with a name rather than a 4-byte numerical value. Any client target can potentially have full control over their naming services. DNS A records can be assigned to any IP address; with control of the domain, we can easily write our own record for an IPv4 class A private address such as 10.0.0.1, which is commonly done on internal networks to let users connect easily to different internal services.
The Whois query

Sometimes, when we get an IP address for a client target, we can pass this IP address to the Whois database, and in return we get the range of IP addresses in which our IP lies and the organization that owns the range. If the organization is our target, then we now know a range of IP addresses pointing directly to their resources. Usually, this information is given during a penetration test, and the limits on how far we are allowed to go with IP ranges are set, so that we may be restricted simply to reporting. Let's use Perl and the Net::Whois::Raw module to interact with the American Registry for Internet Numbers (ARIN) database for an IP address:

```perl
#!/usr/bin/perl -w
use strict;
use Net::Whois::Raw;
die "Usage: perl netRange.pl <IP Address>" unless $ARGV[0];
foreach(split(/\n/,whois(shift))){
  print $_,"\n" if(m/^(netrange|orgname)/i);
}
```

The preceding code, when run, should produce information about the network range and the organization name that owns the range. It is very simple, and it can be compared to calling the whois program from the Linux command line. Note that if we were to script this to run through a number of different IP addresses, running a Whois query against each one, we could be violating the terms of service set by ARIN. Let's test it and see what we get with a random IP address:

```
root@wnld960:~# perl whois.pl 198.123.2.22
NetRange:       198.116.0.0 - 198.123.255.255
OrgName:        National Aeronautics and Space Administration
root@wnld960:~#
```

This is the output from our Perl program, which reveals an IP range that can belong to the organization listed. If this fails, and we need to find more than one hostname owned by our client target, we can try a brute force method that simply checks our name servers; we will do just that in the next section.

The DIG query

DIG stands for domain information groper and is a utility to do just that using DNS queries. The DIG Linux utility has largely replaced the older host and nslookup utilities.
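The same field filtering can be done in the shell against the stock whois client's output. Here the record is pasted in from the query shown above rather than fetched live, so the pipeline can be followed offline; in a live run, `whois 198.123.2.22 | grep -iE '^(netrange|orgname)'` would produce the same two lines:

```shell
# Filter the NetRange/OrgName fields from raw whois output. The sample text
# is the (trimmed) ARIN record from the query above, used as a stand-in.
sample='NetRange:       198.116.0.0 - 198.123.255.255
OrgName:        National Aeronautics and Space Administration'
printf '%s\n' "$sample" | grep -iE '^(netrange|orgname)'
```

This mirrors exactly what the Perl loop's pattern match does line by line.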
In making these queries, one thing to note is that when we don't specify a name server, the DIG utility simply uses the operating system's default resolver. We can, however, pass a name server to DIG; we will cover this in the upcoming section on zone transfers. There is a nice object-oriented Perl module for DIG that we will examine, called Net::DNS::Dig. Let's quickly look at an example of querying our DNS with this module:

```perl
#!/usr/bin/perl -w
use Net::DNS::Dig;
use strict;
my $dig = new Net::DNS::Dig();
my $dom = shift or die "Usage: perl dig.pl <domain>";
my $dobj = $dig->for($dom, 'A'); # query for A records
print $dobj->sprintf; # print entire dig query response
print "CODE: ",$dobj->rcode(1),"\n"; # dig response code
my %mx = Net::DNS::Dig->new()->for($dom,'MX')->rdata();
while(my($val,$server) = each(%mx)){
  print "MX: ",$server," - ",$val,"\n";
}
```

The preceding code is simple. We create a DIG object $dig and call the for() method, passing the domain name we pulled from the command-line arguments and the type A for address records. We print the returned response with sprintf(), and then the response code alone with the rcode() method. Finally, we create a hash %mx from the rdata() method; here we make a second Net::DNS::Dig object and call for() on it with a type of MX, for the mail servers. Let's try this against a domain and see what is returned:

```
root@wnld960:~# perl dig.pl weaknetlabs.com
; <<>> Net::DNS::Dig 0.12 <<>> -t a weaknetlabs.com.
;;
;; Got answer.
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34071
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;weaknetlabs.com.               IN      A
;; ANSWER SECTION:
weaknetlabs.com.        300     IN      A       198.144.36.192
;; Query time: 118 ms
;; SERVER: 75.75.76.76# 53(75.75.76.76)
;; WHEN: Mon May 19 18:26:31 2014
;; MSG SIZE rcvd: 49 -- XFR size: 2 records
CODE: NOERROR
MX: mailstore1.secureserver.net - 10
MX: smtp.secureserver.net - 0
```

The output is just as expected. Everything above the line starting with CODE is the response from the DIG query. CODE is returned from the rcode() method; since we passed a true value to rcode(), we got the string form, NOERROR. Next, we printed the key-value pairs of the %mx Perl hash, which displayed our target's e-mail server names.

Brute force enumeration

Keeping the previous lesson in mind, and knowing that Linux offers a great wealth of networking utilities, we might be inclined to write our own DNS brute force tool to enumerate any possible A records that our client target could have created prior to our penetration test. Let's take a quick look at the nslookup utility, which we can use to check whether a record exists:

```
trevelyn@wnld960:~$ nslookup admin.warcarrier.org
Server:         75.75.76.76
Address:        75.75.76.76#53

Non-authoritative answer:
Name:   admin.warcarrier.org
Address: 10.0.0.1
trevelyn@wnld960:~$ nslookup admindoesntexist.warcarrier.org
Server:         75.75.76.76
Address:        75.75.76.76#53

** server can't find admindoesntexist.warcarrier.org: NXDOMAIN
trevelyn@wnld960:~$
```

This is the output of two calls to nslookup, the networking utility used to resolve hostnames to IP addresses and vice versa. The first A record check, for the admin subdomain, was successful; the second, for admindoesntexist, was not. We can easily see from this output how to parse it to check whether a subdomain exists. The two examples also suggest that, for efficiency, we should start with a simple wordlist of commonly used subdomains before trying many possible combinations. A lot of this intelligence gathering might already have been done for you by search engines such as Google.
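Before porting this idea to Perl, the brute-force check can be sketched as a shell loop. Here resolve() is a stub standing in for a live lookup (in real use its body would be something like `host "$1" > /dev/null 2>&1`), so the control flow can be followed, and tested, offline; the domain and candidate names are the ones from the examples above:

```shell
# Stub resolver: pretend only these two records exist. Swap the body for
# `host "$1" > /dev/null 2>&1` to query a real name server instead.
resolve() {
  case "$1" in
    admin.warcarrier.org|www.warcarrier.org) return 0 ;;
    *) return 1 ;;
  esac
}
for sub in admin admindoesntexist www mail; do
  if resolve "$sub.warcarrier.org"; then
    echo "found: $sub.warcarrier.org"
  fi
done
# prints found: admin.warcarrier.org
#        found: www.warcarrier.org
```

With the real host(1) call in place, the loop reports whichever candidates actually resolve.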
In fact, the site: keyword search can return more than just the www subdomain. If we raise the num= URL GET parameter and loop through all possible results by incrementing the start= parameter, we can potentially get results from other subdomains of our target. Now that we have seen the basic query for a subdomain, let's turn to Perl and a new Perl module, Net::DNS, to enumerate a few subdomains:

```perl
#!/usr/bin/perl -w
use strict;
use Net::DNS;
my $dns = Net::DNS::Resolver->new;
my @subDomains = ("admin","admindoesntexist","www","mail","download","gateway");
my $usage = "perl domainbf.pl <domain name>";
my $domain = shift or die $usage;
my $total = 0;
dns($_) foreach(@subDomains);
print $total," records tested\n";
sub dns{ # search sub domains:
  $total++; # record count
  my $hn = shift.".".$domain; # construct hostname
  my $dnsLookup = $dns->search($hn);
  if($dnsLookup){ # successful lookup
    my $t=0;
    foreach my $ip ($dnsLookup->answer){
      return unless $ip->type eq "A" and $t<1; # A records only
      print $hn,": ",$ip->address,"\n"; # just the IP
      $t++;
    }
  }
  return;
}
```

The preceding Perl program loops through the @subDomains array and calls the dns() subroutine on each entry, which prints any successful query before returning. The $t counter guards against subdomains that have several identical records, to avoid repetition in the program's output. After the loop, we simply print the total number of records tested. This program can easily be modified to read a wordlist from a file, passing each line to the dns() subroutine with something similar to the following:

```perl
open(FLE,"file.txt");
while(<FLE>){
  chomp; # strip the trailing newline before building the hostname
  dns($_);
}
```

Zone transfers

As we saw with the A record check, the admin.warcarrier.org entry provided us with some insight into the IP range of the internal network, namely the class A private address 10.0.0.1.
Sometimes, when a client target is controlling and hosting their own name servers, they accidentally allow DNS zone transfers from their name servers into public name servers, handing the attacker information about where the target's resources are. Let's use the Linux host utility to check for a DNS zone transfer:

```
[trevelyn@shell ~]$ host -la warcarrier.org beth.ns.cloudflare.com
Trying "warcarrier.org"
Using domain server:
Name: beth.ns.cloudflare.com
Address: 2400:cb00:2049:1::adf5:3a67#53
Aliases:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20461
;; flags: qr aa; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;warcarrier.org.                IN      AXFR

;; ANSWER SECTION:
warcarrier.org.          300    IN      SOA     beth.ns.cloudflare.com. warcarrier.org. beth.ns.cloudflare.com. 2014011513 18000 3600 86400 1800
warcarrier.org.          300    IN      NS      beth.ns.cloudflare.com.
warcarrier.org.          300    IN      NS      hank.ns.cloudflare.com.
warcarrier.org.          300    IN      A       50.97.177.66
admin.warcarrier.org.    300    IN      A       10.0.0.1
gateway.warcarrier.org.  300    IN      A       10.0.0.124
remote.warcarrier.org.   300    IN      A       10.0.0.15
partner.warcarrier.org.  300    IN      CNAME   warcarrier.weaknetlabs.com.
calendar.warcarrier.org. 300    IN      CNAME   login.secureserver.net.
direct.warcarrier.org.   300    IN      CNAME   warcarrier.org.
warcarrier.org.          300    IN      SOA     beth.ns.cloudflare.com. warcarrier.org. beth.ns.cloudflare.com. 2014011513 18000 3600 86400 1800

Received 401 bytes from 2400:cb00:2049:1::adf5:3a67#53 in 56 ms
[trevelyn@shell ~]$
```

As we see from the output of the host command, we have found a successful DNS zone transfer, which provides us with even more hostnames used by our client target. This attack has yielded a few CNAME records, which are aliases to other servers owned or used by our target, the subnet (class A) IP addresses used by the target, and even the name servers used.
We can also see that the default name direct, used by CloudFlare.com, is still set, allowing connections straight to the IP address of warcarrier.org, which we can use to bypass the cloud service. The host command requires the name server, in our case beth.ns.cloudflare.com, before performing the transfer. What this means for us is that we need the name server information before querying for a potential DNS zone transfer in our Perl programs. Let's see how we can use Net::DNS for the entire process:

```perl
#!/usr/bin/perl -w
use strict;
use Net::DNS;
my $usage = "perl dnsZt.pl <domain name>";
die $usage unless my $dom = shift;
my $res = Net::DNS::Resolver->new; # DNS resolver object
my $query = $res->query($dom,"NS"); # query for the name servers
if($query){ # query of NS was successful
  foreach my $rr (grep{$_->type eq 'NS'} $query->answer){
    $res->nameservers($rr->nsdname); # set the name server
    print "[>] Testing NS Server: ".$rr->nsdname."\n";
    my @subdomains = $res->axfr($dom);
    if ($#subdomains > 0){
      print "[!] Successful zone transfer:\n";
      foreach (@subdomains){
        print $_->name."\n"; # each is a Net::DNS::RR object
      }
    }else{ # 0 returned domains
      print "[>] Transfer failed on " . $rr->nsdname . "\n";
    }
  }
}else{ # something went wrong:
  warn "query failed: ", $res->errorstring,"\n";
}
```

The preceding program, which uses the Net::DNS Perl module, first queries for the name servers used by our target and then tests each one for a DNS zone transfer. The grep() function returns to the foreach() loop a list of all name server (NS) records found. The loop then simply attempts the DNS zone transfer (AXFR) and prints the results if the returned array contains more than zero elements. Let's test the output on our client target:

```
[trevelyn@shell ~]$ perl dnsZt.pl warcarrier.org
[>] Testing NS Server: hank.ns.cloudflare.com
[!] Successful zone transfer:
warcarrier.org
warcarrier.org
admin.warcarrier.org
gateway.warcarrier.org
remote.warcarrier.org
partner.warcarrier.org
calendar.warcarrier.org
direct.warcarrier.org
[>] Testing NS Server: beth.ns.cloudflare.com
[>] Transfer failed on beth.ns.cloudflare.com
[trevelyn@shell ~]$
```

The preceding (trimmed) output shows a successful DNS zone transfer on one of the name servers used by our client target.

Traceroute

With knowledge of how to glean hostnames and IP addresses from simple queries using Perl, we can take the OSINT a step further and trace our route to the hosts to see what potentially target-owned hardware can intercept or relay traffic. For this task, we will use the Net::Traceroute Perl module. Let's take a look at how we can get IP host information from the hosts relaying traffic between us and our target, using this Perl module and the following code:

```perl
#!/usr/bin/perl -w
use strict;
use Net::Traceroute;
my $dom = shift or die "Usage: perl tracert.pl <domain>";
print "Tracing route to ",$dom,"\n";
my $tr = Net::Traceroute->new(host=>$dom,use_tcp=>1);
for(my $i=1;$i<=$tr->hops;$i++){
  my $hop = $tr->hop_query_host($i,0);
  print "IP: ",$hop," hop time: ",$tr->hop_query_time($i,0),
        "ms hop status: ",$tr->hop_query_stat($i,0),
        " query count: ",$tr->hop_queries($i),"\n" if($hop);
}
```

In the preceding Perl program, we used the Net::Traceroute Perl module to perform a trace route to the domain given as a command-line argument. The module must first be instantiated with the new() method, which we do when defining the query object $tr. We tell the trace route object $tr that we want to use TCP, and we also pass the host, which we shift from the command-line arguments. We can pass many more parameters to the new() method, one of which is debug=>9 to debug our trace route. A full list can be found on the CPAN Search page for the module at http://search.cpan.org/~hag/Net-Traceroute/Traceroute.pm.
The hops method is used when constructing the for() loop; it returns the hop count as an integer. We assign the loop counter to $i and, for each hop, print statistics using the methods hop_query_host for the IP address of the host, hop_query_time for the time taken to reach the host (on our lab machines, returned in milliseconds), and hop_query_stat, which returns the status of the query as an integer value that can be mapped to the export list of Net::Traceroute according to the module's documentation. Now, let's test this trace route program with a domain and check the output:

```
root@wnld960:~# sudo perl tracert.pl weaknetlabs.com
Tracing route to weaknetlabs.com
IP: 10.0.0.1 hop time: 0.724ms hop status: 0 query count: 3
IP: 68.85.73.29 hop time: 14.096ms hop status: 0 query count: 3
IP: 69.139.195.37 hop time: 19.173ms hop status: 0 query count: 3
IP: 68.86.94.189 hop time: 31.102ms hop status: 0 query count: 3
IP: 68.86.87.170 hop time: 27.42ms hop status: 0 query count: 3
IP: 50.242.150.186 hop time: 27.808ms hop status: 0 query count: 3
IP: 144.232.20.144 hop time: 33.688ms hop status: 0 query count: 3
IP: 144.232.25.30 hop time: 38.718ms hop status: 0 query count: 3
IP: 144.232.229.46 hop time: 31.242ms hop status: 0 query count: 3
IP: 144.232.9.82 hop time: 99.124ms hop status: 0 query count: 3
IP: 198.144.36.192 hop time: 30.964ms hop status: 0 query count: 3
root@wnld960:~#
```

The output from tracert.pl is just what we would expect from the traceroute program in the Linux shell. This functionality can easily be built right into our port scanner application.

Shodan

Shodan is an online resource that can be used to search for hardware within a specific domain. For instance, a search for hostname:<domain> will return all the hardware entities found within that domain. Shodan is both a public and open source resource for intelligence. Harnessing the full power of Shodan and returning multipage query results is not free.
For the examples in this article, the first page of the query results, which is free, was sufficient to provide a suitable amount of information. The returned output is XML, and Perl has some great utilities to parse XML. Luckily, for the purpose of our example, Shodan offers an example query for us to use as export_sample.xml. This XML file contains only one node per host, labeled host. This node contains attributes for the corresponding host, and we will use the XML::LibXML::Node class from the XML::LibXML Perl module. First, we will download the XML file and use XML::LibXML to open the local file with the parse_file() method, as shown in the following code:

#!/usr/bin/perl -w
use strict;
use XML::LibXML;
my $parser = XML::LibXML->new();
my $doc = $parser->parse_file("export_sample.xml");
foreach my $host ($doc->findnodes('/shodan/host')) {
   print "Host Found:\n";
   my @attribs = $host->attributes('/shodan/host');
   foreach my $host (@attribs){ # get host attributes
      print $host =~ m/([^=]+)=.*/," => ";
      print $host =~ m/.*"([^"]+)"/,"\n";
   } # next
   print "\n\n";
}

The preceding Perl program will open the export_sample.xml file and navigate through the host nodes using the simple XPath of /shodan/host. For each <host> node, we call the attributes method from the XML::LibXML::Node class, which returns an array of all attributes with information such as the IP address, hostname, and more. We then run a regular expression pattern on the $host string to parse out the key, and again with another regexp to get the value.
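The same attribute-walking idea ports easily to other languages. As a rough, hedged sketch — not a general XML parser, and assuming only the flat, attribute-only <host/> nodes of the sample export — the extraction can be done with two regular expressions; the embedded XML string here is an illustrative stand-in for export_sample.xml:

```javascript
// Simplified sketch: extract attributes from flat <host .../> nodes.
// This is NOT a robust XML parser; it only handles attribute-only nodes.
const sampleXml = `
<shodan>
  <host hostnames="internetdevelopment.ro" ip="109.206.71.21" os="Linux recent 2.4" port="80" updated="16.03.2010"/>
  <host ip="113.203.71.21" os="Linux recent 2.4" port="80" updated="16.03.2010"/>
</shodan>`;

function parseHosts(xml) {
  const hosts = [];
  for (const node of xml.match(/<host\s+[^>]*\/>/g) || []) {
    const attrs = {};
    for (const m of node.matchAll(/(\w+)="([^"]*)"/g)) {
      attrs[m[1]] = m[2]; // attribute name => attribute value
    }
    hosts.push(attrs);
  }
  return hosts;
}

for (const h of parseHosts(sampleXml)) {
  console.log("Host Found:");
  for (const [key, value] of Object.entries(h)) {
    console.log(key, "=>", value);
  }
}
```

As with the Perl version, each host node becomes a simple key/value map, so the same loop works for any resource that returns attribute-only nodes.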
Let's see how this returns data from our sample XML file from ShodanHQ.com:

root@wnld960:~# perl shodan.pl
Host Found:
hostnames => internetdevelopment.ro
ip => 109.206.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010

Host Found:
ip => 113.203.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010

Host Found:
hostnames => ip-173-201-71-21.ip.secureserver.net
ip => 173.201.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010

The preceding output is from our shodan.pl Perl program. It loops through all host nodes and prints the attributes. As we can see, Shodan can provide us with some very useful information that we can possibly use to exploit later in our penetration testing. It's also easy to see, without going into elementary Perl coding examples, that we can find exactly what we are looking for from an XML object's attributes using this simple method. We can use this code for other resources as well.

More intelligence

Gaining information about the actual physical address is also important during a penetration test. Sure, this is public information, but where do we find it? Well, the PTES describes how most states require a legal entity of a company to register with the State Division, which can provide us with a one-stop go-to place for the physical address information, entity ID, service of process agent information, and more. This can be very useful information on our client target. If obtained, we can extend this intelligence by finding out more about the property owners for physical penetration testing and social engineering by checking the city/county's department of land records, real estate, deeds, or even mortgages. All of this data, if hosted on the Web, can be gathered by automated Perl programs, as we did in the example sections of this article using LWP::UserAgent.
Summary

As we have seen, being creative with our information-gathering techniques can really shine with the power of regular expressions and the ability to spider links. As we learned in the introduction, it's best to combine an automated OSINT gathering process with a manual one, because each can reveal information that the other might have missed.

Resources for Article:

Further resources on this subject:
- Ruby and Metasploit Modules [article]
- Linux Shell Scripting – various recipes to help you [article]
- Linux Shell Script: Logging Tasks [article]
WebSockets in Wildfly

Packt
30 Dec 2014
22 min read
In this article by Michał Ćmil and Michał Matłoka, the authors of Java EE 7 Development with WildFly, we will cover WebSockets, one of the biggest additions in Java EE 7, and explore the new possibilities that they provide to a developer. In our ticket booking applications, we already used a wide variety of approaches to inform the clients about events occurring on the server side. These include the following:

- JSF polling
- Java Messaging Service (JMS) messages
- REST requests
- Remote EJB requests

All of them, besides JMS, were based on the assumption that the client will be responsible for asking the server about the state of the application. In some cases, such as checking whether someone else has booked a ticket during our interaction with the application, this is a wasteful strategy; the server is in the position to inform clients when it is needed. What's more, it feels like the developer must hack the HTTP protocol to get a notification from a server to the client. This is a requirement that has to be implemented in most nontrivial web applications, and therefore, it deserves a standardized solution that can be applied by developers in multiple projects without much effort. WebSockets are changing the game for developers. They replace the request-response paradigm, in which the client always initiates the communication, with a two-point bidirectional messaging system. After the initial connection, both sides can send independent messages to each other as long as the session is alive. This means that we can easily create web applications that will automatically refresh their state with up-to-date data from the server. You have probably already seen this kind of behavior in Google Docs or live broadcasts on news sites. Now we can achieve the same effect in a simpler and more efficient way than in earlier versions of Java Enterprise Edition.
In this article, we will try to leverage these new, exciting features that come with WebSockets in Java EE 7, thanks to JSR 356 (https://jcp.org/en/jsr/detail?id=356) and HTML5. You will learn the following topics:

- How WebSockets work
- How to create a WebSocket endpoint in Java EE 7
- How to create an HTML5/AngularJS client that will accept push notifications from an application deployed on WildFly

(For more resources related to this topic, see here.)

An overview of WebSockets

A WebSocket session between the client and server is built upon a standard TCP connection. Although the WebSocket protocol has its own control frames (mainly to create and sustain the connection), defined by the Internet Engineering Task Force in RFC 6455 (http://tools.ietf.org/html/rfc6455), peers are not obliged to use any specific format to exchange application data. You may use plaintext, XML, JSON, or anything else to transmit your data. As you probably remember, this is quite different from SOAP-based WebServices, which had bloated specifications of the exchange protocol. The same goes for RESTful architectures; we no longer have the predefined verb methods from HTTP (GET, PUT, POST, and DELETE), status codes, and the whole semantics of an HTTP request. This liberty means that WebSockets are pretty low level compared to the technologies that we used up to this point, but thanks to this, the communication overhead is minimal. The protocol is less verbose than SOAP or RESTful HTTP, which allows us to achieve higher performance. This, however, comes with a price. We usually like to use the features of higher-level protocols (such as horizontal scaling and rich URL semantics), and with WebSockets, we would need to write them by hand. For standard CRUD-like operations, it would be easier to use a REST endpoint than to create everything from scratch. What do we get from WebSockets compared to standard HTTP communication?
First of all, a direct connection between two peers. Normally, when you connect to a web server (which can, for instance, handle a REST endpoint), every subsequent call is a new TCP connection, and your machine is treated like it is a different one every time you make a request. You can, of course, simulate a stateful behavior (so that the server would recognize your machine between different requests) using cookies, and increase the performance by reusing the same connection in a short period of time for a specific client, but basically, it is a workaround to overcome the limitations of the HTTP protocol.

Once you establish a WebSocket connection between a server and client, you can use the same session (and underlying TCP connection) during the whole communication. Both sides are aware of it and can send data independently in a full-duplex manner (both sides can send and receive data simultaneously). Using plain HTTP, there is no way for the server to spontaneously start sending data to the client without any request from its side. What's more, the server is aware of all of its connected WebSocket clients, and can even send data between them!

Current solutions that try to simulate real-time data delivery using the HTTP protocol can put a lot of stress on the web server. Polling (asking the server about updates), long polling (delaying the completion of a request to the moment when an update is ready), and streaming (a Comet-based solution with a constantly open HTTP response) are all ways to hack the protocol to do things that it wasn't designed for, and they have their own limitations. Thanks to the elimination of unnecessary checks, WebSockets can heavily reduce the number of HTTP requests that have to be handled by the web server. The updates are delivered to the user with a smaller latency because we only need one round-trip through the network to get the desired information (it is pushed by the server immediately).
All of these features make WebSockets a great addition to the Java EE platform, which fills the gaps needed to easily finish specific tasks, such as sending updates, notifications, and orchestrating multiple client interactions. Despite these advantages, WebSockets are not intended to replace REST or SOAP WebServices. They do not scale so well horizontally (they are hard to distribute because of their stateful nature), and they lack most of the features that are utilized in web applications. URL semantics, complex security, compression, and many other features are still better realized using other technologies.

How do WebSockets work

To initiate a WebSocket session, the client must send an HTTP request with an Upgrade: websocket header field. This informs the server that the peer client has asked the server to switch to the WebSocket protocol. You may notice that the same happens in WildFly for Remote EJBs; the initial connection is made using an HTTP request and is later switched to the remote protocol thanks to the Upgrade mechanism. The standard Upgrade header field can be used to handle any protocol other than HTTP that is accepted by both sides (the client and server). In WildFly, this allows us to reuse the HTTP port (80/8080) for other protocols and, therefore, minimize the number of required ports that should be configured. If the server can understand the WebSocket protocol, the client and server then proceed with the handshaking phase. They negotiate the version of the protocol, exchange security keys, and if everything goes well, the peers can go to the data transfer phase. From now on, the communication is done only using the WebSocket protocol. It is not possible to exchange any HTTP frames using the current connection.
The whole life cycle of a connection can be summarized in the following diagram:

A sample HTTP request from a JavaScript application to a WildFly server would look similar to this:

GET /ticket-agency-websockets/tickets HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:8080
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
Sec-WebSocket-Key: TrjgyVjzLK4Lt5s8GzlFhA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Cookie: [45 bytes were stripped]

We can see that the client requests an upgrade connection with WebSocket as the target protocol on the URL /ticket-agency-websockets/tickets. It additionally passes information about the requested version and key. If the server supports the requested protocol and all the required data is passed by the client, then it responds with the following frame:

HTTP/1.1 101 Switching Protocols
X-Powered-By: Undertow 1
Server: Wildfly 8
Origin: http://localhost:8080
Upgrade: WebSocket
Sec-WebSocket-Accept: ZEAab1TcSQCmv8RsLHg4RL/TpHw=
Date: Sun, 13 Apr 2014 17:04:00 GMT
Connection: Upgrade
Sec-WebSocket-Location: ws://localhost:8080/ticket-agency-websockets/tickets
Content-Length: 0

The status code of the response is 101 (switching protocols), and we can see that the server is now going to start using the WebSocket protocol. The TCP connection initially used for the HTTP request is now the base of the WebSocket session and can be used for transmissions. If the client tries to access a URL that is only handled by another protocol, then the server can ask the client to do an upgrade request. The server uses the 426 (upgrade required) status code in such cases.
The initial connection creation has some overhead (because of the HTTP frames that are exchanged between the peers), but after it is completed, new messages have only 2 bytes of additional headers. This means that when we have a large number of small messages, WebSockets will be an order of magnitude faster than REST protocols simply because there is less data to transmit! If you are wondering about the browser support of WebSockets, you can look it up at http://caniuse.com/websockets. All new versions of major browsers currently support WebSockets; the total coverage is estimated (at the time of writing) at 74 percent. You can see it in the following screenshot:

After this theoretical introduction, we are ready to jump into action. We can now create our first WebSocket endpoint!

Creating our first endpoint

Let's start with a simple example:

package com.packtpub.wflydevelopment.chapter8.boundary;

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

@ServerEndpoint("/hello")
public class HelloEndpoint {
   @OnOpen
   public void open(Session session, EndpointConfig conf) throws IOException {
       session.getBasicRemote().sendText("Hi!");
   }
}

The Java EE 7 specification has taken developer friendliness into account, which can be clearly seen in the given example. In order to define your WebSocket endpoint, you just need a few annotations on a Plain Old Java Object (POJO). The first annotation, @ServerEndpoint("/hello"), defines the path to your endpoint. It's a good time to discuss the endpoint's full address. We placed this sample in the application named ticket-agency-websockets.
During deployment of the application, you can spot information about endpoint creation in the WildFly log, as shown in the following command line:

02:21:35,182 INFO [io.undertow.websockets.jsr] (MSC service thread 1-7) UT026003: Adding annotated server endpoint class com.packtpub.wflydevelopment.chapter8.boundary.FirstEndpoint for path /hello
02:21:35,401 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-7) Deploying javax.ws.rs.core.Application: class com.packtpub.wflydevelopment.chapter8.webservice.JaxRsActivator$Proxy$_$$_WeldClientProxy
02:21:35,437 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7) JBAS017534: Registered web context: /ticket-agency-websockets

The full URL of the endpoint is ws://localhost:8080/ticket-agency-websockets/hello, which is just a concatenation of the server and application address with the endpoint path on the appropriate protocol. The second annotation used, @OnOpen, defines the endpoint behavior when the connection from the client is opened. It's not the only behavior-related annotation of a WebSocket endpoint. Let's look at the following table:

@OnOpen - Connection is open. With this annotation, we can use the Session and EndpointConfig parameters. The first parameter represents the connection to the user and allows further communication. The second one provides some client-related information.
@OnMessage - This annotation is executed when a message from the client is being received. In such a method, you can just have Session and, for example, a String parameter, where the String parameter represents the received message.
@OnError - There are bad times when some errors occur. With this annotation, you can retrieve a Throwable object apart from the standard Session.
@OnClose - When the connection is closed, it is possible to get some data concerning this event in the form of a CloseReason type object.

There is one more interesting line in our HelloEndpoint.
Using the Session object, it is possible to communicate with the client. This clearly shows that in WebSockets, two-directional communication is easily possible. In this example, we decided to respond to a connected user synchronously (getBasicRemote()) with just a text message, Hi! (sendText(String)). Of course, it's also possible to communicate asynchronously and send, for example, binary messages using your own binary bandwidth-saving protocol. We will present some of these processes in the next example.

Expanding our client application

It's time to show how you can leverage the WebSocket features in real life. We created the ticket booking application based on the REST API and AngularJS framework. It was clearly missing one important feature: the application did not show information concerning ticket purchases of other users. This is a perfect use case for WebSockets! Since we're just adding a feature to our previous app, we will describe the changes we will introduce to it. In this example, we would like to be able to inform all current users about other purchases. This means that we have to store information about active sessions. Let's start with the registry type object, which will serve this purpose. We can use a Singleton session bean for this task, as shown in the following code:

@Singleton
public class SessionRegistry {

   private final Set<Session> sessions = new HashSet<>();

   @Lock(LockType.READ)
   public Set<Session> getAll() {
       return Collections.unmodifiableSet(sessions);
   }

   @Lock(LockType.WRITE)
   public void add(Session session) {
       sessions.add(session);
   }

   @Lock(LockType.WRITE)
   public void remove(Session session) {
       sessions.remove(session);
   }
}

We could use Collections.synchronizedSet from the standard Java libraries, but it's a great chance to remember what we described earlier about container-based concurrency. In SessionRegistry, we defined some basic methods to add, get, and remove sessions.
For the sake of collection thread safety during retrieval, we return an unmodifiable view. We defined the registry, so now we can move to the endpoint definition. We will need a POJO, which will use our newly defined registry, as shown:

@ServerEndpoint("/tickets")
public class TicketEndpoint {

   @Inject
   private SessionRegistry sessionRegistry;

   @OnOpen
   public void open(Session session, EndpointConfig conf) {
       sessionRegistry.add(session);
   }

   @OnClose
   public void close(Session session, CloseReason reason) {
       sessionRegistry.remove(session);
   }

   public void send(@Observes Seat seat) {
       sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendText(toJson(seat)));
   }

   private String toJson(Seat seat) {
       final JsonObject jsonObject = Json.createObjectBuilder()
               .add("id", seat.getId())
               .add("booked", seat.isBooked())
               .build();
       return jsonObject.toString();
   }
}

Our endpoint is defined at the /tickets address. We injected a SessionRegistry into our endpoint. During @OnOpen, we add Sessions to the registry, and during @OnClose, we just remove them. Message sending is performed on a CDI event (the @Observes annotation), which is already fired in our code during TheatreBox.buyTicket(int). In our send method, we retrieve all sessions from the SessionRegistry, and for each of them, we asynchronously send information about booked seats. We don't really need information about all the Seat fields to realize this feature. That's the reason why we don't use automatic JSON serialization here. Instead, we decided to use a minimalistic JSON object, which provides only the required data. To do this, we used the new Java API for JSON Processing (JSR-353). Using a fluent-like API, we're able to create a JSON object and add two fields to it. Then, we just convert the JSON to a String, which is sent in a text message.
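The design choice of sending a minimal payload instead of the whole entity is easy to picture outside Java as well. A hedged JavaScript equivalent of the toJson() helper (the seat object and its extra fields are illustrative stand-ins, not from the book's code):

```javascript
// Send only what the client needs, not every field of the entity.
function toJson(seat) {
  return JSON.stringify({ id: seat.id, booked: seat.booked });
}

// Hypothetical seat with extra fields that are deliberately left out.
var seat = { id: 7, booked: true, price: 30, description: "Balcony" };
console.log(toJson(seat)); // → {"id":7,"booked":true}
```

Keeping the payload small matters here because the message is broadcast to every open session on each purchase.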
Because in our example we send messages in response to a CDI event, we don't have (in the event handler) an out-of-the-box reference to any of the sessions. We have to use our sessionRegistry object to access the active ones. However, if we would like to do the same thing but, for example, in the @OnMessage method, then it is possible to get all active sessions just by executing the session.getOpenSessions() method. These are all the changes required on the backend side. Now, we have to modify our AngularJS frontend to leverage the added feature. The good news is that JavaScript already includes classes that can be used to perform WebSocket communication! There are a few lines of code we have to add inside the module defined in the seat.js file, which are as follows:

var ws = new WebSocket("ws://localhost:8080/ticket-agency-websockets/tickets");
ws.onmessage = function (message) {
   var receivedData = message.data;
   var bookedSeat = JSON.parse(receivedData);
   $scope.$apply(function () {
       for (var i = 0; i < $scope.seats.length; i++) {
           if ($scope.seats[i].id === bookedSeat.id) {
               $scope.seats[i].booked = bookedSeat.booked;
               break;
           }
       }
   });
};

The code is very simple. We just create the WebSocket object using the URL to our endpoint, and then we define the onmessage function in that object. During the function execution, the received message is automatically parsed from JSON to a JavaScript object. Then, in $scope.$apply, we just iterate through our seats, and if an ID matches, we update the booked state. We have to use $scope.$apply because we are touching an Angular object from outside the Angular world (the onmessage function). Modifications performed on $scope.seats are automatically visible on the website.
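The update loop inside $scope.$apply can also be factored into a plain function, which keeps the Angular-specific code thin and makes the logic testable outside the browser; the function name here is illustrative, not from the book's code:

```javascript
// Apply a booked-seat notification to the local seat list in place.
// Returns true when a matching seat was found and updated.
function applyBookedSeat(seats, bookedSeat) {
  for (var i = 0; i < seats.length; i++) {
    if (seats[i].id === bookedSeat.id) {
      seats[i].booked = bookedSeat.booked;
      return true;
    }
  }
  return false; // unknown seat id: nothing to update
}

var seats = [{ id: 1, booked: false }, { id: 2, booked: false }];
applyBookedSeat(seats, JSON.parse('{"id":2,"booked":true}'));
console.log(seats[1].booked); // → true
```

Inside the onmessage handler, the call would simply be wrapped in $scope.$apply, exactly as in the listing above.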
With this, we can just open our ticket booking website in two browser sessions and see that when one user buys a ticket, the second user almost instantly sees that the seat's state has changed to booked. We can enhance our application a little to inform users whether the WebSocket connection is really working. Let's just define onopen and onclose functions for this purpose:

ws.onopen = function (event) {
   $scope.$apply(function () {
       $scope.alerts.push({
           type: 'info',
           msg: 'Push connection from server is working'
       });
   });
};

ws.onclose = function (event) {
   $scope.$apply(function () {
       $scope.alerts.push({
           type: 'warning',
           msg: 'Error on push connection from server '
       });
   });
};

To inform users about a connection's state, we push different types of alerts. Of course, again we're touching the Angular world from the outside, so we have to perform all operations on Angular from the $scope.$apply function. Running the described code results in the notification, which is visible in the following screenshot:

However, if the server fails after opening the website, you might get an error as shown in the following screenshot:

Transforming POJOs to JSON

In our current example, we transformed our Seat object to JSON manually. Normally, we don't want to do it this way; there are many libraries that will do the transformation for us. One of them is GSON from Google. Additionally, we can register an encoder/decoder class for a WebSocket endpoint that will do the transformation automatically. Let's look at how we can refactor our current solution to use an encoder. First of all, we must add GSON to our classpath. The required Maven dependency is as follows:

<dependency>
   <groupId>com.google.code.gson</groupId>
   <artifactId>gson</artifactId>
   <version>2.3</version>
</dependency>

Next, we need to provide an implementation of the javax.websocket.Encoder.Text interface.
There are also versions of the javax.websocket.Encoder interface for binary and streamed data (for both binary and text formats). A corresponding hierarchy of interfaces is also available for decoders (javax.websocket.Decoder). Our implementation is rather simple. This is shown in the following code snippet:

public class JSONEncoder implements Encoder.Text<Object> {

   private Gson gson;

   @Override
   public void init(EndpointConfig config) {
       gson = new Gson(); [1]
   }

   @Override
   public void destroy() {
       // do nothing
   }

   @Override
   public String encode(Object object) throws EncodeException {
       return gson.toJson(object); [2]
   }
}

First, we create an instance of Gson in the init method; this action will be executed when the endpoint is created [1]. Next, in the encode method, which is called every time we send an object through the endpoint, we use Gson to create JSON from the object [2]. This is quite concise when we think how reusable this little class is. If you want more control over the JSON generation process, you can use the GsonBuilder class to configure the Gson object before it is created. We have the encoder in place. Now it's time to alter our endpoint:

@ServerEndpoint(value = "/tickets", encoders={JSONEncoder.class}) [1]
public class TicketEndpoint {

   @Inject
   private SessionRegistry sessionRegistry;

   @OnOpen
   public void open(Session session, EndpointConfig conf) {
       sessionRegistry.add(session);
   }

   @OnClose
   public void close(Session session, CloseReason reason) {
       sessionRegistry.remove(session);
   }

   public void send(@Observes Seat seat) {
       sessionRegistry.getAll().forEach(session -> session.getAsyncRemote().sendObject(seat)); [2]
   }
}

The first change is done on the @ServerEndpoint annotation [1]. We have to define a list of supported encoders; we simply pass our JSONEncoder.class wrapped in an array.
Additionally, we have to pass the endpoint name using the value attribute. Earlier, we used the sendText method to pass a string containing manually created JSON. Now, we want to send an object and let the encoder handle the JSON generation; therefore, we use the getAsyncRemote().sendObject() method [2]. That's all! Our endpoint is ready to be used. It will work the same as the earlier version, but now our objects will be fully serialized to JSON, so they will contain every field, not only id and booked. After deploying the server, you can connect to the WebSocket endpoint using one of the Chrome extensions, for instance, the Dark WebSocket terminal from the Chrome store (use the ws://localhost:8080/ticket-agency-websockets/tickets address). When you book tickets using the web application, the WebSocket terminal should show something similar to the output shown in the following screenshot:

Of course, it is possible to use formats other than JSON. If you want to achieve better performance (when it comes to serialization time and payload size), you may want to try out binary serializers such as Kryo (https://github.com/EsotericSoftware/kryo). They may not be supported by JavaScript, but may come in handy if you would like to use WebSockets for other clients too. Tyrus (https://tyrus.java.net/) is a reference implementation of the WebSocket standard for Java; you can use it in your standalone desktop applications. In that case, besides the encoder (which is used to send messages), you would also need to create a decoder, which can automatically transform incoming messages.

An alternative to WebSockets

The example we presented in this article can also be implemented using an older, lesser-known technology named Server-Sent Events (SSE). SSE allows for one-way communication from the server to the client over HTTP. It is much simpler than WebSockets but has built-in support for things such as automatic reconnection and event identifiers.
WebSockets are definitely more powerful, but they are not the only way to pass events, so when you need to implement some notifications from the server side, remember SSE. Another option is to explore the mechanisms oriented around the Comet techniques. Multiple implementations are available, and most of them use different methods of transportation to achieve their goals. A comprehensive comparison is available at http://cometdaily.com/maturity.html.

Summary

In this article, we managed to introduce the new low-level type of communication. We presented how it works underneath and how it compares to the SOAP and REST approaches introduced earlier. We also discussed how the new approach changes the development of web applications. Our ticket booking application was further enhanced to show users the changing state of the seats using push-like notifications. The new additions required very little code change in our existing project when we take into account how much we are able to achieve with them. The fluent integration of WebSockets from Java EE 7 with the AngularJS application is another great showcase of the flexibility that comes with the new version of the Java EE platform.

Resources for Article:

Further resources on this subject:
- Various subsystem configurations [Article]
- Running our first web application [Article]
- Creating Java EE Applications [Article]
Middleware

Packt
30 Dec 2014
13 min read
In this article, Mario Casciaro, the author of the book Node.js Design Patterns, describes the importance of using the middleware pattern. One of the most distinctive patterns in Node.js is definitely middleware. Unfortunately, it's also one of the most confusing for the inexperienced, especially for developers coming from the enterprise programming world. The reason for the disorientation is probably connected with the meaning of the term middleware, which in enterprise architecture jargon represents the various software suites that help to abstract lower-level mechanisms such as OS APIs, network communications, memory management, and so on, allowing the developer to focus only on the business case of the application. In this context, the term middleware recalls topics such as CORBA, Enterprise Service Bus, Spring, and JBoss, but in its more generic meaning, it can also define any kind of software layer that acts like glue between lower-level services and the application (literally, the software in the middle).

(For more resources related to this topic, see here.)

Middleware in Express

Express (http://expressjs.com) popularized the term middleware in the Node.js world, binding it to a very specific design pattern. In express, in fact, a middleware represents a set of services, typically functions, that are organized in a pipeline and are responsible for processing incoming HTTP requests and the relative responses. An express middleware has the following signature:

function(req, res, next) { ... }

Here, req is the incoming HTTP request, res is the response, and next is the callback to be invoked when the current middleware has completed its tasks, which in turn triggers the next middleware in the pipeline.
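To make the signature concrete, here is a hedged sketch of a typical logging middleware (the body is illustrative, not taken from express itself); it performs its accessory task and then hands control to the next unit:

```javascript
// An express-style middleware: do one accessory task, then call next().
function logger(req, res, next) {
  console.log(req.method + " " + req.url);
  next();
}

// Simulated invocation, roughly as express would do inside its pipeline:
var reached = false;
logger({ method: "GET", url: "/tickets" }, {}, function () {
  reached = true; // the next middleware in the pipeline would run here
});
console.log(reached); // → true
```

Omitting the call to next() (or passing it an error) is how a middleware stops the request from reaching the handlers further down the pipeline.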
Examples of the tasks carried out by an Express middleware are the following:

- Parsing the body of the request
- Compressing/decompressing requests and responses
- Producing access logs
- Managing sessions
- Providing Cross-Site Request Forgery (CSRF) protection

If we think about it, these are all tasks that are not strictly related to the main functionality of an application; rather, they are accessories, components providing support to the rest of the application and allowing the actual request handlers to focus only on their main business logic. Essentially, those tasks are software in the middle.

Middleware as a pattern

The technique used to implement middleware in Express is not new; in fact, it can be considered the Node.js incarnation of the Intercepting Filter pattern and the Chain of Responsibility pattern. In more generic terms, it also represents a processing pipeline, which reminds us of streams. Today, in Node.js, the word middleware is used well beyond the boundaries of the Express framework, and indicates a particular pattern whereby a set of processing units, filters, and handlers, in the form of functions, are connected to form an asynchronous sequence in order to perform preprocessing and postprocessing of any kind of data. The main advantage of this pattern is flexibility; in fact, it allows us to obtain a plugin infrastructure with incredibly little effort, providing an unobtrusive way to extend a system with new filters and handlers.

If you want to know more about the Intercepting Filter pattern, the following article is a good starting point: http://www.oracle.com/technetwork/java/interceptingfilter-142169.html. A nice overview of the Chain of Responsibility pattern is available at this URL: http://java.dzone.com/articles/design-patterns-uncovered-chain-of-responsibility.
The following diagram shows the components of the middleware pattern. The essential component of the pattern is the Middleware Manager, which is responsible for organizing and executing the middleware functions. The most important implementation details of the pattern are as follows:

- New middleware can be registered by invoking the use() function (the name of this function is a common convention in many implementations of this pattern, but we can choose any name). Usually, new middleware can only be appended at the end of the pipeline, but this is not a strict rule.
- When new data to process is received, the registered middleware is invoked in an asynchronous sequential execution flow. Each unit in the pipeline receives as input the result of the execution of the previous unit.
- Each middleware can decide to stop further processing of the data by simply not invoking its callback or by passing an error to the callback. An error situation usually triggers the execution of another sequence of middleware that is specifically dedicated to handling errors.

There is no strict rule on how the data is processed and propagated in the pipeline. The strategies include:

- Augmenting the data with additional properties or functions
- Replacing the data with the result of some kind of processing
- Maintaining the immutability of the data and always returning fresh copies as the result of the processing

The right approach depends on the way the Middleware Manager is implemented and on the type of processing carried out by the middleware itself.

Creating a middleware framework for ØMQ

Let's now demonstrate the pattern by building a middleware framework around the ØMQ (http://zeromq.org) messaging library.
ØMQ (also known as ZMQ, or ZeroMQ) provides a simple interface for exchanging atomic messages across the network using a variety of protocols; it shines for its performance, and its basic set of abstractions is specifically built to facilitate the implementation of custom messaging architectures. For this reason, ØMQ is often chosen to build complex distributed systems. The interface of ØMQ is pretty low-level; it only allows us to use strings and binary buffers for messages, so any encoding or custom formatting of data has to be implemented by the users of the library.

In the next example, we are going to build a middleware infrastructure to abstract the preprocessing and postprocessing of the data passing through a ØMQ socket, so that we can transparently work with JSON objects but also seamlessly compress the messages traveling over the wire.

Before continuing with the example, please make sure to install the ØMQ native libraries following the instructions at this URL: http://zeromq.org/intro:get-the-software. Any version in the 4.0 branch should be enough for working on this example.

The Middleware Manager

The first step to build a middleware infrastructure around ØMQ is to create a component that is responsible for executing the middleware pipeline when a new message is received or sent. For this purpose, let's create a new module called zmqMiddlewareManager.js and start defining it:

```javascript
function ZmqMiddlewareManager(socket) {
  this.socket = socket;
  this.inboundMiddleware = [];              // [1]
  this.outboundMiddleware = [];
  var self = this;
  socket.on('message', function(message) {  // [2]
    self.executeMiddleware(self.inboundMiddleware, { data: message });
  });
}
module.exports = ZmqMiddlewareManager;
```

This first code fragment defines a new constructor for our component. It accepts a ØMQ socket as an argument and:

1. Creates two empty lists that will contain our middleware functions, one for the inbound messages and another one for the outbound messages.
2. Immediately starts listening for new messages coming from the socket by attaching a listener to the message event. In the listener, we process the inbound message by executing the inboundMiddleware pipeline.

The next method of the ZmqMiddlewareManager prototype is responsible for executing the middleware when a new message is sent through the socket:

```javascript
ZmqMiddlewareManager.prototype.send = function(data) {
  var self = this;
  var message = { data: data };
  self.executeMiddleware(self.outboundMiddleware, message,
    function() {
      self.socket.send(message.data);
    }
  );
};
```

This time, the message is processed using the filters in the outboundMiddleware list and then passed to socket.send() for the actual network transmission.

Now, we need a small method to append new middleware functions to our pipelines; we already mentioned that such a method is conventionally called use():

```javascript
ZmqMiddlewareManager.prototype.use = function(middleware) {
  if (middleware.inbound) {
    this.inboundMiddleware.push(middleware.inbound);
  }
  if (middleware.outbound) {
    this.outboundMiddleware.unshift(middleware.outbound);
  }
};
```

Each middleware comes in pairs; in our implementation, it's an object that contains two properties, inbound and outbound, that contain the middleware functions to be added to the respective lists. It's important to observe here that the inbound middleware is pushed to the end of the inboundMiddleware list, while the outbound middleware is inserted at the beginning of the outboundMiddleware list. This is because complementary inbound/outbound middleware functions usually need to be executed in inverted order. For example, if we want to decompress and then deserialize an inbound message using JSON, it means that on the outbound side we should instead first serialize and then compress. It's important to understand that this convention of organizing the middleware in pairs is not strictly part of the general pattern, but only an implementation detail of our specific example.
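The inverted registration order can be illustrated with a stripped-down sketch of use() that stores plain labels instead of functions; this is an illustrative toy, not part of the framework's code:

```javascript
// Sketch of the registration order described above: inbound middleware is
// appended (push), outbound middleware is prepended (unshift), so that
// complementary filters end up running in inverted order.
var inbound = [];
var outbound = [];

function use(m) {
  if (m.inbound) inbound.push(m.inbound);
  if (m.outbound) outbound.unshift(m.outbound);
}

use({ inbound: 'decompress', outbound: 'compress' });   // registered first
use({ inbound: 'deserialize', outbound: 'serialize' }); // registered second

console.log(inbound);  // [ 'decompress', 'deserialize' ]
console.log(outbound); // [ 'serialize', 'compress' ]
```

An inbound message is thus decompressed and then deserialized, while an outbound one is serialized and then compressed, exactly mirroring each other.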
Now, it's time to define the core of our component, the function that is responsible for executing the middleware:

```javascript
ZmqMiddlewareManager.prototype.executeMiddleware =
    function(middleware, arg, finish) {
  var self = this;
  (function iterator(index) {
    if (index === middleware.length) {
      return finish && finish();
    }
    middleware[index].call(self, arg, function(err) {
      if (err) {
        console.log('There was an error: ' + err.message);
      }
      iterator(++index);
    });
  })(0);
};
```

The preceding code should look very familiar; in fact, it is a simple implementation of the asynchronous sequential iteration pattern. Each function in the middleware array received as input is executed one after the other, and the same arg object is provided as an argument to each middleware function; this is the trick that makes it possible to propagate the data from one middleware to the next. At the end of the iteration, the finish() callback is invoked.

Please note that, for brevity, we are not supporting an error middleware pipeline. Normally, when a middleware function propagates an error, another set of middleware specifically dedicated to handling errors is executed. This can be easily implemented using the same technique that we are demonstrating here.

A middleware to support JSON messages

Now that we have implemented our Middleware Manager, we can create a pair of middleware functions to demonstrate how to process inbound and outbound messages. As we said, one of the goals of our middleware infrastructure is having a filter that serializes and deserializes JSON messages, so let's create a new middleware to take care of this.
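Before we do that, it is worth seeing the asynchronous sequential iteration pattern in isolation. The following runSequence() helper (a hypothetical name, not from the framework) applies the same iterator technique to two simple tasks sharing one arg object:

```javascript
// Standalone sketch of the asynchronous sequential iteration pattern:
// each task receives the shared arg and a callback, and the next task
// starts only when the callback fires.
function runSequence(tasks, arg, finish) {
  (function iterator(index) {
    if (index === tasks.length) {
      return finish && finish();
    }
    tasks[index](arg, function(err) {
      if (err) return console.log('There was an error: ' + err.message);
      iterator(index + 1);
    });
  })(0);
}

var state = { value: 1 };
var trace = [];
runSequence([
  function(arg, next) { arg.value += 1; trace.push('a'); next(); }, // 1 -> 2
  function(arg, next) { arg.value *= 2; trace.push('b'); next(); }  // 2 -> 4
], state, function() { trace.push('done'); });

console.log(state.value, trace); // 4 [ 'a', 'b', 'done' ]
```

Each task mutated the shared state object before handing control onward, which is precisely how data propagates through executeMiddleware().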
In a new module called middleware.js, let's include the following code:

```javascript
module.exports.json = function() {
  return {
    inbound: function(message, next) {
      message.data = JSON.parse(message.data.toString());
      next();
    },
    outbound: function(message, next) {
      message.data = new Buffer(JSON.stringify(message.data));
      next();
    }
  };
};
```

The json middleware that we just created is very simple:

- The inbound middleware deserializes the message received as input and assigns the result back to the data property of message, so that it can be further processed along the pipeline
- The outbound middleware serializes any data found in message.data

Please note how the middleware supported by our framework is quite different from the one used in Express; this is totally normal and a perfect demonstration of how we can adapt this pattern to fit our specific needs.

Using the ØMQ middleware framework

We are now ready to use the middleware infrastructure that we just created. To do that, we are going to build a very simple application, with a client sending a ping to a server at regular intervals and the server echoing back the message received. From an implementation perspective, we are going to rely on a request/reply messaging pattern using the req/rep socket pair provided by ØMQ (http://zguide.zeromq.org/page:all#Ask-and-Ye-Shall-Receive). We will then wrap the sockets with our zmqMiddlewareManager to get all the advantages of the middleware infrastructure that we built, including the middleware for serializing/deserializing JSON messages.

The server

Let's start by creating the server side (server.js).
In the first part of the module, we initialize our components:

```javascript
var zmq = require('zmq');
var ZmqMiddlewareManager = require('./zmqMiddlewareManager');
var middleware = require('./middleware');

var reply = zmq.socket('rep');
reply.bind('tcp://127.0.0.1:5000');
```

In the preceding code, we loaded the required dependencies and bound a ØMQ 'rep' (reply) socket to a local port. Next, we initialize our middleware:

```javascript
var zmqm = new ZmqMiddlewareManager(reply);
zmqm.use(middleware.zlib());
zmqm.use(middleware.json());
```

We created a new ZmqMiddlewareManager object and then added two middlewares, one for compressing/decompressing the messages and another one for parsing/serializing JSON messages. For brevity, we did not show the implementation of the zlib middleware.

Now we are ready to handle a request coming from the client; we will do this by simply adding another middleware, this time using it as a request handler:

```javascript
zmqm.use({
  inbound: function(message, next) {
    console.log('Received: ', message.data);
    if (message.data.action === 'ping') {
      this.send({ action: 'pong', echo: message.data.echo });
    }
    next();
  }
});
```

Since this last middleware is defined after the zlib and json middlewares, we can transparently use the decompressed and deserialized message that is available in the message.data variable. On the other hand, any data passed to send() will be processed by the outbound middleware, which in our case will serialize and then compress the data.
The client

On the client side of our little application, client.js, we first have to create a new ØMQ req (request) socket connected to port 5000, the one used by our server:

```javascript
var zmq = require('zmq');
var ZmqMiddlewareManager = require('./zmqMiddlewareManager');
var middleware = require('./middleware');

var request = zmq.socket('req');
request.connect('tcp://127.0.0.1:5000');
```

Then, we need to set up our middleware framework in the same way that we did for the server:

```javascript
var zmqm = new ZmqMiddlewareManager(request);
zmqm.use(middleware.zlib());
zmqm.use(middleware.json());
```

Next, we create an inbound middleware to handle the responses coming from the server:

```javascript
zmqm.use({
  inbound: function(message, next) {
    console.log('Echoed back: ', message.data);
    next();
  }
});
```

In the preceding code, we simply intercept any inbound response and print it to the console. Finally, we set up a timer to send some ping requests at regular intervals, always using the zmqMiddlewareManager so that we get all the advantages of our middleware:

```javascript
setInterval(function() {
  zmqm.send({ action: 'ping', echo: Date.now() });
}, 1000);
```

We can now try our application by first starting the server:

node server

We can then start the client with the following command:

node client

At this point, we should see the client sending messages and the server echoing them back. Our middleware framework did its job; it allowed us to compress/decompress and serialize/deserialize our messages transparently, leaving the handlers free to focus on their business logic!

Summary

In this article, we learned about the middleware pattern and its various facets, and we also saw how to create a middleware framework and how to use it.

Resources for Article:

Further resources on this subject:
Selecting and initializing the database [article]
Exploring streams [article]
So, what is Node.js? [article]

Packt
30 Dec 2014
24 min read

Customization in Microsoft Dynamics CRM

In this article by Nicolae Tarla, author of the book Dynamics CRM Application Structure, we looked at the basic structure of Dynamics CRM, the modules comprising the application, and what each of these modules contains. Now, we'll delve deeper into the application and take a look at how we can customize it. In this article, we will take a look at the following topics:

- Solutions and publishers
- Entity elements
- Entity types
- Extending entities
- Entity forms, quick view, and quick create forms
- Entity views and charts
- Entity relationships
- Messages
- Business rules

We'll be taking a look at how to work with each of the elements comprising the sales, service, and marketing modules. We will go through the customization options and see how we can extend the system to fit new business requirements.

(For more resources related to this topic, see here.)

Solutions

When we are talking about customizations for Microsoft Dynamics CRM, one of the most important concepts is the solution. The solution is a container for all the configurations and customizations. This packaging method allows customizers to track customizations, export and reimport them into other environments, and group specific sets of customizations by functionality or project cycle. Managing solutions is an aspect that should not be taken lightly, as a properly designed solution packaging model can help a lot down the road, while an incorrect one can create difficulties.

Using solutions is a best practice. While you can implement customizations without using solutions, these customizations will be merged into the base solution and you will not be able to export them separately from the core elements of the platform. For a comprehensive description of solutions, you can refer to the MSDN documentation available at http://msdn.microsoft.com/en-gb/library/gg334576.aspx#BKMK_UnmanagedandManagedSolutions.
Types of solutions

Within the context of Dynamics CRM, there are two types of solutions that you will commonly use while implementing customizations:

- Unmanaged solutions
- Managed solutions

Each of these solution types has its own strengths and properties, and each is recommended for use in different circumstances. In order to create and manage solutions as well as perform system customizations, the user account must be configured as a system customizer or system administrator.

Unmanaged solutions

An unmanaged solution is the default state of a newly created solution. A solution is unmanaged for the period of time while customization work is being performed in its context. You cannot customize a managed solution. An unmanaged solution can be converted to a managed solution by exporting it as managed. When the work is completed and the unmanaged solution is ready to be distributed, it is recommended that you package it as a managed solution for distribution. A managed solution, if configured as such, prevents further customizations to the solution elements. For this reason, solution vendors package their solutions as managed.

In an unmanaged solution, the system customizer can perform various tasks, which include:

- Adding and removing components
- Deleting components that allow deletion
- Exporting and importing the solution as an unmanaged solution
- Exporting the solution as a managed solution

Changes made to the components in an unmanaged solution are also applied to all the unmanaged solutions that include these components. This means that all changes from all unmanaged solutions are also applied to the default solution. Deleting an unmanaged solution results in the removal of the container alone, while the unmanaged components of the solution remain in the system. Deleting a component in an unmanaged solution results in the deletion of this component from the system.
In order to remove a component from an unmanaged solution, the component should be removed from the solution, not deleted.

Managed solutions

Once work is completed in an unmanaged solution and the solution is ready to be distributed, it can be exported as a managed solution. Packaging a solution as a managed solution presents the following advantages:

- Solution components cannot be added or removed from a managed solution
- A managed solution cannot be exported from the environment it was deployed in
- Deleting a managed solution results in the uninstallation of all the component customizations included with the solution; it also results in the loss of data associated with the components being deleted
- A managed solution cannot be installed in the same organization that contains the unmanaged solution that was used to create it

Within a managed solution, certain components can be configured to allow further customization. Through this mechanism, the managed solution provider can enable future customizations that modify aspects of the solution provided. The guidance provided by Microsoft when working with various solution types states that a solution should be used in an unmanaged state between development and test environments, and it should be exported as a managed solution when it is ready to be deployed to a production environment.

Solution properties

Besides the solution type, each solution contains a solution publisher. This is a set of properties that allows the solution creators to communicate different information to the solution's users, including ways to contact the publisher for additional support. The solution publisher record will be created in all the organizations where the solution is being deployed. The solution publisher record is also important when releasing an update to an existing solution. Based on this common record and the solution properties, an update solution can be released and deployed on top of an existing solution.
Using a solution publisher also allows us to define a custom prefix for all new custom fields created in the context of the solution. The default format for new custom field names uses the "new" prefix. Using a custom publisher, we can change the "new" prefix to a custom prefix specific to our solution.

Solution layering

When multiple solutions are deployed in an organization, there are two methods by which the system defines the order in which changes take precedence. These methods are merge and top wins. The user interface elements are merged by default. As such, elements such as the default forms, ribbons, command bars, and site map are merged, and all base elements and new custom elements are rendered. For all other solution components, the top wins approach is taken, where the last solution that makes a customization takes precedence. The top wins approach is also taken into consideration when a subset of customizations is applied on top of a previously applied customization.

The system checks the integrity of all solution exports, imports, and other operations. As such, when exporting a solution, if dependent entities are not included, a warning is presented. The customizer has the option to ignore this warning. When importing a solution, if the dependent entities are missing, the import is halted and it fails. Also, deleting a component from a solution is prevented if dependent entities require it to be present.

The default solution

Dynamics CRM allows you to customize the system without taking advantage of solutions. By default, the system comes with a solution. This is an unmanaged solution, and all system customizations are applied to it by default. The default solution includes all the default components and customizations defined within Microsoft Dynamics CRM. This solution defines the default application behavior, and it includes all the out-of-the-box customizations. Most of the components in this solution can be further customized.
Also, customizations applied through unmanaged solutions are merged into the default solution.

Entity elements

Within a solution, we work with various entities. In Dynamics CRM, there are three main entity types:

- System entities
- Business entities
- Custom entities

Each entity is composed of various attributes, while each attribute is defined as a value with a specific data type. We can consider an entity to be a data table. Each row represents an entity record, while each column represents an entity attribute. As with any table, each attribute has specific properties that define its data type.

The system entities in Dynamics CRM are used internally by the application and are not customizable. Also, they cannot be deleted. As a system customizer or developer, we will work mainly with business management entities and custom entities. Business management entities are the default entities that come with the application; some are customizable and can be extended as required. Custom entities are all the net new entities that are created as part of our system customizations.

The aspects related to customizing an entity include renaming the entity; modifying, adding, or removing entity attributes; and changing various settings and properties. Let's take a look at all of these in detail.

Renaming an entity

One of the ways to customize an entity is by renaming it. In the general properties of the entity, the Display Name field allows us to change the name of an entity. The Plural Name field can also be updated accordingly. When renaming an entity, make sure that all the references and messages are updated to reflect the new entity name. Views, charts, messages, business rules, hierarchy settings, and even certain fields can reference the original name, and they should be updated to reflect the new name assigned to the entity. The display name of an entity can be modified from the default value. This is a very common customization.
In many instances, we need to modify the default entity name to match the business for which we are customizing the system. For instance, many customers use the term organization instead of account. This is a very easy customization, achieved by updating the Display Name and Plural Name fields. While implementing this change, make sure that you also update the entity messages, as a lot of them use the original name of the entity by default. You can change a message value by double-clicking on the message and entering the new message into the Custom Display String field.

Changing entity settings and properties

When creating and managing entities in Dynamics CRM, there are generic entity settings that we have to pay attention to. We can easily get to these settings and properties by navigating to Components | Entities within a solution and selecting an entity from the list. For the account entity, we will get a screen similar to the following screenshot:

The settings are structured in two main tabs, with various categories on each tab. We will take a look at each set of settings and properties individually in the next sections.

Entity definition

This area of the General tab groups together general properties and settings related to entity naming, ownership, and descriptions. Once an entity is created, the Name value remains fixed and cannot be modified. If the internal Name field needs to be changed, a new entity with the new Name field must be created.

Areas that display this entity

This section sets the visibility of this entity. An entity can be made available in only one module or in several of the standard modules of the application. The account is a good example, as it is present in all three areas of the application.
Options for entity

The Options for Entity section contains a subset of sections with various settings and properties to configure the main behavior of the entity, such as whether the entity can be extended with business process flows, notes and activities, and auditing, as well as other settings. Pay close attention to the settings marked with a plus sign, as once these settings are enabled, they cannot be disabled. If you are not sure whether you need these features, disable them.

The Process section allows you to enable the entity for business process flows. When enabling an entity for business process flows, specific fields to support this functionality are created. For this reason, once an entity is enabled for business process flows, it cannot be disabled at a later time.

In the communication and collaboration area, we can enable the use of notes, related activities, and connections, as well as enable the sending of e-mails and the use of queues on the entity. Enabling these configurations creates the required fields and relationships in the system, and you cannot disable them later. In addition, you can enable the entity for mail merge, for use with access teams, and also for document management. Enabling an entity for document management allows you to store documents related to the records of this type in SharePoint, if the organization is configured to integrate with SharePoint.

The data services section allows you to enable the quick create forms for this entity's records as well as to enable or disable duplicate detection and auditing. When you are enabling auditing, auditing must also be enabled at the organization level; auditing is a two-step process.

The next subsections deal with Outlook and mobile access. Here, we can define whether the entity can be accessed from various mobile devices as well as Outlook, and whether the access is read-only or read/write on tablets. The last section allows us to define a custom help section for a specific entity.
"Custom help must be enabled at the organization level first. Primary field settings The Primary Field settings tab contains the configuration properties for the entity's primary field. Each entity in the Dynamics CRM platform is defined by a primary field. This field can only be a text field, and the size can be customized as needed. The display name can be adjusted as needed. Also, the requirement level can be selected from one of the three values: optional, business-recommended, or business-required. When it is marked as business-required, the system will require users to enter a value if they are creating or making changes to an entity record form. The primary fields are also presented for customization in the entity field's listing. Business versus custom entities As mentioned previously, there are two types of customizable entities in Dynamics CRM. They are business entities and custom entities. Business entities are customizable entities that are created by Microsoft and come as part of the default solution package. They are part of the three modules: sales, service, and marketing. Custom entities are all the new entities that are being created as part of the customization and platform extending process. Business entities Business entities are part of the default customization provided with the application by Microsoft. They are either grouped into one of the three modules of functionality or are spread across all three. For example, the account and contact entities are present in all the modules, while the case entity belongs to the service module. "Some other business entities are opportunity, lead, marketing list, and so on. Most of the properties of business entities are customizable in Dynamics CRM. However, there are certain items that are not customizable across these entities. These are, in general, the same type of customizations that are not changeable "when creating a custom entity. 
For example, the entity internal name (the schema name) cannot be changed once an entity has been created. In addition, the primary field properties cannot be modified once an entity is created.

Custom entities

All new entities created as part of a customization and implemented in Dynamics CRM are custom entities. When creating a new custom entity, we have the freedom to configure all the settings and properties as needed from the beginning. We can use a naming convention that makes sense to the user and generate all the messages from the beginning, taking advantage of this name. A custom entity can be assigned by default to be displayed in one or more of the three main modules or in the settings and help section.

If a new module is created and custom entities need to be part of this new module, we can achieve this by customizing the application navigation, commonly referred to as the application sitemap. While customizing the application navigation might not be such a straightforward process, there are tools released to the community that make this job a lot easier and more visual. The default method to customize the navigation is described in detail in the SDK, and it involves exporting a solution with the navigation sitemap configuration, modifying the XML data, and reimporting the updated solution.

Extending entities

Irrespective of whether we want to extend a customizable business entity or a custom entity, the process is similar. We extend entities by creating new entity forms, views, charts, relationships, and business rules. Starting with Dynamics CRM 2015, entities configured for hierarchical relationships also support the creation and visualization of hierarchies through hierarchy settings. We will be taking a look at each of these options in detail in the next sections.

Entity forms

Entities in Dynamics CRM can be accessed from various parts of the system, and their information can be presented in various formats.
This feature contributes to the 360-degree view of customer data. In order to enable this functionality, the entities in Dynamics CRM present a variety of standard views that are available for customization. These include standard entity forms, quick create forms, and quick view forms. In addition, for mobile devices, we can customize mobile forms.

Form types

With the current version of Dynamics CRM 2015, most of the updated entities now have four different form types, as follows:

- The main form
- The mobile form
- The quick create form
- The quick view form

Various other forms can be created on an entity, either from scratch or by opening an existing form and saving it with a new name. When complex forms need to be created, in many circumstances it is much easier to start from an existing entity form rather than recreating everything. We have role-based forms, which change based on the user's security role, and we can also have more than one form available for users to select from. We can customize which form is presented to the user based on specific form rules or other business requirements.

It is a good practice to define a fallback form for each entity and to give all users view permissions to this form. Once more than one main form is created for an entity, you can define the order in which the forms are presented based on permissions. If the user does not have access to any of the higher-precedence forms, they will be able to access the fallback form.

Working with contingency forms is quite similar; here, a form is defined to be available to users who cannot access any other forms on an entity. The approach for configuring this is a little different though. You create a form with minimal information being displayed on it. Only assign the system administrator role to this form, and select it to be enabled as a fallback. With this, you specify a form that will not be visible to anybody other than the system administrator.
In addition, configuring the form in this manner also makes it available to users whose security roles do not have a form specified. With such a configuration, if a user is added to a restrictive group that does not allow them to see most forms, they will still have this one form available.

The main form

The main form is the default form associated with an entity. This form will be available by default when you open a record. There can be more than one main form, and these forms can be configured to be available to various security roles. A role must have at least one form available to it. If more than one form is available for a specific role, the users will be given the option to select the form they want to use to visualize a record. Forms that are available for various roles are called role-based forms. As an example, the human resources role can have a specific view of an account, showing more information than a form available for a sales role. At the time of editing, the main form of an entity will look similar to the following screenshot:

A mobile form

A mobile form is a stripped-down form that is available for mobile devices with small screens. When customizing mobile forms, you should pay attention not only to the fact that a small screen can only render so much before extensive scrolling becomes exhausting, but also to the fact that most mobile devices transfer data wirelessly and, as such, the amount of data should be limited. At the time of editing, the mobile entity form looks similar to the Account Mobile form shown in the following screenshot. It is basically just a listing of the fields that are available and the order in which they are presented to the user.

The quick create form

The quick create form, while serving a different purpose than quick view forms, is confined to the same minimalistic approach.
Of course, a system customizer is not necessarily limited to a certain amount of data on these forms, but they should be mindful of where these forms are used and how much real estate is dedicated to them. In a quick create form, the minimal amount of data to be added is the required fields. In order to save a new record, all business-required fields must be filled in; as such, they should be added to the quick create form. Quick create forms are created in the same way as any other type of form. In the solution package, navigate to entities, select the entity in which you want to customize an existing quick create form or add a new one, and expand the forms section; you will see all the existing forms for the specific entity. Here, you can select the form you want to modify or click on New to create a new one. Once the form is open for editing, the process of customizing it is exactly the same for all forms. You can add or remove fields, customize labels, rearrange fields on the form, and so on. In order to remind the customizer that this is a quick create form, a minimalistic three-column grid is provided by default for this type of form in edit mode, as shown in the following screenshot:

Pay close attention to the fact that you can add only a limited set of controls to a quick create form. Items such as iframes and sub-grids are not available. That's not to say that the layout cannot be changed; you can be as creative as needed when customizing the quick create view. Once you have created the form, save and publish it. Since we created a relationship between the account and the project earlier, we can add a grid view to the account displaying all the related child projects. Now, navigating to an account, we can quickly add a new child project by going to the project's grid view and clicking on the plus symbol to add a project. This will launch the quick create view of the project we just customized.
This is how the project window will look:

As you can see in the previous screenshot, the quick create view is displayed as an overlay over the main form. For this reason, the amount of data should be kept to a minimum. This type of form is not meant to replace a full-fledged form but to allow a user to create a new record type with minimal inputs and with no navigation to other records. Another way to access the quick create view for an entity is by clicking on the Create button situated at the top-right corner of most Dynamics CRM pages, right before the field that displays your username. This presents the user with the option to create common out-of-the-box record types available in the system, as seen in the following screenshot:

Selecting any one of the Records options presents the quick create view. If you opt to create activities in this way, you will not be presented with a quick create form; rather, you will be taken to the full activity form. Once a quick create form record is created in the system, the quick create form closes and a notification is displayed to the user with an option to navigate to the newly created record. This is how the final window should look:

The quick view form

The quick view form is a feature added with Dynamics CRM 2013 that allows system customizers to create a minimalistic view to be presented in a related record form. This form presents a summary of a record in a condensed format that allows you to insert it into a related record's form. The process to use a quick view form comprises the following two steps:

Create the quick view form for an entity
Add the quick view form to the related record

The process of creating a quick view form is similar to the process of creating any other form. The only requirement here is to keep the amount of information minimal, in order to avoid taking up too much real estate on the related record form.
The following screenshot shows the standard Account quick view form:

A very good example is the quick view form for the account entity. This view is created by default in the system. It only includes the account name, e-mail, and phone information, as well as a grid of recent cases and recent activities. We can use this view in a custom project entity. In the project's main form, add a lookup field to define the account related to the project. In the project's form customization, add a Quick View Form tab from the ribbon, as shown in the following screenshot:

Once you add a Quick View Form tab, you are presented with a Quick View Control Properties window. Here, define the name and label for the control and whether you want the label to be displayed on the form. In addition, on this form, you get to define the rules on what is to be displayed. In the Data Source section, select Account in the Lookup Field and Related Entity dropdown lists and, in the Quick View Form dropdown list, select the account card form. This is the name of the account's quick view form defined in the system. The following screenshot shows the Data Source configuration and the Selected quick view forms field:

Once complete, save and publish the form. Now, if we navigate to a project record, we can select the related account, and the quick view will automatically be displayed on the project form, as shown in the next screenshot:

The default quick view form created for the account entity is now displayed on the project form with all the specified account-related details. This way, any updates to the account are immediately reflected in the project form. Taking this approach, it is now much easier to display all the needed information on the same screen so that the user does not have to navigate away and click through a maze to get to all the data needed.

Summary

Throughout this chapter, we looked at the main component of the three system modules: an entity.
We defined what an entity is and we looked at what an entity is composed of. Then, we looked at each of the components in detail and discussed ways in which we can customize entities and extend the system. We investigated ways to visually represent the data related to entities and how to relate entities for data integrity. We also looked at how to enhance entity behavior with business rules, and the limitations that business rules have versus more advanced customizations using scripts or other developer-specific methods. The next chapter will take you into the business aspect of the Dynamics CRM platform, with an in-depth look at all the available business processes. We will revisit business rules, and we will take a look at other ways to enforce business-specific rules and processes using the wizard-driven customizations available with the platform.
Packt
30 Dec 2014
25 min read
A ride through world's best ETL tool – Informatica PowerCenter

In this article, by Rahul Malewar, author of the book Learning Informatica PowerCenter 9.x, we will go through the basics of Informatica PowerCenter. Informatica Corporation (Informatica), a multi-million dollar company incorporated in February 1993, is an independent provider of enterprise data integration and data quality software and services. The company provides a variety of enterprise data integration products, which include PowerCenter, PowerExchange, enterprise data integration, data quality, master data management, business-to-business (B2B) data exchange, application information lifecycle management, complex event processing, ultra messaging, and cloud data integration. Informatica PowerCenter is the most widely used Informatica tool across the globe for various data integration processes. It helps integrate data from almost any business system in almost any format, and this flexibility to handle almost any data makes it the most widely used tool in the data integration world.

Informatica PowerCenter architecture

PowerCenter has a service-oriented architecture that provides the ability to scale services and share resources across multiple machines. This lets you access a single licensed copy of the software installed on a remote machine from multiple machines. High-availability functionality helps minimize service downtime due to unexpected failures or scheduled maintenance in the PowerCenter environment. The Informatica architecture is divided into two sections: server and client. The server is the basic administrative unit of Informatica, where we configure all services, create users, and assign authentication. The repository, nodes, the Integration Service, and the code page are some of the important services we configure while working on the server side of Informatica PowerCenter. The client is the graphical interface provided to the users.
The client includes PowerCenter Designer, PowerCenter Workflow Manager, PowerCenter Workflow Monitor, and PowerCenter Repository Manager. The best place to download the Informatica software for training purposes is the EDelivery (www.edelivery.com) website of Oracle. Once you download the files, start extracting the zipped files. After you finish the extraction, install the server first and then the client part of PowerCenter. For the installation of Informatica PowerCenter, the minimum requirement is to have a database installed on your machine, because Informatica uses space in the Oracle database to store system-related information and the metadata of the code that you develop in the client tool.

Informatica PowerCenter client tools

The Informatica PowerCenter Designer client tool is where you work on source files and source tables, and similarly on targets. The Designer tool allows you to import or create flat files and relational database tables. Informatica PowerCenter lets you work on both types of flat files, that is, delimited and fixed-width files. In delimited files, the values are separated from each other by a delimiter. Any character or number can be used as the delimiter, but usually, for better interpretation, we use special characters. In delimited files, the width of each field is not mandatory, as the values are separated by the delimiter. In fixed-width files, the width of each field is fixed, and the values are separated from each other by the fixed size of the columns defined. There can be issues in extracting the data if the size of each column is not maintained properly. The PowerCenter Designer tool allows you to create mappings using sources, targets, and transformations. Mappings contain sources, targets, and transformations linked to each other through links. A group of transformations that can be reused is called a mapplet. Mapplets are another important aspect of the Informatica tool.
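The difference between the delimited and fixed-width flat files described above can be sketched in a few lines of Python. This is only an illustrative analogy (Informatica handles both formats through the Designer's file import wizard), and the sample records and column widths here are made up:

```python
import csv
import io

# Hypothetical sample data illustrating the two flat-file styles.
delimited = io.StringIO("id,name,salary\n1,Alice,5000\n2,Bob,6000\n")
fixed_width = io.StringIO("1 Alice 5000\n2 Bob   6000\n")  # widths: 2, 6, 4

# Delimited: fields are split on the delimiter, so field widths don't matter.
rows = list(csv.reader(delimited))
header, data = rows[0], rows[1:]

# Fixed width: each field occupies a fixed slice of the line, so the column
# sizes must be maintained exactly or extraction breaks.
widths = [(0, 2), (2, 8), (8, 12)]
fixed_rows = [[line[a:b].strip() for a, b in widths]
              for line in fixed_width.read().splitlines()]

print(data)        # [['1', 'Alice', '5000'], ['2', 'Bob', '6000']]
print(fixed_rows)  # same records, recovered by slicing instead of splitting
```

Note how the fixed-width parse depends entirely on the declared widths, which is why the text above warns that inconsistent column sizes cause extraction issues.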
Transformations are the most important aspect of Informatica; they allow you to manipulate the data based on your requirements. There are various types of transformations available in Informatica PowerCenter, and every transformation performs a specific functionality.

Various transformations in Informatica PowerCenter

The following are the various transformations in Informatica PowerCenter:

Expression transformation is used for row-wise manipulation. For any type of manipulation you wish to do on an individual record, use the Expression transformation. It accepts row-wise data, manipulates it, and passes it to the target. The transformation receives data on input ports and sends data out on output ports. Use the Expression transformation for any row-wise calculation, for example, to concatenate names, get a total salary, or convert strings to upper case.

Aggregator transformation is used for calculations using aggregate functions on a column, as opposed to the Expression transformation, which is used for row-wise manipulation. You can use aggregate functions, such as SUM, AVG, MAX, and MIN, in the Aggregator transformation. When you use the Aggregator transformation, the Integration Service stores the data temporarily in cache memory. The cache is needed because data flows row-wise in Informatica, while the calculations required in the Aggregator transformation are column-wise. Unless we store the data temporarily in a cache, we cannot perform the aggregate calculations to get the result. Using the Group By option in the Aggregator transformation, you can get the result of an aggregate function per group. It is always recommended to pass sorted input to the Aggregator transformation, as this enhances performance. When you pass sorted input, the Integration Service stores less data in the cache.
When you pass unsorted data, the Aggregator transformation stores all the data in the cache, which takes more time. When you pass sorted data, the Aggregator transformation stores comparatively less data in the cache and passes the result of each group as soon as the data for that group has been received. Note that the Aggregator transformation does not sort the data. If you have unsorted data, use the Sorter transformation to sort the data first and then pass it to the Aggregator transformation.

Sorter transformation is used to sort the data in ascending or descending order based on a single key or multiple keys. Apart from ordering the data, you can also use the Sorter transformation to remove duplicates from the data using the distinct option in its properties. The Sorter can remove duplicates only if the complete record is a duplicate, not just a particular column.

Filter transformation is used to remove unwanted records from the mapping. You define the filter condition in the Filter transformation; based on it, records are either rejected or passed further into the mapping. The default condition in the Filter transformation is TRUE. Each record that returns True against the condition is allowed to pass; each record that returns False is dropped. It is always recommended to use the Filter transformation as early as possible in the mapping for better performance.

Router transformation is a single-input-group, multiple-output-group transformation. A Router can be used in place of multiple Filter transformations. The Router transformation accepts the data once through its input group and, based on the output groups you define, sends the data to multiple output ports. You need to define a filter condition in each output group. It is always recommended to use a Router in place of multiple Filters in the mapping to enhance performance.
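The reason sorted input speeds up the Aggregator, described above, is the same reason Python's itertools.groupby requires pre-sorted input: once a group's rows are contiguous, the group's aggregate can be emitted as soon as the group ends, so only one group needs to be held in memory at a time. A minimal sketch with hypothetical (dept, salary) rows:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical rows, already sorted on the group key "dept" -- mirroring
# the sorted-input recommendation for the Aggregator transformation.
rows = [("HR", 3000), ("HR", 4000), ("IT", 5000), ("IT", 7000)]

# Because the input is sorted, each group's SUM is emitted as soon as that
# group's rows end; with unsorted input, everything would have to be cached.
totals = {dept: sum(sal for _, sal in grp)
          for dept, grp in groupby(rows, key=itemgetter(0))}

print(totals)  # {'HR': 7000, 'IT': 12000}
```

Like the Aggregator, groupby does not sort the data itself; feeding it unsorted rows would split a key into multiple fragments, which is why a Sorter step must come first.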
Rank transformation is used to get the top or bottom specific number of records based on a key. When you create a Rank transformation, a default output port, RANKINDEX, comes with the transformation. It is not mandatory to use the RANKINDEX port.

Sequence Generator transformation is used to generate a sequence of unique numbers. The unique values are generated based on the properties defined in the Sequence Generator transformation: the start value, the increment-by value, and the end value. The Sequence Generator transformation has only two ports, NEXTVAL and CURRVAL, both of which are output ports; it has no input ports. You cannot add or delete any port in a Sequence Generator. It is recommended to always use the NEXTVAL port first and, only if it is already utilized, use the CURRVAL port. You can define the value of CURRVAL in the properties of the Sequence Generator transformation.

Joiner transformation is used to join two heterogeneous sources, although you can join data from the same source type as well. The basic criterion for joining the data is a matching column in both sources. The Joiner transformation has two pipelines: one is called master and the other is called detail. We do not have left or right joins as we have in SQL databases. It is always recommended to make the table with the smaller number of records the master and the other one the detail. This is because the Integration Service picks up the data from the master source and scans the corresponding records in the detail table; with fewer records in the master table, fewer scans are needed, which enhances performance. The Joiner transformation has four types of joins: normal join, full outer join, master outer join, and detail outer join.

Union transformation is used to merge data from multiple sources. Union is a multiple-input, single-output transformation. This is the opposite of the Router transformation, which we discussed earlier.
The basic criterion for using the Union transformation is that the data coming from the multiple sources must have matching data types; otherwise, the Union transformation will not work. The Union transformation merges the data coming from multiple sources and does not remove duplicates, that is, it acts like a UNION ALL SQL statement. As mentioned earlier, Union requires data coming from multiple sources; it reads the data concurrently from those sources and processes it. You can use heterogeneous sources to merge the data using the Union transformation.

Source Qualifier transformation acts as a virtual source in Informatica. When you drag a relational table or flat file into the Mapping Designer, a Source Qualifier transformation comes along. The Source Qualifier is the point where Informatica processing actually starts; the extraction process starts from the Source Qualifier.

Lookup transformation is used to look up a source, Source Qualifier, or target to get the relevant data. You can look up flat files or relational tables. The Lookup transformation works along similar lines to the Joiner, with a few differences; for example, the Lookup does not require two sources. Lookup transformations can be connected or unconnected. The Lookup transformation extracts the data from the lookup table or file based on the lookup condition. When you create the Lookup transformation, you can configure it to cache the data. Caching makes the processing faster, since the data is stored internally after the cache is created. Once you select to cache the data, the Lookup transformation caches the data from the file or table once and then, based on the condition defined, sends the output value. Since the data is stored internally, the processing becomes faster, as it does not require checking the lookup condition against the file or database. The Integration Service queries the cache memory instead of checking the file or table to fetch the required data.
The cache is created automatically and is also deleted automatically once the processing is complete. The Lookup transformation has four different types of ports. Input ports (I) receive data from other transformations; these ports are used in the lookup condition, and you need at least one input port. Output ports (O) pass the data out of the Lookup transformation to other transformations. Lookup ports (L) are the ports for which you wish to bring data into the mapping; each column is assigned as a lookup and output port when you create the Lookup transformation. If you delete a lookup port from a flat file lookup source, the session will fail. If you delete a lookup port from a relational lookup table, the Integration Service extracts the data only for the remaining lookup ports, which helps reduce the data extracted from the lookup source. The return port (R) is only used in an unconnected Lookup transformation; it indicates which data you wish to return from the lookup. You can define only one port as the return port, and it is not used in connected Lookup transformations.

A cache is temporary memory that is created when you execute a process. It is created automatically when the process starts and is deleted automatically once the process is complete. The amount of cache memory is decided based on the properties you define at the transformation level or session level. You usually leave the property at its default value so that the cache can grow as required. If the size required for caching the data is more than the cache size defined, the process fails with an overflow error. There are different types of caches available for the Lookup transformation. You can set the session property to create the cache either sequentially or concurrently. When you select to create the cache sequentially, the Integration Service caches the data row-wise as the records enter the Lookup transformation.
When the first record enters the Lookup transformation, the lookup cache gets created and stores the matching record from the lookup table or file. This way, the cache stores only matching data, which saves cache space by not storing unnecessary data. When you select to create the cache concurrently, the Integration Service does not wait for the data to flow from the source; it caches the complete lookup data first and only then allows the data to flow from the source. With a concurrent cache, performance is better than with a sequential cache, since the scanning happens internally using the data stored in the cache.

You can configure the cache to save the data permanently. By default, the cache is created as non-persistent, that is, it will be deleted once the session run is complete. If the lookup table or file does not change across session runs, you can reuse the existing persistent cache.

A cache is said to be static if it does not change with the changes happening in the lookup table; the static cache is not synchronized with the lookup table. By default, the Integration Service creates a static cache. The lookup cache is created as soon as the first record enters the Lookup transformation, and the Integration Service does not update the cache while it is processing the data.

A cache is said to be dynamic if it changes with the changes happening in the lookup table; the dynamic cache is synchronized with the lookup table. You can choose to make the cache dynamic from the Lookup transformation properties. The lookup cache is created as soon as the first record enters the Lookup transformation, and the Integration Service keeps updating the cache while it is processing the data. The Integration Service marks a new row inserted into the dynamic cache as insert, a row that is updated as update, and every row with no change as unchanged.
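The dynamic cache behavior described above — keeping the cache in sync while marking each incoming row as insert, update, or unchanged — can be sketched in Python. This is a conceptual analogy, not Informatica's implementation, and the cache contents are hypothetical:

```python
# Hypothetical pre-cached lookup rows: key -> cached value.
cache = {"C1": "Acme"}

def dynamic_lookup(key, value):
    """Mimic a dynamic lookup cache: sync the cache and flag the row."""
    if key not in cache:
        cache[key] = value      # new key: cache it and mark insert
        return "insert"
    if cache[key] != value:
        cache[key] = value      # known key, changed value: mark update
        return "update"
    return "unchanged"          # known key, same value: no change

print(dynamic_lookup("C2", "Globex"))   # insert
print(dynamic_lookup("C1", "Acme"))     # unchanged
print(dynamic_lookup("C1", "AcmeCo"))   # update
```

A static cache, by contrast, would skip both assignment lines: it would answer lookups from the initial snapshot and never resynchronize during the run.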
Update Strategy transformation is used to INSERT, UPDATE, DELETE, or REJECT records based on a condition defined in the mapping. The Update Strategy transformation is mostly used when you design mappings for slowly changing dimensions (SCDs). When you implement an SCD, you actually decide how you wish to maintain historical data alongside the current data: no history, complete history, or partial history. You can either use the property defined in the session task or use the Update Strategy transformation. When you use the session task, you instruct the Integration Service to treat all records in the same way, that is, either insert, update, or delete them all. When you use the Update Strategy transformation in the mapping, the control is no longer with the session task; the Update Strategy transformation allows you to insert, update, delete, or reject each record based on the requirement. You need to use the following functions to perform the corresponding operations:

DD_INSERT: Use this when you wish to insert the records. It is also represented by the numeral 0.
DD_UPDATE: Use this when you wish to update the records. It is also represented by the numeral 1.
DD_DELETE: Use this when you wish to delete the records. It is also represented by the numeral 2.
DD_REJECT: Use this when you wish to reject the records. It is also represented by the numeral 3.

Normalizer transformation is used in place of the Source Qualifier transformation when you wish to read data from a COBOL copybook source. The Normalizer transformation is also used to convert column-wise data to row-wise data, similar to the transpose feature of MS Excel. You can use this feature if your source is a COBOL copybook file or relational database tables. The Normalizer transformation converts columns to rows and also generates an index for each converted row.

A stored procedure is a database component.
Informatica uses stored procedures in a way similar to database tables. Stored procedures are sets of SQL instructions that require a certain set of input values and, in return, produce an output value. The way you import or create database tables, you can import or create the stored procedure in the mapping. To use a stored procedure in a mapping, it must already exist in the database. Similar to the Lookup transformation, a stored procedure can also be a connected or unconnected transformation in Informatica. When you use a connected stored procedure, you pass the values to it through links; when you use an unconnected stored procedure, you pass the values using the :SP function.

Transaction Control transformation allows you to commit or roll back individual records based on a condition. By default, the Integration Service commits the data based on the properties you define at the session task level: using the commit interval property, the Integration Service commits or rolls back the data in the target. Suppose you define a commit interval of 10,000; the Integration Service will commit the data after every 10,000 records. When you use the Transaction Control transformation, you get control at each record to commit or roll back. You need to define the condition in the expression editor of the Transaction Control transformation. When you run the process, the data enters the Transaction Control transformation row-wise; the transformation evaluates each row and, based on that, commits or rolls back the data.

Classification of Transformations

The transformations we discussed are classified into two categories: active/passive and connected/unconnected. The active/passive classification of transformations is based on the number of records at the input and output ports of the transformation.
If a transformation does not change the number of records between its input and output ports, it is said to be a passive transformation. If it changes the number of records, it is said to be an active transformation. Also, if a transformation changes the sequence of records passing through it, it is an active transformation, as in the case of the Union transformation. A transformation is said to be connected if it is connected to a source, a target, or another transformation by at least one link; if it is not connected by any link, it is classified as unconnected. Only the Lookup and Stored Procedure transformations can be connected or unconnected; all other transformations are connected.

Advanced features of the Designer screen

Among the advanced features of the PowerCenter Designer tool, the debugger helps you debug mappings to find errors in your code. Informatica PowerCenter provides this utility so that you can easily find the issue in the mapping you created; using the debugger, you can see the flow of every record across the transformations. Another feature is the target load plan, a functionality that allows you to load data into multiple targets in the same mapping while maintaining their constraints. Reusable transformations allow you to reuse transformations across multiple mappings; just as sources and targets are reusable components, transformations can be reused too. When you work on any technology, it is always advised that your code be dynamic: use hardcoded values as little as possible and instead use parameters or variables in your code, so you can easily pass these values without frequently changing the code. This functionality is achieved by using a parameter file in Informatica. The value of a variable can change between session runs.
The value of a parameter remains constant across session runs. The difference is subtle, so you should define parameters or variables properly as per your requirements. Informatica PowerCenter allows you to compare objects present within a repository. You can compare sources, targets, transformations, mapplets, and mappings in PowerCenter Designer under Source Analyzer, Target Designer, Transformation Developer, Mapplet Designer, and Mapping Designer, respectively. You can compare objects in the same repository or across multiple repositories.

The tracing level in Informatica defines the amount of data you wish to write to the session log when you execute the workflow. The tracing level is a very important aspect of Informatica, as it helps in analyzing errors, and it is very helpful in finding bugs in the process. You can define the tracing level in every transformation; the option is present in every transformation's properties window. There are four types of tracing level available:

Normal: When you set the tracing level to normal, Informatica stores status information, information about errors, and information about skipped rows. You get detailed information, but not at the individual row level.
Terse: When you set the tracing level to terse, Informatica stores error information and information about rejected records. The terse tracing level occupies less space than normal.
Verbose initialization: When you set the tracing level to verbose initialization, it stores process details related to startup, details about index and data files created, and more details of the transformation process, in addition to the details stored at the normal level. This tracing level takes more space than normal and terse.
Verbose data: This is the most detailed tracing level. It occupies more space and takes a longer time than the other three. It stores row-level data in the session log and writes truncation information whenever it truncates the data.
It also writes the data to the error log if you enable row error logging. The default tracing level is normal; you can change it to terse to enhance performance. The tracing level can be defined at the individual transformation level, or you can override it by defining it at the session level.

Informatica PowerCenter Workflow Manager

The Workflow Manager screen is the second and last phase of our development work. In the Workflow Manager, session tasks and workflows are created, which are used to execute mappings. The Workflow Manager screen also allows you to work with various connections, such as relational and FTP connections. Basically, the Workflow Manager contains a set of instructions that we define as a workflow, and the basic building blocks of a workflow are tasks. Just as we have multiple transformations in the Designer screen, we have multiple tasks in the Workflow Manager screen. When you create a workflow, you add tasks to it as per your requirement and execute the workflow to see its status in the monitor. A workflow is a combination of multiple tasks connected with links that trigger in the proper sequence to execute a process. Every workflow contains a start task along with other tasks: when you execute the workflow, you actually trigger the start task, which in turn triggers the other tasks connected in the flow. Every task performs a specific function, so you use the task that matches the functionality you need to achieve.

Various tasks in Workflow Manager

The following are the tasks in Workflow Manager:

Session task: Used to execute a mapping; each session task can execute a single mapping. You need to define the path/connection of the source and target used in the mapping, so that the session can extract data from the defined path and send it to the mapping for processing.

Email task: Used to send success or failure email notifications. You can configure your Outlook or mailbox with the email task to send the notification directly.
Command task: Used to execute Unix scripts/commands or Windows commands.

Timer task: Used to add a time gap or delay between two tasks. The timer task has properties related to absolute time and relative time.

Assignment task: Used to assign a value to a workflow variable.

Control task: Used to control the flow of the workflow by stopping or aborting it in case of an error. You can control the flow of the complete workflow using the control task.

Decision task: Used to check the status of multiple tasks and hence control the execution of the workflow. A link, as against the decision task, can only check the status of the previous task.

Event wait task: Used to wait for a particular event to occur. It is usually used as a file watcher: using the event wait task, we can keep looking for a particular file and then trigger the next task.

Event raise task: Used to trigger a particular event defined in the workflow.

Advanced Workflow Manager

The Workflow Manager screen has some very important features, namely scheduling and incremental aggregation, which make processing data easier and more convenient. Scheduling allows you to schedule a workflow at a specified time so that it runs when desired; you need not run the workflow manually every time. Incremental aggregation and partitioning are advanced features that allow you to process data faster. When you run a workflow, the Integration Service extracts the data row by row from the source path/connection you defined in the session task and makes it flow through the mapping. The data reaches the target through the transformations you defined in the mapping. Data always flows in a row-wise manner in Informatica, whatever your calculation or manipulation may be: if you have 10 records in the source, there will be 10 source-to-target flows while the process is executed.
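The row-by-row flow described above can be pictured with a small, purely illustrative Python sketch (an analogy for the concept, not Informatica code): each of the 10 source records passes through the "transformations" one at a time on its way to the target.

```python
def source():
    # Yield records one at a time, the way the Integration Service
    # extracts rows from the source connection.
    for i in range(1, 11):
        yield {"id": i, "amount": i * 10}

def expression_transformation(rows):
    # A passive "transformation": one row in, one row out.
    for row in rows:
        row["amount_with_tax"] = row["amount"] * 1.1
        yield row

def filter_transformation(rows):
    # An active "transformation": it may drop rows, changing the row count.
    for row in rows:
        if row["amount"] >= 30:
            yield row

# The "target" simply collects whatever rows reach it.
target = list(filter_transformation(expression_transformation(source())))
print(len(target))  # 8 of the 10 source rows survive the filter
```

Note how the active stage changes the number of rows between its input and output, while the passive stage does not, which is exactly the active/passive distinction described earlier.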
Informatica PowerCenter Workflow Monitor

The Workflow Monitor screen allows you to monitor the workflows executed in the Workflow Manager. It lets you check the status and log files for a workflow; using the generated logs, you can easily find and rectify errors. The Workflow Monitor also shows statistics for the number of records extracted from the source and the number of records loaded into the target, as well as statistics on error records and bad records.

Informatica PowerCenter Repository Manager

The Repository Manager screen is the fourth client screen, which is basically used for migration (deployment) purposes. This screen is also used for some administration-related activities, such as configuring the server with the client and creating users.

Performance tuning in Informatica PowerCenter

Performance tuning covers the optimization of the various components of the Informatica PowerCenter tool, such as sources, targets, mappings, sessions, and systems. At a high level, performance tuning involves two stages: finding the issues, called bottlenecks, and resolving them. Informatica PowerCenter has features such as pushdown optimization and partitioning for better performance. With defined steps and coding best practices, performance can be enhanced drastically.

Slowly Changing Dimensions

Using your understanding of the different client tools, you can implement the data warehousing concept called slowly changing dimensions (SCD). Informatica PowerCenter provides wizards that allow you to easily create the different types of SCDs, that is, SCD1, SCD2, and SCD3.

Type 1 Dimension mapping (SCD1): Keeps only current data and does not maintain historical data.

Type 2 Dimension/Version Number mapping (SCD2): Keeps current as well as historical data in the table. SCD2 allows you to insert new and changed records using a new column (PM_VERSION_NUMBER) that maintains a version number in the table to track the changes.
We use a new column, PM_PRIMARYKEY, to maintain the history.

Type 2 Dimension/Flag mapping: Keeps current as well as historical data in the table. SCD2 allows you to insert new and changed records using a new column (PM_CURRENT_FLAG) that maintains a flag in the table to track the changes. We use a new column, PRIMARY_KEY, to maintain the history.

Type 2 Dimension/Effective Date Range mapping: Keeps current as well as historical data in the table. SCD2 allows you to insert new and changed records using two new columns (PM_BEGIN_DATE and PM_END_DATE) that maintain a date range in the table to track the changes. We use a new column, PRIMARY_KEY, to maintain the history.

Type 3 Dimension mapping: Keeps current as well as historical data in the table. We maintain only partial history, by adding a new column.

Summary

With this, we have discussed the complete PowerCenter tool in brief. PowerCenter is a good fit for data of any size and type, and it provides compatibility with a wide range of files and databases for processing. The available transformations allow you to manipulate any type of data in any form you wish, and the advanced features simplify your work by providing convenient options. Learned properly, PowerCenter can also offer a great career path, as the Informatica PowerCenter tool is in high demand in the job market and is one of the better-paid technologies in IT. For any help with installation or any issues related to PowerCenter, you can reach me at info@dw-learnwell.com.

Resources for Article: Further resources on this subject: Building Mobile Apps [article] Adding a Geolocation Trigger to the Salesforce Account Object [article] Introducing SproutCore [article]

Packt
30 Dec 2014
23 min read

How Vector Features are Displayed

In this article by Erik Westra, author of the book Building Mapping Applications with QGIS, we will learn how QGIS symbols and renderers are used to control how vector features are displayed on a map. In addition to this, we will also see how symbol layers work. The features within a vector map layer are displayed using a combination of renderer and symbol objects. The renderer chooses which symbol is to be used for a given feature, and the symbol does the actual drawing. There are three basic types of symbols defined by QGIS: Marker symbol: This displays a point as a filled circle. Line symbol: This draws a line using a given line width and color. Fill symbol: This draws the interior of a polygon with a given color. These three types of symbols are implemented as subclasses of the qgis.core.QgsSymbolV2 class: qgis.core.QgsMarkerSymbolV2, qgis.core.QgsLineSymbolV2, and qgis.core.QgsFillSymbolV2. You might be wondering why all these classes have "V2" in their name. This is a historical quirk of QGIS. Earlier versions of QGIS supported both an "old" and a "new" system of rendering, and the "V2" naming refers to the new rendering system. The old rendering system no longer exists, but the "V2" naming continues to maintain backward compatibility with existing code. Internally, symbols are rather complex, using "symbol layers" to draw multiple elements on top of each other. In most cases, however, you can make use of the "simple" version of the symbol. This makes it easier to create a new symbol without having to deal with the internal complexity of symbol layers. For example:

symbol = QgsMarkerSymbolV2.createSimple({'width' : 1.0,
                                         'color' : "255,0,0"})

While symbols draw the features onto the map, a renderer is used to choose which symbol to use to draw a particular feature. In the simplest case, the same symbol is used for every feature within a layer.
This is called a single symbol renderer, and is represented by the qgis.core.QgsSingleSymbolRendererV2 class. Other possibilities include: Categorized symbol renderer (qgis.core.QgsCategorizedSymbolRendererV2): This renderer chooses a symbol based on the value of an attribute. The categorized symbol renderer has a mapping from attribute values to symbols. Graduated symbol renderer (qgis.core.QgsGraduatedSymbolRendererV2): This type of renderer has a series of ranges of attribute values, and maps each range to an appropriate symbol. Using a single symbol renderer is very straightforward:

symbol = ...
renderer = QgsSingleSymbolRendererV2(symbol)
layer.setRendererV2(renderer)

To use a categorized symbol renderer, you first define a list of qgis.core.QgsRendererCategoryV2 objects, and then use that to create the renderer. For example:

symbol_male = ...
symbol_female = ...

categories = []
categories.append(QgsRendererCategoryV2("M", symbol_male, "Male"))
categories.append(QgsRendererCategoryV2("F", symbol_female, "Female"))

renderer = QgsCategorizedSymbolRendererV2("", categories)
renderer.setClassAttribute("GENDER")
layer.setRendererV2(renderer)

Notice that the QgsRendererCategoryV2 constructor takes three parameters: the desired value, the symbol to use, and the label used to describe that category. Finally, to use a graduated symbol renderer, you define a list of qgis.core.QgsRendererRangeV2 objects and then use that to create your renderer. For example:

symbol1 = ...
symbol2 = ...
ranges = []
ranges.append(QgsRendererRangeV2(0, 10, symbol1, "Range 1"))
ranges.append(QgsRendererRangeV2(11, 20, symbol2, "Range 2"))

renderer = QgsGraduatedSymbolRendererV2("", ranges)
renderer.setClassAttribute("FIELD")
layer.setRendererV2(renderer)

Working with symbol layers

Internally, symbols consist of one or more symbol layers that are displayed one on top of the other to draw the vector feature. The symbol layers are drawn in the order in which they are added to the symbol. So, in this example, Symbol Layer 1 will be drawn before Symbol Layer 2. This has the effect of drawing the second symbol layer on top of the first. Make sure you get the order of your symbol layers correct, or you may find a symbol layer completely obscured by another layer. While the symbols we have been working with so far have had only one layer, there are some clever tricks you can perform with multilayer symbols. When you create a symbol, it will automatically be initialized with a default symbol layer. For example, a line symbol (an instance of QgsLineSymbolV2) will be created with a single layer of type QgsSimpleLineSymbolLayerV2. This layer is used to draw the line feature onto the map. To work with symbol layers, you need to remove this default layer and replace it with your own symbol layer or layers. For example:

symbol = QgsSymbolV2.defaultSymbol(layer.geometryType())
symbol.deleteSymbolLayer(0)  # Remove default symbol layer.
symbol_layer_1 = QgsSimpleFillSymbolLayerV2()
symbol_layer_1.setFillColor(QColor("yellow"))

symbol_layer_2 = QgsLinePatternFillSymbolLayer()
symbol_layer_2.setLineAngle(30)
symbol_layer_2.setDistance(2.0)
symbol_layer_2.setLineWidth(0.5)
symbol_layer_2.setColor(QColor("green"))

symbol.appendSymbolLayer(symbol_layer_1)
symbol.appendSymbolLayer(symbol_layer_2)

The following methods can be used to manipulate the layers within a symbol:

symbol.symbolLayerCount(): This returns the number of symbol layers within this symbol.
symbol.symbolLayer(index): This returns the given symbol layer within the symbol. Note that the first symbol layer has an index of zero.
symbol.changeSymbolLayer(index, symbol_layer): This replaces a given symbol layer within the symbol.
symbol.appendSymbolLayer(symbol_layer): This appends a new symbol layer to the symbol.
symbol.insertSymbolLayer(index, symbol_layer): This inserts a symbol layer at a given index.
symbol.deleteSymbolLayer(index): This removes the symbol layer at the given index.

Remember that to use the symbol once you've created it, you create an appropriate renderer and then assign that renderer to your map layer. For example:

renderer = QgsSingleSymbolRendererV2(symbol)
layer.setRendererV2(renderer)

The following symbol layer classes are available for you to use:

QgsSimpleMarkerSymbolLayerV2: This displays a point geometry as a small colored circle.
QgsEllipseSymbolLayerV2: This displays a point geometry as an ellipse.
QgsFontMarkerSymbolLayerV2: This displays a point geometry as a single character. You can choose the font and character to be displayed.
QgsSvgMarkerSymbolLayerV2: This displays a point geometry using a single SVG format image.
QgsVectorFieldSymbolLayer: This displays a point geometry by drawing a displacement line. One end of the line is the coordinate of the point, while the other end is calculated using attributes of the feature.
QgsSimpleLineSymbolLayerV2: This displays a line geometry or the outline of a polygon geometry using a line of a given color, width, and style.
QgsMarkerLineSymbolLayerV2: This displays a line geometry or the outline of a polygon geometry by repeatedly drawing a marker symbol along the length of the line.
QgsSimpleFillSymbolLayerV2: This displays a polygon geometry by filling the interior with a given solid color and then drawing a line around the perimeter.
QgsGradientFillSymbolLayerV2: This fills the interior of a polygon geometry using a color or grayscale gradient.
QgsCentroidFillSymbolLayerV2: This draws a simple dot at the centroid of a polygon geometry.
QgsLinePatternFillSymbolLayer: This draws the interior of a polygon geometry using a repeated line. You can choose the angle, width, and color to use for the line.
QgsPointPatternFillSymbolLayer: This draws the interior of a polygon geometry using a repeated point.
QgsSVGFillSymbolLayer: This draws the interior of a polygon geometry using a repeated SVG format image.

These predefined symbol layers, either individually or in various combinations, give you enormous flexibility in how features are to be displayed. However, if these aren't enough for you, you can also implement your own symbol layers using Python. We will look at how this can be done later in this article.

Combining symbol layers

By combining symbol layers, you can achieve a range of complex visual effects. For example, you could combine an instance of QgsSimpleMarkerSymbolLayerV2 with a QgsVectorFieldSymbolLayer to display a point geometry using two symbols at once. One of the main uses of symbol layers is to draw different LineString or PolyLine symbols to represent different types of roads.
For example, you can draw a complex road symbol by combining multiple symbol layers. This effect is achieved using three separate symbol layers. Here is the Python code used to generate such a map symbol:

symbol = QgsLineSymbolV2.createSimple({})
symbol.deleteSymbolLayer(0)  # Remove default symbol layer.

symbol_layer = QgsSimpleLineSymbolLayerV2()
symbol_layer.setWidth(4)
symbol_layer.setColor(QColor("light gray"))
symbol_layer.setPenCapStyle(Qt.FlatCap)
symbol.appendSymbolLayer(symbol_layer)

symbol_layer = QgsSimpleLineSymbolLayerV2()
symbol_layer.setColor(QColor("black"))
symbol_layer.setWidth(2)
symbol_layer.setPenCapStyle(Qt.FlatCap)
symbol.appendSymbolLayer(symbol_layer)

symbol_layer = QgsSimpleLineSymbolLayerV2()
symbol_layer.setWidth(1)
symbol_layer.setColor(QColor("white"))
symbol_layer.setPenStyle(Qt.DotLine)
symbol.appendSymbolLayer(symbol_layer)

As you can see, you can set the line width, color, and style to create whatever effect you want. As always, you have to define the layers in the correct order, with the back-most symbol layer defined first. By combining line symbol layers in this way, you can create almost any type of road symbol that you want. You can also use symbol layers when displaying polygon geometries. For example, you can draw QgsPointPatternFillSymbolLayer on top of QgsSimpleFillSymbolLayerV2 to have repeated points on top of a simple filled polygon. Finally, you can make use of transparency to allow the various symbol layers (or entire symbols) to blend into each other. For example, you can create a pinstripe effect by combining two symbol layers, like this:

symbol = QgsFillSymbolV2.createSimple({})
symbol.deleteSymbolLayer(0)  # Remove default symbol layer.
symbol_layer = QgsGradientFillSymbolLayerV2()
symbol_layer.setColor2(QColor("dark gray"))
symbol_layer.setColor(QColor("white"))
symbol.appendSymbolLayer(symbol_layer)

symbol_layer = QgsLinePatternFillSymbolLayer()
symbol_layer.setColor(QColor(0, 0, 0, 20))
symbol_layer.setLineWidth(2)
symbol_layer.setDistance(4)
symbol_layer.setLineAngle(70)
symbol.appendSymbolLayer(symbol_layer)

The result is quite subtle and visually pleasing. In addition to changing the transparency for a symbol layer, you can also change the transparency for the symbol as a whole. This is done by using the setAlpha() method, like this:

symbol.setAlpha(0.3)

Note that setAlpha() takes a floating point number between 0.0 and 1.0, while the transparency of a QColor object, like the ones we used earlier, is specified using an alpha value between 0 and 255.

Implementing symbol layers in Python

If the built-in symbol layers aren't flexible enough for your needs, you can implement your own symbol layers using Python. To do this, you create a subclass of the appropriate type of symbol layer (QgsMarkerSymbolLayerV2, QgsLineSymbolV2, or QgsFillSymbolV2) and implement the various drawing methods yourself.
For example, here is a simple marker symbol layer that draws a cross for a Point geometry:

class CrossSymbolLayer(QgsMarkerSymbolLayerV2):
    def __init__(self, length=10.0, width=2.0):
        QgsMarkerSymbolLayerV2.__init__(self)
        self.length = length
        self.width = width

    def layerType(self):
        return "Cross"

    def properties(self):
        return {'length' : self.length,
                'width' : self.width}

    def clone(self):
        return CrossSymbolLayer(self.length, self.width)

    def startRender(self, context):
        self.pen = QPen()
        self.pen.setColor(self.color())
        self.pen.setWidth(self.width)

    def stopRender(self, context):
        self.pen = None

    def renderPoint(self, point, context):
        left = point.x() - self.length
        right = point.x() + self.length
        bottom = point.y() - self.length
        top = point.y() + self.length

        painter = context.renderContext().painter()
        painter.setPen(self.pen)
        painter.drawLine(left, bottom, right, top)
        painter.drawLine(right, bottom, left, top)

Using this custom symbol layer in your code is straightforward:

symbol = QgsMarkerSymbolV2.createSimple({})
symbol.deleteSymbolLayer(0)

symbol_layer = CrossSymbolLayer()
symbol_layer.setColor(QColor("gray"))

symbol.appendSymbolLayer(symbol_layer)

Running this code will draw a cross at the location of each point geometry. Of course, this is a simple example, but it shows you how to use custom symbol layers implemented in Python. Let's now take a closer look at the implementation of the CrossSymbolLayer class, and see what each method does:

__init__(): Notice how the __init__ method accepts parameters that customize the way the symbol layer works. These parameters, which should always have default values assigned to them, are the properties associated with the symbol layer.
If you want to make your custom symbol available within the QGIS Layer Properties window, you will need to register your custom symbol layer and tell QGIS how to edit the symbol layer's properties. We will look at this shortly.

layerType(): This method returns a unique name for your symbol layer.

properties(): This should return a dictionary that contains the various properties used by this symbol layer. The properties returned by this method will be stored in the QGIS project file, and used later to restore the symbol layer.

clone(): This method should return a copy of the symbol layer. Since we have defined our properties as parameters to the __init__ method, implementing this method simply involves creating a new instance of the class and copying the properties from the current symbol layer to the new instance.

startRender(): This method is called before the first feature in the map layer is rendered. It can be used to define any objects that will be required to draw the features. Rather than creating these objects each time, it is more efficient (and therefore faster) to create them only once to render all the features. In this example, we create the QPen object that we will use to draw the Point geometries.

stopRender(): This method is called after the last feature has been rendered. It can be used to release the objects created by the startRender() method.

renderPoint(): This is where all the work is done for drawing point geometries. As you can see, this method takes two parameters: the point at which to draw the symbol, and the rendering context (an instance of QgsSymbolV2RenderContext) to use for drawing the symbol. The rendering context provides various methods for accessing the feature being displayed, as well as information about the rendering operation, the current scale factor, and so on. Most importantly, it allows you to access the PyQt QPainter object needed to actually draw the symbol onto the screen.
The renderPoint() method is only used for symbol layers that draw point geometries. For line geometries, you should implement the renderPolyline() method, which has the following signature:

def renderPolyline(self, points, context):

The points parameter will be a QPolygonF object containing the various points that make up the LineString, and context will be the rendering context to use for drawing the geometry. If your symbol layer is intended to work with polygons, you should implement the renderPolygon() method, which looks like this:

def renderPolygon(self, outline, rings, context):

Here, outline is a QPolygonF object that contains the points that make up the exterior of the polygon, and rings is a list of QPolygonF objects that define the interior rings or "holes" within the polygon. As always, context is the rendering context to use when drawing the geometry. A custom symbol layer created in this way will work fine if you just want to use it within your own external PyQGIS application. However, if you want to use a custom symbol layer within a running copy of QGIS, and in particular, if you want to allow end users to work with the symbol layer using the Layer Properties window, there are some extra steps you will have to take, which are as follows: If you want the symbol to be visually highlighted when the user clicks on it, you will need to change your symbol layer's renderXXX() method to see if the feature being drawn has been selected by the user, and if so, change the way it is drawn. The easiest way to do this is to change the geometry's color. For example:

if context.selected():
    color = context.selectionColor()
else:
    color = self.color()

To allow the user to edit the symbol layer's properties, you should create a subclass of QgsSymbolLayerV2Widget, which defines the user interface to edit the properties.
For example, a simple widget for the purpose of editing the length and width of a CrossSymbolLayer can be defined as follows:

class CrossSymbolLayerWidget(QgsSymbolLayerV2Widget):
    def __init__(self, parent=None):
        QgsSymbolLayerV2Widget.__init__(self, parent)
        self.layer = None

        self.lengthField = QSpinBox(self)
        self.lengthField.setMinimum(1)
        self.lengthField.setMaximum(100)
        self.connect(self.lengthField,
                     SIGNAL("valueChanged(int)"),
                     self.lengthChanged)

        self.widthField = QSpinBox(self)
        self.widthField.setMinimum(1)
        self.widthField.setMaximum(100)
        self.connect(self.widthField,
                     SIGNAL("valueChanged(int)"),
                     self.widthChanged)

        self.form = QFormLayout()
        self.form.addRow('Length', self.lengthField)
        self.form.addRow('Width', self.widthField)

        self.setLayout(self.form)

    def setSymbolLayer(self, layer):
        if layer.layerType() == "Cross":
            self.layer = layer
            self.lengthField.setValue(layer.length)
            self.widthField.setValue(layer.width)

    def symbolLayer(self):
        return self.layer

    def lengthChanged(self, n):
        self.layer.length = n
        self.emit(SIGNAL("changed()"))

    def widthChanged(self, n):
        self.layer.width = n
        self.emit(SIGNAL("changed()"))

We define the contents of our widget using the standard __init__() initializer. As you can see, we define two fields, lengthField and widthField, which let the user change the length and width properties, respectively, for our symbol layer. The setSymbolLayer() method tells the widget which QgsSymbolLayerV2 object to use, while the symbolLayer() method returns the QgsSymbolLayerV2 object this widget is editing.
Finally, the two XXXChanged() methods are called when the user changes the value of the fields, allowing us to update the symbol layer's properties to match the value set by the user. The last step is to register your symbol layer. To do this, you create a subclass of QgsSymbolLayerV2AbstractMetadata and pass it to the QgsSymbolLayerV2Registry object's addSymbolLayerType() method. Here is an example implementation of the metadata for our CrossSymbolLayer class, along with the code to register it within QGIS:

class CrossSymbolLayerMetadata(QgsSymbolLayerV2AbstractMetadata):
    def __init__(self):
        QgsSymbolLayerV2AbstractMetadata.__init__(self, "Cross",
                                                  "Cross marker",
                                                  QgsSymbolV2.Marker)

    def createSymbolLayer(self, properties):
        if "length" in properties:
            length = int(properties['length'])
        else:
            length = 10
        if "width" in properties:
            width = int(properties['width'])
        else:
            width = 2
        return CrossSymbolLayer(length, width)

    def createSymbolLayerWidget(self, layer):
        return CrossSymbolLayerWidget()

registry = QgsSymbolLayerV2Registry.instance()
registry.addSymbolLayerType(CrossSymbolLayerMetadata())

Note that the parameters of QgsSymbolLayerV2AbstractMetadata.__init__() are as follows: the unique name for the symbol layer, which must match the name returned by the symbol layer's layerType() method; a display name for this symbol layer, as shown to the user within the Layer Properties window; and the type of symbol that this symbol layer will be used for. The createSymbolLayer() method is used to restore the symbol layer based on the properties stored in the QGIS project file when the project was saved. The createSymbolLayerWidget() method is called to create the user interface widget that lets the user view and edit the symbol layer's properties.
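The persistence contract between properties() and createSymbolLayer() is just dictionary handling, so it can be sanity-checked without QGIS. Here is a hedged, standalone sketch that mimics the round trip with a plain Python stand-in class (the QGIS base classes are deliberately left out of the picture):

```python
class FakeCrossSymbolLayer:
    """Stand-in for CrossSymbolLayer, with the same save/restore logic."""
    def __init__(self, length=10.0, width=2.0):
        self.length = length
        self.width = width

    def properties(self):
        # What QGIS would store in the project file for this layer.
        return {'length': self.length, 'width': self.width}

def create_symbol_layer(properties):
    # Mirrors CrossSymbolLayerMetadata.createSymbolLayer(): read each
    # property if present, otherwise fall back to a default value.
    length = int(properties['length']) if 'length' in properties else 10
    width = int(properties['width']) if 'width' in properties else 2
    return FakeCrossSymbolLayer(length, width)

saved = FakeCrossSymbolLayer(length=25, width=5).properties()
restored = create_symbol_layer(saved)
print(restored.length, restored.width)  # 25 5

# Missing properties fall back to the defaults.
fallback = create_symbol_layer({})
print(fallback.length, fallback.width)  # 10 2
```

Because project files store property values as strings, the int() conversions in createSymbolLayer() matter: whatever properties() writes out must be parseable back into the types your symbol layer expects.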
Implementing renderers in Python If you need to choose symbols based on more complicated criteria than what the built-in renderers will provide, you can write your own custom QgsFeatureRendererV2 subclass using Python. For example, the following Python code implements a simple renderer that alternates between odd and even symbols as point features are displayed: class OddEvenRenderer(QgsFeatureRendererV2):    def __init__(self): QgsFeatureRendererV2.__init__(self, "OddEvenRenderer")        self.evenSymbol = QgsMarkerSymbolV2.createSimple({})        self.evenSymbol.setColor(QColor("light gray"))        self.oddSymbol = QgsMarkerSymbolV2.createSimple({})        self.oddSymbol.setColor(QColor("black"))        self.n = 0      def clone(self):        return OddEvenRenderer()      def symbolForFeature(self, feature):        self.n = self.n + 1        if self.n % 2 == 0:            return self.evenSymbol        else:            return self.oddSymbol      def startRender(self, context, layer):        self.n = 0        self.oddSymbol.startRender(context)        self.evenSymbol.startRender(context)      def stopRender(self, context):        self.oddSymbol.stopRender(context)        self.evenSymbol.stopRender(context)      def usedAttributes(self):        return [] Using this renderer will cause the various point geometries to be displayed in alternating colors, for example: Let's take a closer look at how this class was implemented, and what the various methods do: __init__(): This is your standard Python initializer. Notice how we have to provide a unique name for the renderer when calling the QgsFeatureRendererV2.__init__() method; this is used to keep track of the various renderers within QGIS itself. clone(): This creates a copy of this renderer. If your renderer uses properties to control how it works, this method should copy those properties into the new renderer object. symbolForFeature(): This returns the symbol to use for drawing the given feature. 
startRender(): This prepares to start rendering the features within the map layer. As the renderer can make use of multiple symbols, you need to implement this so that your symbols are also given a chance to prepare for rendering. stopRender(): This finishes rendering the features. Once again, you need to implement this so that your symbols have a chance to clean up once the rendering process has finished. usedAttributes(): If your renderer makes use of feature attributes to choose between the various symbols, this method should be implemented to return the list of attributes that the renderer requires. If you wish, you can also implement your own widget that lets the user change the way the renderer works. This is done by subclassing QgsRendererV2Widget and setting up the widget to edit the renderer's various properties in the same way that we implemented a subclass of QgsSymbolLayerV2Widget to edit the properties for a symbol layer. You will also need to provide metadata about your new renderer (by subclassing QgsRendererV2AbstractMetadata) and use the QgsRendererV2Registry object to register your new renderer. If you do this, the user will be able to select your custom renderer for new map layers, and change the way your renderer works by editing the renderer's properties.

Summary

In this article, we learned how QGIS symbols and renderers are used to control how vector features are displayed on a map. We saw that there are three standard types of symbols: marker symbols for drawing points, line symbols for drawing lines, and fill symbols for drawing the interior of a polygon. We then learned how to instantiate a "simple" version of each of these symbols for use in your programs.
We next looked at the built-in renderers, and how these can be used to choose the same symbol for every feature (using the QgsSingleSymbolRendererV2 class), to select a symbol based on the exact value of an attribute (using the QgsCategorizedSymbolRendererV2 class), and to choose a symbol based on a range of attribute values (using the QgsGraduatedSymbolRendererV2 class). We then saw how symbol layers work, and how to manipulate the layers within a symbol. We looked at all the different types of symbol layers built into QGIS, and learned how they can be combined to produce sophisticated visual effects. Finally, we saw how to implement our own symbol layers using Python, and how to write our own renderer from scratch if none of the existing renderer classes meets your needs. Using these various PyQGIS classes, you have an extremely powerful set of tools at your disposal for displaying vector data within a map. While simple visual effects can be achieved with a minimum of fuss, you can produce practically any visual effect you want using an appropriate combination of built-in or custom-written QGIS symbols and renderers.

Resources for Article:

Further resources on this subject:

Combining Vector and Raster Datasets [article]
QGIS Feature Selection Tools [article]
Creating a Map [article]
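The odd/even alternation at the heart of the custom renderer above is easy to sanity-check outside QGIS. The following plain-Python stand-in is an illustrative sketch only: symbol objects are replaced by strings and no QGIS classes are involved, but it reproduces the counter logic of startRender() and symbolForFeature():

```python
class OddEvenPicker:
    """Engine-free stand-in for OddEvenRenderer's alternation logic."""

    def __init__(self):
        self.even_symbol = "light gray marker"  # stands in for evenSymbol
        self.odd_symbol = "black marker"        # stands in for oddSymbol
        self.n = 0

    def start_render(self):
        # Mirrors startRender(): reset the counter before each render pass
        self.n = 0

    def symbol_for_feature(self):
        # Mirrors symbolForFeature(): alternate on every call
        self.n += 1
        return self.even_symbol if self.n % 2 == 0 else self.odd_symbol

picker = OddEvenPicker()
picker.start_render()
symbols = [picker.symbol_for_feature() for _ in range(4)]
```

Because the counter is reset in start_render(), the first feature of every render pass is always drawn with the odd (black) symbol; this is one reason the real renderer resets self.n in startRender() rather than relying only on the constructor.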
Packt
29 Dec 2014
11 min read

Creating a Map

In this article by Thomas Newton and Oscar Villarreal, authors of the book Learning D3.js Mapping, we will cover the following topics through a series of experiments: Foundation – creating your basic map Experiment 1 – adjusting the bounding box Experiment 2 – creating choropleths Experiment 3 – adding click events to our visualization (For more resources related to this topic, see here.) Foundation – creating your basic map In this section, we will walk through the basics of creating a standard map. Let's walk through the code to get a step-by-step explanation of how to create this map. The width and height can be anything you want. Depending on where your map will be visualized (cellphones, tablets, or desktops), you might want to consider providing a different width and height: var height = 600; var width = 900; The next variable defines a projection algorithm that allows you to go from a cartographic space (latitude and longitude) to a Cartesian space (x,y)—basically a mapping of latitude and longitude to coordinates. You can think of a projection as a way to map the three-dimensional globe to a flat plane. There are many kinds of projections, but geo.mercator is normally the default value you will use: var projection = d3.geo.mercator(); var mexico = void 0; If you were making a map of the USA, you could use a better projection called albersUsa. This is to better position Alaska and Hawaii. By creating a geo.mercator projection, Alaska would render proportionate to its size, rivaling that of the entire US. The albersUsa projection grabs Alaska, makes it smaller, and puts it at the bottom of the visualization. The following screenshot is of geo.mercator:   This following screenshot is of geo.albersUsa:   The D3 library currently contains nine built-in projection algorithms. An overview of each one can be viewed at https://github.com/mbostock/d3/wiki/Geo-Projections. Next, we will assign the projection to our geo.path function. 
This is a special D3 function that will map the JSON-formatted geographic data into SVG paths. The data format that the geo.path function requires is named GeoJSON: var path = d3.geo.path().projection(projection); var svg = d3.select("#map")    .append("svg")    .attr("width", width)    .attr("height", height); Including the dataset The necessary data has been provided for you within the data folder with the filename geo-data.json: d3.json('geo-data.json', function(data) { console.log('mexico', data); We get the data from an AJAX call. After the data has been collected, we want to draw only those parts of the data that we are interested in. In addition, we want to automatically scale the map to fit the defined height and width of our visualization. If you look at the console, you'll see that "mexico" has an objects property. Nested inside the objects property is MEX_adm1. This stands for the administrative areas of Mexico. It is important to understand the geographic data you are using, because other data sources might have different names for the administrative areas property:   Notice that the MEX_adm1 property contains a geometries array with 32 elements. Each of these elements represents a state in Mexico. Use this data to draw the D3 visualization. var states = topojson.feature(data, data.objects.MEX_adm1); Here, we pass all of the administrative areas to the topojson.feature function in order to extract and create an array of GeoJSON objects. The preceding states variable now contains the features property. This features array is a list of 32 GeoJSON elements, each representing the geographic boundaries of a state in Mexico. We will set an initial scale and translation to 1 and 0,0 respectively: // Setup the scale and translate projection.scale(1).translate([0, 0]); This algorithm is quite useful. 
The bounding box is a spherical box that returns a two-dimensional array of min/max coordinates, inclusive of the geographic data passed:

var b = path.bounds(states);

To quote the D3 documentation: "The bounding box is represented by a two-dimensional array: [[left, bottom], [right, top]], where left is the minimum longitude, bottom is the minimum latitude, right is maximum longitude, and top is the maximum latitude." This is very helpful if you want to programmatically set the scale and translation of the map. In this case, we want the entire country to fit in our height and width, so we determine the bounding box of every state in the country of Mexico. The scale is calculated by taking the longest geographic edge of our bounding box and dividing it by the number of pixels of this edge in the visualization:

var s = .95 / Math.max((b[1][0] - b[0][0]) / width, (b[1][1] - b[0][1]) / height);

This can be calculated by first computing the scale of the width, then the scale of the height, and, finally, taking the larger of the two. All of the logic is compressed into the single line given earlier. The three steps are explained in the following image:

The value .95 adjusts the scale, because we are giving the map a bit of a breather on the edges in order to not have the paths intersect the edges of the SVG container item, basically reducing the scale by 5 percent. Now, we have an accurate scale of our map, given our set width and height.

var t = [(width - s * (b[1][0] + b[0][0])) / 2, (height - s * (b[1][1] + b[0][1])) / 2];

When we scale in SVG, it scales all the attributes (even x and y). In order to return the map to the center of the screen, we will use the translate function. The translate function receives an array with two parameters: the amount to translate in x, and the amount to translate in y. We will calculate x by finding the center (topRight – topLeft)/2 and multiplying it by the scale. The result is then subtracted from the width of the SVG element.
Our y translation is calculated similarly but using the bottomRight – bottomLeft values divided by 2, multiplied by the scale, then subtracted from the height. Finally, we will reset the projection to use our new scale and translation: projection.scale(s).translate(t); Here, we will create a map variable that will group all of the following SVG elements into a <g> SVG tag. This will allow us to apply styles and better contain all of the proceeding paths' elements: var map = svg.append('g').attr('class', 'boundary'); Finally, we are back to the classic D3 enter, update, and exit pattern. We have our data, the list of Mexico states, and we will join this data to the path SVG element:    mexico = map.selectAll('path').data(states.features);      //Enter    mexico.enter()        .append('path')        .attr('d', path); The enter section and the corresponding path functions are executed on every data element in the array. As a refresher, each element in the array represents a state in Mexico. The path function has been set up to correctly draw the outline of each state as well as scale and translate it to fit in our SVG container. Congratulations! You have created your first map! Experiment 1 – adjusting the bounding box Now that we have our foundation, let's start with our first experiment. For this experiment, we will manually zoom in to a state of Mexico using what we learned in the previous section. For this experiment, we will modify one line of code: var b = path.bounds(states.features[5]); Here, we are telling the calculation to create a boundary based on the sixth element of the features array instead of every state in the country of Mexico. 
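The boundary just computed feeds the same scale-and-translate arithmetic described earlier. As a numeric sanity check, here it is reproduced in plain Python; the bounding-box values below are made up for illustration, and only the arithmetic mirrors the D3 code:

```python
width, height = 900, 600

# Hypothetical projected bounds: [[left, bottom], [right, top]]
b = [[100.0, 50.0], [500.0, 350.0]]

# Scale: fit the longest relative edge into the viewport, with a 5 percent margin
s = 0.95 / max((b[1][0] - b[0][0]) / width, (b[1][1] - b[0][1]) / height)

# Translate: move the scaled midpoint of the bounds to the viewport center
t = [(width - s * (b[1][0] + b[0][0])) / 2,
     (height - s * (b[1][1] + b[0][1])) / 2]
```

With these numbers, the height ratio (300/600) dominates the width ratio (400/900), so s works out to 1.9, and the translation lands the scaled midpoint of the bounds exactly at the viewport center (450, 300).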
The boundaries data will now run through the rest of the scaling and translation algorithms to adjust the map to the one shown in the following screenshot:   We have basically reduced the min/max of the boundary box to include the geographic coordinates for one state in Mexico (see the next screenshot), and D3 has scaled and translated this information for us automatically:   This can be very useful in situations where you might not have the data that you need in isolation from the surrounding areas. Hence, you can always zoom in to your geography of interest and isolate it from the rest. Experiment 2 – creating choropleths One of the most common uses of D3.js maps is to make choropleths. This visualization gives you the ability to discern between regions, giving them a different color. Normally, this color is associated with some other value, for instance, levels of influenza or a company's sales. Choropleths are very easy to make in D3.js. In this experiment, we will create a quick choropleth based on the index value of the state in the array of all the states. We will only need to modify two lines of code in the update section of our D3 code. Right after the enter section, add the following two lines: //Update var color = d3.scale.linear().domain([0,33]).range(['red',   'yellow']); mexico.attr('fill', function(d,i) {return color(i)}); The color variable uses another valuable D3 function named scale. Scales are extremely powerful when creating visualizations in D3; much more detail on scales can be found at https://github.com/mbostock/d3/wiki/Scales. For now, let's describe what this scale defines. Here, we created a new function called color. This color function looks for any number between 0 and 33 in an input domain. D3 linearly maps these input values to a color between red and yellow in the output range. D3 has included the capability to automatically map colors in a linear range to a gradient. 
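The linear mapping that d3.scale.linear() performs here can be sketched in Python. This is a hedged stand-in rather than D3's actual implementation: it linearly interpolates each RGB channel between red (255, 0, 0) and yellow (255, 255, 0) across the domain 0 to 33:

```python
def make_linear_color_scale(domain, start_rgb, end_rgb):
    """Return a function mapping a domain value to an interpolated RGB tuple."""
    d0, d1 = domain

    def color(value):
        t = (value - d0) / (d1 - d0)  # position within the domain, from 0 to 1
        return tuple(round(s + t * (e - s)) for s, e in zip(start_rgb, end_rgb))

    return color

# Domain [0, 33] maps linearly onto the red-to-yellow range
color = make_linear_color_scale((0, 33), (255, 0, 0), (255, 255, 0))
```

Calling color(0) yields pure red and color(33) pure yellow, with values in between landing on intermediate oranges, just as the index-based choropleth colors do in the D3 version.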
This means that executing the new function, color, with 0 will return the color red, color(15) will return an orange color, and color(33) will return yellow. Now, in the update section, we will set the fill property of the path to the new color function. This will provide a linear scale of colors and use the index value i to determine what color should be returned. If the color was determined by a different value of the datum, for instance, d.sales, then you would have a choropleth where the colors actually represent sales. The preceding code should render something as follows:

Experiment 3 – adding click events to our visualization

We've seen how to make a map and set different colors to the different regions of this map. Next, we will add a little bit of interactivity. This will illustrate a simple way to bind click events to maps. First, we need a quick reference to each state in the country. To accomplish this, we will create a new function called geoID right below the mexico variable:

var height = 600;
var width = 900;
var projection = d3.geo.mercator();
var mexico = void 0;

var geoID = function(d) {
    return "c" + d.properties.ID_1;
};

This function takes in a state data element and generates a new selectable ID based on the ID_1 property found in the data. The ID_1 property contains a unique numeric value for every state in the array. If we insert this as an id attribute into the DOM, then we would create a quick and easy way to select each state in the country. Right after the geoID function, create another function called click:

var click = function(d) {
    mexico.attr('fill-opacity', 0.2); // Another update!
    d3.select('#' + geoID(d)).attr('fill-opacity', 1);
};

This method makes it easy to separate what the click is doing. The click method receives the datum and changes the fill opacity value of all the states to 0.2.
This is done so that when you click on one state and then on the other, the previous state does not maintain the clicked style. Notice that the function call is iterating through all the elements of the DOM, using the D3 update pattern. After making all the states transparent, we will set a fill-opacity of 1 for the given clicked item. This removes all the transparent styling from the selected state. Notice that we are reusing the geoID function that we created earlier to quickly find the state element in the DOM. Next, let's update the enter method to bind our new click method to every new DOM element that enter appends: //Enter mexico.enter()      .append('path')      .attr('d', path)      .attr('id', geoID)      .on("click", click); We also added an attribute called id; this inserts the results of the geoID function into the id attribute. Again, this makes it very easy to find the clicked state. The code should produce a map as follows. Check it out and make sure that you click on any of the states. You will see its color turn a little brighter than the surrounding states. Summary You learned how to build many different kinds of maps that cover different kinds of needs. Choropleths and data visualizations on maps are some of the most common geographic-based data representations that you will come across. Resources for Article: Further resources on this subject: Using Canvas and D3 [article] Interacting with your Visualization [article] Simple graphs with d3.js [article]

Packt
29 Dec 2014
23 min read

C# with NGUI

In this article by Charles Pearson, the author of Learning NGUI for Unity, we will talk about C# scripting with NGUI. We will learn how to handle events and interact with them through code. We'll use them to: Play tweens with effects through code Implement a localized tooltip system Localize labels through code Assign callback methods to events using both code and the Inspector view We'll learn many more useful C# tips throughout the book. Right now, let's start with events and their associated methods. (For more resources related to this topic, see here.) Events When scripting in C# with the NGUI plugin, some methods will often be used. For example, you will regularly need to know if an object is currently hovered upon, pressed, or clicked. Of course, you could code your own system—but NGUI handles that very well, and it's important to use it at its full potential in order to gain development time. Available methods When you create and attach a script to an object that has a collider on it (for example, a button or a 3D object), you can add the following useful methods within the script to catch events: OnHover(bool state): This method is called when the object is hovered or unhovered. The state bool gives the hover state; if state is true, the cursor just entered the object's collider. If state is false, the cursor has just left the collider's bounds. OnPress(bool state): This method works in the exact same way as the previous OnHover() method, except it is called when the object is pressed. It also works for touch-enabled devices. If you need to know which mouse button was used to press the object, use the UICamera.currentTouchID variable; if this int is equal to -1, it's a left-click. If it's equal to -2, it's a right-click. Finally, if it's equal to -3, it's a middle-click. 
OnClick(): This method is similar to OnPress(), except that this method is exclusively called when the click is validated, meaning when an OnPress(true) event occurs followed by an OnPress(false) event. It works with mouse click and touch (tap). In order to handle double clicks, you can also use the OnDoubleClick() method, which works in the same way.

OnDrag(Vector2 delta): This method is called at each frame when the mouse or touch moves between the OnPress(true) and OnPress(false) events. The Vector2 delta argument gives you the object's movement since the last frame.

OnDrop(GameObject droppedObj): This method is called when an object is dropped on the GameObject on which this script is attached. The dropped GameObject is passed as the droppedObj parameter.

OnSelect(bool state): This method is called when the user clicks on the object; the state bool indicates whether the object was selected or deselected. It will not be called again until another object is clicked on or the object is deselected (click on empty space).

OnTooltip(bool state): This method is called when the cursor is over the object for more than the duration defined by the Tooltip Delay inspector parameter of UICamera. If the Sticky Tooltip option of UICamera is checked, the tooltip remains visible until the cursor moves outside the collider; otherwise, it disappears as soon as the cursor moves.

OnScroll(float delta): This method is called when the mouse's scroll wheel is moved while the object is hovered; the delta parameter gives you the amount and direction of the scroll.

If you attach your script on a 3D object to catch these events, make sure it is on a layer included in Event Mask of UICamera. Now that we've seen the available event methods, let's see how they are used in a simple example.
Example

To illustrate when these events occur and how to catch them, you can create a new EventTester.cs script with the following code:

void OnHover(bool state)
{
    Debug.Log(this.name + " Hover: " + state);
}

void OnPress(bool state)
{
    Debug.Log(this.name + " Pressed: " + state);
}

void OnClick()
{
    Debug.Log(this.name + " Clicked");
}

void OnDrag(Vector2 delta)
{
    Debug.Log(this.name + " Drag: " + delta);
}

void OnDrop(GameObject droppedObject)
{
    Debug.Log(droppedObject.name + " dropped on " + this.name);
}

void OnSelect(bool state)
{
    Debug.Log(this.name + " Selected: " + state);
}

void OnTooltip(bool state)
{
    Debug.Log("Show " + this.name + "'s Tooltip: " + state);
}

void OnScroll(float delta)
{
    Debug.Log("Scroll of " + delta + " on " + this.name);
}

These are the event methods we discussed, implemented with their respective necessary arguments. Now attach our Event Tester component to any GameObject with a collider, like our Main | Buttons | Play button, and hit Unity's play button. From now on, events that occur on the object it's attached to are tracked in the Console output:

I recommend that you keep the EventTester.cs script in a handy file directory as a reminder of the available event methods in the future. Indeed, for each event, you can simply replace the Debug.Log() lines with the instructions you need. Now we know how to catch events through code. Let's use them to display a tooltip!

Creating tooltips

Let's use the OnTooltip() event to show a tooltip for our buttons and different options, as shown in the following screenshot:

The tooltip object shown in the preceding screenshot, which we are going to create, is composed of four elements:

Tooltip: The tooltip container, with the Tooltip component attached.
Background: The background sprite that wraps around Label.
Border: A yellow border that wraps around Background.
Label: The label that displays the tooltip's text.
We will also make sure the tooltip is localized using NGUI methods. The tooltip object In order to create the tooltip object, we'll first create its visual elements (widgets), and then we'll attach the Tooltip component to it in order to define it as NGUI's tooltip. Widgets First, we need to create the tooltip object's visual elements: Select our UI Root GameObject in the Hierarchy view. Hit Alt + Shift + N to create a new empty child GameObject. Rename this new child from GameObject to Tooltip. Add the NGUI Panel (UIPanel) component to it. Set this new Depth of UIPanel to 10. In the preceding steps, we've created the tooltip container. It has UIPanel with a Depth value of 10 in order to make sure our tooltip will remain on top of other panels. Now, let's create the faintly transparent background sprite: With Tooltip selected, hit Alt + Shift + S to create a new child sprite. Rename this new child from Sprite to Background. Select our new Tooltip | Background GameObject, and configure UISprite, as follows: Perform the following steps: Make sure Atlas is set to Wooden Atlas. Set Sprite to the Window sprite. Make sure Type is set to Sliced. Change Color Tint to {R: 90, G: 70, B: 0, A: 180}. Set Pivot to top-left (left arrow + up arrow). Change Size to 500 x 85. Reset its Transform position to {0, 0, 0}. Ok, we can now easily add a fully opaque border with the following trick: With Tooltip | Background selected, hit Ctrl + D to duplicate it. Rename this new duplicate to Border. Select Tooltip | Border and configure its attached UI Sprite, as follows:   Perform the following steps: Disable the Fill Center option. Change Color Tint to {R: 255, G: 220, B: 0, A: 255}. Change the Depth value to 1. Set Anchors Type to Unified. Make sure the Execute parameter is set to OnUpdate. Drag Tooltip | Background in to the new Target field. By not filling the center of the Border sprite, we now have a yellow border around our background. 
We used anchors to make sure this border always wraps the background even during runtime—thanks to the Execute parameter set to OnUpdate. Right now, our Game and Hierarchy views should look like this:   Let's create the tooltip's label. With Tooltip selected, hit Alt + Shift + L to create a new label. For the new Label GameObject, set the following parameters for UILabel:   Set Font Type to NGUI, and Font to Arimo20 with a size of 40. Change Text to [FFCC00]This[FFFFFF] is a tooltip. Change Overflow to ResizeHeight. Set Effect to Outline, with an X and Y of 1 and black color. Set Pivot to top-left (left arrow + up arrow). Change X Size to 434. The height adjusts to the text amount. Set the Transform position to {33, -22, 0}. Ok, good. We now have a label that can display our tooltip's text. This label's height will adjust automatically as the text gets longer or shorter. Let's configure anchors to make sure the background always wraps around the label: Select our Tooltip | Background GameObject. Set Anchors Type to Unified. Drag Tooltip | Label in the new Target field. Set the Execute parameter to OnUpdate. Great! Now, if you edit our tooltip's text label to a very large text, you'll see that it adjusts automatically, as shown in the following screenshot:   UITooltip We can now add the UITooltip component to our tooltip object: Select our UI Root | Tooltip GameObject. Click the Add Component button in the Inspector view. Type tooltip with your keyboard to search for components. Select Tooltip and hit Enter or click on it with your mouse. Configure the newly attached UITooltip component, as follows: Drag UI Root | Tooltip | Label in the Text field. Drag UI Root | Tooltip | Background in the Background field. The tooltip object is ready! It is now defined as a tooltip for NGUI. Now, let's see how we can display it when needed using a few simple lines of code. Displaying the tooltip We must now show the tooltip when needed. 
In order to do that, we can use the OnTooltip() event, in which we request to display the tooltip with localized text: Select our three Main | Buttons | Exit, Options, and Play buttons. Click the Add Component button in the Inspector view. Type ShowTooltip with your keyboard. Hit Enter twice to create and attach the new ShowTooltip.cs script to it. Open this new ShowTooltip.cs script. First, we need to add this public key variable to define which text we want to display: // The localization key of the text to display public string key = ""; Ok, now add the following OnTooltip() method that retrieves the localized text and requests to show or hide the tooltip depending on the state bool: // When the OnTooltip event is triggered on this object void OnTooltip(bool state) { // Get the final localized text string finalText = Localization.Get(key);   // If the tooltip must be removed... if(!state) { // ...Set the finalText to nothing finalText = ""; }   // Request the tooltip display UITooltip.ShowText(finalText); } Save the script. As you can see in the preceding code, the Localization.Get(string key) method returns localized text of the corresponding key parameter that is passed. You can now use it to localize a label through code anytime! In order to hide the tooltip, we simply request UITooltip to show an empty tooltip. To use Localization.Get(string key), your label must not have a UILocalize component attached to it; otherwise, the value of UILocalize will overwrite anything you assign to UILabel. Ok, we have added the code to show our tooltip with localized text. Now, open the Localization.txt file, and add these localized strings: // Tooltips Play_Tooltip, "Launch the game!", "Lancer le jeu !" Options_Tooltip, "Change language, nickname, subtitles...", "Changer la langue, le pseudo, les sous-titres..." Exit_Tooltip, "Leaving us already?", "Vous nous quittez déjà ?" 
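The Localization.txt format shown above is essentially CSV: a key in the first column, then one translated string per language column. As a rough illustration of what Localization.Get() does with it, here is a minimal Python stand-in; the column order and the fallback-to-key behavior are assumptions for this sketch, and NGUI's real parser also handles // comment lines and other details:

```python
import csv
from io import StringIO

# A fragment of the Localization.txt data: key, then English, then French
LOCALIZATION_TXT = '''Play_Tooltip, "Launch the game!", "Lancer le jeu !"
Options_Tooltip, "Change language, nickname, subtitles...", "Changer la langue, le pseudo, les sous-titres..."'''

LANGUAGES = ["English", "French"]  # assumed column order

def load_strings(text):
    """Parse each row into {key: {language: translation}}."""
    table = {}
    for row in csv.reader(StringIO(text), skipinitialspace=True):
        key, *translations = row
        table[key] = dict(zip(LANGUAGES, translations))
    return table

STRINGS = load_strings(LOCALIZATION_TXT)

def get(key, language="English"):
    """Mimic Localization.Get(): return the key itself when no entry exists."""
    return STRINGS.get(key, {}).get(language, key)
```

Looking up an existing key returns the translation for the requested language, while an unknown key falls back to the key string itself, which is handy for spotting missing entries during development.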
Now that our localized strings are added, we could manually configure the key parameter for our three buttons' Show Tooltip components to respectively display Play_Tooltip, Options_Tooltip, and Exit_Tooltip. But that would be a repetitive action, and if we want to add localized tooltips easily for future and existing objects, we should implement the following system: if the key parameter is empty, we'll try to get a localized text based on the GameObject's name. Let's do this now! Open our ShowTooltip.cs script, and add this Start() method: // At start void Start() { // If key parameter isn't defined in inspector... if(string.IsNullOrEmpty(key)) { // ...Set it now based on the GameObject's name key = name + "_Tooltip"; } } Click on Unity's play button. That's it! When you leave your cursor on any of our three buttons, a localized tooltip appears:   The preceding tooltip wraps around the displayed text perfectly, and we didn't have to manually configure their Show Tooltip components' key parameters! Actually, I have a feeling that the display delay is too long. Let's correct this: Select our UI Root | Camera GameObject. Set Tooltip Delay of UICamera to 0.3. That's better—our localized tooltip appears after 0.3 seconds of hovering. Adding the remaining tooltips We can now easily add tooltips for our Options page's element. The tooltip works on any GameObject with a collider attached to it. Let's use a search by type to find them: In the Hierarchy view's search bar, type t:boxcollider Select Checkbox, Confirm, Input, List (both), Music, and SFX: Click on the Add Component button in the Inspector view. Type show with your keyboard to search the components. Hit Enter or click on the Show Tooltip component to attach it to them. For the objects with generic names, such as Input and List, we need to set their key parameter manually, as follows: Select the Checkbox GameObject, and set Key to Sound_Tooltip. Select the Input GameObject, and set Key to Nickname_Tooltip. 
For the List for language selection, set Key to Language_Tooltip. For the List for subtitles selection, set Key to Subtitles_Tooltip. To know if the selected list is the language or subtitles list, look at Options of its UIPopup List: if it has the None option, then it's the subtitles selection. Finally, we need to add these localization strings in the Localization.txt file: Sound_Tooltip, "Enable or disable game sound", "Activer ou désactiver le son du jeu" Nickname_Tooltip, "Name used during the game", "Pseudo utilisé lors du jeu" Language_Tooltip, "Game and user interface language", "Langue du jeu et de l'interface" Subtitles_Tooltip, "Subtitles language", "Langue des sous-titres" Confirm_Tooltip, "Confirm and return to main menu", "Confirmer et retourner au menu principal" Music_Tooltip, "Game music volume", "Volume de la musique" SFX_Tooltip, "Sound effects volume", "Volume des effets" Hit Unity's play button. We now have localized tooltips for all our options! We now know how to easily use NGUI's tooltip system. It's time to talk about Tween methods. Tweens The tweens we have used until now were components we added to GameObjects in the scene. It is also possible to easily add tweens to GameObjects through code. You can see all available tweens by simply typing Tween inside any method in your favorite IDE. You will see a list of Tween classes thanks to auto-completion, as shown in the following screenshot:   The strong point of these classes is that they work in one line and don't have to be executed at each frame; you just have to call their Begin() method! Here, we will apply tweens on widgets, but keep in mind that it works in the exact same way with other GameObjects since NGUI widgets are GameObjects. Tween Scale Previously, we've used the Tween Scale component to make our main window disappear when the Exit button is pressed. Let's do the same when the Play button is pressed, but this time we'll do it through code to understand how it's done. 
DisappearOnClick Script We will first create a new DisappearOnClick.cs script that will tween a target's scale to {0.01, 0.01, 0.01} when the GameObject it's attached to is clicked on: Select our UI Root | Main | Buttons | Play GameObject. Click the Add Component button in the Inspector view. Type DisappearOnClick with your keyboard. Hit Enter twice to create and add the new DisappearOnClick.cs script. Open this new DisappearOnClick.cs script. First, we must add this public target GameObject to define which object will be affected by the tween, and a duration float to define the speed: // Declare the target we'll tween down to {0.01, 0.01, 0.01} public GameObject target; // Declare a float to configure the tween's duration public float duration = 0.3f; Ok, now, let's add the following OnClick() method, which creates a new tween towards {0.01, 0.01, 0.01} on our desired target using the duration variable: // When this object is clicked private void OnClick() { // Create a tween on the target TweenScale.Begin(target, duration, Vector3.one * 0.01f); } In the preceding code, we scale down the target for the desired duration, towards 0.01f. Save the script. Good. Now, we simply have to assign our variables in the Inspector view: Go back to Unity and select our Play button GameObject. Drag our UI Root | Main object in the DisappearOnClick Target field. Great. Now, hit Unity's play button. When you click the menu's Play button, our main menu is scaled down to {0.01, 0.01, 0.01}, with the simple TweenScale.Begin() line! Now that we've seen how to make a basic tween, let's see how to add effects. Tween effects Right now, our tween is simple and linear. In order to add an effect to the tween, we first need to store it as UITweener, which is its parent class. 
Replace the tween line in our OnClick() method with these lines, to first store the tween and then set an effect:

// Retrieve the new target's tween
UITweener tween = TweenScale.Begin(target, duration, Vector3.one * 0.01f);
// Set the new tween's effect method
tween.method = UITweener.Method.EaseInOut;

That's it. Our tween now has an EaseInOut effect. You also have the following tween effect methods:

BounceIn: Bouncing effect at the start of tween
BounceOut: Bouncing effect at the end of tween
EaseIn: Smooth acceleration effect at the start of tween
EaseInOut: Smooth acceleration and deceleration
EaseOut: Smooth deceleration effect at the end of tween
Linear: Simple linear tween without any effects

Great. We now know how to add tween effects through code. Now, let's see how we can set event delegates through code. You can set the tween's ignoreTimeScale to true if you want it to always run at normal speed, even if your Time.timeScale variable is different from 1.

Event delegates

Many NGUI components broadcast events, for which you can set an event delegate (also known as a callback method) that is executed when the event is triggered. We did it through the Inspector view by assigning the Notify and Method fields when buttons were clicked. For any type of tween, you can set a specific event delegate for when the tween is finished. We'll see how to do this through code. Before we continue, we must create our callback first. Let's create a callback that loads a new scene.

The callback

Open our MenuManager.cs script, and add this static LoadGameScene() callback method:

public static void LoadGameScene()
{
    // Load the Game scene now
    Application.LoadLevel("Game");
}

Save the script. The preceding code requests to load the Game scene. To ensure Unity finds our scenes at runtime, we'll need to create the Game scene and add both Menu and Game scenes to the build settings:

Navigate to File | Build Settings.
Click on the Add Current button (don't close the window now).
3. In Unity, navigate to File | New Scene.
4. Navigate to File | Save Scene as…
5. Save the scene as Game.unity.
6. Click on the Add Current button of the Build Settings window and close it.
7. Navigate to File | Open Scene and re-open our Menu.unity scene.

Ok, now that both scenes have been added to the build settings, we are ready to link our callback to our event.

Linking a callback to an event

Now that our LoadGameScene() callback method is written, we must link it to our event. We have two solutions. First, we'll see how to assign it using code exclusively, and then we'll create a more flexible system using NGUI's Notify and Method fields.

Code

In order to set a callback for a specific event, a generic solution exists for all NGUI events you might encounter: the EventDelegate.Set() method. You can also add multiple callbacks to an event using EventDelegate.Add(). Add this line at the end of the OnClick() method of DisappearOnClick.cs:

    // Set the tween's onFinished event to our LoadGameScene callback
    EventDelegate.Set(tween.onFinished, MenuManager.LoadGameScene);

Instead of the preceding line, we can also use the tween-specific SetOnFinished() convenience method. We'll get the exact same result with fewer words:

    // Another way to assign our method to the onFinished event
    tween.SetOnFinished(MenuManager.LoadGameScene);

Great. If you hit Unity's play button and click on our main menu's Play button, you'll see that our Game scene is loaded as soon as the tween has finished!

It is possible to remove the link between an existing event delegate and a callback by calling EventDelegate.Remove(eventDelegate, callback);.

Now, let's see how to link an event delegate to a callback using the Inspector view.
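The wiring above only runs inside Unity, since UITweener and EventDelegate are NGUI types. As a plain C# sketch of the same pattern, the hypothetical FakeTween class below models how a tween can hold a list of callbacks and fire them once it completes; the names FakeTween, AddOnFinished, and Finish are invented for illustration and are not part of NGUI:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for NGUI's UITweener (Unity-only), modeling how
// a tween keeps a list of callbacks and invokes them when it completes.
public class FakeTween
{
    readonly List<Action> onFinished = new List<Action>();

    // Mirrors tween.SetOnFinished(callback): replaces any existing callbacks
    public void SetOnFinished(Action callback)
    {
        onFinished.Clear();
        onFinished.Add(callback);
    }

    // Mirrors EventDelegate.Add(tween.onFinished, callback): appends one more
    public void AddOnFinished(Action callback)
    {
        onFinished.Add(callback);
    }

    // Stands in for the moment the tween engine reaches the end value
    public void Finish()
    {
        foreach (var callback in onFinished)
            callback();
    }
}
```

With this stand-in, tween.SetOnFinished(MenuManager.LoadGameScene) followed by Finish() would call LoadGameScene exactly once, which is essentially what EventDelegate.Set() and SetOnFinished() arrange behind the scenes, minus the Inspector integration.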
Inspector

Now that we have seen how to set event delegates through code, let's see how we can create a variable that lets us choose which method to call within the Inspector view, like this:

The method to call when the target disappears can be set at any time without editing the code.

The On Disappear variable shown in the preceding screenshot is of the type EventDelegate. We can declare it right now with the following line as a global variable of our DisappearOnClick.cs script:

    // Declare an event delegate variable to be set in Inspector
    public EventDelegate onDisappear;

Now, let's change the OnClick() method's last line to make sure the tween's onFinished event calls the defined onDisappear callback:

    // Set the tween's onFinished event to the selected callback
    tween.SetOnFinished(onDisappear);

Ok. Great. Save the script and go to Unity. Select our main menu's Play button: a new On Disappear field has appeared. Drag UI Root, which holds our MenuManager.cs script, into the Notify field. Now, try to select our MenuManager | LoadGameScene method. Surprisingly, it doesn't appear, and you can only select the script's Exit method. Why is that? It is simply because our LoadGameScene() method is currently static. If we want it to be available in the Inspector view, we need to remove its static property:

1. Open our MenuManager.cs script.
2. Remove the static keyword from our LoadGameScene() method.
3. Save the script and return to Unity. You can now select it in the drop-down list:

Great! We have set our callback through the Inspector view; the Game scene will be loaded when the menu disappears. Now that we have learned how to assign event delegates to callback methods through code and the Inspector view, let's see how to assign keyboard keys to user interface elements.

Keyboard keys

In this section, we'll see how to add keyboard control to our UI. First, we'll see how to bind keys to buttons, and then we'll add a navigation system using the keyboard arrows.
UIKey binding

The UIKey Binding component assigns a specific key to the widget it's attached to. We'll use it now to assign the keyboard's Escape key to our menu's Exit button:

1. Select our UI Root | Main | Buttons | Exit GameObject.
2. Click the Add Component button in the Inspector view.
3. Type key with your keyboard to search for components.
4. Select Key Binding and hit Enter or click on it with your mouse.

Let's see its available parameters.

Parameters

We've just added the following UIKey Binding component to our Exit button.

The newly attached UIKey Binding component has three parameters:

- Key Code: Which key would you like to bind to an action?
- Modifier: If you want a two-button combination, select one of the four available modifiers: Shift, Control, Alt, or None.
- Action: Which action should we bind to this key? You can simulate a button click with PressAndClick, a selection with Select, or both with All.

Ok, now we'll configure it to see how it works.

Configuration

Simply set the Key Code field to Escape. Now, hit Unity's play button. When you hit the Escape key on your keyboard, it reacts as if the Exit button was pressed! We can now move on to see how to add keyboard and controller navigation to the UI.

UIKey navigation

The UIKey Navigation component helps us assign objects to select using the keyboard arrows or a controller's directional pad. For most widgets, the automatic configuration is enough, but in some cases we'll need to use the override parameters to get the behavior we need.

The nickname input field has neither the UIButton nor the UIButton Scale component attached to it. This means that there will be no feedback to show the user it's currently selected with keyboard navigation, which is a problem. We can correct this right now. Select UI Root | Options | Nickname | Input, and then:

1. Add the Button component (UIButton) to it.
2. Add the Button Scale component (UIButton Scale) to it.
3. Center the Pivot of the UISprite (middle bar + middle bar).
4. Reset the Center of the Box Collider to {0, 0, 0}.

The Nickname | Input GameObject should now have an Inspector view like this:

Ok. We'll now add the Key Navigation component (UIKey Navigation) to most of the buttons in the scene. In order to do that, type t:uibutton in the Hierarchy view's search bar to display only GameObjects with the UIButton component attached to them.

With the preceding search filter applied, select the relevant GameObjects. Now, with that selection, follow these steps:

1. Click the Add Component button in the Inspector view.
2. Type key with your keyboard to search for components.
3. Select Key Navigation and hit Enter or click on it with your mouse.

We've added the UIKey Navigation component to our selection. Let's see its parameters.

Parameters

We've just added the following UIKey Navigation component to our objects.

The newly attached UIKey Navigation component has four parameter groups:

- Starts Selected: Is this widget selected by default at the start?
- Select on Click: Which widget should be selected when this widget is clicked on, or when the Enter key/confirm button has been pressed? This option can be used to select a specific widget when a new page is displayed.
- Constraint: Use this to limit the navigation movement from this widget:
  - None: The movement is free from this widget
  - Vertical: From this widget, you can only go up or down
  - Horizontal: From this widget, you can only move left or right
  - Explicit: Only move to widgets specified in the Override fields
- Override: Use the Left, Right, Up, and Down fields to force the input to select the specified objects. If the Constraint parameter is set to Explicit, only widgets specified here can be selected. Otherwise, automatic configuration still works for fields left to None.

Summary

This article has thus given an introduction to how C# is used in Unity.
Resources for Article: Further resources on this subject: Unity Networking – The Pong Game [article] Unit and Functional Tests [article] Components in Unity [article]
Packt
29 Dec 2014
33 min read

Heads up to MvvmCross

In this article, by Mark Reynolds, author of the book Xamarin Essentials, we will take the next step and look at how the use of design patterns and frameworks can increase the amount of code that can be reused. We will cover the following topics:

- An introduction to MvvmCross
- The MVVM design pattern
- Core concepts: Views, ViewModels, and commands
- Data binding
- Navigation (ViewModel to ViewModel)
- The project organization
- The startup process
- Creating NationalParks.MvvmCross

Our approach will be to introduce the core concepts at a high level and then dive in and create the national parks sample app using MvvmCross. This will give you a basic understanding of how to use the framework and the value associated with its use. With that in mind, let's get started.

Introducing MvvmCross

MvvmCross is an open source framework that was created by Stuart Lodge. It is based on the Model-View-ViewModel (MVVM) design pattern and is designed to enhance code reuse across numerous platforms, including Xamarin.Android, Xamarin.iOS, Windows Phone, Windows Store, WPF, and Mac OS X. The MvvmCross project is hosted on GitHub and can be accessed at https://github.com/MvvmCross/MvvmCross.

The MVVM pattern

MVVM is a variation of the Model-View-Controller pattern. It separates logic traditionally placed in a View object into two distinct objects, one called View and the other called ViewModel. The View is responsible for providing the user interface, and the ViewModel is responsible for the presentation logic. The presentation logic includes transforming data from the Model into a form that is suitable for the user interface to work with, and mapping user interaction with the View into requests sent back to the Model.
The following diagram depicts how the various objects in MVVM communicate:

While MVVM presents a more complex implementation model, it offers significant benefits:

- ViewModels and their interactions with Models can generally be tested using frameworks (such as NUnit) much more easily than applications that combine the user interface and presentation layers
- ViewModels can generally be reused across different user interface technologies and platforms

These factors make the MVVM approach both flexible and powerful.

Views

Views in an MvvmCross app are implemented using platform-specific constructs. For iOS apps, Views are generally implemented as ViewControllers and XIB files. MvvmCross provides a set of base classes, such as MvxViewController, that iOS ViewControllers inherit from. Storyboards can also be used in conjunction with a custom presenter to create Views; we will briefly discuss this option in the section titled Implementing the iOS user interface later in this article. For Android apps, Views are generally implemented as an MvxActivity or MvxFragment along with their associated layout files.

ViewModels

ViewModels are classes that provide data and presentation logic to Views in an app. Data is exposed to a View as properties on a ViewModel, and logic that can be invoked from a View is exposed as commands. ViewModels inherit from the MvxViewModel base class.

Commands

Commands are used in ViewModels to expose logic that can be invoked from the View in response to user interactions. The command architecture is based on the ICommand interface used in a number of Microsoft frameworks such as Windows Presentation Foundation (WPF) and Silverlight. MvvmCross provides IMvxCommand, which is an extension of ICommand, along with an implementation named MvxCommand. Commands are generally defined as properties on a ViewModel.
For example:

    public IMvxCommand ParkSelected { get; protected set; }

Each command has an action method defined, which implements the logic to be invoked:

    protected void ParkSelectedExec(NationalPark park)
    {
        // logic goes here
    }

The commands must be initialized and the corresponding action method assigned:

    ParkSelected =
        new MvxCommand<NationalPark>(ParkSelectedExec);

Data binding

Data binding facilitates communication between the View and the ViewModel by establishing a two-way link that allows data to be exchanged. The data binding capabilities provided by MvvmCross are based on capabilities found in a number of Microsoft XAML-based UI frameworks such as WPF and Silverlight. The basic idea is that you would like to bind a property of a UI control, such as the Text property of an EditText control in an Android app, to a property of a data object, such as the Description property of NationalPark. The following diagram depicts this scenario:

The binding modes

There are four different binding modes that can be used for data binding:

- OneWay binding: This mode tells the data binding framework to transfer values from the ViewModel to the View and transfer any updates to properties on the ViewModel to their bound View property.
- OneWayToSource binding: This mode tells the data binding framework to transfer values from the View to the ViewModel and transfer updates to View properties to their bound ViewModel property.
- TwoWay binding: This mode tells the data binding framework to transfer values in both directions between the ViewModel and View, and updates on either object will cause the other to be updated. This binding mode is useful when values are being edited.
- OneTime binding: This mode tells the data binding framework to transfer values from ViewModel to View when the binding is established; in this mode, updates to ViewModel properties are not monitored by the View.
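MvvmCross builds these modes on top of the INotifyPropertyChanged interface described next. As an illustration only, the sketch below simulates OneWay transfer in plain C#: a source object raises PropertyChanged, and a hypothetical OneWayBinder pushes each new value into a target setter. The Park and OneWayBinder names are invented for this example and are not part of MvvmCross:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Minimal observable source, in the style of the NationalPark sample
public class Park : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    string _description;
    public string Description
    {
        get { return _description; }
        set
        {
            if (string.Equals(value, _description, StringComparison.Ordinal))
                return; // nothing to do - the value hasn't changed
            _description = value;
            OnPropertyChanged();
        }
    }

    void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}

// Hypothetical one-way binder: performs the initial ViewModel-to-View
// transfer, then pushes every subsequent change of the watched property.
public static class OneWayBinder
{
    public static void Bind<T>(INotifyPropertyChanged source,
        string propertyName, Func<T> getValue, Action<T> setTarget)
    {
        setTarget(getValue()); // initial transfer
        source.PropertyChanged += (s, e) =>
        {
            if (e.PropertyName == propertyName)
                setTarget(getValue()); // subsequent updates
        };
    }
}
```

For example, after binding a park's Description to a label setter, assigning park.Description updates the label automatically; a real binding framework adds change tracking in the other direction for the OneWayToSource and TwoWay modes.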
The INotifyPropertyChanged interface

The INotifyPropertyChanged interface is an integral part of making data binding work effectively; it acts as a contract between the source object and the target object. As the name implies, it defines a contract that allows the source object to notify the target object when data has changed, thus allowing the target to take any necessary actions, such as refreshing its display. The interface consists of a single event, the PropertyChanged event, that the target object can subscribe to and that is triggered by the source if a property changes. The following sample demonstrates how to implement INotifyPropertyChanged:

    public class NationalPark : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler
            PropertyChanged;

        string _name;
        public string Name
        {
            get { return _name; }
            set
            {
                if (value.Equals(_name,
                    StringComparison.Ordinal))
                {
                    // Nothing to do - the value hasn't changed
                    return;
                }
                _name = value;
                OnPropertyChanged();
            }
        }
        . . .
        void OnPropertyChanged(
            [CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this,
                    new PropertyChangedEventArgs(propertyName));
            }
        }
    }

Binding specifications

Bindings can be specified in a couple of ways. For Android apps, bindings can be specified in layout files. The following example demonstrates how to bind the Text property of a TextView instance to the Description property of a NationalPark instance:

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/descrTextView"
        local:MvxBind="Text Park.Description" />

For iOS, binding must be accomplished using the binding API; CreateBinding() is a method that can be found on MvxViewController.
The following example demonstrates how to bind the Description property to a UILabel instance:

    this.CreateBinding (this.descriptionLabel).
        To ((DetailViewModel vm) => vm.Park.Description).
        Apply ();

Navigating between ViewModels

Navigating between various screens within an app is an important capability. Within an MvvmCross app, this is implemented at the ViewModel level so that navigation logic can be reused. MvvmCross supports navigation between ViewModels through use of the ShowViewModel<T>() method inherited from MvxNavigatingObject, which is the base class for MvxViewModel. The following example demonstrates how to navigate to DetailViewModel:

    ShowViewModel<DetailViewModel>();

Passing parameters

In many situations, there is a need to pass information to the destination ViewModel. MvvmCross provides a number of ways to accomplish this. The primary method is to create a class that contains simple public properties and pass an instance of the class into ShowViewModel<T>(). The following example demonstrates how to define and use a parameters class during navigation:

    public class DetailParams
    {
        public int ParkId { get; set; }
    }

    // using the parameters class
    ShowViewModel<DetailViewModel>(
        new DetailParams() { ParkId = 0 });

To receive and use parameters, the destination ViewModel implements an Init() method that accepts an instance of the parameters class:

    public class DetailViewModel : MvxViewModel
    {
        . . .
        public void Init(DetailParams parameters)
        {
            // use the parameters here . . .
        }
    }

Solution/project organization

Each MvvmCross solution will have a single core PCL project that houses the reusable code, and a series of platform-specific projects that contain the various apps. The following diagram depicts the general structure:

The startup process

MvvmCross apps generally follow a standard startup sequence that is initiated by platform-specific code within each app.
There are several classes that collaborate to accomplish the startup; some of these classes reside in the core project and some reside in the platform-specific projects. The following sections describe the responsibilities of each of the classes involved.

App.cs

The core project has an App class that inherits from MvxApplication. The App class contains an override of the Initialize() method so that, at a minimum, it can register the first ViewModel that should be presented when the app starts:

    RegisterAppStart<ViewModels.MasterViewModel>();

Setup.cs

The Android and iOS projects each have a Setup class that is responsible for creating the App object from the core project during startup. This is accomplished by overriding the CreateApp() method:

    protected override IMvxApplication CreateApp()
    {
        return new Core.App();
    }

For Android apps, Setup inherits from MvxAndroidSetup. For iOS apps, Setup inherits from MvxTouchSetup.

The Android startup

Android apps are kicked off using a special splash screen Activity that calls the Setup class and initiates the MvvmCross startup process. This is all done automatically for you; all you need to do is include the splash screen definition and make sure it is marked as the launch activity.
The definition is as follows:

    [Activity(
        Label = "NationalParks.Droid",
        MainLauncher = true,
        Icon = "@drawable/icon",
        Theme = "@style/Theme.Splash",
        NoHistory = true,
        ScreenOrientation = ScreenOrientation.Portrait)]
    public class SplashScreen : MvxSplashScreenActivity
    {
        public SplashScreen() : base(Resource.Layout.SplashScreen)
        {
        }
    }

The iOS startup

The iOS app startup is slightly less automated and is initiated from within the FinishedLaunching() method of AppDelegate:

    public override bool FinishedLaunching (
        UIApplication app, NSDictionary options)
    {
        _window = new UIWindow (UIScreen.MainScreen.Bounds);

        var setup = new Setup(this, _window);
        setup.Initialize();

        var startup = Mvx.Resolve<IMvxAppStart>();
        startup.Start();

        _window.MakeKeyAndVisible ();

        return true;
    }

Creating NationalParks.MvvmCross

Now that we have basic knowledge of the MvvmCross framework, let's put that knowledge to work and convert the NationalParks app to leverage the capabilities we just learned.

Creating the MvvmCross core project

We will start by creating the core project. This project will contain all the code that will be shared between the iOS and Android apps, primarily in the form of ViewModels. The core project will be built as a Portable Class Library. To create NationalParks.Core, perform the following steps:

1. From the main menu, navigate to File | New Solution.
2. From the New Solution dialog box, navigate to C# | Portable Library, enter NationalParks.Core in the project Name field, enter NationalParks.MvvmCross in the Solution field, and click on OK.
3. Add the MvvmCross starter package to the project from NuGet: select the NationalParks.Core project and navigate to Project | Add Packages from the main menu.
4. Enter MvvmCross starter in the search field.
5. Select the MvvmCross – Hot Tuna Starter Pack entry and click on Add Package.
A number of things were added to NationalParks.Core as a result of adding the package:

- A packages.config file, which contains a list of libraries (dlls) associated with the MvvmCross starter kit package. These entries are links to the actual libraries in the Packages folder of the overall solution.
- A ViewModels folder with a sample ViewModel named FirstViewModel.
- An App class in App.cs, which contains an Initialize() method that starts the MvvmCross app by calling RegisterAppStart() to start FirstViewModel. We will eventually change this to start the MasterViewModel class, which will be associated with a View that lists national parks.

Creating the MvvmCross Android app

The next step is to create an Android app project in the same solution. To create NationalParks.Droid, complete the following steps:

1. Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project.
2. From the New Project dialog box, navigate to C# | Android | Android Application, enter NationalParks.Droid in the Name field, and click on OK.
3. Add the MvvmCross starter kit package to the new project by selecting NationalParks.Droid and navigating to Project | Add Packages from the main menu.

A number of things were added to NationalParks.Droid as a result of adding the package:

- packages.config: This file contains a list of libraries (dlls) associated with the MvvmCross starter kit package. These entries are links to the actual libraries in the Packages folder of the overall solution, which contains the downloaded libraries.
- FirstView: This class, present in the Views folder, corresponds to the FirstViewModel that was created in NationalParks.Core.
- FirstView: This layout, present in Resources/layout, is used by the FirstView activity. This is a traditional Android layout file, with the exception that it contains binding declarations in the EditView and TextView elements.
- Setup: This class inherits from MvxAndroidSetup. It is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart().
- SplashScreen: This class inherits from MvxSplashScreenActivity. The SplashScreen class is marked as the main launcher activity and thus initializes the MvvmCross app with a call to Setup.Initialize().

4. Add a reference to NationalParks.Core by selecting the References folder, right-clicking on it, selecting Edit References, selecting the Projects tab, checking NationalParks.Core, and clicking on OK.
5. Remove MainActivity.cs, as it is no longer needed and will create a build error: it is marked as the main launcher, as is the new SplashScreen class. Also remove the corresponding Resources/layout/main.axml layout file.
6. Run the app.

The app will present FirstViewModel, which is linked to the corresponding FirstView instance; an EditView and a TextView present the same Hello MvvmCross text. As you edit the text in the EditView, the TextView is automatically updated by means of data binding. The following screenshot depicts what you should see:

Reusing NationalParks.PortableData and NationalParks.IO

Before we start creating the Views and ViewModels for our app, we first need to bring in some code from our previous efforts that can be used to maintain parks. For this, we will simply reuse the NationalParksData singleton and the FileHandler classes that were created previously. To reuse the NationalParksData singleton and FileHandler classes, complete the following steps:

1. Copy NationalParks.PortableData and NationalParks.IO from the solution created in Chapter 6, The Sharing Game, in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials) to the NationalParks.MvvmCross solution folder.
2. Add a reference to NationalParks.PortableData in the NationalParks.Droid project.
3. Create a folder named NationalParks.IO in the NationalParks.Droid project and add a link to FileHandler.cs from the NationalParks.IO project. Recall that the FileHandler class cannot be contained in the Portable Class Library because it uses file IO APIs that cannot be referenced from a Portable Class Library.
4. Compile the project. It should compile cleanly now.

Implementing the INotifyPropertyChanged interface

We will be using data binding to bind UI controls to the NationalPark object, and thus we need to implement the INotifyPropertyChanged interface. This ensures that changes made to properties of a park are reported to the appropriate UI controls. To implement INotifyPropertyChanged, complete the following steps:

1. Open NationalPark.cs in the NationalParks.PortableData project.
2. Specify that the NationalPark class implements the INotifyPropertyChanged interface. Select the INotifyPropertyChanged interface, right-click on it, navigate to Refactor | Implement interface, and press Enter. Enter the following code snippet:

    public class NationalPark : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler
            PropertyChanged;
        . . .
    }

3. Add an OnPropertyChanged() method that can be called from each property setter:

    void OnPropertyChanged(
        [CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }

4. Update each property definition to call OnPropertyChanged() from its setter, in the same way as depicted here for the Name property:

    string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (value.Equals (_name, StringComparison.Ordinal))
            {
                // Nothing to do - the value hasn't changed
                return;
            }
            _name = value;
            OnPropertyChanged();
        }
    }

5. Compile the project. It should compile cleanly.

We are now ready to use the NationalParksData singleton in our new project, and it supports data binding.
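The compare/assign/notify boilerplate repeated in every setter can be factored out. This is a common .NET pattern, not part of the book's NationalPark code; the ObservableBase and NationalParkLite names below are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Optional refactoring: a base class with a SetProperty helper that
// collapses the compare/assign/notify sequence each setter repeats.
public abstract class ObservableBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected bool SetProperty<T>(ref T field, T value,
        [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value))
            return false; // nothing to do - the value hasn't changed
        field = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        return true;
    }
}

// Each property setter then shrinks to a single SetProperty call
public class NationalParkLite : ObservableBase
{
    string _name;
    public string Name
    {
        get { return _name; }
        set { SetProperty(ref _name, value); }
    }
}
```

The [CallerMemberName] attribute fills in the property name at the call site, so the setter never has to spell out "Name" as a string, and unchanged values raise no event.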
Implementing the Android user interface

Now we are ready to create the Views and ViewModels required for our app. The app we are creating will follow this flow:

- A master list view to view national parks
- A detail view to view details of a specific park
- An edit view to edit a new or previously existing park

The process for creating Views and ViewModels in an Android app generally consists of three steps:

1. Create a ViewModel in the core project with the data and event handlers (commands) required to support the View.
2. Create an Android layout with visual elements and data binding specifications.
3. Create an Android activity, which corresponds to the ViewModel and displays the layout.

In our case, this process will be slightly different because we will reuse some of our previous work, specifically the layout files and the menu definitions. To reuse layout files and menu definitions, perform the following steps:

1. Copy Master.axml, Detail.axml, and Edit.axml from the Resources/layout folder of the solution created in Chapter 5, Developing Your First Android App with Xamarin.Android, in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials) to the Resources/layout folder in the NationalParks.Droid project, and add them to the project by selecting the layout folder and navigating to Add | Add Files.
2. Copy MasterMenu.xml, DetailMenu.xml, and EditMenu.xml from the Resources/menu folder of the same solution to the Resources/menu folder in the NationalParks.Droid project, and add them to the project by selecting the menu folder and navigating to Add | Add Files.

Implementing the master list view

We are now ready to implement the first of our View/ViewModel combinations, which is the master list view.
Creating MasterViewModel

The first step is to create a ViewModel and add a property that will provide data to the list view that displays national parks, along with some initialization code. To create MasterViewModel, complete the following steps:

1. Select the ViewModels folder in NationalParks.Core, right-click on it, and navigate to Add | New File.
2. In the New File dialog box, navigate to General | Empty Class, enter MasterViewModel in the Name field, and click on New.
3. Modify the class definition so that MasterViewModel inherits from MvxViewModel; you will also need to add a few using directives:

    . . .
    using Cirrious.CrossCore.Platform;
    using Cirrious.MvvmCross.ViewModels;
    . . .
    namespace NationalParks.Core.ViewModels
    {
        public class MasterViewModel : MvxViewModel
        {
            . . .
        }
    }

4. Add a property that is a list of NationalPark elements to MasterViewModel. This property will later be data-bound to a list view:

    private List<NationalPark> _parks;
    public List<NationalPark> Parks
    {
        get { return _parks; }
        set
        {
            _parks = value;
            RaisePropertyChanged(() => Parks);
        }
    }

5. Override the Start() method on MasterViewModel to load the _parks collection with data from the NationalParksData singleton. You will need to add a using directive for the NationalParks.PortableData namespace:

    . . .
    using NationalParks.PortableData;
    . . .
    public async override void Start ()
    {
        base.Start ();
        await NationalParksData.Instance.Load ();
        Parks = new List<NationalPark> (
            NationalParksData.Instance.Parks);
    }

6. We now need to modify the app startup sequence so that MasterViewModel is the first ViewModel started. Open App.cs in NationalParks.Core and change the call to RegisterAppStart() to reference MasterViewModel:

    RegisterAppStart<ViewModels.MasterViewModel>();

Updating the Master.axml layout

Update Master.axml so that it can leverage the data binding capabilities provided by MvvmCross.
To update Master.axml, complete the following steps:

1. Open Master.axml and add a namespace definition at the top of the XML to include the NationalParks.Droid namespace. This namespace definition is required in order to allow Android to resolve the MvvmCross-specific elements that will be specified.
2. Change the ListView element to a Mvx.MvxListView element:

    <Mvx.MvxListView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/parksListView" />

3. Add a data binding specification to the MvxListView element, binding the ItemsSource property of the list view to the Parks property of MasterViewModel:

        . . .
        android:id="@+id/parksListView"
        local:MvxBind="ItemsSource Parks" />

4. Add a list item template attribute to the element definition. This layout controls the content of each item that will be displayed in the list view:

    local:MvxItemTemplate="@layout/nationalparkitem"

5. Create the NationalParkItem layout and provide TextView elements to display both the name and description of a park:

    <LinearLayout
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content">
        <TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:textSize="40sp" />
        <TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:textSize="20sp" />
    </LinearLayout>

6. Add data binding specifications to each of the TextView elements:

    . . .
            local:MvxBind="Text Name" />
    . . .
            local:MvxBind="Text Description" />
    . . .

Note that in this case, the context for data binding is an instance of an item in the collection bound to MvxListView; in this example, an instance of NationalPark.

Creating the MasterView activity

Next, create MasterView, which is an MvxActivity instance that corresponds to MasterViewModel.
To create MasterView, complete the following steps: Select the Views folder in NationalParks.Droid, right-click on it, and navigate to Add | New File. In the New File dialog, navigate to Android | Activity, enter MasterView in the Name field, and click on New. Modify the class specification so that it inherits from MvxActivity; you will also need to add a few using directives as follows: using Cirrious.MvvmCross.Droid.Views; using NationalParks.Core.ViewModels; . . . namespace NationalParks.Droid.Views {    [Activity(Label = "Parks")]    public class MasterView : MvxActivity    {        . . .    } } Open Setup.cs and add code to initialize the file handler and path for the NationalParksData singleton to the CreateApp() method, as follows: protected override IMvxApplication CreateApp() {    NationalParksData.Instance.FileHandler =        new FileHandler ();    NationalParksData.Instance.DataDir =        System.Environment.GetFolderPath(          System.Environment.SpecialFolder.MyDocuments);    return new Core.App(); } Compile and run the app; you will need to copy the NationalParks.json file to the device or emulator using the Android Device Monitor. All the parks in NationalParks.json should be displayed. Implementing the detail view Now that we have the master list view displaying national parks, we can focus on creating the detail view. We will follow the same steps for the detail view as the ones we just completed for the master view. Creating DetailViewModel We start creating DetailViewModel by using the following steps: Following the same procedure as the one that was used to create MasterViewModel, create a new ViewModel named DetailViewModel in the ViewModels folder of NationalParks.Core.
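As an alternative to the Android Device Monitor, the data file can also be pushed from the command line with adb. The following is a minimal sketch, assuming a debuggable build; the package name (NationalParks.Droid) and the on-device paths are assumptions that you should adjust to match your project, since SpecialFolder.MyDocuments resolves to the app's private files directory on Android:

```shell
# Hypothetical sketch: push NationalParks.json to a world-readable staging
# location, then copy it into the app's private files directory. run-as
# only works against debuggable builds, and the package name below is an
# assumption -- substitute your app's actual package identifier.
adb push NationalParks.json /data/local/tmp/NationalParks.json
adb shell run-as NationalParks.Droid sh -c \
  'cp /data/local/tmp/NationalParks.json files/NationalParks.json'
```

If the copy succeeds, relaunching the app should pick up the file without any further steps.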
Add a NationalPark property to support data binding for the view controls, as follows: protected NationalPark _park; public NationalPark Park {    get { return _park; }    set { _park = value;          RaisePropertyChanged(() => Park);      } } Create a Parameters class that can be used to pass a park ID for the park that should be displayed. It's convenient to create this class within the class definition of the ViewModel that the parameters are for: public class DetailViewModel : MvxViewModel {    public class Parameters    {        public string ParkId { get; set; }    }    . . . Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParksData: public void Init(Parameters parameters) {    Park = NationalParksData.Instance.Parks.        FirstOrDefault(x => x.Id == parameters.ParkId); } Updating the Detail.axml layout Next, we will update the layout file. The main changes that need to be made are to add data binding specifications to the layout file. To update the Detail.axml layout, perform the following steps: Open Detail.axml and add the project namespace to the XML file: Add data binding specifications to each of the TextView elements that correspond to a national park property, as demonstrated for the park name: <TextView    android:layout_width="match_parent"    android:layout_height="wrap_content"    android:id="@+id/nameTextView"    local:MvxBind="Text Park.Name" /> Creating the DetailView activity Now, create the MvxActivity instance that will work with DetailViewModel. To create DetailView, perform the following steps: Following the same procedure as the one that was used to create MasterView, create a new view named DetailView in the Views folder of NationalParks.Droid. Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that our menus will be accessible.
Copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials). Comment out the section in OnOptionsItemSelected() related to the Edit action for now; we will fill that in once the edit view is completed. Adding navigation The last step is to add navigation so that when an item is clicked on in MvxListView on MasterView, the park is displayed in the detail view. We will accomplish this using a command property and data binding. To add navigation, perform the following steps: Open MasterViewModel and add an IMvxCommand property; this will be used to handle a park that is being selected: protected IMvxCommand ParkSelected { get; protected set; } Create an Action delegate that will be called when the ParkSelected command is executed, as follows: protected void ParkSelectedExec(NationalPark park) {    ShowViewModel<DetailViewModel> (        new DetailViewModel.Parameters ()            { ParkId = park.Id }); } Initialize the command property in the constructor of MasterViewModel: ParkSelected =    new MvxCommand<NationalPark> (ParkSelectedExec); Now, for the last step, add a data binding specification to MvxListView in Master.axml to bind the ItemClick event to the ParkSelected command on MasterViewModel, which we just created: local:MvxBind="ItemsSource Parks; ItemClick ParkSelected" Compile and run the app. Clicking on a park in the list view should now navigate to the detail view, displaying the selected park. Implementing the edit view We are now almost experts at implementing new Views and ViewModels. One last View to go is the edit view. Creating EditViewModel Like we did previously, we start with the ViewModel.
To create EditViewModel, complete the following steps: Following the same process that was previously used in this article to create DetailViewModel, add a data binding property and create a Parameters class for navigation. Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParksData in the case of editing an existing park or create a new instance if the user has chosen the New action. Inspect the parameters passed in to determine what the intent is: public void Init(Parameters parameters) {    if (string.IsNullOrEmpty (parameters.ParkId))        Park = new NationalPark ();    else        Park =            NationalParksData.Instance.            Parks.FirstOrDefault(            x => x.Id == parameters.ParkId); } Updating the Edit.axml layout Update Edit.axml to provide data binding specifications. To update the Edit.axml layout, you first need to open Edit.axml and add the project namespace to the XML file. Then, add the data binding specifications to each of the EditText elements that correspond to a national park property. Creating the EditView activity Create a new MvxActivity instance named EditView that will work with EditViewModel. To create EditView, perform the following steps: Following the same procedure as the one that was used to create DetailView, create a new View named EditView in the Views folder of NationalParks.Droid. Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that the Done action will be accessible from the ActionBar. You can copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials). Change the implementation of Done to call the Done command on EditViewModel. Adding navigation Add navigation to two places: when New (+) is clicked from MasterView and when Edit is clicked in DetailView.
Let's start with MasterView. To add navigation from MasterViewModel, complete the following steps: Open MasterViewModel.cs and add a NewParkClicked command property along with the handler for the command. Be sure to initialize the command in the constructor, as follows: protected IMvxCommand NewParkClicked { get; set; } protected void NewParkClickedExec() { ShowViewModel<EditViewModel> (); } Note that we do not pass in a parameter class into ShowViewModel(). This will cause a default instance to be created and passed in, which means that ParkId will be null. We will use this as a way to determine whether a new park should be created. Now, it's time to hook the NewParkClicked command up to the actionNew menu item. We do not have a way to accomplish this using data binding, so we will resort to a more traditional approach—we will use the OnOptionsItemSelected() method. Add logic to invoke the Execute() method on NewParkClicked, as follows: case Resource.Id.actionNew:    ((MasterViewModel)ViewModel).        NewParkClicked.Execute ();    return true; To add navigation from DetailViewModel, complete the following steps: Open DetailViewModel.cs and add an EditPark command property along with the handler for the command. Be sure to initialize the command in the constructor, as shown in the following code snippet: protected IMvxCommand EditPark { get; protected set; } protected void EditParkHandler() {    ShowViewModel<EditViewModel> (        new EditViewModel.Parameters ()            { ParkId = _park.Id }); } Note that an instance of the Parameters class is created, initialized, and passed into the ShowViewModel() method. This instance will in turn be passed into the Init() method on EditViewModel.
Initialize the command property in the constructor of DetailViewModel, as follows: EditPark =    new MvxCommand (EditParkHandler); Now, update the OnOptionsItemSelected() method in DetailView to invoke the DetailViewModel.EditPark command when the Edit action is selected: case Resource.Id.actionEdit:    ((DetailViewModel)ViewModel).EditPark.Execute ();    return true; Compile and run NationalParks.Droid. You should now have a fully functional app that has the ability to create new parks and edit the existing parks. Changes made to EditView should automatically be reflected in MasterView and DetailView. Creating the MvvmCross iOS app The process of creating the Android app with MvvmCross provides a solid understanding of how the overall architecture works. Creating the iOS solution should be much easier for two reasons: first, we understand how to interact with MvvmCross and second, all the logic we have placed in NationalParks.Core is reusable, so that we just need to create the View portion of the app and the startup code. To create NationalParks.iOS, complete the following steps: Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog, navigate to C# | iOS | iPhone | Single View Application, enter NationalParks.iOS in the Name field, and click on OK. Add the MvvmCross starter kit package to the new project by selecting NationalParks.iOS and navigating to Project | Add Packages from the main menu. A number of things were added to NationalParks.iOS as a result of adding the package. They are as follows: packages.config: This file contains a list of libraries associated with the MvvmCross starter kit package. These entries are links to an actual library in the Packages folder of the overall solution, which contains the actual downloaded libraries. FirstView: This class is placed in the Views folder, which corresponds to the FirstViewModel instance created in NationalParks.Core.
Setup: This class inherits from MvxTouchSetup. This class is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart(). AppDelegate.cs.txt: This file contains the sample startup code, which should be placed in the actual AppDelegate.cs file. Implementing the iOS user interface We are now ready to create the user interface for the iOS app. The good news is that we already have all the ViewModels implemented, so we can simply reuse them. The bad news is that we cannot easily reuse the storyboards from our previous work; MvvmCross apps generally use XIB files. One of the reasons for this is that storyboards are intended to provide navigation capabilities and an MvvmCross app delegates that responsibility to ViewModel and presenter. It is possible to use storyboards in combination with a custom presenter, but the remainder of this article will focus on using XIB files, as this is the more common use. The screen layouts are depicted in the following screenshot: We are now ready to get started. Implementing the master view The first view we will work on is the master view. To implement the master view, complete the following steps: Create a new ViewController class named MasterView by right-clicking on the Views folder of NationalParks.iOS and navigating to Add | New File | iOS | iPhone View Controller. Open MasterView.xib and arrange controls as seen in the screen layouts. Add outlets for each of the edit controls. Open MasterView.cs and add the following boilerplate logic to deal with constraints on iOS 7, as follows: // ios7 layout if (RespondsToSelector(new    Selector("edgesForExtendedLayout")))    EdgesForExtendedLayout = UIRectEdge.None; Within the ViewDidLoad() method, add logic to create MvxStandardTableViewSource for parksTableView: MvxStandardTableViewSource _source; . . .
_source = new MvxStandardTableViewSource(    parksTableView,    UITableViewCellStyle.Subtitle,    new NSString("cell"),    "TitleText Name; DetailText Description",      0); parksTableView.Source = _source; Note that the example uses the Subtitle cell style and binds the national park name and description to the title and subtitle. Add the binding logic to the ViewDidLoad() method. In the previous step, we provided specifications for properties of UITableViewCell to properties in the binding context. In this step, we need to set the binding context for the Parks property on MasterViewModel: var set = this.CreateBindingSet<MasterView,    MasterViewModel>(); set.Bind (_source).To (vm => vm.Parks); set.Apply(); Compile and run the app. All the parks in NationalParks.json should be displayed. Implementing the detail view Now, implement the detail view using the following steps: Create a new ViewController instance named DetailView. Open DetailView.xib and arrange controls as shown in the screen layouts. Add outlets for each of the edit controls. Open DetailView.cs and add the binding logic to the ViewDidLoad() method: this.CreateBinding (this.nameLabel).    To ((DetailViewModel vm) => vm.Park.Name).Apply (); this.CreateBinding (this.descriptionLabel).    To ((DetailViewModel vm) => vm.Park.Description).        Apply (); this.CreateBinding (this.stateLabel).    To ((DetailViewModel vm) => vm.Park.State).Apply (); this.CreateBinding (this.countryLabel).    To ((DetailViewModel vm) => vm.Park.Country).        Apply (); this.CreateBinding (this.latLabel).    To ((DetailViewModel vm) => vm.Park.Latitude).        Apply (); this.CreateBinding (this.lonLabel).    To ((DetailViewModel vm) => vm.Park.Longitude).        Apply (); Adding navigation Add navigation from the master view so that when a park is selected, the detail view is displayed, showing the park.
To add navigation, complete the following steps: Open MasterView.cs, create an event handler named ParkSelected, and assign it to the SelectedItemChanged event on MvxStandardTableViewSource, which was created in the ViewDidLoad() method: . . .    _source.SelectedItemChanged += ParkSelected; . . . protected void ParkSelected(object sender, EventArgs e) {    . . . } Within the event handler, invoke the ParkSelected command on MasterViewModel, passing in the selected park: ((MasterViewModel)ViewModel).ParkSelected.Execute (        (NationalPark)_source.SelectedItem); Compile and run NationalParks.iOS. Selecting a park in the list view should now navigate you to the detail view, displaying the selected park. Implementing the edit view We now need to implement the last of the Views for the iOS app, which is the edit view. To implement the edit view, complete the following steps: Create a new ViewController instance named EditView. Open EditView.xib and arrange controls as in the layout screenshots. Add outlets for each of the edit controls. Open EditView.cs and add the data binding logic to the ViewDidLoad() method. You should use the same approach to data binding as the approach used for the details view. Add an event handler named DoneClicked, and within the event handler, invoke the Done command on EditViewModel: protected void DoneClicked (object sender, EventArgs e) {    ((EditViewModel)ViewModel).Done.Execute(); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for EditView, and assign the DoneClicked event handler to it, as follows: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Done,        DoneClicked), true); Adding navigation Add navigation to two places: when New (+) is clicked from the master view and when Edit is clicked on in the detail view. Let's start with the master view. To add navigation to the master view, perform the following steps: Open MasterView.cs and add an event handler named NewParkClicked.
In the event handler, invoke the NewParkClicked command on MasterViewModel: protected void NewParkClicked(object sender,        EventArgs e) {    ((MasterViewModel)ViewModel).            NewParkClicked.Execute (); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for MasterView and assign the NewParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Add,        NewParkClicked), true); To add navigation to the details view, perform the following steps: Open DetailView.cs and add an event handler named EditParkClicked. In the event handler, invoke the EditPark command on DetailViewModel: protected void EditParkClicked (object sender,    EventArgs e) {    ((DetailViewModel)ViewModel).EditPark.Execute (); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for DetailView, and assign the EditParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Edit,        EditParkClicked), true); Refreshing the master view list One last detail that needs to be taken care of is to refresh the UITableView control on MasterView when items have been changed on EditView. To refresh the master view list, perform the following steps: Open MasterView.cs and call ReloadData() on parksTableView within the ViewDidAppear() method of MasterView: public override void ViewDidAppear (bool animated) {    base.ViewDidAppear (animated);    parksTableView.ReloadData(); } Compile and run NationalParks.iOS. You should now have a fully functional app that has the ability to create new parks and edit existing parks. Changes made to EditView should automatically be reflected in MasterView and DetailView. Considering the pros and cons After completing our work, we now have the basis to make some fundamental observations. Let's start with the pros: MvvmCross definitely increases the amount of code that can be reused across platforms.
The ViewModels house the data required by the View, the logic required to obtain and transform the data in preparation for viewing, and the logic triggered by user interactions in the form of commands. In our sample app, the ViewModels were somewhat simple; however, the more complex the app, the more reuse will likely be gained. As MvvmCross relies on the use of each platform's native UI frameworks, each app has a native look and feel and we have a natural layer that implements platform-specific logic when required. The data binding capabilities of MvvmCross also eliminate a great deal of tedious code that would otherwise have to be written. All of these positives are not necessarily free; let's look at some cons: The first con is complexity; you have to learn another framework on top of Xamarin, Android, and iOS. In some ways, MvvmCross forces you to align the way your apps work across platforms to achieve the most reuse. As the presentation logic is contained in the ViewModels, the views are coerced into aligning with them. The more your UI deviates across platforms, the less likely it will be that you can actually reuse ViewModels. With these things in mind, I would definitely consider using MvvmCross for a cross-platform mobile project. Yes, you need to learn an additional framework and yes, you will likely have to align the way some of the apps are laid out, but I think MvvmCross provides enough value and flexibility to make these issues workable. I'm a big fan of reuse and MvvmCross definitely pushes reuse to the next level. Summary In this article, we reviewed the high-level concepts of MvvmCross and worked through a practical exercise in order to convert the national parks apps to use the MvvmCross framework and increase code reuse. Resources for Article: Further resources on this subject: Kendo UI DataViz – Advance Charting [article] The Kendo MVVM Framework [article] Sharing with MvvmCross [article]

article-image-evolution-hadoop
Packt
29 Dec 2014
12 min read
Save for later

Evolution of Hadoop

In this article by Sandeep Karanth, author of the book Mastering Hadoop, we will look at Hadoop's timeline, Hadoop 2.X, and Hadoop YARN. Hadoop's timeline The following figure gives a timeline view of the major releases and milestones of Apache Hadoop. The project has been around for 8 years, but the last 4 years have seen Hadoop make giant strides in big data processing. In January 2010, Google was awarded a patent for the MapReduce technology. This technology was licensed to the Apache Software Foundation 4 months later, a shot in the arm for Hadoop. With legal complications out of the way, enterprises—small, medium, and large—were ready to embrace Hadoop. Since then, Hadoop has come up with a number of major enhancements and releases. It has given rise to businesses selling Hadoop distributions, support, training, and other services. Hadoop 1.0 releases, referred to as 1.X in this book, saw the inception and evolution of Hadoop as a pure MapReduce job-processing framework. It has exceeded expectations, with wide adoption for massive data processing. The stable 1.X release at this point in time is 1.2.1, which includes features such as append and security. Hadoop 1.X tried to stay flexible by making changes, such as HDFS append, to support online systems such as HBase. Meanwhile, big data applications evolved in range beyond MapReduce computation models. The flexibility of Hadoop 1.X releases had been stretched; it was no longer possible to widen its net to cater to the variety of applications without architectural changes. Hadoop 2.0 releases, referred to as 2.X in this book, came into existence in 2013. This release family has major changes to widen the range of applications Hadoop can solve. These releases can even increase efficiencies and mileage derived from existing Hadoop clusters in enterprises. Clearly, Hadoop is moving fast beyond MapReduce to stay as the leader in massive scale data processing with the challenge of being backward compatible.
It is moving from being only a MapReduce-specific framework to becoming a generic cluster-computing and storage platform. Hadoop 2.X The extensive success of Hadoop 1.X in organizations also led to the understanding of its limitations, which are as follows: Hadoop gives unprecedented access to cluster computational resources to every individual in an organization. The MapReduce programming model is simple and supports a develop-once, deploy-at-any-scale paradigm. This leads to users exploiting Hadoop for data processing jobs where MapReduce is not a good fit, for example, web servers being deployed in long-running map jobs. MapReduce is not known to be well suited to iterative algorithms. Hacks were developed to make Hadoop run iterative algorithms. These hacks posed severe challenges to cluster resource utilization and capacity planning. Hadoop 1.X has a centralized job flow control. Centralized systems are hard to scale as they are the single point of load lifting. JobTracker failure means that all the jobs in the system have to be restarted, exerting extreme pressure on a centralized component. Integration of Hadoop with other kinds of clusters is difficult with this model. The early releases in Hadoop 1.X had a single NameNode that stored all the metadata about the HDFS directories and files. The data on the entire cluster hinged on this single point of failure. Subsequent releases had a cold standby in the form of a secondary NameNode. The secondary NameNode periodically merged the edit logs and NameNode image files, bringing in two benefits. One, the primary NameNode startup time was reduced as the NameNode did not have to do the entire merge on startup. Two, the secondary NameNode acted as a replica that could minimize data loss on NameNode disasters. However, the secondary NameNode (secondary NameNode is not a backup node for NameNode) was still not a hot standby, leading to high failover and recovery times and affecting cluster availability.
Hadoop 1.X is mainly a Unix-based massive data processing framework. Native support on machines running Microsoft Windows Server is not possible. With Microsoft entering cloud computing and big data analytics in a big way, coupled with existing heavy Windows Server investments in the industry, it's very important for Hadoop to enter the Microsoft Windows landscape as well. Hadoop's success comes mainly from enterprise play. Adoption of Hadoop mainly comes from the availability of enterprise features. Though Hadoop 1.X tries to support some of them, such as security, there is a list of other features that are badly needed by the enterprise. Yet Another Resource Negotiator (YARN) In Hadoop 1.X, resource allocation and job execution were the responsibilities of JobTracker. Since the computing model was closely tied to the resources in the cluster, MapReduce was the only supported model. This tight coupling led to developers force-fitting other paradigms, leading to unintended use of MapReduce. The primary goal of YARN is to separate concerns relating to resource management and application execution. By separating these functions, other application paradigms can be added onboard a Hadoop computing cluster. Improvements in interoperability and support for diverse applications lead to efficient and effective utilization of resources. It integrates well with the existing infrastructure in an enterprise. Achieving loose coupling between resource management and job management should not be at the cost of loss in backward compatibility. For almost 6 years, Hadoop has been the leading software to crunch massive datasets in a parallel and distributed fashion. This means huge investments in development; testing and deployment were already in place. YARN maintains backward compatibility with Hadoop 1.X (hadoop-0.20.205+) APIs. An older MapReduce program can continue execution in YARN with no code changes. However, recompiling the older code is mandatory. 
Architecture overview The following figure lays out the architecture of YARN. YARN abstracts out resource management functions to a platform layer called ResourceManager (RM). There is a per-cluster RM that primarily keeps track of cluster resource usage and activity. It is also responsible for allocation of resources and resolving contentions among resource seekers in the cluster. RM uses a generalized resource model and is agnostic to application-specific resource needs. For example, RM need not know the resources corresponding to a single Map or Reduce slot. Planning and executing a single job is the responsibility of ApplicationMaster (AM). There is an AM instance per running application. For example, there is an AM for each MapReduce job. It has to request for resources from the RM, use them to execute the job, and work around failures, if any. The general cluster layout has RM running as a daemon on a dedicated machine with a global view of the cluster and its resources. Being a global entity, RM can ensure fairness depending on the resource utilization of the cluster resources. When requested for resources, RM allocates them dynamically as a node-specific bundle called a container. For example, 2 CPUs and 4 GB of RAM on a particular node can be specified as a container. Every node in the cluster runs a daemon called NodeManager (NM). RM uses NM as its node local assistant. NMs are used for container management functions, such as starting and releasing containers, tracking local resource usage, and fault reporting. NMs send heartbeats to RM. The RM view of the system is the aggregate of the views reported by each NM. Jobs are submitted directly to RMs. Based on resource availability, jobs are scheduled to run by RMs. The metadata of the jobs are stored in persistent storage to recover from RM crashes. When a job is scheduled, RM allocates a container for the AM of the job on a node in the cluster. AM then takes over orchestrating the specifics of the job. 
These specifics include requesting resources, managing task execution, optimizations, and handling tasks or job failures. AM can be written in any language, and different versions of AM can execute independently on a cluster. An AM resource request contains specifications about the locality and the kind of resource expected by it. RM puts in its best effort to satisfy AM's needs based on policies and availability of resources. When a container is available for use by AM, it can launch application-specific code in this container. The container is free to communicate with its AM. RM is agnostic to this communication. Storage layer enhancements A number of storage layer enhancements were undertaken in the Hadoop 2.X releases. The number one goal of the enhancements was to make Hadoop enterprise ready. High availability NameNode is a directory service for Hadoop and contains metadata pertaining to the files within cluster storage. Hadoop 1.X had a secondary Namenode, a cold standby that needed minutes to come up. Hadoop 2.X provides features to have a hot standby of NameNode. On the failure of an active NameNode, the standby can become the active Namenode in a matter of minutes. There is no data loss or loss of NameNode service availability. With hot standbys, automated failover becomes easier too. The key to keep the standby in a hot state is to keep its data as current as possible with respect to the active Namenode. This is achieved by reading the edit logs of the active NameNode and applying it onto itself with very low latency. The sharing of edit logs can be done using the following two methods: A shared NFS storage directory between the active and standby NameNodes: the active writes the logs to the shared location. The standby monitors the shared directory and pulls in the changes. A quorum of Journal Nodes: the active NameNode presents its edits to a subset of journal daemons that record this information. 
The standby node constantly monitors these journal daemons for updates and syncs the state with itself. The following figure shows the high availability architecture using a quorum of Journal Nodes. The data nodes themselves send block reports directly to both the active and standby NameNodes: Zookeeper or any other High Availability monitoring service can be used to track NameNode failures. With the assistance of Zookeeper, failover procedures to promote the hot standby as the active NameNode can be triggered. HDFS Federation Similar to what YARN did to Hadoop's computation layer, a more generalized storage model has been implemented in Hadoop 2.X. The block storage layer has been generalized and separated out from the filesystem layer. This separation has given an opening for other storage services to be integrated into a Hadoop cluster. Previously, HDFS and the block storage layer were tightly coupled. One use case that has come forth from this generalized storage model is HDFS Federation. Federation allows multiple HDFS namespaces to use the same underlying storage. Federated NameNodes provide isolation at the filesystem level. HDFS snapshots Snapshots are point-in-time, read-only images of the entire or a particular subset of a filesystem. Snapshots are taken for three general reasons: Protection against user errors Backup Disaster recovery Snapshotting is implemented only on NameNode. It does not involve copying data from the data nodes. It is a persistent copy of the block list and file size. The process of taking a snapshot is almost instantaneous and does not affect the performance of NameNode. Other enhancements There are a number of other enhancements in Hadoop 2.X, which are as follows: The wire protocol for RPCs within Hadoop is now based on Protocol Buffers. Previously, Java serialization via Writables was used. This improvement not only eases maintaining backward compatibility, but also aids in rolling the upgrades of different cluster components. 
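The snapshot workflow described above is driven from the HDFS command line. The following is a minimal sketch, assuming a running 2.X cluster; the directory path and snapshot name are illustrative:

```shell
# Allow snapshots on a directory (an administrator operation), then
# create a named snapshot of it.
hdfs dfsadmin -allowSnapshot /user/reports
hdfs dfs -createSnapshot /user/reports before-cleanup

# Snapshots appear under the read-only .snapshot directory; they can be
# listed, diffed against the current state ("."), and deleted without
# touching live data.
hdfs dfs -ls /user/reports/.snapshot
hdfs snapshotDiff /user/reports before-cleanup .
hdfs dfs -deleteSnapshot /user/reports before-cleanup
```

Because only the block list and file size are recorded, -createSnapshot returns almost instantly even on directories holding terabytes of data.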
- RPCs allow for client-side retries as well.
- HDFS in Hadoop 1.X was agnostic about the type of storage being used: mechanical and SSD drives were treated uniformly, and the user did not have any control over data placement. Hadoop 2.X releases in 2014 are aware of the type of storage and expose this information to applications as well. Applications can use this to optimize their data fetch and placement strategies.
- HDFS append support has been brought into Hadoop 2.X.
- HDFS access in Hadoop 1.X releases has been through HDFS clients. In Hadoop 2.X, support for NFSv3 has been brought into the NFS gateway component. Clients can now mount HDFS onto their compatible local filesystem, allowing them to download and upload files directly to and from HDFS. Appends to files are allowed, but random writes are not.
- A number of I/O improvements have been brought into Hadoop. For example, in Hadoop 1.X, clients collocated with data nodes had to read data via TCP sockets. However, with short-circuit local reads, clients can directly read off the data nodes. This particular interface also supports zero-copy reads.
- The CRC checksum that is calculated for reads and writes of data has been optimized using the Intel SSE4.2 CRC32 instruction.

Support enhancements

Hadoop is also widening its application net by supporting other platforms and frameworks. One dimension we saw was the onboarding of other computational models with YARN; another is the integration of other storage systems with the block storage layer. The other enhancements are as follows:

- Hadoop 2.X supports Microsoft Windows natively. This translates to a huge opportunity to penetrate the Microsoft Windows server land for massive data processing. This was made partially possible by the use of the highly portable Java programming language for Hadoop development. The other critical enhancement was the generalization of compute and storage management to include Microsoft Windows.
- As part of Platform-as-a-Service offerings, cloud vendors give out on-demand Hadoop as a service. OpenStack support in Hadoop 2.X makes it conducive for deployment in elastic and virtualized cloud environments.

Summary

In this article, we saw the evolution of Hadoop and some of its milestones and releases. We went into depth on Hadoop 2.X and the changes it brings into Hadoop. The key takeaways from this article are:

- In over 6 years of its existence, Hadoop has become the number one choice as a framework for massively parallel and distributed computing.
- The community has been shaping Hadoop to gear up for enterprise use. In the 1.X releases, HDFS append and security were the key features that made Hadoop enterprise-friendly.
- Hadoop's storage layer was enhanced in 2.X to separate the filesystem from the block storage service. This enables features such as supporting multiple namespaces and integration with other filesystems.
- 2.X shows improvements in Hadoop storage availability and snapshotting.

Resources for Article:

Further resources on this subject:
- Securing the Hadoop Ecosystem [article]
- Sizing and Configuring your Hadoop Cluster [article]
- HDFS and MapReduce [article]
Application Connectivity and Network Events

Packt
26 Dec 2014
10 min read
In this article by Kerri Shotts, author of PhoneGap for Enterprise, we will see how an app reacts to network changes and activities. In an increasingly connected world, mobile devices aren't always connected to the network. As such, an app needs to be sensitive to changes in the device's network connectivity. It also needs to be sensitive to the type of network (for example, cellular versus wired), not to mention being sensitive to the device the app itself is running on. Given all this, we will cover the following topics:

- Determining network connectivity
- Getting the current network type
- Detecting changes in connectivity
- Handling connectivity issues

(For more resources related to this topic, see here.)

Determining network connectivity

In a perfect world, we'd never have to worry whether the device was connected to the Internet or whether our backend was reachable. Of course, we don't live in that world, so we need to respond appropriately when the device's network connectivity changes.

What's critical to remember is that having a network connection in no way determines the reachability of a host. That is to say, it's entirely possible for a device to be connected to a Wi-Fi network or a mobile hotspot and yet be unable to contact your servers. This can happen for several reasons, any of which can prevent proper communication with your backend. In short, determining the network status and being sensitive to changes in that status really tells you only one thing: whether or not it is futile to attempt communication. After all, if the device isn't connected to any network, there's no reason to attempt communication over a nonexistent network. On the other hand, if a network is available, the only way to determine whether your hosts are reachable is to try to contact them.

The ability to determine the device's network connectivity and respond to changes in its status is not available in Cordova/PhoneGap by default.
You'll need to add a plugin before you can use this particular feature. You can install the plugin as follows:

    cordova plugin add org.apache.cordova.network-information

The plugin's complete documentation is available at https://github.com/apache/cordova-plugin-network-information/blob/master/doc/index.md.

Getting the current network type

Any time after the deviceready event fires, you can query the plugin for the status of the current network connection by querying navigator.connection.type:

    var networkType = navigator.connection.type;
    switch (networkType) {
        case Connection.UNKNOWN:
            console.log("Unknown connection.");
            break;
        case Connection.ETHERNET:
            console.log("Ethernet connection.");
            break;
        case Connection.WIFI:
            console.log("Wi-Fi connection.");
            break;
        case Connection.CELL_2G:
            console.log("Cellular (2G) connection.");
            break;
        case Connection.CELL_3G:
            console.log("Cellular (3G) connection.");
            break;
        case Connection.CELL_4G:
            console.log("Cellular (4G) connection.");
            break;
        case Connection.CELL:
            console.log("Cellular connection.");
            break;
        case Connection.NONE:
            console.log("No network connection.");
            break;
    }

If you executed the preceding code on a typical mobile device, you'd probably see some variation of the "Cellular connection" or "Wi-Fi connection" message. If your device was on Wi-Fi and you proceeded to disable it and rerun the app, the Wi-Fi notice would be replaced with the cellular connection notice. Now, if you put the device into airplane mode and rerun the app, you should see "No network connection."

Based on the available network type constants, it's clear that we can use this information in various ways:

- We can tell whether it makes sense to attempt a network request: if the type is Connection.NONE, there's no point in trying, as there's no network to service the request.
- We can tell whether we are on a wired network, a Wi-Fi network, or a cellular network.
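The two determinations above can be folded into a small helper. This is only a sketch: the function name and the returned policy object are invented for illustration, and the Connection constants are stubbed here so that the snippet runs standalone (inside a Cordova app, the plugin provides them).

```javascript
// Sketch of using the network type to drive request policy. Names are
// invented; Connection is stubbed so the snippet runs outside Cordova.
var Connection = { NONE: "none", CELL: "cell", CELL_2G: "2g",
                   CELL_3G: "3g", CELL_4G: "4g",
                   WIFI: "wifi", ETHERNET: "ethernet", UNKNOWN: "unknown" };

function requestPolicy(networkType) {
    if (networkType === Connection.NONE) {
        // No network at all: attempting a request is futile.
        return { attempt: false, limitBandwidth: false };
    }
    var cellular = [Connection.CELL, Connection.CELL_2G,
                    Connection.CELL_3G, Connection.CELL_4G];
    // On cellular, still attempt the request, but ask for lighter content.
    return { attempt: true,
             limitBandwidth: cellular.indexOf(networkType) !== -1 };
}

console.log(requestPolicy(Connection.NONE));
console.log(requestPolicy(Connection.CELL_3G));
```

In an app, you would call such a helper with navigator.connection.type just before issuing a request.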
Consider a streaming video app: such an app can permit full-quality video on a wired or Wi-Fi network, but fall back to a lower-quality video stream when running on a cellular connection.

Although tempting, there's one thing the earlier code does not tell us: the speed of the network. That is, we can't use the type of the network as a proxy for the available bandwidth, even though it feels like we can. After all, aren't Ethernet connections typically faster than Wi-Fi connections? And isn't a 4G cellular connection faster than a 2G connection? In ideal circumstances, you'd be right. Unfortunately, it's possible for a fast 4G cellular network to be very congested, resulting in poor throughput. Likewise, it is possible for an Ethernet connection to communicate over a noisy wire and traverse a heavily congested network, which can also slow throughput.

It's also important to recognize that although you can learn something about the network the device is connected to, you can't learn anything about the network conditions beyond that network. The device might indicate that it is attached to a Wi-Fi network, but this Wi-Fi network might actually be a mobile hotspot. It could be connected to a satellite link with high latency, or to a blazing-fast fiber network. As such, the only two things we can know for sure are whether it makes sense to attempt a request at all, and whether we need to limit bandwidth use when the device knows it is on a cellular connection. That's it. Any other use of this information is an abuse of the plugin and is likely to cause undesirable behavior.

Detecting changes in connectivity

Determining the type of network connection once does little good, as the device can lose the connection or join a new network at any time. This means that we need to respond properly to these events in order to provide a good user experience.

Do not rely on the following events being fired when your app starts up for the first time.
On some devices, it might take several seconds for the first event to fire, and in some cases (specifically, when testing in a simulator), the events might never fire.

There are two events our app needs to listen to: the online event and the offline event. Their names are indicative of their function, so chances are good you already know what they do. The online event is fired when the device connects to a network, assuming it wasn't connected to a network before. The offline event does the opposite: it is fired when the device loses its connection to a network, but only if the device was previously connected to one. This means that you can't depend on these events to detect changes in the type of the network: a move from a Wi-Fi network to a cellular network might not elicit any events at all.

In order to listen to these events, you can use the following code:

    document.addEventListener("online", handleOnlineEvent, false);
    document.addEventListener("offline", handleOfflineEvent, false);

The event listener doesn't receive any information, so you'll almost certainly want to check the network type when handling an online event. The offline event will always correspond to a Connection.NONE network type.

Having the ability to detect changes in the connectivity status means that our app can be more intelligent about how it handles network requests, but it doesn't tell us whether a request is guaranteed to succeed.

Handling connectivity issues

As the only way to know whether a network request might succeed is to actually attempt the request, we need to know how to properly handle the errors that might arise from such an attempt. Between the mobile and the middle tier, the following are the possible errors that you might encounter while connecting to a network:

- TimeoutError: This error is thrown when the XHR times out. (The default is 30 seconds for our wrapper, but if the XHR's timeout isn't otherwise set, it will attempt to wait forever.)
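As an aside, the timeout behavior just described can be sketched as a small promise wrapper. The names below (withTimeout and this stand-in TimeoutError) are invented for illustration and are not the book's actual XHR wrapper:

```javascript
// Sketch: reject a promise with a TimeoutError if it does not settle
// within `ms` milliseconds. Invented names; not the book's XHR wrapper.
function TimeoutError(message) {
    this.name = "TimeoutError";
    this.message = message || "request timed out";
}
TimeoutError.prototype = Object.create(Error.prototype);

function withTimeout(promise, ms) {
    return new Promise(function (resolve, reject) {
        var timer = setTimeout(function () {
            reject(new TimeoutError("no response within " + ms + "ms"));
        }, ms);
        promise.then(
            function (value) { clearTimeout(timer); resolve(value); },
            function (err) { clearTimeout(timer); reject(err); }
        );
    });
}

// A request that never settles times out:
withTimeout(new Promise(function () {}), 50)
    .catch(function (err) {
        console.log(err instanceof TimeoutError); // true
    });
```

A real wrapper would apply this around the underlying XHR so that the catch handler shown later can distinguish timeouts from other failures.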
- HTTPError: This error is thrown when the XHR completes and receives a response other than 200 OK. This can indicate any number of problems, but it does not indicate a network connectivity issue.
- JSONError: This error is thrown when the XHR completes, but the JSON response from the server cannot be parsed. Something is clearly wrong on the server, of course, but this does not indicate a connectivity issue.
- XHRError: This error is thrown when an error occurs while executing the XHR. This is definitely indicative of something going very wrong (not necessarily a connectivity issue, but there's a good chance).
- MaxRetryAttemptsReached: This error is thrown when the XHR wrapper has given up retrying the request. The wrapper automatically retries in the case of TimeoutError and XHRError.

In all the earlier cases, the catch method in the promise chain is called. At this point, you can attempt to determine the type of error in order to determine what to do next:

    function sendFailRequest() {
        XHR.send("GET", "http://www.really-bad-host-name.com/this/will/fail")
        .then(function (response) {
            console.log(response);
        })
        .catch(function (err) {
            if (err instanceof XHR.XHRError ||
                err instanceof XHR.TimeoutError ||
                err instanceof XHR.MaxRetryAttemptsReached) {
                if (navigator.connection.type === Connection.NONE) {
                    // we could try again once we have a network connection
                    var retryRequest = function () {
                        sendFailRequest();
                        APP.removeGlobalEventListener("networkOnline", retryRequest);
                    };
                    // wait for the network to come online; we'll cover this method in a moment
                    APP.addGlobalEventListener("networkOnline", retryRequest);
                } else {
                    // we have a connection, but can't get through;
                    // something's going on that we can't fix
                    alert("Notice: can't connect to the server.");
                }
            }
            if (err instanceof XHR.HTTPError) {
                switch (err.HTTPStatus) {
                    case 401: // unauthorized, log the user back in
                        break;
                    case 403: // forbidden, user doesn't have access
                        break;
                    case 404: // not found
                        break;
                    case 500: // internal server error
                        break;
                    default:
                        console.log("unhandled error: ", err.HTTPStatus);
                }
            }
            if (err instanceof XHR.JSONParseError) {
                console.log("Issue parsing XHR response from server.");
            }
        }).done();
    }
    sendFailRequest();

Once a connection error is encountered, it's largely up to you and the type of app you are building to determine what to do next, but there are several options to consider as your next course of action:

- Fail loudly and let the user know that their last action failed. It might not be terribly great for user experience, but it might be the only sensible thing to do.
- Check whether there is a network connection present; if not, hold on to the request until an online event is received and then send the request again. This makes sense only if the request you are sending is a request for data, not a request for changing data, as the data might have changed in the interim.

Summary

In this article you learnt how an app built using PhoneGap/Cordova reacts to changing network conditions, and how to handle the connectivity issues that you might encounter.

Resources for Article:

Further resources on this subject:
- Configuring the ChildBrowser plugin [article]
- Using Location Data with PhoneGap [article]
- Working with the sharing plugin [article]
Your 3D World

Packt
26 Dec 2014
16 min read
In this article by Ciro Cardoso, author of the book Mastering Lumion 3D, we will cover the following topics:

- The 3D models available
- Placing content
- Selecting different objects

This article is the intermediate point of our project because we will cover everything we need to know to fully master Lumion's models. The bullet points give a reasonable idea of what we will see, and by the end of this article, you will be able to think ahead and improve your workflow with what Lumion provides.

(For more resources related to this topic, see here.)

Lumion models – a quick overview

You have to keep in mind that different versions of Lumion dictate which models are accessible. There is a substantial difference between Lumion and Lumion Pro, but even if you don't have Lumion Pro, there are some places where we can get free and paid 3D models.

Different categories and what we can find

Let's have a look at what is available. For this, we have to open the Objects menu, which gives us access to eight libraries, although we only need five of them, as shown in the following screenshot:

Each button represents a library where we can find different categories. The following list gives an overview of what each library contains:

- The Nature library: Inside this library, we can find several species of trees from Africa, Europe, and the tropics, as well as grass, plants, flowers, cacti, and rocks.
- The Transport library: Here, we can find all forms of transport, from public transport to air balloons.
- The Indoor library: This is an important library that should be checked before modeling anything for interiors. We have assorted objects, decoration items, electronics, appliances, food and drink, kitchen tools, interior lighting, taps, chairs and sofas, cabinets, tables, and utilities.
- The People and animals library: Here, we have people from different ethnic groups, 2D people, and animals.
Please keep in mind that in this library, we have five types of objects: idle, walking, static, 2D cutout, and silhouettes.

- The Outdoor library: Here, we have elements to populate exterior scenes with objects found in normal daily life. Some of them can add interesting details that make a scene look more believable.

These are the libraries that we will use to start populating the scene, but we still have to point out some differences between them, because not every model is 3D, and not every model is static or idle. This knowledge helps us choose the model that is appropriate for your scene.

Idle, animated, and other 3D models

As mentioned earlier, there are different types of models available in Lumion. We have 3D and 2D models and silhouettes, but the best way to comprehend the difference is by placing them in your scene and seeing how they behave. Start by clicking on the People and Animals button to activate this library, and then select the Change object button to open the library, as shown in the following screenshot:

This button opens the library, where we can select the Men-3D tab and pick the first 3D model, called Man_African_0001_Idle. Select this model, or any 3D model with the Idle suffix, and back in Build mode, click the left mouse button to place the 3D model. Repeat the same steps for:

- Man_African_0001_Walk under the Men-3D tab
- Any model from the People – 2D – High Detail tab
- Any model from the People – 3D – Silhouettes tab
- Any model from the People – 2D – Silhouettes tab

The idea is to have something like this in your scene:

After placing these models, it is easy to understand the difference between each one. Perhaps the most significant aspect to point out is the different results we get by using an idle model versus a walk model.
Both 3D models are animated, as you can confirm while placing them, but a walk 3D model can later be animated to walk around the scene. The idle 3D model, on the other hand, stays in place, though it still has some loop animations that give life to the model. An additional aspect is that 2D models permanently face the camera; unfortunately, there is no way to switch off this behavior, but we can change the color of a 2D model if necessary.

So, now that we know what is available, what is the next step? Start placing models and populating the entire scene with life. First, though, let's look at some key points that will help improve the workflow.

Placing and controlling 3D models in Lumion

Where do we start? This is entirely personal, although it is a good idea to start with the bigger 3D models and gradually move down until the final 3D models are just minor details and touches that transform the scene into a professional project. If we focus our attention on only one section, we may lack the time to bring the same quality to other areas of the scene.

Placing a 3D model from Lumion's library

The process of placing a 3D model is simple, as we can see from the following composition of images:

We start by clicking on the Objects menu and choosing the correct library. As shown in the previous screenshot, we selected the Nature library and then clicked on the Change object button to open it. Once in the library, we navigate to the desired tab and click on a thumbnail to select the 3D model. Back in Build mode, we click the left mouse button to place the 3D model.

When placing a 3D model, Lumion recognizes surfaces and avoids any intersection between the 3D model and the surface. Sometimes, this feature can get in our way and make it difficult to place a 3D model.
To bypass this problem, press and hold the G key, and then click the left mouse button to place the 3D model on the terrain.

Great! We placed the first Lumion model in the scene. One model is placed; how many more do we need? It depends on your project. For something small like the example shown in this book, placing 3D models is not a big issue. However, when working on large projects, placing the 3D models can be a massive and repetitive task. But don't forget that Lumion is a user-friendly application and provides tools that help with repetitive tasks.

Placing multiple copies with one click

Some 3D models may require several copies to create a more believable look, such as trees, bushes, flowers, and other elements. Imagine that you had to place tons of copies of the same model one by one. As mentioned, Lumion has some shortcuts that will help you save time and keep your patience.

What do we have to do? Before placing a 3D model, press and hold the Ctrl key, and then click the left mouse button to place 10 copies of the selected 3D model, as shown in the following screenshot:

However, there is a slight downside to this technique, as you probably noticed from the previous screenshot. The disadvantage is that we don't have control over the area across which the 3D models are scattered or the distance between them; some of the 3D models may intersect with each other. With smaller 3D models, this technique is useful because the copies tend not to land far from each other; with bigger 3D models, we can use the Ctrl key to place the 10 copies and then adjust them accordingly.

Another shortcut worth keeping in mind is the Z key. If you press and hold the Z key and then click to place a 3D model, and then click again, the next 3D model will have a different size.
Consequently, a powerful combination is Ctrl + Z + the left mouse button to place 10 copies with different sizes, as shown in the following screenshot:

Why is this useful? That is a fair question. The best answer is to take some time to look away from the screen and observe that we are surrounded by randomness. We could hardly find two trees with the same size and shape, even from the same species. Our eyes are a perfect mechanism for spotting things that look repetitive. We don't have the time or the means to make each tree different from the others, but we can cheat. Cheating is OK in 3D because it helps us gain time to concentrate our attention on other areas that are equally important to accomplishing the perfect image still or movie. One way of cheating is to reuse the same tree while changing its rotation, scale, and color. With the combination of Ctrl + Z + the left mouse button, we have the opportunity to at least change the scale of each copy placed in the scene. The plants in the previous screenshot, once arranged and placed in the correct locations, will look much more natural than copies that all use the same scale.

However, how can we manipulate and control the 3D models placed in the scene?

Tweaking the 3D models

To place any model from Lumion's library, we use the left mouse button. If, instead of releasing, we hold the left mouse button and drag, it is possible to change the location where the 3D model is going to be placed, as shown in the following screenshot:
However, we need more control than this, and the previous screenshot also shows where we can find the tools to move, scale, and rotate the 3D model, and to change its height. The tools and shortcuts we use for imported 3D models are precisely the same as for Lumion's native 3D models.

As a quick reminder, here is the list of shortcuts to tweak the position, scale, and rotation of a 3D model:

- M: Move the 3D model
- L: Scale the 3D model
- R: Rotate the 3D model's heading
- P: Rotate the 3D model's pitch
- B: Rotate the 3D model's bank
- H: Change the height

However, there is an aspect that needs to be kept in mind at all times when tweaking and controlling the 3D models in a project. This is crucial, and if you are new to Lumion, it is perfectly normal that on the first few tries you will get frustrated because you cannot select the 3D model. This may sound annoying, but in truth, this is how Lumion keeps us from becoming overwhelmed and confused when trying to select a 3D model: by providing narrow selection control. With this in mind, the next section shows a few tricks and techniques that are useful with Lumion's models. This will improve the way we work with the 3D models and help us fully master this stage of the production.

The remarkable Context menu

We can call it remarkable because this menu gives us full control and shortcuts to rearrange the 3D models present in the scene. The menu is divided into two distinct sections: the Selection and Transformation submenus, as shown in the following screenshot:

How can these menus be useful to your project? Let's start with the Selection submenu; when we select it, the following options appear:

However, merely looking at these options doesn't show how useful and powerful they can be, so let's check how they work.

Selection – library

Working with the Nature library has some challenges, and one of them is trying to identify a tree or other plant that we already placed in the scene. This can be really difficult, particularly when other models are very similar to the one we are looking for. The Library... option found under the Selection submenu can help with this task and will make your life easy.
Locate the 3D model you want and, using the Context menu, click on the Selection submenu. Select the Library… option, and another two options appear. Choose the Select in library option, and the Change object button automatically changes to show the 3D model, as in the following screenshot:

Sometimes, when we click on the Change object button to access the library, we have to check each tab to find where the 3D model is; once we have the correct tab, the 3D model is easy to recognize because of the halo around its thumbnail.

However, there was another option, called Replace with library selection. This is when you start to see the full potential of Lumion and how these options will greatly improve the speed of your workflow. Picking up the example in the previous screenshot, the plant used is called FicusElastica_001. Say we realize that this is the wrong 3D model, but the location is correct and we don't want to change it even by a few millimeters. The Replace with library selection option is our salvation, so let's see how we can use it.

The first thing to do is open the Nature library and select the correct 3D model, which in this case will be FicusElastica_003. After selecting this 3D model, we are back in Build mode, but instead of placing the model, we open the Context menu and pick the 3D model that needs to be replaced. Then, click on the Selection submenu, next on the Library... option, and then on the Replace with library selection button, as exemplified in the next screenshot:

In addition to this fantastic feature, Lumion keeps not only the location, but also the rotation and scale of the previous 3D model. Can you imagine how easy it is to replace a species of tree or another model in the scene if the client doesn't like it? What about the other selection options? How can we use them?
Selection – all the Selection options

The options in the Selection submenu are simple, particularly the ones related to deselecting a 3D model. The Selection option is another way to select a 3D model, but using the Ctrl key is much faster. However, there are two selection options that can be used either to select a wide range of 3D models or to have narrower control over what we select.

Let's say that we totally forgot to use a layer for the 3D models we placed; again, take the example of the FicusElastica_001 model. We need to place all the FicusElastica_001 models inside a layer, but we have several of them scattered around the scene. One way to tackle this issue is to select each model individually and then move them to a new layer. The smart way is to use the Select All Similar option, because when we use it, all the FicusElastica_001 models are selected, as shown in the following screenshot:

In this case, we only have two copies of the FicusElastica_001 model, but it is easy to see how powerful this option can be and how quickly it lets us select a large number of 3D models. To show that a 3D model is selected, a blue wire box is drawn around it.

Eventually, we may realize that we need to copy every single tree, plant, flower, and rock from one point to another. This would be a massive task using only the normal selection technique. But think for a second: all of these models are part of the same library, the Nature library. So, instead of the Select All Similar option, we can use the Select All Similar Category option, as shown in the following screenshot:

As you can see in the previous screenshot, every single 3D model from the Nature library was selected and is ready to be controlled in the way we need and want.
To deselect the 3D models, we can use the Deselect All option, but a quicker way is to press and hold the Ctrl key and click anywhere. However, what if we need to select models from different categories?

Selecting different categories

Selecting different categories is not something we can find in the Context menu, but since we are talking about selecting 3D models, you may find this trick useful. Have a closer look at the following screenshot:

In the previous screenshot, we can see models from three different categories: Indoor, People and Animals, and Outdoor. How is that possible? It is easier than you think. Start by selecting the models found in the Indoor category. The next step is to select the People and Animals category and, pressing and holding the Ctrl key, select the 3D models you want in this category. Repeat the same step for as many categories as you like, but remember that you need to select individual models, not draw a selection rectangle. This principle also works if we select the models with the Context menu.

Another option to quickly select and transform a 3D model is to press the F12 key. When we do this, every 3D model becomes available for selection, and we can change the location, rotation, and height of the 3D model. Now that we have the 3D models selected, what can we do with them? Let's explore the marvelous Transformation submenu.

Summary

In this article, we saw how to place and select 3D objects using different keys. We also saw how to create your own 3D world using different tools and techniques.

Resources for Article:

Further resources on this subject:
- Integrating Direct3D with XAML and Windows 8.1 [article]
- Diving Straight into Photographic Rendering [article]
- What is Lumion? [article]
Using PhpStorm in a Team

Packt
26 Dec 2014
11 min read
In this article by Mukund Chaudhary and Ankur Kumar, authors of the book PhpStorm Cookbook, we will cover the following recipes:

- Getting a VCS server
- Creating a VCS repository
- Connecting PhpStorm to a VCS repository
- Storing a PhpStorm project in a VCS repository

(For more resources related to this topic, see here.)

Getting a VCS server

The first action that you have to undertake is to decide which version control system you are going to use. There are a number of systems available, such as Git and Subversion (commonly known as SVN); these are free and open source software that you can download and install on your development server. There is another, older system named Concurrent Versions System (CVS). All of these are meant to provide a code versioning service to you. SVN is newer and supposedly faster than CVS. Since SVN is the newer system, and in order to give you information on the latest matters, this text will concentrate on the features of Subversion only.

Getting ready

So, finally, that moment has arrived when you will start working in a team by getting a VCS system for you and your team. The installation of SVN on the development system can be done in two ways: easy and difficult. The difficult way can be skipped without consideration, because it is meant for developers who want to contribute to the Subversion system itself. Since you are dealing with PhpStorm, you only need to remember the easier way, because you have a lot more to do.

How to do it...

The installation step is very easy. There is the aptitude utility available with Debian-based systems, and the Yum utility available with Red Hat-based systems. Perform the following steps:

- You just need to issue the command apt-get install subversion. The operating system's package manager will do the remaining work for you. In a very short time, after flooding the command-line console with messages, you will have the Subversion system installed.
To check whether the installation was successful, issue the command whereis svn. If there is output, it means that you installed Subversion successfully. If you do not want to bear the load of installing Subversion on your development system, you can use commercial third-party servers, but that is more of a layman's approach to solving problems, and no PhpStorm cookbook author will recommend that you do that. You are a software engineer; you should not let go so easily.

How it works...

When you install the version control system, you actually install a server that provides the version control service to a version control client. The Subversion service listens for incoming connections from remote clients on port number 3690 by default.

There's more...

If you want to install the older companion, CVS, you can do that in a similar way, as shown in the following steps:

1. Download the archive for the CVS server software.
2. Unpack it from the archive using your favorite unpacking software.
3. Move it to another convenient location, since you will not need to disturb this folder in the future.
4. Move into the directory, where your compilation process will start. Run # ./configure to create the make targets.
5. Having made the targets, enter # make install to complete the installation procedure.

Due to it being older software, compiling from the source code might be your only alternative.

Creating a VCS repository

More often than not, a PHP programmer is expected to know some system concepts, because it is often required to change settings for the PHP interpreter. The changes could be in the form of, say, changing the execution time or adding/removing modules, and so on. In order to start working in a team, you are going to get your hands dirty with system actions.
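The install-then-verify flow from the first recipe can be sketched as a small script. Since apt-get needs root privileges and a network connection, only the verification half is modeled here, and is_installed is a helper of our own for illustration, not part of Subversion:

```shell
# Sketch of verifying an installation, as the recipe does with `whereis svn`.
# After `apt-get install subversion` has run, checking for the svn binary
# tells you whether the install succeeded.
is_installed() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: installed"
  else
    echo "$1: missing"
  fi
}

# On a freshly provisioned server, `is_installed svn` would report
# "svn: installed" on success. We demonstrate with a binary that is
# always present so the sketch runs anywhere:
is_installed sh
```

The same pattern works for any command-line tool you depend on, which makes it handy in provisioning scripts.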
Getting ready

You will have to create a new repository on the development server so that PhpStorm can act as a client and get connected. Here, it is important to note the difference between an SVN client and an SVN server: an SVN client can be a standalone client or an embedded client such as an IDE. The SVN server, on the other hand, is a single item: a continuously running process on a server of your choice.

How to do it...

You need to be careful while performing this activity, as a single mistake can ruin your efforts. Perform the following steps:

1. There is a command, svnadmin, that you need to know. Using this command, you can create a new directory on the server that will contain the code base. Be careful when selecting a directory on the server, as it will appear in your SVN URL for as long as the repository lives. The command should be executed as:

   svnadmin create /path/to/your/repo/

2. Having created a new repository on the server, you need to make certain settings for the server. This is just a normal phenomenon, because every server requires a configuration. The SVN server configuration is located under /path/to/your/repo/conf/ with the name svnserve.conf. Inside the file, you need to make three changes, adding these lines at the bottom of the file:

   anon-access = none
   auth-access = write
   password-db = passwd

3. There has to be a password file to authorize the list of users who will be allowed to use the repository. The password file in this case will be named passwd (the default filename). The contents of the file will be a number of lines, each containing a username and the corresponding password in the form username = password. Since these files are scanned by the server according to a particular algorithm, you don't have the freedom to leave deliberate spaces in the file; error messages will be displayed in those cases.
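The two configuration files described above might look as follows. This sketch writes them into a throwaway directory so it runs without a real repository; in practice they live under /path/to/your/repo/conf/, and the [general] and [users] section headers shown are the ones already present in the default files:

```shell
# Illustrative svnserve.conf and passwd contents; conf_dir stands in for
# the real /path/to/your/repo/conf/ directory.
conf_dir=$(mktemp -d)

# The three directives the recipe adds to svnserve.conf:
cat > "$conf_dir/svnserve.conf" <<'EOF'
[general]
anon-access = none
auth-access = write
password-db = passwd
EOF

# The passwd file: one "username = password" pair per line.
cat > "$conf_dir/passwd" <<'EOF'
[users]
alice = secret
EOF

grep '^anon-access' "$conf_dir/svnserve.conf"
```

Keeping the password file next to svnserve.conf and referring to it by the relative name passwd is what the password-db directive expects.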
Having made the appropriate settings, you can now start the SVN service so that an SVN client can access it. Issue the command svnserve -d to do that. It is always good practice to keep checking whether what you did is correct. To validate a proper installation, issue the command svn ls svn://user@host/path/to/subversion/repo/. The output will be as shown in the following screenshot:

How it works...

The svnadmin command is used to perform admin tasks on the Subversion server. The create option creates a new folder on the server that acts as the repository for access from Subversion clients. The configuration file is created by default at the time of server installation. The contents that are added to the file are the configuration directives that control the behavior of the Subversion server. Thus, the settings mentioned prevent anonymous access and restrict write operations to certain users whose access details are mentioned in a file. The command svnserve again needs to be run on the server side, and it starts the instance of the server. The -d switch specifies that the server should run as a daemon (system process). This also means that your server will continue running until you manually stop it or the entire system goes down. Again, you can skip this section if you have opted for a third-party version control service provider.

Connecting PhpStorm to a VCS repository

The real utility of software is when you use it. So, having installed the version control system, you need to be prepared to use it.

Getting ready

With SVN being client-server software, having installed the server, you now need a client. Again, you might have difficulty searching for a good SVN client. Don't worry; the client has been factory-provided to you inside PhpStorm. The PhpStorm SVN client provides you with features that accelerate your development task by providing you detailed information about the changes made to the code.
So, go ahead and connect PhpStorm to the Subversion repository you created.

How to do it...

In order to connect PhpStorm to the Subversion repository, you need to activate the Subversion view. It is available at View | Tool Windows | Svn Repositories. Perform the following steps:

1. Having activated the Subversion view, you now need to add the repository location to PhpStorm. To do that, use the + symbol in the top-left corner of the view you have opened, as shown in the following screenshot:
2. Upon selecting the Add option, PhpStorm asks you for the location of the repository. You need to provide the full location of the repository.
3. Once you provide the location, you will be able to see the repository in the same Subversion view in which you pressed the Add button.

Here, you should always keep in mind the correct protocol to use. This depends on the way you installed the Subversion system on the development machine:

- If you used the default installation by installing from the installer utility (apt-get or aptitude), you need to specify svn://.
- If you have configured SVN to be accessible via SSH, you need to specify svn+ssh://.
- If you have explicitly configured SVN to be used with the Apache web server, you need to specify http://.
- If you configured SVN with Apache over the secure protocol, you need to specify https://.

Storing a PhpStorm project in a VCS repository

Here comes the actual start of the teamwork. Even if you and your other team members have connected to the repository, what advantage does it serve? What is the purpose served by merely connecting to the version control repository? Correct: the actual thing is the code that you work on. It is the code that earns you your bread.

Getting ready

You should now store a project in the Subversion repository so that the other team members can work on it and add more features to your code. It is time to add a project to version control.
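The protocol rules above can be summarized in a small helper function; the hostname and path below are placeholders, not a real server:

```shell
# Map how Subversion was set up to the URL scheme PhpStorm should be given.
url_for() {
  case "$1" in
    default)    scheme="svn" ;;      # installed via apt-get/aptitude, served by svnserve
    ssh)        scheme="svn+ssh" ;;  # accessible over SSH
    apache)     scheme="http" ;;     # served through the Apache web server
    apache-ssl) scheme="https" ;;    # Apache over the secure protocol
    *)          scheme="unknown" ;;
  esac
  echo "${scheme}://svn.example.com/path/to/repo"
}

url_for default
url_for ssh
```

Picking the wrong scheme is the most common reason the Add repository step fails, so it is worth pausing on this mapping before filling in the dialog.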
It is not that you need to start a new project from scratch to add it to the repository. Any project, any work that you have done and wish to have the team work on, can be added to the repository. Since the most relevant project in the current context is the cooking project, you can try adding that. There you go.

How to do it...

In order to add a project to the repository, perform the following steps:

1. Use the menu item provided at VCS | Import into version control | Share project (subversion). PhpStorm will ask you a question, as shown in the following screenshot:
2. Select the correct hierarchy to define the share target, that is, the correct location where your project will be saved. If you wish to create the tags and branches in the code base, select the checkbox for the same.
3. It is good practice to provide comments on the commits that you make. The reason behind this is apparent when you sit down to create a release document. It also makes the change more understandable for the other team members.
4. PhpStorm then asks you the format you want the working copy to be in. This is related to the version of the version control software. You just need to smile, select the latest version number, and proceed, as shown in the following screenshot:
5. Having done that, PhpStorm will ask you to enter your credentials. Enter the same credentials that you saved in the configuration file (see the Creating a VCS repository recipe) or the credentials that your service provider gave you. You can ask PhpStorm to save the credentials for you, as shown in the following screenshot:

How it works...

Here it is worth understanding what is going on behind the curtains. When you do any Subversion-related task in PhpStorm, there is an inbuilt SVN client that executes the commands for you. Thus, when you add a project to version control, the code is given a version number. This makes the version system remember the state of the code base.
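Behind the Share project dialog, PhpStorm's bundled client performs the equivalent of the command-line svn import. The sketch below only prints the command it would run rather than executing it, since no real server is assumed; the project path, repository URL, and commit message are all illustrative:

```shell
# Hypothetical values standing in for what you would choose in the dialog.
project_dir="/home/dev/projects/cooking"
repo_url="svn://svn.example.com/path/to/repo/cooking"
commit_msg="Initial import of the cooking project"

# The CLI equivalent of the share action; printed, not executed.
echo "svn import $project_dir $repo_url -m \"$commit_msg\""
```

Seeing the underlying command makes it clear why the dialog asks for a share target (the repository URL) and a comment (the -m message).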
In other words, when you add the code base to version control, you add a checkpoint that you can revisit at any point in the future, for as long as the code base is under the same version control system. Interesting phenomenon, isn't it?

There's more...

If you installed the version control software yourself and did not make the setting to store the password in encrypted text, PhpStorm will show you a warning about it, as shown in the following screenshot:

Summary

We got to know about version control systems, the step-by-step process of creating a VCS repository, and connecting PhpStorm to a VCS repository.
Downloading and Understanding Construct 2

Packt
26 Dec 2014
19 min read
In this article by Aryadi Subagio, the author of Learning Construct 2, we introduce you to Construct 2, make you familiar with the interface and terms that Construct 2 uses, and give you a quick overview of the event system.

(For more resources related to this topic, see here.)

About Construct 2

Construct 2 is an authoring tool that makes the process of game development really easy. It can be used by a variety of people, from complete beginners in game development to experts who want to make a prototype quickly or even use Construct 2 to make games faster than ever. It is created by Scirra Ltd, a company based in London, and right now it runs on the Windows desktop platform, although you can export your games to multiple platforms. Construct 2 is an HTML5-based game editor with enough features for people beginning game development to make their first 2D game. Some of them are:

- Multiple platforms to target: You can publish your game to desktop computers (PC, Mac, or Linux), to many mobile platforms (Android, iOS, BlackBerry, Windows Phone 8.0, Tizen, and many more), and also on websites via HTML5. Also, if you have a developer's license, you can publish it on Nintendo's Wii U.
- No programming language required: Construct 2 doesn't use any programming language that is difficult to understand; instead, it relies on its event system, which is really easy for anyone, even without coding experience, to jump into.
- Built-in physics: Using Construct 2 means you don't need to worry about complicated physics functions; it's all built into Construct 2 and is easy to use!
- Can be extended (extensible): Many plugins have been written by third-party developers to add new functionality to Construct 2. Note that writing plugins is outside the scope of this book. If you have a JavaScript background and want to try your hand at writing plugins, you can access the JavaScript SDK and documentation at https://www.scirra.com/manual/15/sdk.
- Special effects: There are a lot of built-in effects to make your game prettier!

You can use Construct 2 to create virtually all kinds of 2D games: platformer, endless runner, tower defense, casual, top-down shooter, and many more.

Downloading Construct 2

Construct 2 can be downloaded from Scirra's website (https://www.scirra.com/); you only need to click on the download button to get started. At the time of writing this book, the latest stable version is r184, and this tutorial is written using that version. Another great thing about Construct 2 is that it is actively developed, and the developer frequently releases beta features to gather feedback and perform bug testing. There are two different builds of Construct 2: the beta build and the stable build. Choosing which one to download depends on your preference when using Construct 2. If you like to get your hands on the latest features, you should choose the beta build; just remember that beta builds often have bugs. If you want a bug-proof version, choose the stable build, but you won't be the first to use the new features.

The installation process is really straightforward. You're free to skip this section if you like, because all you need to do is open the file and follow the instructions there. If you're installing a newer version of Construct 2, it will uninstall the older version automatically for you!

Navigating through Construct 2

Now that we have downloaded and installed Construct 2, we can start getting our hands dirty and make games with it! Not so fast, though. As Construct 2's interface is different compared to other game-making tools, we need to know how to use it. When you open Construct 2, you will see a start page as follows:

This start page is basically here to make it easier for you to return to your most recent projects, so if you have just opened Construct 2, it will be empty.
What you need to pay attention to is the new project link on the left-hand side; click on it, and we'll start making games. Alternatively, you can click on File in the upper-left corner and then click on New. You'll see a selection of templates to start with, so understandably, this can be confusing if you don't know which one to pick. So, for now, just click on New empty project and then click on Open. Starting an empty project is good when you want to prototype your game.

What you see in the screenshot now is an empty layout, which is the place where we'll make our games. This also represents how your game will look. It might be confusing to navigate the first time you see this, but don't worry; I'll explain everything you need to know for now by describing it piece by piece.

The white part in the middle is the layout, because Construct 2 is a what-you-see-is-what-you-get kind of tool. This part represents how your game will look in the end. The layout is like your canvas; it's your workspace; it is where you design your levels, add your enemies, and place your floating coins. It is where you make your game. The take-home point here is that the layout size is not the same as the window size! The layout size can be bigger than the window size, but it can't be smaller than the window size. This is because the window size represents the actual game window. The dotted line is the border of the window size, so if you put a game object outside it, it won't be initially visible in the game, unless you scroll towards it. In the preceding screenshot, only the red plane is visible to the player. Players don't see the green spaceship because it's outside the game window.

On the right-hand side, we have the Projects bar and the Objects bar. An Objects bar shows you all the objects that are used in the active layout. Note that an active layout is the one you are focused on right now; this means that, at this very instance, we only have one layout.
The Objects bar is empty because we haven't added any objects. The Projects bar helps in the structuring of your project, and it is structured as follows:

- All layouts are stored in the Layouts folder.
- Event sheets are stored in the Event sheets folder.
- All objects that are used in the project are stored in the Object types folder.
- All created families are in the Families folder. A family is a feature of Construct 2.
- The Sounds folder contains sound effects and audio files.
- The Music folder contains long background music. The difference between the Sounds folder and the Music folder is that the contents of the Music folder are streamed, while the files inside the Sounds folder are downloaded completely before they are played. This means that if you put a long music track in the Sounds folder, it will take a few minutes for it to be played, but from the Music folder, it is immediately streamed. However, it doesn't mean that the music will be played immediately; it might need to buffer before playing.
- The Files folder contains other files that don't fit into the folders mentioned earlier. One example here is Icons.

Although you can't rename or delete these folders, you can add subfolders inside them if you want.

On the left-hand side, we have a Properties bar. There are three kinds of properties: layout properties, project properties, and object properties. The information shown in the Properties bar depends on what you clicked last. There is a lot of information here, so I think it's best to explain it as we go ahead and make our game, but for now, you can click on any part of the Properties bar and look at the bottom part of it for help. I'll just explain a bit about some basic things in the project properties:

- Name: This is your project's name; it doesn't have to be the same as the saved file's name. So, you can have the saved file as project_wing.capx and the project's name as Shadow wing.
- Version: This is your game's version number; if you plan on releasing beta versions, make sure to change this first.
- Description: Your game's short description; some application stores require you to fill this out before submitting.
- ID: This is your game's unique identification; it comes in the com.companyname.gamename format, so your ID would be something like com.redstudio.shadowwing.

Creating game objects

To put it simply, everything in Construct 2 is a game object. This can range from things that are visible on screen, such as sprites, text, particles, and sprite fonts, to things that are not visible but are still used in the game, such as an array, dictionary, keyboard, mouse, gamepad, and many more.

To create a new game object, you can either double-click anywhere on a layout (not on another object already present), or you can right-click your mouse and select Insert new object. Doing either of these will open an Insert New Object dialog, where you can select the object to be inserted. You can click on the Insert button or double-click on the object icon to insert it.

There are two kinds of objects here: objects that are inserted into the active layout and objects that are inserted into the entire project. Objects that are visible on the screen are inserted into the active layout, and objects that are not visible on the screen are inserted into the entire project.

If you look closely, each object belongs to one of a few categories, such as Data & Storage, Form controls, General, and so on. You should pay special attention to the objects in the Form controls category. As the technology behind Construct 2 is HTML5, and a Construct 2 game is basically a game made in JavaScript, objects such as the ones you see on web pages can be inserted into a Construct 2 game. These objects are the objects in the Form controls category. A special rule applies to these objects: we can't alter their layer order.
This means that these objects are always on top of any other objects in the game. We also can't export them to platforms other than web platforms. So, if you want to make a cross-platform game, it is advised not to use the Form controls objects.

For now, insert a sprite object by following these steps:

1. After clicking on the Insert button, you will notice that your mouse cursor becomes a crosshair, and there's a floating label with the Layer 0 text. This is just Construct 2's way of telling you which layer you're adding your object to.
2. Click your mouse to finally insert the object. Even if you add your object to the wrong layer, you can always move it later.
3. When adding any object with a visual representation on screen, such as a sprite or a tiled background, Construct 2 automatically opens its image-editing window. You can draw an image here or simply load one from a file created using other software. Click on X in the top-right corner of the window to close it when you have finished drawing. You shouldn't worry here; this won't delete your object or image.

Adding layers

Layers are a great way to manage your objects' visual hierarchy. You can also add some visual effects to your game using layers. By default, your Layers bar is located in the same place as the Projects bar. You'll see two tabs here: Projects and Layers. Click on the Layers tab to open the Layers bar. From here, you can add new layers and rename, delete, and even reorganize them to your liking. You can do this by clicking on the + icon a few times to add new layers; after this, you can reorganize them by dragging a layer up or down. Just as with Adobe products, you can also toggle the visibility of all objects in the same layer to make things easier while you're developing games. If you don't want to change or edit all objects in the same layer, which might be a background layer for instance, you can lock this layer.
Take a look at the following screenshot:

There are two ways of referring to a layer: using its name (Layer 0, Layer 1, Layer 2, Layer 3) or its index (0, 1, 2, 3). As you can see from the previous screenshot, the index of a layer changes as you move a layer up or down the layer hierarchy (the layer first created isn't always the one with the index number 0). The layer with index 0 will always be at the bottom, and the one with the highest index will always be at the top, so remember this because it will come in handy when you make your games.

The eye icon determines the visibility of the layer. Alternatively, you can also check the checkbox beside each layer's name. Objects from an invisible layer won't be visible in Construct 2 but will still be visible when you play the game. The lock icon, beside the layer's name at the top, toggles whether a layer is locked or not, so objects from locked layers can't be edited, moved, or selected.

What is an event system?

Construct 2 doesn't use a traditional programming language. Instead, it uses a unique style of programming called an event system. However, much like traditional programming languages, it works as follows:

- It executes commands from top to bottom.
- It executes commands at every tick.
- It has variables (global and local).
- It has a feature called functions, which work in the same way as functions in a traditional programming language, without you having to go into the code.

An event system is used to control the objects in a layout. It can also be used to control the layout itself. An event system can be found inside an event sheet; you can access it by clicking on the event sheet tab at the top of the layout.

Reading an event system

I hope I haven't scared you with the explanations of the event system. Please don't worry, because it's really easy! There are two components of an event system: an event and an action.
Events are things that occur in the game, and actions are the things that happen when there is an event. For a clearer understanding, take a look at the following screenshot, where the events are taken from one of my game projects:

The first event, the one with number 12, is a bullet on collision with an enemy, which means that when any bullet collides with any enemy, the actions on its right-hand side will be executed. In this case, it will subtract the enemy's health, destroy the bullet object, and create a new object for a damage effect. The next event, number 13, is what happens when an enemy's health drops below zero; the actions will destroy the enemy and add points to the score variable. This is easy, right?

Take a look at how we created the redDamage object; it says on layer "Game". Every time we create a new object through an action, we also need to specify which layer it is created on. As mentioned earlier, we can refer to a layer by its name or by its index number, so either way is fine. However, I usually use a layer's name, just in case I need to rearrange the layer hierarchy later. If we use the layer's index (for example, index 1), we might later rearrange the layers so that index 1 refers to a different layer, which means we would end up creating objects in the wrong layer.

Earlier, I said that an event system executes commands from top to bottom. This is true except for one kind of event: a trigger. A trigger is an event that, instead of executing at every tick, waits for something to happen before it is executed. Triggers are events with a green arrow beside them (like the bullet on collision with enemy event shown earlier). As a result of this, unlike the usual events, it doesn't matter where triggers are placed in the event system.

Writing events

Events are written on event sheets. When you create a new layout, you can choose to add a new event sheet to this new layout.
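In the event-on-the-left, actions-on-the-right notation that this article uses for its code examples, logic like the bullet-and-enemy events described above could be sketched as follows; the health and score values here are illustrative, not taken from the actual project:

```
Bullet: on collision with Enemy | Enemy: subtract 10 from Health
                                | Bullet: destroy
                                | System: create redDamage on layer "Game"

Enemy: Health <= 0              | Enemy: destroy
                                | System: add 100 to Score
```

Reading event sheets this way, as rows of "when this, do these", is usually all it takes to follow someone else's Construct 2 project.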
If you choose to add an event sheet, you can give it the same name as the layout or a different one. However, it is advised that you name the event sheet exactly the same as the layout, to make it clear which event sheet is associated with which layout. We can only link one event sheet to a layout from its properties, so if we want to add more event sheets to a layout, we must include them in that event sheet.

To write an event, just perform the following steps:

1. Click on the event sheet tab above the layout.
2. You'll see an empty event sheet; to add events, simply click on the Add event link or right-click and select Add event. Note that from now on, I will refer to the action of adding a new event with words such as add event, add new event, or something similar.
3. You'll see a new window with objects to create an event from; every time you add an event (or action), Construct 2 gives you only the objects you can add an event (or action) from. This prevents you from doing something impossible, for example, trying to modify the value of a local variable outside of its scope. I will explain local variables shortly.
4. Whether or not you have added an object, there will always be a system object to create an event from. This contains a list of events that you create directly from the game instead of from an object. Double-click on it, and you'll see a list of events you can create with a system object. There are a lot of events, and explaining them all would take a long time. For now, if you're curious, there is an explanation of each event in the upper part of the window.
5. Next, scroll down and look for an Every x seconds event. Double-click on it, enter 1.0 second, and click on Done. You should have the following event:

To add an action to an event, just perform the following steps:

1. Click on the Add action link beside an event.
2. Click on an object you want to create an action from; for now, double-click on the systems object.
Double-click on the Set layer background color action under the Layers & Layout category. Change the three numbers inside the brackets to 100, 200, and 50, and click on the Done button. You should have the following event:

This action will change the background color of layer 0 to the one we set in the parameter, which is green. Also, because adding a screenshot for every code example would be troublesome, I will write my code examples as follows:

System every 1.0 seconds | System Restart layout

The left-hand side of the code is the event, and the right-hand side of the code is the action. I think this is pretty clear.

Creating a variable

I said that I'm going to explain variables, and you might have noticed a global and local variables category when you added an action. A variable is like a glass or cup, but instead of water, it holds values. These values can be one of three types: Text, Number, or Boolean.

- Text: This type holds a value of letters, words, or a sentence. This can include numbers as well, but the numbers will be treated as part of the word.
- Number: This type holds numerical values and can't store any alphabetical value. The numbers are treated as numbers, which means that mathematical operations can be performed on them.
- Boolean: This type only holds one of two values: True or False. This is used to check whether a certain state of an object is true or false.

To create a global variable, just right-click in an event sheet and select Add global variable. After that, you'll see a new window for adding a global variable. Here's how to fill in each field:

- Name: This is the name of the variable; no two variables can have the same name, and the name is case sensitive, which means exampleText is different from ExampleText.
- Type: This tells whether the variable is Text, Number, or Boolean. Only instance variables can have a Boolean type.
- Initial value: This is the variable's value when first created.
A Text-type value must be surrounded with quotes (" ").

- Description: This is an optional field; in case the name isn't descriptive enough, an additional explanation can be written here.

After clicking on the OK button, you have created your new variable! This variable has a global scope, which means it can be accessed from anywhere within the project, while a local variable only has a limited scope and can be accessed from a certain place in the event sheet. I will cover local variables in depth later in the book.

You might have noticed that in the previous screenshot, the Static checkbox cannot be checked. This is because only local variables can be marked as static. One difference between global and local variables is that a local variable's value reverts to its initial value the next time the code is executed, while a global variable's value doesn't change until there's code that changes it. A static local variable retains its value just like a global variable. All variables' values can be changed from events, both global and local, except the ones that are constant. Constant variables always retain their initial value; they can never be changed. A constant variable can be used for a value that you don't want to accidentally overwrite later.

Summary

In this article, we learned about the features of Construct 2, its ease of use, and why it's perfect for people with no programming background. We learned about Construct 2's interface and how to create new layers in it. We know what objects are and how to create them. This article also introduced you to the event system and showed you how to write code in it. Now, you are ready to start making games with Construct 2!

Getting Started with XenServer®

Packt
26 Dec 2014
11 min read
This article is written by Martez Reed, the author of Mastering Citrix® XenServer®.

One of the most important technologies in the information technology field today is virtualization. Virtualization is beginning to span every area of IT, including but not limited to servers, desktops, applications, networks, and more. Our primary focus is server virtualization, specifically with Citrix XenServer 6.2. There are three major platforms in the server virtualization market: VMware's vSphere, Microsoft's Hyper-V, and Citrix's XenServer.

In this article, we will cover the following topics:

- XenServer's overview
- XenServer's features
- What's new in Citrix XenServer 6.2
- Planning and installing Citrix XenServer

(For more resources related to this topic, see here.)

Citrix® XenServer®

Citrix XenServer is a type 1, or bare metal, hypervisor. A bare metal hypervisor does not require an underlying host operating system. Type 1 hypervisors have direct access to the underlying hardware, which provides improved performance and guest compatibility. Citrix XenServer is based on the open source Xen hypervisor, which is widely deployed in various industries and has a proven record of stability and performance.

Citrix® XenCenter®

Citrix XenCenter is a Windows-based application that provides a graphical user interface for managing Citrix XenServer hosts from a single management interface.

Features of Citrix® XenServer®

The following section covers the features offered by Citrix XenServer:

- XenMotion/Live VM Migration: The XenMotion feature allows running virtual machines to be migrated from one host to another without any downtime. XenMotion relocates the processor and memory instances of the virtual machine from one host to another, while the actual data and settings reside on the shared storage. This feature is pivotal in providing maximum uptime when performing maintenance or upgrades. This feature requires shared storage among the hosts.
- Storage XenMotion/Live Storage Migration: The Storage XenMotion feature provides functionality similar to that of XenMotion, but it is used to move a virtual machine's virtual disk from one storage repository to another without powering off the virtual machine.
- High Availability: High Availability automatically restarts the virtual machines on another host in the event of a host failure. This feature requires shared storage among the hosts.
- Resource pools: Resource pools are a collection of Citrix XenServer hosts grouped together to form a single pool of compute, memory, network, and storage resources that can be managed as a single entity. The resource pool allows the virtual machines to be started on any of the hosts and seamlessly moved between them.
- Active Directory integration: Citrix XenServer can be joined to a Windows Active Directory domain to provide centralized authentication for XenServer administrators. This eliminates the need for multiple independent administrator accounts on each XenServer host in a XenServer environment.
- Role-based access control (RBAC): RBAC is a feature that takes advantage of the Active Directory integration and allows administrators to define roles that have specific privileges associated with them. This allows administrative permissions to be segregated among different administrators.
- Open vSwitch: The default network backend for the Citrix XenServer 6.2 hypervisor is Open vSwitch. Open vSwitch is an open source multilayer virtual switch that brings advanced network functionality to the XenServer platform, such as NetFlow, SPAN, OpenFlow, and enhanced Quality of Service (QoS). The Open vSwitch backend is also an integral component of the platform's support of software-defined networking (SDN).
- Dynamic Memory Control: Dynamic Memory Control allows XenServer to maximize the physical memory utilization by sharing unused physical memory among the guest virtual machines.
If a virtual machine has been allocated 4 GB of memory and is only using 2 GB, the remaining memory can be shared with the other guest virtual machines. This feature provides a mechanism for memory oversubscription.

- IntelliCache: IntelliCache is a feature aimed at improving the performance of Citrix XenDesktop virtual desktops. IntelliCache creates a cache on a XenServer local storage repository, and as the virtual desktops perform read operations, the parent VM's virtual disk is copied to the cache. Write operations are also written to the local cache when nonpersistent or shared desktops are used. This mechanism reduces the load on the storage array by retrieving data from a local source for reads instead of the array, which is particularly beneficial when multiple desktops share the same parent image. This feature is only available with Citrix XenDesktop.
- Disaster Recovery: The XenServer Disaster Recovery feature provides a mechanism to recover the virtual machines and vApps in the event of the failure of an entire pool or site.
- Distributed Virtual Switch Controller (DVSC): DVSC provides centralized management and visibility of the networking in XenServer.
- Thin provisioning: Thin provisioning allows a given amount of disk space to be allocated to virtual machines while consuming only the amount that is actually being used by the guest operating system. This feature provides more efficient use of the underlying storage due to the on-demand consumption.

What's new in Citrix® XenServer® 6.2

Citrix has added a number of new and exciting features in the latest version of XenServer:

- Open source
- New licensing model
- Improved guest support

Open source

Starting with Version 6.2, the Citrix XenServer hypervisor is now open source, but it continues to be managed by Citrix Systems. The move to an open source model was the result of Citrix Systems' desire to further collaborate and integrate the XenServer product with its partners and the open source community.
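Several of the features listed above, such as XenMotion, can also be exercised from the xe command line on a XenServer host. The following is a hedged sketch, not from the article: the VM and host name labels are placeholders, and a guard lets the function fail gracefully when run anywhere other than a XenServer host.

```shell
#!/usr/bin/env bash
# Hedged sketch: live-migrating a running VM (XenMotion) with the xe CLI.
# The name labels passed in are placeholders chosen by the caller.
live_migrate() {
    # $1 = VM name label, $2 = destination host name label
    if ! command -v xe >/dev/null 2>&1; then
        echo "xe CLI not found - run this on a XenServer host"
        return 1
    fi
    # As noted in the feature list, this requires shared storage among hosts.
    xe vm-migrate vm="$1" host="$2" live=true
}
```

On a pool member, something like `live_migrate web01 xenserver-02` would move the running VM named web01 to the host xenserver-02 without downtime.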
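Similarly, the Dynamic Memory Control ranges described in the feature list can be set per VM from the CLI. This is a hedged sketch assuming the standard xe CLI on a XenServer host; the VM UUID is a placeholder supplied by the caller, and the sizes are raw bytes.

```shell
#!/usr/bin/env bash
# Hedged sketch: setting a VM's memory range for Dynamic Memory Control.
# All sizes are in bytes; the VM uuid argument is a placeholder.
set_memory_range() {
    # $1 = VM uuid, $2 = dynamic minimum (bytes), $3 = dynamic maximum (bytes)
    if ! command -v xe >/dev/null 2>&1; then
        echo "xe CLI not found - run this on a XenServer host"
        return 1
    fi
    xe vm-memory-limits-set uuid="$1" \
        static-min="$2" dynamic-min="$2" dynamic-max="$3" static-max="$3"
}
```

For example, a 1 GB to 4 GB dynamic range would be `set_memory_range <vm-uuid> 1073741824 4294967296`, letting XenServer reclaim unused memory from the guest down to the dynamic minimum.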
New licensing model

The licensing model has been changed in Version 6.2: the free version of the XenServer platform now provides full functionality, and the previous advanced, enterprise, and platinum editions have been eliminated. Citrix will offer paid support for the free version of the XenServer hypervisor that includes the ability to install patches/updates using the XenCenter GUI, in addition to Citrix technical support.

Improved guest support

Version 6.2 has added official support for the following guest operating systems:

- Microsoft Windows 8 (full support)
- Microsoft Windows Server 2012
- SUSE Linux Enterprise Server (SLES) 11 SP2 (32/64 bit)
- Red Hat Enterprise Linux (RHEL) (32/64 bit) 5.8, 5.9, 6.3, and 6.4
- Oracle Enterprise Linux (OEL) (32/64 bit) 5.8, 5.9, 6.3, and 6.4
- CentOS (32/64 bit) 5.8, 5.9, 6.3, and 6.4
- Debian Wheezy (32/64 bit)

VSS support for Windows Server 2008 R2 has been improved and reintroduced.

Citrix XenServer 6.2 Service Pack 1 adds support for the following operating systems:

- Microsoft Windows 8.1
- Microsoft Windows Server 2012 R2

Retired features

The following features have been removed from Version 6.2 of Citrix XenServer:

- Workload Balancing (WLB)
- SCOM integration
- Virtual Machine Protection Recovery (VMPR)
- Web Self Service
- XenConvert (this has been replaced by XenServer Conversion Manager)

Deprecated features

The following features will be removed from future releases of Citrix XenServer. Citrix has reviewed the XenServer market and determined that there are third-party products able to provide this functionality more effectively:

- Microsoft System Center Virtual Machine Manager (SCVMM) support
- Integrated StorageLink

Planning and Installing Citrix® XenServer®

Installing Citrix XenServer is generally a simple and straightforward process that can be completed in 10 to 15 minutes.
While the actual install is simple, there are several major decisions that need to be made prior to installing Citrix XenServer in order to ensure a successful deployment.

Selecting the server hardware

Typically, the first step is to select the server hardware that will be used. While the thought might be to just pick a server that fits our needs, we should also ensure that the hardware meets the documented system requirements. Checking the hardware against the Hardware Compatibility List (HCL) provided by Citrix Systems is advised to ensure that the system qualifies for Citrix support and that it will properly run Citrix XenServer. The HCL provides a list of server models that have been verified to work with Citrix XenServer and can be found online at http://www.citrix.com/xenserver/hcl.

Meeting the system requirements

The following sections cover the minimum system requirements for Citrix XenServer 6.2.

Processor requirements

The following list covers the minimum requirements for the processor(s) to install Citrix XenServer 6.2:

- One or more 64-bit x86 CPU(s), 1.5 GHz minimum; a 2 GHz or faster multicore CPU is recommended
- To support VMs running Windows, an Intel VT or AMD-V 64-bit x86-based system with one or more CPU(s) is required
- Virtualization technology needs to be enabled in the BIOS

Virtualization technology is disabled by default on many server platforms and needs to be manually enabled.

Memory requirements

The minimum memory requirement for installing Citrix XenServer 6.2 is 2 GB, with 4 GB or more recommended for production workloads. In addition to the memory usage of the guest virtual machines, the Xen hypervisor on the Control Domain (dom0) consumes memory resources. The amount of resources consumed by the Control Domain (dom0) is based on the amount of physical memory in the host.
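The processor and memory requirements above can be sanity-checked before installing, for instance from a Linux live environment booted on the candidate server. Below is a minimal sketch (not part of the original article) that only reads /proc, so the paths assume a Linux environment:

```shell
#!/usr/bin/env bash
# Pre-install check: hardware virtualization flag and installed RAM.
# vmx = Intel VT, svm = AMD-V; absence may mean the BIOS option is disabled.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    hvm="yes"
else
    hvm="no"   # enable Intel VT / AMD-V in the BIOS if the CPU supports it
fi
echo "Hardware virtualization: $hvm"

# Total memory in MB, compared against the 2 GB minimum for XenServer 6.2.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
echo "Installed RAM: ${mem_mb} MB (XenServer 6.2 minimum: 2048 MB)"
```

Note that a "no" result only rules out hardware-assisted (Windows) guests until virtualization is enabled in the BIOS; the flag check itself does not distinguish a disabled BIOS option from an unsupported CPU.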
Hard disk requirements

The following are the minimum requirements for the hard disk(s) to install Citrix XenServer 6.2:

- 16 GB of free disk space minimum; 60 GB of free disk space is recommended
- Direct attached storage in the form of SATA, SAS, SCSI, or PATA interfaces is supported
- XenServer can be installed on a LUN presented from a storage area network (SAN) via a host bus adapter (HBA) in the XenServer host

A physical HBA is required to boot XenServer from a SAN.

Network card requirements

A 100 Mbps or faster NIC is required for installing Citrix XenServer. One or more gigabit NICs is recommended for faster P2V, export/import data transfers, and VM live migrations.

Installing Citrix® XenServer® 6.2

The following sections cover the installation of Citrix XenServer 6.2.

Installation methods

The Citrix XenServer 6.2 installer can be launched via two methods:

- CD/DVD
- PXE or network boot

Installation source

There are several options for where the Citrix XenServer installation files can be stored, and depending on the scenario, one would be preferred over another. Typically, the HTTP, FTP, or NFS option would be used when the installer is booted over the network via PXE or when a scripted installation is being performed. The installation sources are as follows:

- Local media (CD/DVD)
- HTTP or FTP
- NFS

Supplemental packs

Supplemental packs provide additional functionality to the XenServer platform through features such as enhanced hardware monitoring and third-party management software integration. The supplemental packs are typically downloaded from the vendor's website and are installed when prompted during the XenServer installation.

XenServer® installation

The following steps cover installing Citrix XenServer 6.2 from a CD:

1. Boot the server from the Citrix XenServer 6.2 installation media and press Enter when prompted to start the Citrix XenServer 6.2 installer.
2. Select the desired key mapping and select Ok to proceed.
3. Press F9 if additional drivers need to be installed or select Ok to continue.
4. Accept the EULA.
5. Select the hard drive for the Citrix XenServer installation and choose Ok to proceed.
6. Select the hard drive(s) to be used for storing the guest virtual machines and choose Ok to continue. You need to select the Enable thin provisioning (Optimized storage for XenDesktop) option to make use of the IntelliCache feature.
7. Select the installation media source and select Ok to continue.
8. Install supplemental packs if necessary and choose No to proceed.
9. Select Verify installation source and select Ok to begin the verification. The installation media should be verified at least once to ensure that none of the installation files are corrupt.
10. Choose Ok to continue after the verification has successfully completed.
11. Provide and confirm a password for the root account and select Ok to proceed.
12. Select the network interface to be used as the primary management interface and choose Ok to continue.
13. Select the Static configuration option and provide the requested information. Choose Ok to continue.
14. Enter the desired hostname and DNS server information. Select Ok to proceed.
15. Select the appropriate geographical area to configure the time zone and select Ok to continue.
16. Select the appropriate city or area to configure the time zone and select Ok to proceed.
17. Select Using NTP or Manual time entry for the server to determine the local time and choose Ok to continue. Using NTP to synchronize the time of XenServer hosts in a pool is recommended to ensure that the time on all the hosts in the pool is synchronized.
18. Enter the IP address or hostname of the desired NTP server(s) and select Ok to proceed.
19. Select Install XenServer to start the installation.
20. Click on Ok to restart the server after the installation has completed.

The following screen should be presented after the reboot:

Summary

In this article, we covered an overview of Citrix XenServer along with the features that were available.
We also looked at the new features that were added in XenServer 6.2 and then examined installing XenServer.

Resources for Article:

Further resources on this subject:
- Understanding Citrix® Provisioning Services 7.0 [article]
- Designing a XenDesktop® Site [article]
- Installation and Deployment of Citrix Systems®' CPSM [article]