
How-To Tutorials


Detecting and Protecting against Your Enemies

Packt
22 Jul 2016
9 min read
In this article by Matthew Poole, the author of the book Raspberry Pi for Secret Agents - Third Edition, we will see that the Raspberry Pi has lots of ways of connecting things to it: plugging devices into the USB ports, connecting devices to the onboard camera and display ports, and using the various interfaces that make up the GPIO (General Purpose Input/Output) connector. As part of our detection and protection regime we'll be focusing mainly on connecting things to the GPIO connector.

Build a laser trip wire

You may have seen Wallace and Gromit's short film, The Wrong Trousers, where the penguin uses a contraption to control Wallace in his sleep, making him break into a museum to steal the big shiny diamond. The diamond is surrounded by laser beams, but when one of the beams is broken the alarms go off and the diamond is protected with a cage!

In this project, I'm going to show you how to set up a laser beam and have our Raspberry Pi alert us when the beam is broken: a laser trip wire. For this we're going to need a Waveshare Laser Sensor module (www.waveshare.com), which is readily available to buy on Amazon for around £10 / $15. The module comes complete with jumper wires that allow us to easily connect it to the GPIO connector on the Pi:

The Waveshare laser sensor module contains both the transmitter and receiver

How it works

The module contains both a laser transmitter and receiver. The laser beam is transmitted from the gold tube on the module at a particular modulating frequency. The beam is then reflected off a surface such as a wall or skirting board and picked up by the light sensor lens at the top of the module. The receiver will only detect light that is modulated at the same frequency as the laser beam, so it is not affected by ordinary visible light. This particular module works best when the reflective surface is between 80 and 120 cm away from the laser transmitter. When the beam is interrupted and prevented from reflecting back to the receiver, this is detected and the data pin is triggered. A script monitoring the data pin on the Pi can then do something when it detects this trigger.

Important: Never look directly into the laser beam, as it will hurt your eyes and may irreversibly damage them. Make sure the unit is facing away from you when you wire it up.

Wiring it up

This particular device runs from a power supply of between 2.5 V and 5.0 V. Since our GPIO inputs require 3.3 V maximum when a high level is input, we will use the 3.3 V supply from our Raspberry Pi to power the device:

Wiring diagram for the laser sensor module

Connect the included 3-hole connector to the three pins at the bottom of the laser module, with the red wire on the left (the pin marked VCC).
Referring to the earlier GPIO pin-out diagram, connect the yellow wire to pin 11 of the GPIO connector (labelled D0/GPIO 17).
Connect the black wire to pin 6 of the GPIO connector (labelled GND/0V).
Connect the red wire to pin 1 of the GPIO connector (3.3 V).

The module should now come alive. The red LED on the left of the module will come on if the beam is interrupted. This is what it should look like in real life:

The laser module connected to the Raspberry Pi

Writing the detection script

Now that we have connected the laser sensor module to our Raspberry Pi, we need to write a little script that will detect when the beam has been broken.
In this project we've connected our sensor output to D0, which is GPIO17 (refer to the earlier GPIO pin-out diagram). We need to create file access for the pin by entering the command:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/export

And now set its direction to "in":

pi@raspberrypi ~ $ sudo echo in > /sys/class/gpio/gpio17/direction

We're now ready to read its value, and we can do this with the following command:

pi@raspberrypi ~ $ sudo cat /sys/class/gpio/gpio17/value

You'll notice that it returns "1" (digital high state) if the beam reflection is detected, or "0" (digital low state) if the beam is interrupted. We can create a script to poll for the beam state:

#!/bin/bash
sudo echo 17 > /sys/class/gpio/export
sudo echo in > /sys/class/gpio/gpio17/direction
# loop forever
while true
do
    # read the beam state
    BEAM=$(sudo cat /sys/class/gpio/gpio17/value)
    if [ $BEAM == 1 ]; then
        # beam not blocked
        echo "OK"
    else
        # beam was broken
        echo "ALERT"
    fi
done

Code listing for beam-sensor.sh

When you run the script you should see OK scroll up the screen. Now interrupt the beam using your hand and you should see ALERT scroll up the console screen until you remove your hand. Don't forget that once we've finished with the GPIO port, it's tidy to remove its file access:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/unexport

We've now seen how to easily read a GPIO input. The same wiring principle and script can be used to read other sensors, such as motion detectors or anything else that has an on and off state, and act upon their status.

Protecting an entire area

Our laser trip wire is great for detecting when someone walks through a doorway or down a corridor, but what if we want to know whether people are in a particular area or a whole room? Well, we can with a basic motion sensor, otherwise known as a passive infrared (PIR) detector. These detectors come in a variety of types, and you may have seen them lurking in the corners of rooms, but fundamentally they all work the same way: by detecting the presence of body heat in relation to the background temperature within a certain area. They are commonly used to trigger alarm systems when somebody (or something such as the pet cat) has entered a room.

For the covert surveillance of our private zone we're going to use a small Parallax PIR sensor, available from many online Pi-friendly stores such as ModMyPi, Robot Shop, or Adafruit for less than £10 / $15. This little device will detect the presence of enemies within a 10 meter range of it. If you can't obtain one of these, there are other types that will work just as well, but the wiring might be different from that explained in this project.

Parallax passive infrared motion sensor

Wiring it up

As with our laser sensor module, this device also just needs three wires to connect it to the Raspberry Pi. However, they are connected differently on the sensor, as shown below:

Wiring diagram for the Parallax PIR motion sensor module

Referring to the earlier GPIO pin-out diagram, connect the yellow wire to pin 11 of the GPIO connector (labelled D0/GPIO 17), with the other end connecting to the OUT pin on the PIR module.
Connect the black wire to pin 6 of the GPIO connector (labelled GND/0V), with the other end connecting to the GND pin on the PIR module.
Connect the red wire to pin 1 of the GPIO connector (3.3 V), with the other end connecting to the VCC pin on the module.
The module should now come alive, and you'll notice the light switching on and off as it detects your movement around it. This is what it should look like for real:

PIR motion sensor connected to Raspberry Pi

Implementing the detection script

The detection script for the PIR motion sensor is similar to the one we created for the laser sensor module in the previous section. Once again, we've connected our sensor output to D0, which is GPIO17. We create file access for the pin by entering the command:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/export

And now set its direction to in:

pi@raspberrypi ~ $ sudo echo in > /sys/class/gpio/gpio17/direction

We're now ready to read its value, and we can do this with the following command:

pi@raspberrypi ~ $ sudo cat /sys/class/gpio/gpio17/value

You'll notice that this time the PIR module returns 1 (digital high state) if motion is detected, or 0 (digital low state) if no motion is detected. We can modify our previous script to poll for the motion-detected state:

#!/bin/bash
sudo echo 17 > /sys/class/gpio/export
sudo echo in > /sys/class/gpio/gpio17/direction
# loop forever
while true
do
    # read the sensor state
    BEAM=$(sudo cat /sys/class/gpio/gpio17/value)
    if [ $BEAM == 0 ]; then
        # no motion detected
        echo "OK"
    else
        # motion was detected
        echo "INTRUDER!"
    fi
done

Code listing for motion-sensor.sh

When you run the script you should see OK scroll up the screen if everything is nice and still. Now move in front of the PIR's detection area and you should see INTRUDER! scroll up the console screen until you are still again. Again, don't forget that once we've finished with the GPIO port we should remove its file access:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/unexport

Summary

In this article we looked at the Raspberry Pi's GPIO connector and how to safely connect peripherals to it, and connected a laser sensor module to our Pi to create a rather cool laser trip wire that can alert you when the laser beam is broken.


Debugging Your .NET Application

Packt
21 Jul 2016
13 min read
In this article by Jeff Martin, author of the book Visual Studio 2015 Cookbook - Second Edition, we will discuss how modern software development still requires developers to identify and correct bugs in their code. The edit-compile-test cycle is as familiar as a text editor, and the rise of portable devices has added the need to measure battery consumption and optimize for multiple architectures. Fortunately, our development tools continue to evolve to combat this rise in complexity, and Visual Studio continues to improve its arsenal.

Multi-threaded code and asynchronous code are probably the two most difficult areas for most developers to work with, and also the hardest to debug when you have a problem like a race condition. A race condition occurs when multiple threads perform an operation at the same time, and the order in which they execute makes a difference to how the software runs or how the output is generated. Race conditions often result in deadlocks, incorrect data being used in other calculations, and random, unrepeatable crashes.

The other painful area to debug involves code running on other machines, whether that is locally on your development machine or in production. Hooking up a remote debugger in previous versions of Visual Studio has been less than simple, and the experience of debugging code in production was similarly frustrating.

In this article, we will cover the following sections:

Putting Diagnostic Tools to work
Maximizing everyday debugging

Putting Diagnostic Tools to work

In Visual Studio 2013, Microsoft debuted a new set of tools called the Performance and Diagnostics hub. With VS2015, these tools have been revised further and, in the case of Diagnostic Tools, promoted to a central presence on the main IDE window, displayed by default during debugging sessions. This is great for us as developers, because now it is easier than ever to troubleshoot and improve our code. In this section, we will explore how Diagnostic Tools can be used to explore our code, identify bottlenecks, and analyze memory usage.

Getting ready

The changes didn't stop when VS2015 was released, and succeeding updates to VS2015 have further refined the capabilities of these tools. So for this section, ensure that Update 2 has been installed on your copy of VS2015. We will be using Visual Studio Community 2015, but of course, you may use one of the premium editions too.

How to do it…

For this section, we will put together a short program that will generate some activity for us to analyze:

Create a new C# Console Application, and give it a name of your choice.

In your project's new Program.cs file, add the following method that will generate a large quantity of strings:

static List<string> makeStrings()
{
    List<string> stringList = new List<string>();
    Random random = new Random();
    for (int i = 0; i < 1000000; i++)
    {
        string x = "String details: " + (random.Next(1000, 100000));
        stringList.Add(x);
    }
    return stringList;
}

Next we will add a second static method that produces an SHA256-calculated hash of each string that we generated. This method reads in each string that was previously generated, creates an SHA256 hash for it, and returns the list of computed hashes in hex format.
static List<string> hashStrings(List<string> srcStrings) { List<string> hashedStrings = new List<string>(); SHA256 mySHA256 = SHA256Managed.Create(); StringBuilder hash = new StringBuilder(); foreach (string str in srcStrings) { byte[] srcBytes = mySHA256.ComputeHash(Encoding.UTF8.GetBytes(str), 0, Encoding.UTF8.GetByteCount(str)); foreach (byte theByte in srcBytes) { hash.Append(theByte.ToString("x2")); } hashedStrings.Add(hash.ToString()); hash.Clear(); } mySHA256.Clear(); return hashedStrings; } After adding these methods, you may be prompted to add using statements for System.Text and System.Security.Cryptography. These are definitely needed, so go ahead and take Visual Studio's recommendation to have them added. Now we need to update our Main method to bring this all together. Update your Main method to have the following: static void Main(string[] args) { Console.WriteLine("Ready to create strings"); Console.ReadKey(true); List<string> results = makeStrings(); Console.WriteLine("Ready to Hash " + results.Count() + " strings "); //Console.ReadKey(true); List<string> strings = hashStrings(results); Console.ReadKey(true); } Before proceeding, build your solution to ensure everything is in working order. Now run the application in the Debug mode (F5), and watch how our program operates. By default, the Diagnostic Tools window will only appear while debugging. Feel free to reposition your IDE windows to make their presence more visible or use Ctrl + Alt + F2 to recall it as needed. When you first launch the program, you will see the Diagnostic Tools window appear. Its initial display resembles the following screenshot. Thanks to the first ReadKey method, the program will wait for us to proceed, so we can easily see the initial state. Note that CPU usage is minimal, and memory usage holds constant. Before going any further, click on the Memory Usage tab, and then the Take Snapshot command as indicated in the preceding screenshot. This will record the current state of memory usage by our program, and will be a useful comparison point later on. Once a snapshot is taken, your Memory Usage tab should resemble the following screenshot: Having a forced pause through our ReadKey() method is nice, but when working with real-world programs, we will not always have this luxury. Breakpoints are typically used for situations where it is not always possible to wait for user input, so let's take advantage of the program's current state, and set two of them. We will put one to the second WriteLine method, and one to the last ReadKey method, as shown in the following screenshot: Now return to the open application window, and press a key so that execution continues. The program will stop at the first break point, which is right after it has generated a bunch of strings and added them to our List object. Let's take another snapshot of the memory usage using the same manner given in Step 9. You may also notice that the memory usage displayed in the Process Memory gauge has increased significantly, as shown in this screenshot: Now that we have completed our second snapshot, click on Continue in Visual Studio, and proceed to the next breakpoint. The program will then calculate hashes for all of the generated strings, and when this has finished, it will stop at our last breakpoint. Take another snapshot of the memory usage. Also take notice of how the CPU usage spiked as the hashes were being calculated: Now that we have these three memory snapshots, we will examine how they can help us. 
You may notice how memory usage increases during execution, especially from the initial snapshot to the second. Click on the second snapshot's object delta, as shown in the following screenshot:

On clicking, this will open the snapshot details in a new editor window. Click on the Size (Bytes) column to sort by size, and as you may suspect, our List<String> object is indeed the largest object in our program. Of course, given the nature of our sample program, this is fairly obvious, but when dealing with more complex code bases, being able to utilize this type of investigation is very helpful. The following screenshot shows the results of our filter:

If you would like to know more about the object itself (perhaps there are multiple objects of the same type), you can use the Referenced Types option as indicated in the preceding screenshot. If you would like to try this out on the sample program, be sure to set a smaller number in the makeStrings() loop, otherwise you will run the risk of overloading your system.

Returning to the main Diagnostic Tools window, we will now examine CPU utilization. While the program is executing the hashes (feel free to restart the debugging session if necessary), you can observe where the program spends most of its time:

Again, it is probably no surprise that most of the hard work was done in the hashStrings() method. But when dealing with real-world code, it will not always be so obvious where the slowdowns are, and having this type of insight into your program's execution will make it easier to find areas requiring further improvement. When using the CPU profiler in our example, you may find it easier to remove the first breakpoint and simply trigger profiling by clicking on Break All, as shown in this screenshot:

How it works...

Microsoft wanted more developers to be able to take advantage of their improved technology, so they have increased its availability beyond the Professional and Enterprise editions to also include Community. Running your program within VS2015 with the Diagnostic Tools window open lets you examine your program's performance in great detail. By using memory snapshots and breakpoints, VS2015 provides you with the tools needed to analyze your program's operation and determine where you should spend your time making optimizations.

There's more…

Our sample program does not perform a wide variety of tasks, but of course, more complex programs usually do. To further assist with analyzing those programs, there is a third option available to you beyond CPU Usage and Memory Usage: the Events tab. As shown in the following screenshot, the Events tab also provides the ability to search events for interesting (or long-running) activities. Different event types include file activity, gestures (for touch-based apps), and program modules being loaded or unloaded.

Maximizing everyday debugging

Given the frequency of debugging, any refinement to these tools can pay immediate dividends. VS 2015 brings the popular Edit and Continue feature into the 21st century by supporting 64-bit code. Added to that is the new ability to see the return value of functions in your debugger. These features combine to make debugging code easier, allowing you to solve problems faster.

Getting ready

For this section, you can use VS 2015 Community or one of the premium editions. Be sure to run your choice on a machine using a 64-bit edition of Windows, as that is what we will be demonstrating in this section.
Don't worry, you can still use Edit and Continue with 32-bit C# and Visual Basic code.

How to do it…

Both features are now supported for C# and VB, but we will be using C# for our examples. The features being demonstrated are compiler features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps:

Create a new C# Console Application using the default name.

To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU, and select Configuration Manager...

When the Configuration Manager dialog opens, we can create a new project platform targeting 64-bit code. To do this, click on the drop-down menu for Platform, and select <New...>:

When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type:

Once x64 has been selected, you will return to Configuration Manager. Verify that x64 remains active under Platform, and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active:

With the project settings out of the way, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing:

class Program
{
    static void Main(string[] args)
    {
        int w = 16;
        int h = 8;
        int area = calcArea(w, h);
        Console.WriteLine("Area: " + area);
    }

    private static int calcArea(int width, int height)
    {
        return width / height;
    }
}

Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border, or by right-clicking on the line and selecting Breakpoint | Insert Breakpoint:

If you are not sure where to click, use the right-click method, and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use whatever method you find most convenient.

Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint):

With the breakpoint markers now set, let's debug the program. Begin debugging by either pressing F5 or by clicking on the Start button on the toolbar:

Once debugging starts, the program will quickly execute until stopped by the first breakpoint. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) in the calculation, as the value returned should be width * height. Make the correction. Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). (If you don't see Autos, it can be made visible by pressing Ctrl + D, A, or through Debug | Windows | Autos while debugging.)

After correcting the area calculation, advance the debugging step by pressing F10 twice. (Alternatively, make the advancement by selecting the menu item Debug | Step Over twice.) Visual Studio will advance to the declaration for the area. Note that you were able to edit your code and continue debugging without restarting.
The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet in the following screenshot; Step Over once more if you would like to see it assigned):

There's more…

Programmers who write C++ have already had the ability to see the return values of functions; this just brings .NET developers into the fold. The result is that your development experience won't have to suffer based on the language you have chosen to use for your project.

The Edit and Continue functionality is also available for ASP.NET projects. New projects created in VS2015 will have Edit and Continue enabled by default. Existing projects imported into VS2015 will usually need this to be enabled if it hasn't been done already. To do so, open the Options dialog via Tools | Options, and look for the Debugging | General section. The following screenshot shows where this option is located on the properties page:

Whether you are working with an ASP.NET project or a regular C#/VB .NET application, you can verify Edit and Continue is set via this location.

Summary

In this article, we examined the improvements to the debugging experience in Visual Studio 2015, and how they can help you diagnose the root cause of a problem faster so that you can fix it properly, and not just patch over the symptoms.
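As an aside, the race condition mentioned in this article's introduction is easy to reproduce if you want something non-deterministic to point the debugger at. The following standalone sketch is illustrative only (it is not from the book, and the class and field names are made up): two tasks increment a shared counter without synchronization, so the final value is usually less than the expected 2000000 and changes from run to run.

using System;
using System.Threading.Tasks;

class RaceConditionDemo
{
    // shared state written by two tasks without any locking
    static int counter = 0;

    static void Main()
    {
        Action work = () =>
        {
            // counter++ is a non-atomic read-modify-write, so increments can be lost
            for (int i = 0; i < 1000000; i++)
                counter++;
        };

        var t1 = Task.Run(work);
        var t2 = Task.Run(work);
        Task.WaitAll(t1, t2);

        // expected 2000000; a lock or Interlocked.Increment(ref counter) would fix it
        Console.WriteLine("Counter: " + counter);
    }
}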


Managing EAP in Domain Mode

Packt
19 Jul 2016
7 min read
This article by Francesco Marchioni, author of the book Mastering JBoss Enterprise Application Platform 7, dives deep into application server management using the domain mode and its main components, and discusses how to shift to advanced configurations that resemble real-world projects. The main topics covered here are:

Domain mode breakdown
Handy domain properties
Electing the domain controller

Domain mode breakdown

Managing the application server in domain mode means, in a nutshell, controlling multiple servers from a single, centralized point of control. The servers that are part of the domain can span multiple machines (or even the cloud), and they can be grouped with similar servers of the domain to share a common configuration. To give this some structure, we will break the domain components down into two main categories:

Physical components: These are the domain elements that can be identified with a Java process running on the operating system.
Logical components: These are the domain elements which can span across several physical components.

Domain physical components

When you start the application server through the domain.sh script, you will be able to identify the following processes:

Host controller: Each domain installation contains a host controller. This is a Java process that is in charge of starting and stopping the servers that are defined within the host.xml file. The host controller is only aware of the items that are specific to the local physical installation, such as the domain controller host and port, the JVM settings of the servers, or their system properties.

Domain controller: One host controller of the domain (and only one) is configured to act as the domain controller. This means basically two things: keeping the domain configuration (in the domain.xml file) and assisting the host controllers in managing the servers of the domain.

Servers: Each host controller can contain any number of servers, which are the actual server instances. These server instances cannot be started autonomously. The host controller is in charge of starting and stopping single servers, when the domain controller commands it to.

If you start the default domain configuration on a Linux machine, you will see the following processes show up in your operating system:

As you can see, the process controller is identified by the [Process Controller] label, while the domain controller corresponds to the [Host Controller] label. Each server shows in the process table with the name defined in the host.xml file. You can use common operating system commands such as grep to further restrict the search to a specific process.

Domain logical components

A domain configuration with only physical elements in it would not add much to a set of standalone servers. The following components make the domain definition more abstract, dynamic, and flexible:

Server group: A server group is a collection of servers. Server groups are defined in the domain.xml file, hence they don't have any reference to an actual host controller installation. You can use a server group to share configuration and deployments across a group of servers.

Profile: A profile is an EAP configuration. A domain can hold as many profiles as you need. Out of the box, the following configurations are provided:

default: This configuration matches the standalone.xml configuration (in standalone mode), hence it does not include JMS, IIOP, or HA.
full: This configuration matches the standalone-full.xml configuration (in standalone mode), hence it adds JMS and OpenJDK IIOP to the default server.
ha: This configuration matches the standalone-ha.xml configuration (in standalone mode), so it enhances the default configuration with clustering (HA).
full-ha: This configuration matches the standalone-full-ha.xml configuration (in standalone mode), hence it includes JMS, IIOP, and HA.

Handy domain properties

So far we have learnt about the default configuration files used by JBoss EAP and the location where they are placed. These settings can, however, be varied by means of system properties. The following table shows how to customize the domain configuration file names:

Option           Description
--domain-config  The domain configuration file (default domain.xml)
--host-config    The host configuration file (default host.xml)

On the other hand, this table summarizes the available options to adjust the domain directory structure:

Property                     Description
jboss.domain.base.dir        The base directory for domain content
jboss.domain.config.dir      The base configuration directory
jboss.domain.data.dir        The directory used for persistent data file storage
jboss.domain.log.dir         The directory containing the host-controller.log and process-controller.log files
jboss.domain.temp.dir        The directory used for temporary file storage
jboss.domain.deployment.dir  The directory used to store deployed content
jboss.domain.servers.dir     The directory containing the managed server instances

For example, you can start EAP 7 in domain mode using the domain configuration file mydomain.xml and the host file named myhost.xml, based on the base directory /home/jboss/eap7domain, using the following command:

$ ./domain.sh --domain-config=mydomain.xml --host-config=myhost.xml -Djboss.domain.base.dir=/home/jboss/eap7domain

Electing the domain controller

Before creating your first domain, we will learn in more detail about the process which connects one or more host controllers to one domain controller, and how to elect a host controller to be the domain controller. The physical topology of the domain is stored in the host.xml file. Within this file, you will find as the first line the host controller name, which makes each host controller unique:

<host name="master">

One of the host controllers will be configured to act as a domain controller.
This is done in the domain-controller section with the following block, which states that the domain controller is the host controller itself (hence, local):

<domain-controller>
    <local/>
</domain-controller>

All other host controllers will connect to the domain controller, using the following example configuration, which uses the jboss.domain.master.address and jboss.domain.master.port properties to specify the domain controller address and port:

<domain-controller>
    <remote protocol="remote" host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>

The host controller to domain controller communication happens behind the scenes through a native management port that is also defined in the host.xml file:

<management-interfaces>
    <native-interface security-realm="ManagementRealm">
        <socket interface="management" port="${jboss.management.native.port:9999}"/>
    </native-interface>
    <http-interface security-realm="ManagementRealm" http-upgrade-enabled="true">
        <socket interface="management" port="${jboss.management.http.port:9990}"/>
    </http-interface>
</management-interfaces>

The other highlighted attribute is the management HTTP port, which can be used by the administrator to reach the domain controller. This port is especially relevant if the host controller is the domain controller. Both sockets use the management interface, which is defined in the interfaces section of the host.xml file and exposes the domain controller on a network-available address:

<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>

If you want to run multiple host controllers on the same machine, you need to provide a unique jboss.management.native.port for each host controller, or a different jboss.bind.address.management.

Summary

In this article we covered the essentials of the domain mode breakdown, handy domain properties, and electing the domain controller.


Reactive Programming with C#

Packt
18 Jul 2016
30 min read
In this article by Antonio Esposito, from the book Reactive Programming for .NET Developers, we will see a practical example of what reactive programming is, using pure C# coding. The following topics will be discussed here:

IObserver interface
IObservable interface
Subscription life cycle
Sourcing events
Filtering events
Correlating events
Sourcing from CLR streams
Sourcing from CLR enumerables

IObserver interface

This core-level interface is available within the Base Class Library (BCL) of .NET 4.0, and is available for the older 3.5 as an add-on. The usage is pretty simple and the goal is to provide a standard way of handling the most basic features of any reactive message consumer. Reactive messages flow from a producer to a consumer, which subscribes for some messages. The IObserver C# interface is available to construct message receivers that comply with the reactive programming layout by implementing the three main message-oriented events: a message received, an error received, and a task completed message. The IObserver interface has the following signature and description:

// Summary:
//     Provides a mechanism for receiving push-based notifications.
//
// Type parameters:
//   T:
//     The object that provides notification information. This type parameter is
//     contravariant. That is, you can use either the type you specified or any
//     type that is less derived. For more information about covariance and
//     contravariance, see Covariance and Contravariance in Generics.
public interface IObserver<in T>
{
    // Summary:
    //     Notifies the observer that the provider has finished sending push-based notifications.
    void OnCompleted();
    //
    // Summary:
    //     Notifies the observer that the provider has experienced an error condition.
    //
    // Parameters:
    //   error:
    //     An object that provides additional information about the error.
    void OnError(Exception error);
    //
    // Summary:
    //     Provides the observer with new data.
    //
    // Parameters:
    //   value:
    //     The current notification information.
    void OnNext(T value);
}

Any new message that flows to the receiver implementing such an interface will reach the OnNext method. Any error will reach the OnError method, while the task completed acknowledgement message will reach the OnCompleted method. The usage of an interface means that we cannot use generic premade objects from the BCL; we need to implement any receiver from scratch by using such an interface as a service contract. Let's see an example, because talking about a code example is always simpler than talking about something theoretic. The following example shows how to read commands from a user in a console application in a reactive way:

class Program
{
    static void Main(string[] args)
    {
        //creates a new console input consumer
        var consumer = new ConsoleTextConsumer();
        while (true)
        {
            Console.WriteLine("Write some text and press ENTER to send a message\r\nPress ENTER to exit");
            //read console input
            var input = Console.ReadLine();
            //check for empty message to exit
            if (string.IsNullOrEmpty(input))
            {
                //job completed
                consumer.OnCompleted();
                Console.WriteLine("Task completed. Any further message will generate an error");
            }
            else
            {
                //route the message to the consumer
                consumer.OnNext(input);
            }
        }
    }
}

public class ConsoleTextConsumer : IObserver<string>
{
    private bool finished = false;

    public void OnCompleted()
    {
        if (finished)
        {
            OnError(new Exception("This consumer already finished its lifecycle"));
            return;
        }
        finished = true;
        Console.WriteLine("<- END");
    }

    public void OnError(Exception error)
    {
        Console.WriteLine("<- ERROR");
        Console.WriteLine("<- {0}", error.Message);
    }

    public void OnNext(string value)
    {
        if (finished)
        {
            OnError(new Exception("This consumer finished its lifecycle"));
            return;
        }
        //shows the received message
        Console.WriteLine("-> {0}", value);
        //do something
        //ack the caller
        Console.WriteLine("<- OK");
    }
}

The preceding example shows the IObserver interface usage within the ConsoleTextConsumer class, which simply asks a command console (DOS-like) for user input text to do something with. In this implementation, the class simply writes the input text back out, because we only want to look at the reactive implementation. The first important concept here is that a message consumer knows nothing about how messages are produced; the consumer simply reacts to one of the three events (not CLR events). Besides this, some kind of logic and cross-event ability is also available within the consumer itself. In the preceding example, we can see that the consumer simply shows any received message again on the console. However, a complete message puts the consumer in a finished state (by signaling the finished flag), so any further message that arrives at the OnNext method will be automatically routed to the error handler. Likewise, any further complete message that reaches the consumer will produce another error once the consumer is already in the finished state.

IObservable interface

The IObservable interface, the counterpart of the IObserver interface, has the task of handling message production and observer subscription. It routes valid messages to the OnNext message handler and errors to the OnError message handler. At the end of its life cycle, it acknowledges all the observers on the OnCompleted message handler. To create a valid reactive observable, we must write something that does not block on user input or any other external system input data. The observable object acts as an infinite message generator, something like an infinite enumerable of messages, although in such cases there is no enumeration. Once a new message is somehow available, the observable routes it to all the subscribers. In the following example, we will create a console application that asks the user for an integer number and then routes that number to all the subscribers. Otherwise, if the given input is not a number, an error will be routed to all the subscribers. The observer is similar to the one already seen in the previous example.
Take a look at the following codes: /// <summary> /// Consumes numeric values that divides without rest by a given number /// </summary> public class IntegerConsumer : IObserver<int> { readonly int validDivider; //the costructor asks for a divider public IntegerConsumer(int validDivider) { this.validDivider = validDivider; } private bool finished = false; public void OnCompleted() { if (finished) OnError(new Exception("This consumer already finished it's lifecycle")); else { finished = true; Console.WriteLine("{0}: END", GetHashCode()); } } public void OnError(Exception error) { Console.WriteLine("{0}: {1}", GetHashCode(), error.Message); } public void OnNext(int value) { if (finished) OnError(new Exception("This consumer finished its lifecycle")); //the simple business logic is made by checking divider result else if (value % validDivider == 0) Console.WriteLine("{0}: {1} divisible by {2}", GetHashCode(), value, validDivider); } } This observer consumes integer numeric messages, but it requires that the number is divisible by another one without producing any rest value. This logic, because of the encapsulation principle, is within the observer object. The observable interface, instead, only has the logic of the message sending of valid or error messages. This filtering logic is made within the receiver itself. Although that is not something wrong, in more complex applications, specific filtering features are available in the publish-subscribe communication pipeline. In other words, another object will be available between observable (publisher) and observer (subscriber) that will act as a message filter. Back to our numeric example, here we have the observable implementation made using an inner Task method that does the main job of parsing input text and sending messages. 
In addition, a cancellation token is available to handle the user cancellation request and an eventual observable dispose: //Observable able to parse strings from the Console //and route numeric messages to all subscribers public class ConsoleIntegerProducer : IObservable<int>, IDisposable { //the subscriber list private readonly List<IObserver<int>> subscriberList = new List<IObserver<int>>(); //the cancellation token source for starting stopping //inner observable working thread private readonly CancellationTokenSource cancellationSource; //the cancellation flag private readonly CancellationToken cancellationToken; //the running task that runs the inner running thread private readonly Task workerTask; public ConsoleIntegerProducer() { cancellationSource = new CancellationTokenSource(); cancellationToken = cancellationSource.Token; workerTask = Task.Factory.StartNew(OnInnerWorker, cancellationToken); } //add another observer to the subscriber list public IDisposable Subscribe(IObserver<int> observer) { if (subscriberList.Contains(observer)) throw new ArgumentException("The observer is already subscribed to this observable"); Console.WriteLine("Subscribing for {0}", observer.GetHashCode()); subscriberList.Add(observer); return null; } //this code executes the observable infinite loop //and routes messages to all observers on the valid //message handler private void OnInnerWorker() { while (!cancellationToken.IsCancellationRequested) { var input = Console.ReadLine(); int value; foreach (var observer in subscriberList) if (string.IsNullOrEmpty(input)) break; else if (input.Equals("EXIT")) { cancellationSource.Cancel(); break; } else if (!int.TryParse(input, out value)) observer.OnError(new FormatException("Unable to parse given value")); else observer.OnNext(value); } cancellationToken.ThrowIfCancellationRequested(); } //cancel main task and ack all observers //by sending the OnCompleted message public void Dispose() { if (!cancellationSource.IsCancellationRequested) { cancellationSource.Cancel(); while (!workerTask.IsCanceled) Thread.Sleep(100); } cancellationSource.Dispose(); workerTask.Dispose(); foreach (var observer in subscriberList) observer.OnCompleted(); } //wait until the main task completes or went cancelled public void Wait() { while (!(workerTask.IsCompleted || workerTask.IsCanceled)) Thread.Sleep(100); } } To complete the example, here there is the program Main: static void Main(string[] args) { //this is the message observable responsible of producing messages using (var observer = new ConsoleIntegerProducer()) //those are the message observer that consume messages using (var consumer1 = observer.Subscribe(new IntegerConsumer(2))) using (var consumer2 = observer.Subscribe(new IntegerConsumer(3))) using (var consumer3 = observer.Subscribe(new IntegerConsumer(5))) observer.Wait(); Console.WriteLine("END"); Console.ReadLine(); } The cancellationToken.ThrowIfCancellationRequested may raise an exception in your Visual Studio when debugging. Simply go next by pressing F5, or test such code example without the attached debugger by starting the test with Ctrl + F5 instead of the F5 alone. The application simply creates an observable variable, which is able to parse user data. Then, register three observers specifying to each observer variables the wanted valid divider value. Then, the observable variable will start reading user data from the console and valid or error messages will flow to all the observers. 
Each observer will apply its internal logic of showing the message when it divides for the related divider. Here is the result of executing the application: Observables and observers in action Subscription life cycle What will happen if we want to stop a single observer from receiving messages from the observable event source? If we change the program Main from the preceding example to the following one, we could experience a wrong observer life cycle design. Here's the code: //this is the message observable responsible of producing messages using (var observer = new ConsoleIntegerProducer()) //those are the message observer that consume messages using (var consumer1 = observer.Subscribe(new IntegerConsumer(2))) using (var consumer2 = observer.Subscribe(new IntegerConsumer(3))) { using (var consumer3 = observer.Subscribe(new IntegerConsumer(5))) { //internal lifecycle } observer.Wait(); } Console.WriteLine("END"); Console.ReadLine(); Here is the result in the output console: The third observer unable to catch value messages By using the using construct method, we should stop the life cycle of the consumer object. However, we do not, because in the previous example, the Subscribe method of the observable simply returns a NULL object. To create a valid observer, we must handle and design its life cycle management. This means that we must eventually handle the external disposing of the Subscribe method's result by signaling the right observer that his life cycle reached the end. We have to create a Subscription class to handle an eventual object disposing in the right reactive way by sending the message for the OnCompleted event handler. Here is a simple Subscription class implementation: /// <summary> /// Handle observer subscription lifecycle /// </summary> public sealed class Subscription<T> : IDisposable { private readonly IObserver<T> observer; public Subscription(IObserver<T> observer) { this.observer = observer; } //the event signalling that the observer has //completed its lifecycle public event EventHandler<IObserver<T>> OnCompleted; public void Dispose() { if (OnCompleted != null) OnCompleted(this, observer); observer.OnCompleted(); } } The usage is within the observable Subscribe method. Here's an example: //add another observer to the subscriber list public IDisposable Subscribe(IObserver<int> observer) { if (observerList.Contains(observer)) throw new ArgumentException("The observer is already subscribed to this observable"); Console.WriteLine("Subscribing for {0}", observer.GetHashCode()); observerList.Add(observer); //creates a new subscription for the given observer var subscription = new Subscription<int>(observer); //handle to the subscription lifecycle end event subscription.OnCompleted += OnObserverLifecycleEnd; return subscription; } void OnObserverLifecycleEnd(object sender, IObserver<int> e) { var subscription = sender as Subscription<int>; //remove the observer from the internal list within the observable observerList.Remove(e); //remove the handler from the subscription event //once already handled subscription.OnCompleted -= OnObserverLifecycleEnd; } As visible, the preceding example creates a new Subscription<T> object to handle this observer life cycle with the IDisposable.Dispose method. Here is the result of such code edits against the full example available in the previous paragraph: The observer will end their life as we dispose their life cycle tokens This time, an observer ends up its life cycle prematurely by disposing the subscription object. 
This is visible by the first END message. Later, only two observers remain available at the application ending; when the user asks for EXIT, only such two observers end their life cycle by themselves rather than by the Subscription disposing. In real-world applications, often, observers subscribe to observables and later unsubscribe by disposing the Subscription token. This happens because we do not always want a reactive module to handle all the messages. In this case, this means that we have to handle the observer life cycle by ourselves, as we already did in the previous examples, or we need to apply filters to choose which messages flows to which subscriber, as visible in the later section Filtering events. Kindly consider that although filters make things easier, we will always have to handle the observer life cycle. Sourcing events Sourcing events is the ability to obtain from a particular source where few useful events are usable in reactive programming. Reactive programming is all about event message handling. Any event is a specific occurrence of some kind of handleable behavior of users or external systems. We can actually program event reactions in the most pleasant and productive way for reaching our software goals. In the following example, we will see how to react to CLR events. In this specific case, we will handle filesystem events by using events from the System.IO.FileSystemWatcher class that gives us the ability to react to the filesystem's file changes without the need of making useless and resource-consuming polling queries against the file system status. Here's the observer and observable implementation: public sealed class NewFileSavedMessagePublisher : IObservable<string>, IDisposable { private readonly FileSystemWatcher watcher; public NewFileSavedMessagePublisher(string path) { //creates a new file system event router this.watcher = new FileSystemWatcher(path); //register for handling File Created event this.watcher.Created += OnFileCreated; //enable event routing this.watcher.EnableRaisingEvents = true; } //signal all observers a new file arrived private void OnFileCreated(object sender, FileSystemEventArgs e) { foreach (var observer in subscriberList) observer.OnNext(e.FullPath); } //the subscriber list private readonly List<IObserver<string>> subscriberList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { //register the new observer subscriberList.Add(observer); return null; } public void Dispose() { //disable file system event routing this.watcher.EnableRaisingEvents = false; //deregister from watcher event handler this.watcher.Created -= OnFileCreated; //dispose the watcher this.watcher.Dispose(); //signal all observers that job is done foreach (var observer in subscriberList) observer.OnCompleted(); } } /// <summary> /// A tremendously basic implementation /// </summary> public sealed class NewFileSavedMessageSubscriber : IObserver<string> { public void OnCompleted() { Console.WriteLine("-> END"); } public void OnError(Exception error) { Console.WriteLine("-> {0}", error.Message); } public void OnNext(string value) { Console.WriteLine("-> {0}", value); } } The observer interface simply gives us the ability to write text to the console. I think, there is nothing to say about it. On the other hand, the observable interface makes the most of the job in this implementation. The observable interface creates the watcher object and registers the right event handler to catch the wanted reactive events. 
It handles its own life cycle and that of the internal watcher object. Then, it correctly sends the OnCompleted message to all the observers. Here's the program's initialization:

static void Main(string[] args)
{
    Console.WriteLine("Watching for new files");
    using (var publisher = new NewFileSavedMessagePublisher(@"[WRITE A PATH HERE]"))
    using (var subscriber = publisher.Subscribe(new NewFileSavedMessageSubscriber()))
    {
        Console.WriteLine("Press RETURN to exit");
        //wait for user RETURN
        Console.ReadLine();
    }
}

Any new file that appears in the folder will route the full FileName to the observer. This is the result of copying and pasting the same file three times:

-> [YOUR PATH]out - Copy.png
-> [YOUR PATH]out - Copy (2).png
-> [YOUR PATH]out - Copy (3).png

By using a single observable interface and a single observer interface, the power of reactive programming is not so evident. Let's begin writing some intermediate objects that change the message flow within the pipeline of our message pump in a reactive way, with filters, message correlators, and dividers.

Filtering events

As said in the previous section, it is time to alter the message flow. The observable interface has the task of producing messages, while the observer, at the opposite end, consumes such messages. To create a message filter, we need to create an object that is a publisher and subscriber altogether. The implementation must take into consideration the filtering needs and the message routing to underlying observers that subscribe to the filter observable object instead of the main one. Here's an implementation of the filter:

/// <summary>
/// The filtering observable/observer
/// </summary>
public sealed class StringMessageFilter : IObservable<string>, IObserver<string>, IDisposable
{
    private readonly string filter;

    public StringMessageFilter(string filter)
    {
        this.filter = filter;
    }

    //the observer collection
    private readonly List<IObserver<string>> observerList = new List<IObserver<string>>();

    public IDisposable Subscribe(IObserver<string> observer)
    {
        this.observerList.Add(observer);
        return null;
    }

    //a simple implementation
    //that disables message routing once
    //the OnCompleted has been invoked
    private bool hasCompleted = false;

    public void OnCompleted()
    {
        hasCompleted = true;
        foreach (var observer in observerList)
            observer.OnCompleted();
    }

    //routes error messages until not completed
    public void OnError(Exception error)
    {
        if (!hasCompleted)
            foreach (var observer in observerList)
                observer.OnError(error);
    }

    //routes valid messages until not completed
    public void OnNext(string value)
    {
        Console.WriteLine("Filtering {0}", value);
        if (!hasCompleted && value.ToLowerInvariant().Contains(filter.ToLowerInvariant()))
            foreach (var observer in observerList)
                observer.OnNext(value);
    }

    public void Dispose()
    {
        OnCompleted();
    }
}

This filter can be used together with the example from the previous section, which routes the FileSystemWatcher events of created files.
This is the new program initialization: static void Main(string[] args) { Console.WriteLine("Watching for new files"); using (var publisher = new NewFileSavedMessagePublisher(@"[WRITE A PATH HERE]")) using (var filter = new StringMessageFilter(".txt")) { //subscribe the filter to publisher messages publisher.Subscribe(filter); //subscribe the console subscriber to the filter //instead that directly to the publisher filter.Subscribe(new NewFileSavedMessageSubscriber()); Console.WriteLine("Press RETURN to exit"); Console.ReadLine(); } } As visible, this new implementation creates a new filter object that takes parameter to verify valid filenames to flow to the underlying observers. The filter subscribes to the main observable object, while the observer subscribes to the filter itself. It is like a chain where each chain link refers to the near one. This is the output console of the running application: The filtering observer in action Although I made a copy of two files (a .png and a .txt file), we can see that only the text file reached the internal observer object, while the image file reached the OnNext of filter because the invalid against the filter argument never reached internal observer. Correlating events Sometimes, especially when dealing with integration scenarios, there is the need of correlating multiple events that not always came altogether. This is the case of a header file that came together with multiple body files. In reactive programming, correlating events means correlating multiple observable messages into a single message that is the result of two or more original messages. Such messages must be somehow correlated to a value (an ID, serial, or metadata) that defines that such initial messages belong to the same correlation set. Useful features in real-world correlators are the ability to specify a timeout (that may be infinite too) in the correlation waiting logic and the ability to specify a correlation message count (infinite too). Here's a correlator implementation made for the previous example based on the FileSystemWatcher class: public sealed class FileNameMessageCorrelator : IObservable<string>, IObserver<string>, IDisposable { private readonly Func<string, string> correlationKeyExtractor; public FileNameMessageCorrelator(Func<string, string> correlationKeyExtractor) { this.correlationKeyExtractor = correlationKeyExtractor; } //the observer collection private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { this.observerList.Add(observer); return null; } private bool hasCompleted = false; public void OnCompleted() { hasCompleted = true; foreach (var observer in observerList) observer.OnCompleted(); } //routes error messages until not completed public void OnError(Exception error) { if (!hasCompleted) foreach (var observer in observerList) observer.OnError(error); } Just a pause. Up to this row, we simply created the reactive structure of FileNameMessageCorrelator class by implementing the two main interfaces. 
Here is the core implementation that correlates messages: //the container of correlations able to contain //multiple strings per each key private readonly NameValueCollection correlations = new NameValueCollection(); //routes valid messages while not completed public void OnNext(string value) { if (hasCompleted) return; //check if subscriber has completed Console.WriteLine("Parsing message: {0}", value); //try extracting the correlation ID var correlationID = correlationKeyExtractor(value); //check if the correlation is available if (correlationID == null) return; //append the new file name to the correlation state correlations.Add(correlationID, value); //in this example we will consider always //correlations of two items if (correlations.GetValues(correlationID).Count() == 2) { //once the correlation is complete //read the two files and push the //two contents altogether to the //observers var fileData = correlations.GetValues(correlationID) //route messages to the ReadAllText method .Select(File.ReadAllText) //materialize the query .ToArray(); var newValue = string.Join("|", fileData); foreach (var observer in observerList) observer.OnNext(newValue); correlations.Remove(correlationID); } } This correlator class accepts a correlation function as a constructor parameter. This function is later used to evaluate correlationID when a new filename flows into the OnNext method. Once the function returns a valid correlationID, that ID is used as the key of a NameValueCollection, a specialized string collection that stores multiple values per key. When there are two values for the same key, the correlation is complete and flows out to the underlying observers by reading the two files and joining their data into a single string message. Here's the application's initialization: static void Main(string[] args) { using (var publisher = new NewFileSavedMessagePublisher(@"[WRITE A PATH HERE]")) //creates a new correlator by specifying the correlation key //extraction function made with a regular expression that //extracts a file ID similar to FILEID0001 using (var correlator = new FileNameMessageCorrelator(ExtractCorrelationKey)) { //subscribe the correlator to publisher messages publisher.Subscribe(correlator); //subscribe the console subscriber to the correlator //instead of directly to the publisher correlator.Subscribe(new NewFileSavedMessageSubscriber()); //wait for user RETURN Console.ReadLine(); } } private static string ExtractCorrelationKey(string arg) { var match = Regex.Match(arg, @"(FILEID\d{4})"); if (match.Success) return match.Captures[0].Value; else return null; } The initialization is much the same as in the filtering example seen in the previous section. The biggest difference is that the correlator object, instead of a string filter, accepts a function that analyses the incoming filename and produces the correlationID value when one is available. I prepared two files with the same ID in their filenames. Here's the console output of the running example: Two files correlated by their name As you can see, the correlator did its job by joining the two files' data into a single message, regardless of the order in which the two files were stored in the filesystem. These examples regarding the filtering and correlation of messages should give you the idea that we can do anything with received messages. We can put a message on standby until a correlated message arrives, join multiple messages into one, produce the same message multiple times, and so on.
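To make the last point concrete, here is a minimal sketch, not taken from the original example, of a pass-through stage that re-publishes every incoming message a fixed number of times. The class name StringMessageRepeater and its repetitions parameter are illustrative only; the class follows the same simplified subscription handling used by the filter and the correlator above.
//illustrative sketch: a stage that repeats every message a fixed number of times
//requires using System; and using System.Collections.Generic;
public sealed class StringMessageRepeater : IObservable<string>, IObserver<string>, IDisposable
{
    private readonly int repetitions;
    public StringMessageRepeater(int repetitions)
    {
        this.repetitions = repetitions;
    }
    //the observer collection
    private readonly List<IObserver<string>> observerList = new List<IObserver<string>>();
    public IDisposable Subscribe(IObserver<string> observer)
    {
        observerList.Add(observer);
        //subscription lifecycle missing for readability purpose
        return null;
    }
    private bool hasCompleted = false;
    public void OnCompleted()
    {
        hasCompleted = true;
        foreach (var observer in observerList)
            observer.OnCompleted();
    }
    //routes error messages while not completed
    public void OnError(Exception error)
    {
        if (!hasCompleted)
            foreach (var observer in observerList)
                observer.OnError(error);
    }
    //routes each message to every observer the configured number of times
    public void OnNext(string value)
    {
        if (hasCompleted) return;
        for (var i = 0; i < repetitions; i++)
            foreach (var observer in observerList)
                observer.OnNext(value);
    }
    public void Dispose()
    {
        OnCompleted();
    }
}
It would sit in the pipeline exactly like the filter: subscribe the repeater to the publisher, then subscribe the console observer to the repeater.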
This programming style opens the programmer's mind to a lot of new application designs and possibilities. Sourcing from CLR streams Any class that extends System.IO.Stream represents some kind of cursor-based flow of data. The same happens when we watch a video stream, a sort of data that is not persisted locally and flows only across the network, with the ability to go forward and backward, stop, pause, resume, play, and so on. The same behavior is available while streaming any kind of data; thus, the Stream class is the base class that exposes such behavior for any need. There are specialized classes that extend Stream, helping work with streams of text data (StreamWriter and StreamReader), binary serialized data (BinaryReader and BinaryWriter), memory-based temporary byte containers (MemoryStream), network-based streams (NetworkStream), and many others. Regarding reactive programming, we are dealing with the ability to source events from any stream regardless of its kind (network, file, memory, and so on). Real-world applications that use reactive programming based on streams are chats, remote binary listeners (socket programming), and other unpredictable, event-oriented applications. On the other hand, it is useless to read a huge file in a reactive way, because there is simply nothing reactive in such cases. It is time to look at an example. Here's a complete example of a reactive application made for listening on a TCP port and routing string messages (CR + LF divides multiple messages) to all the available observers. The program Main and the usual ConsoleObserver methods are omitted for better readability (a minimal sketch of both follows the listing): public sealed class TcpListenerStringObservable : IObservable<string>, IDisposable { private readonly TcpListener listener; public TcpListenerStringObservable(int port, int backlogSize = 64) { //creates a new tcp listener on given port //with given backlog size listener = new TcpListener(IPAddress.Any, port); listener.Start(backlogSize); //start listening asynchronously listener.AcceptTcpClientAsync().ContinueWith(OnTcpClientConnected); } private void OnTcpClientConnected(Task<TcpClient> clientTask) { //if the task has not encountered errors if (clientTask.IsCompleted) //we will handle a single client connection per time //to handle multiple connections, simply put following //code into a Task using (var tcpClient = clientTask.Result) using (var stream = tcpClient.GetStream()) using (var reader = new StreamReader(stream)) while (tcpClient.Connected) { //read the message var line = reader.ReadLine(); //stop listening if nothing available if (string.IsNullOrEmpty(line)) break; else { //construct observer message adding client's remote endpoint address and port var msg = string.Format("{0}: {1}", tcpClient.Client.RemoteEndPoint, line); //route messages foreach (var observer in observerList) observer.OnNext(msg); } } //starts another client listener listener.AcceptTcpClientAsync().ContinueWith(OnTcpClientConnected); } private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { observerList.Add(observer); //subscription lifecycle missing //for readability purpose return null; } public void Dispose() { //stop listener listener.Stop(); } } The preceding example shows how to create a reactive TCP listener that acts as an observable of string messages. The observable uses an internal TcpListener class that provides mid-level network services on top of an underlying Socket object.
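For completeness, here is a minimal sketch of the omitted pieces; the exact bodies of Main and ConsoleStringObserver are assumptions, since the article does not show them at this point, but they are enough to try the listing end to end.
//assumed minimal console observer that prints every routed message
public sealed class ConsoleStringObserver : IObserver<string>
{
    public void OnCompleted()
    {
        Console.WriteLine("<completed>");
    }
    public void OnError(Exception error)
    {
        Console.WriteLine("ERROR: {0}", error.Message);
    }
    public void OnNext(string value)
    {
        Console.WriteLine(value);
    }
}
//assumed minimal entry point listening on port 8081, the port used by the telnet example below
//requires the same using directives as the listing above (System, System.IO, System.Net,
//System.Net.Sockets, System.Threading.Tasks, System.Collections.Generic)
class Program
{
    static void Main(string[] args)
    {
        using (var observable = new TcpListenerStringObservable(8081))
        using (var observer = observable.Subscribe(new ConsoleStringObserver()))
        {
            Console.WriteLine("Press RETURN to exit");
            Console.ReadLine();
        }
    }
}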
The example asks the listener to start listening and starts waiting for a client on another thread through a Task object. When a remote client becomes available, its communication with the internals of the observable is handled by the OnTcpClientConnected method, which verifies the normal execution of the Task. Then, it takes the TcpClient from the Task, gets the network stream, and wraps it in a StreamReader to start reading messages. Once reading is complete, another client-listening Task is started and the procedure repeats. Although this design handles a backlog of pending connections, it serves only a single client at a time. To change the design to handle multiple connections at once, simply encapsulate the OnTcpClientConnected logic in a new Task. Here's an example: private void OnTcpClientConnected(Task<TcpClient> clientTask) { //if the task has not encountered errors if (clientTask.IsCompleted) Task.Factory.StartNew(() => { using (var tcpClient = clientTask.Result) using (var stream = tcpClient.GetStream()) using (var reader = new StreamReader(stream)) while (tcpClient.Connected) { //read the message var line = reader.ReadLine(); //stop listening if nothing available if (string.IsNullOrEmpty(line)) break; else { //construct observer message adding client's remote endpoint address and port var msg = string.Format("{0}: {1}", tcpClient.Client.RemoteEndPoint, line); //route messages foreach (var observer in observerList) observer.OnNext(msg); } } }, TaskCreationOptions.PreferFairness); //starts another client listener listener.AcceptTcpClientAsync().ContinueWith(OnTcpClientConnected); } This is the output of the reactive application when it receives two different connections by using telnet as a client (C:\>telnet localhost 8081). The program Main and the usual ConsoleObserver methods are omitted for better readability: The observable routing events from the telnet client As you can see, each client connects to the listener using a different remote port. This gives us the ability to differentiate multiple remote connections even when they connect at the same time. Sourcing from CLR enumerables Sourcing from a finite collection is of little use in reactive programming. However, specific enumerable collections are perfect for reactive usage. These are the changeable collections that support change notifications by implementing the INotifyCollectionChanged (System.Collections.Specialized) interface, such as the ObservableCollection (System.Collections.ObjectModel) class, and any infinite collection that supports the enumerator pattern through the yield keyword. Changeable collections The ObservableCollection<T> class lets us observe, in an event-based way, any change that occurs to the collection content. Keep in mind that changes to a child item's properties are outside the collection's scope. This means that we are notified only of collection changes such as those produced by the Add or Remove methods. Changes within a single item do not alter the collection size and thus are not notified at all.
Here's a generic (nonreactive) example: static void Main(string[] args) { //the observable collection var collection = new ObservableCollection<string>(); //register a handler to catch collection changes collection.CollectionChanged += OnCollectionChanged; collection.Add("ciao"); collection.Add("hahahah"); collection.Insert(0, "new first line"); collection.RemoveAt(0); Console.WriteLine("Press RETURN to EXIT"); Console.ReadLine(); } private static void OnCollectionChanged(object sender, NotifyCollectionChangedEventArgs e) { var collection = sender as ObservableCollection<string>; if (e.NewStartingIndex >= 0) //adding new items Console.WriteLine("-> {0} {1}", e.Action, collection[e.NewStartingIndex]); else //removing items Console.WriteLine("-> {0} at {1}", e.Action, e.OldStartingIndex); } As you can see, the collection notifies all the add operations, giving us the ability to catch the new message. The Insert method also signals an Add action, although with Insert we can specify the index, and the value will be available within the collection. The parameter containing the index value (e.NewStartingIndex) holds the new index according to the operation performed. The Remove operation, on the other hand, although it notifies the removed element's index, cannot give us the ability to read the original message, because the event triggers after the removal has already occurred. In a real-world reactive application, the most interesting operation against ObservableCollection is the Add operation. Here's an example (console observer omitted for better readability): class Program { static void Main(string[] args) { //the observable collection var collection = new ObservableCollection<string>(); using (var observable = new NotifiableCollectionObservable(collection)) using (var observer = observable.Subscribe(new ConsoleStringObserver())) { collection.Add("ciao"); collection.Add("hahahah"); collection.Insert(0, "new first line"); collection.RemoveAt(0); Console.WriteLine("Press RETURN to EXIT"); Console.ReadLine(); } } public sealed class NotifiableCollectionObservable : IObservable<string>, IDisposable { private readonly ObservableCollection<string> collection; public NotifiableCollectionObservable(ObservableCollection<string> collection) { this.collection = collection; this.collection.CollectionChanged += collection_CollectionChanged; } //route only added items to the observers private void collection_CollectionChanged(object sender, NotifyCollectionChangedEventArgs e) { if (e.Action == NotifyCollectionChangedAction.Add) foreach (string item in e.NewItems) foreach (var observer in observerList) observer.OnNext(item); } private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { observerList.Add(observer); //subscription lifecycle missing //for readability purpose return null; } public void Dispose() { this.collection.CollectionChanged -= collection_CollectionChanged; foreach (var observer in observerList) observer.OnCompleted(); } } } The result is the same as the previous ObservableCollection example without the reactive objects. The only difference is that the observable routes messages only when the Action value is Add. The ObservableCollection signaling its content changes Infinite collections Our last example is about sourcing events from an infinite collection method. In C#, it is possible to implement the enumerator pattern by signaling one object to enumerate at a time, thanks to the yield keyword.
Here's an example: static void Main(string[] args) { foreach (var value in EnumerateValuesFromSomewhere()) Console.WriteLine(value); } static IEnumerable<string> EnumerateValuesFromSomewhere() { var random = new Random(DateTime.Now.GetHashCode()); while (true) //forever { //returns a random integer number as string yield return random.Next().ToString(); //some throttling time Thread.Sleep(100); } } This implementation is powerful because it never materializes all the values into the memory. It simply signals that a new object is available to the enumerator that the foreach structure internally uses by itself. The result is writing forever numbers onto the output console. Somehow, this behavior is useful for reactive usage, because it never creates a useless state like a temporary array, list, or generic collection. It simply signals new items available to the enumerable. Here's an example: public sealed class EnumerableObservable : IObservable<string>, IDisposable { private readonly IEnumerable<string> enumerable; public EnumerableObservable(IEnumerable<string> enumerable) { this.enumerable = enumerable; this.cancellationSource = new CancellationTokenSource(); this.cancellationToken = cancellationSource.Token; this.workerTask = Task.Factory.StartNew(() => { foreach (var value in this.enumerable) { //if task cancellation triggers, raise the proper exception //to stop task execution cancellationToken.ThrowIfCancellationRequested(); foreach (var observer in observerList) observer.OnNext(value); } }, this.cancellationToken); } //the cancellation token source for starting stopping //inner observable working thread private readonly CancellationTokenSource cancellationSource; //the cancellation flag private readonly CancellationToken cancellationToken; //the running task that runs the inner running thread private readonly Task workerTask; //the observer list private readonly List<IObserver<string>> observerList = new List<IObserver<string>>(); public IDisposable Subscribe(IObserver<string> observer) { observerList.Add(observer); //subscription lifecycle missing //for readability purpose return null; } public void Dispose() { //trigger task cancellation //and wait for acknoledge if (!cancellationSource.IsCancellationRequested) { cancellationSource.Cancel(); while (!workerTask.IsCanceled) Thread.Sleep(100); } cancellationSource.Dispose(); workerTask.Dispose(); foreach (var observer in observerList) observer.OnCompleted(); } } This is the code of the program startup with the infinite enumerable generation: class Program { static void Main(string[] args) { //we create a variable containing the enumerable //this does not trigger item retrieval //so the enumerator does not begin flowing datas var enumerable = EnumerateValuesFromSomewhere(); using (var observable = new EnumerableObservable(enumerable)) using (var observer = observable.Subscribe(new ConsoleStringObserver())) { //wait for 2 seconds than exit Thread.Sleep(2000); } Console.WriteLine("Press RETURN to EXIT"); Console.ReadLine(); } static IEnumerable<string> EnumerateValuesFromSomewhere() { var random = new Random(DateTime.Now.GetHashCode()); while (true) //forever { //returns a random integer number as string yield return random.Next().ToString(); //some throttling time Thread.Sleep(100); } } } As against the last examples, here we have the usage of the Task class. 
The observable consumes the enumerable within an asynchronous Task so that the programmer can stop the execution of the whole operation simply by exiting the using scope or by manually invoking the Dispose method. This example shows a tremendously powerful feature: the ability to yield values without having to source them from a concrete (finite) array or collection, simply by implementing the enumerator pattern. Although it is not widely used, the yield operator makes it possible to create complex applications simply by pushing messages between methods. The more methods we create that send messages to each other, the more complex the business logic the application can handle. Consider the ability to catch all such messages with observables, and you begin to get an idea of how powerful reactive programming can be for a developer. Summary In this article, we had the opportunity to test the main features that any reactive application must implement: message sending, error sending, and completion acknowledgement. We focused on plain C# programming to give a first overview of how classic reactive designs can be applied to the main application needs, such as sourcing from streams, user input, and changeable and infinite collections. Resources for Article: Further resources on this subject: Basic Website using Node.js and MySQL database [article] Domain-Driven Design [article] Data Science with R [article]

Getting Started with Packages in R

Joel Carlson
18 Jul 2016
6 min read
R is a powerful programming language for loading, manipulating, transforming, and visualizing data. The language is made more powerful by its extensibility in conjunction with the efforts of a highly active open source community. This community is constantly contributing to the language in the form of packages, which are, at their core, sets of thematically linked functions. By leveraging the work that has been put in to the creation of useful open source packages, an R user can substantially improve both the readability and efficiency of their code. In this post, you will learn how to install new packages to extend the functionality of R and how to load those packages into your session. We will also explore some of the most useful packages that have been contributed by the R community! Installing Packages There are a number of places where R packages can be stored, but the three most popular locations are CRAN, Bioconductor, and GitHub. CRAN The Comprehensive R Archive Network is the home of R. At the time of this writing, there are over 8,000 packages hosted on CRAN, all of which are free to download and use. If you are looking to get started with using R in your field but don't know exactly where to start, the CRAN task view for your field or area of interest is likely a good place to start. There you will find listings of relevant packages, along with short descriptions and links to source code. Let's say you've entered the "Reproducible Research" task view and have decided that the package named knitr sounds useful. To install knitr from CRAN, you type this in your R console: install.packages("knitr") Bioconductor Bioconductor is home to over 1,000 packages for R, with a focus on packages that can be used for bioinformatics research. One of the main differences between Bioconductor and CRAN is that Bioconductor has stricter guidelines for accepting packages than CRAN. After finding a package on Bioconductor, such as EBImage, install it by running these commands: source("https://bioconductor.org/biocLite.R") biocLite("EBImage") It is possible to install from Bioconductor using install.packages, but this is not recommended for reasons discussed here. GitHub GitHub is a space where you can post the source code of your work to keep it under version control and also to encourage and facilitate collaboration. Often, GitHub is where the truly bleeding-edge packages can be found, and where package updates are put first. Many of the packages that can be found on CRAN have a development version on GitHub, occasionally with features absent from the CRAN version. As you browse GitHub, you will likely find some packages that will never be put on CRAN or Bioconductor. For this reason, caution should be exercised when using packages sourced from GitHub. Should you find a package on GitHub and wish to install it, you must first download the package devtools from CRAN. You then have access to the install_github() function, where the argument is the name of the developer, followed by a slash, and then the name of the package: install.packages("devtools") # Install swirl! See: https://github.com/swirldev/swirl devtools::install_github("swirldev/swirl") Where the syntax devtools::xxxx() simply means "Use the xxxx function from the devtools package ". You could just have easily called library(devtools) after installing and then simply typed install_github(). The devtools package also includes a number of different methods for installing packages that are stored locally, on bitbucket, in an SVN repository. 
Try typing ??devtools::install_ to see a full list. Some Popular Packages Now that you know the basic commands for installing packages, let's take a very short look at some of the more popular and useful packages. Visualizing data with ggplot2 ggplot2 is a package that is used to visualize data. It provides a method of chart-building that is intuitive (based on The Grammar of Graphics) and results in aesthetically pleasing graphics. Here is an example of a graphic produced using ggplot2: install.packages("ggplot2") # Install from CRAN library(ggplot2) # Load ggplot2 data(diamonds) # Load diamonds data set # Create plot with carat on x axis, price on y, # and color based on quality of cut ggplot(data=diamonds, aes(x=carat, y=price, col=cut)) + geom_point(alpha=0.5) # Use points (dots) to represent data Manipulating data with dplyr dplyr presents a number of verbs used for manipulating data (select, filter, mutate, arrange, summarize, and so on), each of which are common tasks when working with data. To see how dplyr can simplify your workflow, let's compare the base R versus the dplyr code used to subset the diamonds data into only those gems with Ideal cut type and greater than 2 carats: install.packages("dplyr") # Install dplyr from CRAN library(dplyr) # Load dplyr BaseR <- diamonds[which(diamonds$cut == "Ideal" & diamonds$carat > 2),] # vs: Dplyr <- filter(diamonds, cut == "Ideal" & carat > 2) Clearly the dplyr version is more succinct, more readable, and, most importantly, easier to write. Machine learning with caret The caret package is a collection of functions that unify the syntax used by many of the most popular machine learning packages implemented in R. caret will allow you to quickly prepare your data, create predictive models, tune the model parameters, and interpret the results. Here is a simple working example of training and tuning a k-nearest neighbors model with caret to predict the price of a diamond based on cut, color, and clarity: install.packages("caret") library(caret) # Split data into training and testing sets inTrain <- createDataPartition(diamonds$price, p=0.01, list=FALSE) training <- diamonds[inTrain,] testing <- diamonds[-inTrain,] knn_model <- train(price ~ cut + color + clarity, data=training, method="knn") plot(knn_model) You can see that increasing the number of neighbors in the model increases the accuracy (decreases the RMSE, a method of measuring the average distance between predictions and data). Summary In this post, you learned how to install and load packages from three different major sources: CRAN, Bioconductor, and GitHub. You also took a brief look at three popular packages: ggplot2 for visualization, dplyr for manipulation, and caret for machine learning. About the author Joel Carlson is a recent MSc graduate from Seoul National University, and current Data Science Fellow at Galvanize in San Francisco. He has contributed two R packages to CRAN (radiomics and RImagePalette). You can learn more or contact him at his personal website.

Overview of Certificate Management

Packt
18 Jul 2016
24 min read
In this article by David Steadman and Jeff Ingalls, the authors of Microsoft Identity Manager 2016 Handbook, we will look at certificate management in brief. Microsoft Identity Management (MIM)—certificate management (CM)—is deemed the outcast in many discussions. We are here to tell you that this is not the case. We see many scenarios where CM makes the management of user-based certificates possible and improved. If you are currently using FIM certificate management or considering a new certificate management deployment with MIM, we think you will find that CM is a component to consider. CM is not a requirement for using smart cards, but it adds a lot of functionality and security to the process of managing the complete life cycle of your smart cards and software-based certificates in a single forest or multiforest scenario. In this article, we will look at the following topics: What is CM? Certificate management components Certificate management agents The certificate management permission model (For more resources related to this topic, see here.) What is certificate management? Certificate management extends MIM functionality by adding management policy to a driven workflow that enables the complete life cycle of initial enrollment, duplication, and the revocation of user-based certificates. Some smart card features include offline unblocking, duplicating cards, and recovering a certificate from a lost card. The concept of this policy is driven by a profile template within the CM application. Profile templates are stored in Active Directory, which means the application already has a built-in redundancy. CM is based on the idea that the product will proxy, or be the middle man, to make a request to and get one from CA. CM performs its functions with user agents that encrypt and decrypt its communications. When discussing PKI (Public Key Infrastructure) and smart cards, you usually need to have some discussion about the level of assurance you would like for the identities secured by your PKI. For basic insight on PKI and assurance, take a look at http://bit.ly/CorePKI. In typical scenarios, many PKI designers argue that you should use Hardware Security Module (HSM) to secure your PKI in order to get the assurance level to use smart cards. Our personal opinion is that HSMs are great if you need high assurance on your PKI, but smart cards increase your security even if your PKI has medium or low assurance. Using MIM CM with HSM will not be covered in this article, but if you take a look at http://bit.ly/CMandLunSA, you will find some guidelines on how to use MIM CM and HSM Luna SA. The Financial Company has a low-assurance PKI with only one enterprise root CA issuing the certificates. The Financial Company does not use a HSM with their PKI or their MIM CM. If you are running a medium- or high-assurance PKI within your company, policies on how to issue smart cards may differ from the example. More details on PKI design can be found at http://bit.ly/PKIDesign. Certificate management components Before we talk about certificate management, we need to understand the underlying components and architecture: As depicted before, we have several components at play. We will start from the left to the right. From a high level, we have the Enterprise CA. The Enterprise CA can be multiple CAs in the environment. Communication from the CM application server to the CA is over the DCOM/RPC channel. 
End user communication can be with the CM web page or with a new REST API via a modern client to enable the requesting of smart cards and the management of these cards. From the CM perspective, the two mandatory components are the CM server and the CA modules. Looking at the logical architecture, we have the CA, and underneath this, we have the modules. The policy and exit module, once installed, control the communication and behavior of the CA based on your CM's needs. Moving down the stack, we have Active Directory integration. AD integration is the nuts and bolts of the operation. Integration into AD can be very complex in some environments, so understanding this area and how CM interacts with it is very important. We will cover the permission model later in this article, but it is worth mentioning that most of the configuration is done and stored in AD along with the database. CM uses its own SQL database, and the default name is FIMCertificateManagement. The CM application uses its own dedicated IIS application pool account to gain access to the CM database in order to record transactions on behalf of users. By default, the application pool account is granted the clmApp role during the installation of the database, as shown in the following screenshot:   In CM, we have a concept called the profile template. The profile template is stored in the configuration partition of AD, and the security permissions on this container and its contents determine what a user is authorized to see. As depicted in the following screenshot, CM stores the data in the Public Key Services (1) and the Profile Templates container. CM then reads all the stored templates and the permissions to determine what a user has the right to do (2): Profile templates are at the core of the CM logic. The three components comprising profile templates are certificate templates, profile details, and management policies. The first area of the profile template is certificate templates. Certificate templates define the extensions and data point that can be included in the certificate being requested. The next item is profile details, which determines the type of request (either a smart card or a software user-based certificate), where we will generate the certificates (either on the server or on the client side of the operations), and which certificate templates will be included in the request. The final area of a profile template is known as management policies. Management policies are the workflow engine of the process and contain the manager, the subscriber functions, and any data collection items. The e-mail function is initiated here and commonly referred to as the One Time Password (OTP) activity. Note the word "One". A trigger will only happen once here; therefore, multiple alerts using e-mail would have to be engineered through alternate means, such as using the MIM service and expiration activities. The permission model is a bit complex, but you'll soon see the flexibility it provides. Keep in mind that Service Connection Point (SCP) also has permissions applied to it to determine who can log in to the portal and what rights the user has within the portal. SCP is created upon installation during the wizard configuration. You will want to be aware of the SCP location in case you run into configuration issues with administrators not being able to perform particular functions. 
The SCP location is in the System container, within Microsoft, and within Certificate Lifecycle Manager, as shown here: Typical location CN=Certificate Lifecycle Manager,CN=Microsoft,CN=System,DC=THEFINANCIALCOMPANY,DC=NET Certificate management agents We covered several key components of the profile templates and where some of the permission model is stored. We now need to understand how the separation of duties is defined within the agent role. The permission model provides granular control, which promotes the separation of duties. CM uses six agent accounts, and they can be named to fit your organization's requirements. We will walk through the initial setup again later in this article so that you can use our setup or alter it based on your needs. The Financial Company only requires the typical setup. We precreated the following accounts for TFC, but the wizard will create them for you if you do not use them. During the installation and configuration of CM, we will use the following accounts: Besides the separation of duties, CM offers enrollment by proxy. Proxy enrollment of a request refers to providing a middle man that gives the end user a fluid workflow during enrollment. Most of this proxying is accomplished via the agent accounts in one way or another. The first account is MIM CM Agent (MIMCMAgent), which is used by the CM server to encrypt data, from the smart card admin PINs to the data collection stored in the database. So, the agent account has an important role in protecting data and communication to and from the certificate authorities. The last role the CM agent account has is the capability to revoke certificates. The agent certificate thumbprint is very important, and you need to make sure the correct value is updated in the three areas: CM, web.config, and the certificate policy module under the Signing Certificates tab on the CA. We have identified these areas in the following. For web.config: <add key="Clm.SigningCertificate.Hash" value <add key="Clm.Encryption.Certificate.Hash" value <add key="Clm.SmartCard.ExchangeCertificate.Hash" value The Signing Certificates tab is as shown in the following screenshot: Now, when you run through the configuration wizard, these items are already updated, but it is good to know which locations need to be updated if you need to troubleshoot agent issues or even update/renew this certificate.
It is up to this account to make sure the CRL is updated with this critical information. We saved the best for last: Web Pool Agent (MIMCMWebAgent). This agent is used to run the CM web application. The agent is the account that contacts the SQL server to record all user and admin transactions. The following is a good depiction of all the accounts together and the high-level functions:   The certificate management permission model In CM, we think this part is the most complex because with the implementation, you can be as granular as possible. For this reason, this area is the most difficult to understand. We will uncover the permission model so that we can begin to understand how the permission model works within CM. When looking at CM, you need to formulate the type of management model you will be deploying. What we mean by this is will you have a centralized or delegated model? This plays a key part in deployment planning for CM and the permission you will need to apply. In the centralized model, a specific set of managers are assigned all the rights for the management policy. This includes permissions on the users. Most environments use this method as it is less complex for environments. Now, within this model, we have manager-initiated permission, and this is where CM permissions are assigned to groups containing the subscribers. Subscribers are the actual users doing the enrollment or participating in the workflow. This is the model that The Financial Company will use in its configuration. The delegated model is created by updating two flags in web.config called clm.RequestSecurity.Flags and clm.RequestSecurity.Groups. These two flags work hand in hand as if you have UseGroups, then it will evaluate all the groups within the forests to include universal/global security. Now, if you use UseGroups and define clm.RequestSecurity.Groups, then it will only look for these specific groups and evaluate via the Authorization Agent . The user will tell the Authorization Agent to only read the permission on the user and ignore any group membership permissions:   When we continue to look at the permission, there are five locations that permissions can be applied in. In the preceding figure is an outline of these locations, but we will go in more depth in the subsections in a bit. The basis of the figure is to understand the location and what permission can be applied. The following are the areas and the permissions that can be set: Service Connection Point: Extended Permissions Users or Groups: Extended Permissions Profile Template Objects: Container: Read or Write Template Object: Read/Write or Enroll Certificate Template: Read or Enroll CM Management Policy within the Web application: We have multiple options based on the need, such as Initiate Request Now, let's begin to discuss the core areas to understand what they can do. So, The Financial Company can design the enrollment option they want. In the example, we will use the main scenario we encounter, such as the helpdesk, manager, and user-(subscriber) based scenarios. For example, certain functions are delegated to the helpdesk to allow them to assist the user base without giving them full control over the environment (delegated model). Remember this as we look at the five core permission areas. Creating service accounts So far, in our MIM deployment, we have created quite a few service accounts. MIM CM, however, requires that we create a few more. 
During the configuration wizard, we will get the option of having the wizard create them for us, but we always recommend creating them manually in FIM/MIM CM deployments. One reason is that a few of these need to be assigned some certificates. If we use an HSM, we have to create it manually in order to make sure the certificates are indeed using the HSM. The wizard will ask for six different service accounts (agents), but we actually need seven. In The Financial Company, we created the following seven accounts to be used by FIM/MIM CM: MIMCMAgent MIMCMAuthAgent MIMCMCAManagerAgent MIMCMEnrollAgent MIMCMKRAgent MIMCMWebAgent MIMCMService The last one, MIMCMService, will not be used during the configuration wizard, but it will be used to run the MIM CM Update service. We also created the following security groups to help us out in the scenarios we will go over: MIMCM-Helpdesk: This is the next step in OTP for subscribers MIMCM-Managers: These are the managers of the CM environment MIMCM-Subscribers: This is group of users that will enroll Service Connection Point Service Connection Point (SCP)is located under the Systems folder within Active Directory. This location, as discussed in the earlier parts of the article, defines who functions as the user as it relates to logging in to the web application. As an example, if we just wanted every user to only log in, we would give them read rights. Again, authenticated users, have this by default, but if you only wanted a subset of users to access, you should remove authenticated users and add your group. When you run the configuration wizard, SCP is decided, but the default is the one shown in the following screenshot:   If a user is assigned to any of the MIM CM permissions available on SCP, the administrative view of the MIM CM portal will be shown. The MIM CM permissions are defined in a Microsoft TechNet article at http://bit.ly/MIMCMPermission. For your convenience, we have copied parts of the information here: MIM CM Audit: This generates and displays MIM CM policy templates, defines management policies within a profile template, and generates MIM CM reports. MIM CM Enrollment Agent: This performs certificate requests for the user or group on behalf of another user. The issued certificate's subject contains the target user's name and not the requester's name. MIM CM Request Enroll: This initiates, executes, or completes an enrollment request. MIM CM Request Recover: This initiates encryption key recovery from the CA database. MIM CM Request Renew: This initiates, executes, or completes an enrollment request. The renewal request replaces a user's certificate that is near its expiration date with a new certificate that has a new validity period. MIM CM Request Revoke: This revokes a certificate before the expiration of the certificate's validity period. This may be necessary, for example, if a user's computer or smart card is stolen. MIM CM Request Unblock Smart Card: This resets a smart card's user Personal Identification Number (PIN) so that he/she can access the key material on a smart card. The Active Directory extended permissions So, even if you have the SCP defined, we still need to set up the permissions on the user or group of users that we want to manage. As in our helpdesk example, if we want to perform certain functions, the most common one is offline unblock. This would require the MIMCM-HelpDesk group. We will create this group later in this article. 
It would contain all help desk users then on SCP; we would give them CM Request Unblock Smart Card and CM Enrollment Agent. Then, you need to assign the permission to the extended permission on MIMCM-Subscribers, which contains all the users we plan to manage with the helpdesk and offline unblock:   So, as you can see, we are getting into redundant permissions, but depending on the location, it means what the user can do. So, planning of the model is very important. Also, it is important to document what you have as with some slight tweak, things can and will break. The certificate templates permission In order for any of this to be possible, we still need to give permission to the manager of the user to enroll or read the certificate template, as this will be added to the profile template. For anyone to manage this certificate, everyone will need read and enroll permissions. This is pretty basic, but that is it, as shown in the following screenshot:   The profile template permission The profile template determines what a user can read within the template. To get to the profile template, we need to use Active Directory sites and services to manage profile templates. We need to activate the services node as this is not shown by default, and to do this, we will click on View | Show Services Node:   As an example if you want a user to enroll in the cert, he/she would need CM Enroll on the profile template, as shown in the following screenshot:   Now, this is for users, but let's say you want to delegate the creation of profile templates. For this, all you need to do is give the MIMCM-Managers delegate the right to create all child items on the profile template container, as follows:   The management policy permission For the management policy, we will break it down into two sections: a software-based policy and a smart card management policy. As we have different capabilities within CM based on the type, by default, CM comes with two sample policies (take a look at the following screenshot), which we use for duplication to create a new one. When configuring, it is good to know that you cannot combine software and smart card-based certificates in a policy:   The software management policy The software-based certificate policy has the following policies available through the CM life cycle:   The Duplicate Policy panel creates a duplicate of all the certificates in the current profile. Now, if the first profile is created for the user, all the other profiles created afterwards will be considered duplicate, and the first generated policy will be primary. The Enroll Policy panel defines the initial enrollment steps for certificates such as initiate enroll request and data collection during enroll initiation. The Online Update Policy panel is part of the automatic policy function when key items in the policy change. This includes certificates about to expire, when a certificate is added to the existing profile template or even removed. The Recover Policy panel allows for the recovery of the profile in the event that the user was deleted. This includes the cases where certs are deleted by accident. One thing to point out is if the certificate was a signing cert, the recovery policy would issue a new replacement cert. However, if the cert was used for encryption, you can recover the original using this policy. The Recover On Behalf Policy panel allows managers or helpdesk operations to be recovered on behalf the user in the event that they need any of the certificates. 
The Renew Policy panel is the workflow that defines the renew setting, such as revocation and who can initiate a request. The Suspend and Reinstate Policy panel enables a temporary revocation of the profile and puts a "certificate hold" status. More information about the CRL status can be found at http://bit.ly/MIMCMCertificateStatus. The Revoke Policy panel maintains the revocation policy and setting around being able to set the revocation reason and delay. Also, it allows the system to push a delta CRL. You also can define the initiators for this policy workflow. The smart card management policy The smart card policy has some similarities to the software-based policy, but it also has a few new workflows to manage the full life cycle of the smart card:   The Profile Details panel is by far the most commonly used part in this section of the policy as it defines all the smart card certificates that will be loaded in the policy along with the type of provider. One key item is creating and destroying virtual smart cards. One final key part is diversifying the admin key. This is best practice as this secures the admin PIN using diversification. So, before we continue, we want to go over this setting as we think it is an important topic. Diversifying the admin key is important because each card or batch of cards comes with a default admin key. Smart cards may have several PINs, an admin PIN, a PINunlock key (PUK), and a user PIN. This admin key, as CM refers to it, is also known as the administrator PIN. This PIN differs from the user's PIN. When personalizing the smart card, you configure the admin key, the PUK, and the user's PIN. The admin key and the PUK are used to reset the virtual smart card's PIN. However, you cannot configure both. You must use the PUK to unlock the PIN if you assign one during the virtual smart card's creation. It is important to note that you must use the PUK to reset the PIN if you provide both a PUK and an admin key. During the configuration of the profile template, you will be asked to enter this key as follows:   The admin key is typically used by smart card management solutions that enable a challenge response approach to PIN unlocking. The card provides a set of random data that the user reads (after the verification of identity) to the deployment admin. The admin then encrypts the data with the admin key (obtained as mentioned before) and gives the encrypted data back to the user. If the encrypted data matches that produced by the card during verification, the card will allow PIN resetting. As the admin key is never in the hands of anyone other than the deployment administrator, it cannot be intercepted or recorded by any other party (including the employee) and thus has significant security benefits beyond those in using a PUK—an important consideration during the personalization process. When enabled, the admin key is set to a card-unique value when the card is assigned to the user. The option to diversify admin keys with the default initialization provider allows MIM CM to use an algorithm to uniquely generate a new key on the card. The key is encrypted and securely transmitted to the client. It is not stored in the database or anywhere else. MIM CM recalculates the key as needed to manage the card:   The CM profile template contains a thumbprint for the certificate to be used in admin key diversification. CM looks in the personal store of the CM agent service account for the private key of the certificate in the profile template. 
Once located, the private key is used to calculate the admin key for the smart card. The admin key allows CM to manage the smart card (issuing, revoking, retiring, renewing, and so on). Loss of the private key prevents the management of cards diversified using this certificate. More detail on the control can be found at http://bit.ly/MIMCMDiversifyAdminKey. Continuing on, the Disable Policy panel defines the termination of the smart card before expiration, you can define the reason if you choose. Once disabled, it cannot be reused in the environment. The Duplicate Policy panel, similarly to the software-based one, produces a duplicate of all the certificates that will be on the smart card. The Enroll Policy panel, similarly to the software policy, defines who can initiate the workflow and printing options. The Online Update Policy panel, similarly to the software-based cert, allows for the updating of certificates if the profile template is updated. The update is triggered when a renewal happens or, similarly to the software policy, a cert is added or removed. The Offline Unblock Policy panel is the configuration of a process to allow offline unblocking. This is used when a user is not connected to the network. This process only supports Microsoft-based smart cards with challenge questions and answers via, in most cases, the user calling the helpdesk. The Recovery On Behalf Policy panel allows the recovery of certificates for the management or the business to recover if the cert is needed to decrypt information from a user whose contract was terminated or who left the company. The Replace Policy panel is utilized by being able to replace a user's certificate in the event of a them losing their card. If the card they had had a signing cert, then a new signing cert would be issued on this new card. Like with software certs, if the certificate type is encryption, then it would need to be restored on the replace policy. The Renew Policy panel will be used when the profile/certificate is in the renewal period and defines revocation details and options and initiates permission. The Suspend and Reinstate Policy panel is the same as the software-based policy for putting the certificate on hold. The Retire Policy panel is similar to the disable policy, but a key difference is that this policy allows the card to be reused within the environment. The Unblock Policy panel defines the users that can perform an actual unblocking of a smart card. More in-depth detail of these policies can be found at http://bit.ly/MIMCMProfiletempates. Summary In this article, we uncovered the basics of certificate management and the management components that are required to successfully deploy a CM solution. Then, we discussed and outlined, agent accounts and the roles they play. Finally, we looked into the management permission model from the policy template to the permissions and the workflow. Resources for Article: Further resources on this subject: Managing Network Devices [article] Logging and Monitoring [article] Creating Horizon Desktop Pools [article]

MicroStrategy 10

Packt
15 Jul 2016
13 min read
In this article by Dmitry Anoshin, Himani Rana, and Ning Ma, the authors of the book Mastering Business Intelligence with MicroStrategy, we are going to talk about MicroStrategy 10, which is one of the leading platforms on the market; it can handle all data analytics demands and offers a powerful solution. We will be discussing different concepts of MicroStrategy, such as its history, deployment, and so on. (For more resources related to this topic, see here.) Meet MicroStrategy 10 MicroStrategy is a market leader in Business Intelligence (BI) products. It has rich functionality in order to meet the requirements of modern businesses. In 2015, MicroStrategy provided a new release of MicroStrategy, version 10. It offers both agility and governance like no other BI product. In addition, it is easy to use and enterprise ready. At the same time, it is great for both IT and business. In other words, MicroStrategy 10 offers an analytics platform that combines an easy and empowering user experience together with enterprise-grade performance, management, and security capabilities. It is true bimodal BI and moves seamlessly between styles: Data discovery and visualization Enterprise reporting and dashboards In-memory high performance BI Scales from departments to enterprises Administration and security MicroStrategy 10 consists of three main products: MicroStrategy Desktop, MicroStrategy Mobile, and MicroStrategy Web. MicroStrategy Desktop lets users start discovering and visualizing data instantly. It is available for Mac and PC. It allows users to connect, prepare, discover, and visualize data. In addition, we can easily promote to a MicroStrategy Server. Moreover, MicroStrategy Desktop has a brand new HTML5 interface and includes all connection drivers. It allows us to use data blending, data preparation, and data enrichment. Finally, it has powerful advanced analytics and can be integrated with R. To cut a long story short, we want to note the main changes in the new BI platform. Developer keeps the same functionality and look, as does Architect; the changes concern the Web interface and the Intelligence Server. Let's look closer at what MicroStrategy 10 can show us. MicroStrategy 10 expands the analytical ecosystem by using third-party toolkits such as: Data visualization libraries: We can easily plug in and use any visualization from the expanding range of Java libraries Statistical toolkits: R, SAS, SPSS, KXEN, and others Geolocation data visualization: Uses mapping capabilities to visualize and interact with location data MicroStrategy 10 has more than 25 new data sources that we can connect to quickly and simply. In addition, it allows us to build reports on top of other BI tools, such as SAP Business Objects, Cognos, and Oracle BI. It has a new connector to Hadoop, which uses the native connector. Moreover, it allows us to blend multiple data sources in-memory. It is worth noting that MicroStrategy 10 has rich functionality for working with data, such as: Streamlined workflows to parse and prepare data Multi-table in-memory support from different sources Automatically parse and prepare data with every refresh 100+ inbuilt functions to profile and clean data Create custom groups on the fly without coding In terms of connecting to Hadoop, most BI products use Hive or Impala ODBC drivers in order to use SQL to get data from Hadoop. However, this method is bad in terms of performance. MicroStrategy 10 queries directly against Hadoop. As a result, it is up to 50 times faster than via ODBC.
Let's look at some of the main technical changes that have significantly improved MicroStrategy. The platform is now faster than ever before, because it doesn't have a two-billion-row limit on in-memory datasets and allows us to create analytical cubes up to 16 times bigger in size. It publishes cubes dramatically faster. Moreover, MicroStrategy 10 has higher data throughput and cubes can be loaded in parallel 4 times faster with multi-threaded parallel loading. In addition, the in-memory engine allows us to create cubes 80 times larger than before, and we can access data from cubes 50% faster, by using up to 8 parallel threads. Look at the following table, where we compare in-memory cube functionality in version 9 versus version 10:
Feature | Ver. 9 | Ver. 10
Data volume | 100 GB | ~2TB
Number of rows | 2 billion | 200 billion
Load rate | 8 GB/hour | ~200 GB/hour
Data model | Star schema | Any schema, tabular or multiple sets
In order to make the administration of MicroStrategy more effective in the new version, MicroStrategy Operation Manager was released. It gives MicroStrategy administrators powerful development tools to monitor, automate, and control systems. Operations Manager gives us:
Centralized management in a web browser
Enterprise Manager Console within Tool
Triggers and 24/7 alerts
System health monitors
Server management
Multiple environment administration
MicroStrategy 10 education and certification MicroStrategy 10 offers new training courses that can be conducted offline in a training center, or online at http://www.microstrategy.com/us/services/education. We believe that certification is a good thing on your journey. The following certifications now exist for version 10:
MicroStrategy 10 Certified Associated Analyst
MicroStrategy 10 Certified Application Designer
MicroStrategy 10 Certified Application Developer
MicroStrategy 10 Certified Administrator
After passing all of these exams, you will become a MicroStrategy 10 Application Engineer. More details can be found here: http://www.microstrategy.com/Strategy/media/downloads/training-events/MicroStrategy-certification-matrix_v10.pdf. History of MicroStrategy Let us briefly look at the history of MicroStrategy, which began in 1991:
1991: Released first BI product, which allowed users to create graphical views and analyses of information data
2000: Released MicroStrategy 7 with a web interface
2003: First to release a fully integrated reporting tool, combining list reports, BI-style dashboards, and interface analyses in a single module.
2005: Released MicroStrategy 8, including one-click actions and drag-and-drop dashboard creation
2009: Released MicroStrategy 9, delivering a seamless consolidated path from department to enterprise BI
2010: Unveiled new mobile BI capabilities for iPad and iPhone, and was featured on the iTunes Bestseller List
2011: Released MicroStrategy Cloud, the first SaaS offering from a major BI vendor
2012: Released Visual Data Discovery and groundbreaking new security platform, Usher
2013: Released expanded Analytics Platform and free Analytics Desktop client
2014: Announced availability of MicroStrategy Analytics via Amazon Web Services (AWS)
2015: MicroStrategy 10 was released, the first ever enterprise analytics solution for centralized and decentralized BI
Deploying MicroStrategy 10 We know only one way to master MicroStrategy, through practical exercises. Let's start by downloading and deploying MicroStrategy 10.2.
Overview of the training architecture
In order to master MicroStrategy and learn about some BI considerations, we need to download the software, deploy it, and connect everything over a network. While preparing the training environment, we will cover the installation of MicroStrategy on a Linux operating system. This is very good practice, because many people work with Windows and are not familiar with Linux, so this chapter will provide additional knowledge of working with Linux as well as installing MicroStrategy and a web server. Look at the training architecture: there are three main components:
Red Hat Linux 6.4: Used for deploying the web server and the Intelligence Server.
Windows machine: Hosts the MicroStrategy client and an Oracle database.
Virtual machine with Hadoop: A ready-made virtual machine with Hadoop, which will connect to MicroStrategy using the brand new connector.
In the real world, we would use separate machines for every component, and sometimes several machines to run a single component; this is called clustering. Let's create a virtual machine.
Creating a Red Hat Linux virtual machine
Let's create a virtual machine with Red Hat Linux, which will host our Intelligence Server:
1. Go to http://www.redhat.com/ and create an account.
2. Go to the software download center: https://access.redhat.com/downloads.
3. Download RHEL: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.2/x86_64/product-software.
4. Choose Red Hat Enterprise Linux Server.
5. Download Red Hat Enterprise Linux 6.4 x86_64.
6. Choose Binary DVD.
Now we can create a virtual machine with RHEL 6.4. We have several options when choosing the software for deploying the virtual machine; in our case, we will use VMware Workstation. Before starting to deploy a new VM, we should adjust the default settings, such as increasing RAM and HDD, and add one more network card in order to connect the external environment with the MicroStrategy client and the sample database. In addition, we should create a new network. When the deployment of the RHEL virtual machine is complete, we should activate a subscription in order to install the required packages. Let us do this with one command in the terminal:
# subscription-manager register --username <username> --password <password> --auto-attach
Performing prerequisites for MicroStrategy 10
According to the installation and configuration guide, we should deploy all the necessary packages. In order to install them, we should execute the following commands as root:
# su
# yum install compat-libstdc++-33.i686
# yum install libXp.x86_64
# yum install elfutils-devel.x86_64
# yum install libstdc++-4.4.7-3.el6.i686
# yum install krb5-libs.i686
# yum install nss-pam-ldapd.i686
# yum install ksh.x86_64
The project design process
Project design is not just about creating a project in MicroStrategy Architect; it involves several steps and thorough analysis, such as how data is stored in the data warehouse, what reports the user wants based on that data, and so on. The following are the steps involved in our project design process:
Logical data model design
Once the business requirements are documented, the user must create a fact qualifier matrix to identify the attributes, facts, and hierarchies, which are the building blocks of any logical data model. An example of a fact qualifier is as follows:
A logical data model is created based on the source systems and is designed before defining a data warehouse.
So, it's good for seeing which objects the users want and checking whether the objects are in the source systems. It represents the definition, characteristics, and relationships of the data. This graphical representation of information is easily understandable by business users too. A logical data model graphically represents the following concepts: Attributes: Provides a detailed description of the data Facts: Provide numerical information about the data Hierarchies: Provide relationships between data Data warehouse schema design Physical data warehouse design is based on the logical data model and represents the storage and retrieval of data from the data warehouse. Here, we determine the optimal schema design, which ensures reporting performance and maintenance. The key components of a physical data warehouse schema are columns and tables: Columns: These store attribute and fact data. The following are the three types of columns: ID column: Stores the ID for an attribute Description column: Stores text description of the attribute Fact column: Stores fact data Tables: Physical grouping of related data. Following are the types of tables: Lookup tables: Store information about attributes such as IDs and descriptions Relationship tables: Store information about relationship between two or more attributes Fact tables: Store factual data and the level of aggregation, which is defined based on the attributes of the fact table. They contain base fact columns or derived fact columns: Base fact: Stores the data at the lowest possible level of detail. Aggregate fact: Stores data at a higher or summarized level of detail. Mobile server installation and configuration While mobile client is easy to install, mobile server is not. Here we provide a step-by-step guide on how to install mobile server: Download MicroStrategyMobile.war. Mobile server is packed in a WAR file, just like Operation Manager or Web: Copy MicroStrategyMobile.war from <Microstrategy Installation folder>/Mobile/MobileServer to /usr/local/tomcat7/webapps. Then restart Tomcat, by issuing the ./shutdown.sh and ./startup.sh commands: Connect to the mobile server. Go to http://192.168.81.134:8080/MicroStrategyMobile/servlet/mstrWebAdmin. Then add the server name localhost.localdomain and click connect: Configure mobile server. You can configure (1) Authentication settings for the mobile server application; (2) Privileges and permissions; (3) SSL encryption; (4) Client authentication with a certificate server; (5) Destination folder for the photo uploader widget and signature capture input control. Performing Pareto analysis One good thing about data discovery tools is their agile approach to the data. We can connect any data source and easily slice and dice data. Let's try to use the Pareto principle in order to answer the question: How are sales distributed among the different products? The Pareto principle states that, for many events, roughly 80% of results come from 20% of the causes. For example, 80% of profits come from 20% of the products offered. This type of analysis is very popular in product analytics. In MicroStrategy Desktop, we can use shortcut metrics in order to quickly make complex calculations such as running sums or a percent of the total. Let's build a visualization in order to see the 20% of products that bring us 80% of the money: Choose Combo Chart. Drag and drop Salesamount to the vertical and Englishproductname to the horizontal. Add Orderdate to the filters and restrict to 60 days. 
Right-click on Salesamount and choose Descending Sort.
Right-click on Salesamount | Shortcut Metrics | Percent Running Total.
Drag and drop Metric Names onto Color By.
Change the color of Salesamount and Percent Running Total.
Change the shape of Percent Running Total.
As a result, we get this chart:
From this chart we can quickly see the top 20% of products that bring us 80% of revenue.
Splunk and MicroStrategy
MicroStrategy 10 has announced a new connection to Splunk. Splunk is not yet well known in the world of Business Intelligence; most people who have heard of it think it is just a platform for processing logs. That answer is both true and false. The name Splunk derives from spelunking, because searching logs for root causes is like caving without a light, and Splunk solves this problem by indexing machine data from a tremendous number of data sources: applications, hardware, sensors, and so on.
What is Splunk
Splunk's goal is to make machine data accessible, usable, and valuable for everyone, turning machine data into business value. It can:
Collect data from anywhere
Search and analyze everything
Gain real-time operational intelligence
In the BI world, everyone knows what a data warehouse is; in the Splunk world, the indexed machine data plays a similar role.
Creating reports from Splunk
Now we are ready to build reports using MicroStrategy Desktop and Splunk. Let's do it:
1. Go to MicroStrategy Desktop, click Add Data, and choose Splunk.
2. Create a connection using the existing DSN based on the Splunk ODBC driver.
3. Choose one of the tables (Splunk reports).
4. Add other tables as new data sources.
5. Build a dashboard using data from Splunk by dragging and dropping attributes and metrics.
Summary
In this article, we looked at MicroStrategy 10 and its features. We learned about its history and deployment, the project design process, Pareto analysis, and how to connect Splunk to MicroStrategy.
Resources for Article:
Further resources on this subject:
Stacked Denoising Autoencoders [article]
Creating external tables in your Oracle 10g/11g Database [article]
Clustering Methods [article]
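For readers who want to sanity-check the 80/20 split outside MicroStrategy, the Percent Running Total shortcut metric used above boils down to a cumulative share of sales. Here is a minimal sketch of the same arithmetic in pandas; the file and column names are hypothetical assumptions, not the AdventureWorks names used in the dashboard:

```python
# Minimal Pareto check: which products account for 80% of sales?
# 'product' and 'sales' are assumed column names in a hypothetical CSV export.
import pandas as pd

df = pd.read_csv("sales_last_60_days.csv")  # hypothetical export of the filtered data
by_product = (df.groupby("product")["sales"].sum()
                .sort_values(ascending=False))
cumulative_share = by_product.cumsum() / by_product.sum()   # percent running total
top_sellers = cumulative_share[cumulative_share <= 0.80]
print(f"{len(top_sellers)} of {len(by_product)} products "
      f"({len(top_sellers) / len(by_product):.0%}) drive 80% of revenue")
```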
Building a Line Chart with ggplot2

Joel Carlson
15 Jul 2016
6 min read
In this blog post, you will follow along to produce a line chart using the ggplot2 package for R. The ggplot2 package is highly customizable and extensible, which provides an intuitive plotting syntax that allows for the creation of an incredibly diverse range of plots. This week, save 50% on some of out best R titles. If one isn't enough, grab any 5 featured products for $50! We're also giving away a free R eBook every week - bookmark this page! Motivating example Before getting started, let’s examine ggplot over the base R plotting functions. In general, the base R plotting system is more verbose and harder to understand and produces plots that are less attractive than their ggplot2 equivalents. To illustrate, let's build a plot using data on the growth of five trees from the “datasets” package. This is just a demonstration, so don't worry too much about the structure of the data or the details of the plotting syntax. Take a look at the following: library(datasets) data("Orange") The goal is to plot the growth of the trees as a line chart where each line corresponds to a different tree over time. Consider the following code to produce this chart using the base R plotting system: # Adapted from: http://www.statmethods.net/graphs/line.html ntrees <- length(unique(Orange$Tree)) # Get the range for the x and y axis xrange <- range(Orange$age) yrange <- range(Orange$circumference) # Set up the plot plot(xrange, yrange, type="n", xlab="Age (days)", ylab="Circumference (mm)" ) colors <- rainbow(ntrees) # Add lines for (i in 1:ntrees) { tree <- subset(Orange, Tree==i) lines(tree$age, tree$circumference, col=colors[i]) } # Add title title("Tree Growth (Base R)") # Add legend legend(xrange[1], yrange[2], 1:ntrees, cex=0.8, col=colors, lty=1, title="Tree")   The code is verbose, difficult to extend or change (for example, if you want to change the lines to points, you would need to change a number of variables), and the chart produced is not particularly attractive. The following is an equivalent chart using ggplot2:   Using ggplot2, you can produce this plot with fewer lines of code that are both more readable and extensible. You will also avoid the ugly "for" loop used to produce the lines. By the end of this post, you will have built this plot from the ground up using ggplot2! Installation and preparation For this post, you will first need to make sure that ggplot2 is installed via the following command: install.packages("ggplot2") Once the package is installed, load it into the session using: library(ggplot2) Data The dataset used in this post is already in the "tidy data" format, as described here. If your data is not in the tidy format, consider using the dplyr and/or tidyr packages to shape it into the correct format. You are using a very small dataset called Orange, which as the preceding plots describe, contains the growth patterns of five trees over several years. The data consist of 35 rows and three columns and is found in the datasets package. The structure of the data is as follows: str(Orange) 'data.frame': 35 obs. of 3 variables: $ Tree : Ord.factor w/ 5 levels "1"<"2"<"3"<"4"<..: 1 1 1 1 1 1 1 2 2 2 ... $ age : num 118 484 664 1004 1231 ... $ circumference: num 30 58 87 115 120 142 145 33 69 111 ... Building plots You will now begin building up the previous plot using principles described in "The Grammar of Graphics", upon which ggplot2 is based. 
To build a plot using ggplot, think about it in terms of aesthetic mappings and geometries, which are used to create layers that make up the plot. Calling ggplot() without any aesthetics or geometries defined provides an empty canvas. Aesthetics and geometries Aesthetics are the visual properties (for example, size, shape, color, fill, and so on) of the geometries present in the graph. In this context, a geometry refers to objects that directly represent data points (that is, rows in a data frame), such as dots, lines, or bars. In ggplot2, create aesthetics using the aes() function. Inside aes(), you define which variables will map to aesthetics in the plot. Here, we wish to map the "age" variable to the x-axis aesthetic, the "circumference" variable to the y-axis aesthetic, and the "Tree" factor variable to the color aesthetic, with each factor level being represented by a different color, as follows: p <- ggplot(data = Orange, aes(x=age, y=circumference, col=Tree)) If you run the code after defining only the aesthetics, you will see that there is nothing on the plot except the axes:   This is because although you have mapped aesthetics to data, you have yet to represent these mappings with geometries (or geoms). To create this representation, you add a layer on the plot using a call to the line geometry and the geom_line() function, as follows: p <-p +geom_line() p   Take a look at the full listing of geoms that can be used here. Polishing the plot With the structure of the plot in place, polish the plot by: Editing the axis labels Adding a title Moving the legend Axis labels and the title You can create/change the axis labels of the plot using labs(), as follows: p <-p +labs(x="Age (days)", y="Circumference (mm)") You can also add a title using ggtitle(), as follows: p <- p + ggtitle("Tree Growth (ggplot2)") p Moving the legend To move the legend, use the theme() function and change the legend.justification and legend.position variables via the following code: p <- p + theme(legend.justification=c(0,1), legend.position=c(0,1)) p   The justification for the legend is laid out as a grid, where (0,0) is lower-left and (1,1) is upper-right. The legend.position parameter can also take values such as "top", "bottom", "left", "right", or "none" (which removes the legend entirely). The theme() function is very powerful and allows very fine-grained control over the plot. You can find a listing of all the available parameters in the documentation here. Final words The plot is now identical to the plot used to motivate the article! The final code is as follows: ggplot(data=Orange, aes(x=age, y=circumference, col=Tree)) + geom_line() + labs(x="Age (days)", y="Circumference (mm)") + ggtitle("Tree Growth (ggplot2)") + theme(legend.justification=c(0,1), legend.position=c(0,1)) Clearly, the code is more readable, and I think you would agree that the plot is more attractive than the equivalent plot using base R. Good luck and happy plotting! About the author Joel Carlson is a recent MSc graduate from Seoul National University and current Data Science Fellow at Galvanize in San Francisco. He has contributed two R packages in CRAN (radiomics and RImagePalette). You can learn more about him or get in touch at his personal website.
Exploring Shaders and Effects

Packt
14 Jul 2016
5 min read
In this article by Jamie Dean, the author of the book Mastering Unity Shaders and Effects, we will use transparent shaders and atmospheric effects to present the volatile conditions of the planet, Ridley VI, from the surface. In this article, we will cover the following topics: Exploring the difference between cutout, transparent, and fade Rendering Modes Implementing and adjusting Unity's fog effect in the scene (For more resources related to this topic, see here.) Creating the dust cloud material The surface of Ridley VI is made inhospitable by dangerous nitrogen storms. In our game scene, these are represented by dust cloud planes situated near the surface. We need to set up the materials for these clouds with the following steps: In the Project panel, click on the PACKT_Materials folder to view its contents in the Assets panel. In the Assets panel, right-click on an empty area and choose Create| Material. Rename the material dustCloud. In the Hierarchy panel, click to select the dustcloud object. The object's properties will appear in the Inspector. Drag the dustCloud material from the Assets panel onto the Materials field in the Mesh Renderer property visible in the Inspector. Next, we will set the texture map of the material. Reselect the dustCloud material by clicking on it in the Assets panel. Lock the Inspector by clicking on the small lock icon on the top-right corner of the panel. Locking the Inspector allows you to maintain the focus on assets while you are hooking up an associated asset in your project. In the Project panel, click on the PACKT_Textures folder. Locate the strato texture map and drag it into the dustCloud material's Albedo texture slot in the Inspector. The texture map contains four atlassed variations of the cloud effect. We need to adjust how much of the whole texture is shown in the material. In the Inspector, set the Tiling Y value to 0.25. This will ensure that only a quarter of the complete height of the texture will be used in the material. The texture map also contains opacity data. To use this in our material, we need to adjust the Rendering Mode. The Rendering Mode of Standard Shader allows us to specify the opaque nature of a surface. Most often, scene objects are Opaque. Objects behind them are blocked by them and are not visible through their surface. The next option is Cutout. This is used for surfaces containing areas of full opacity and full transparency, such as leaves on a tree or a chain link fence. The opacity is basically on or off for each pixel in a texture. Fade allows objects to have cutout areas where there are completely transparent and partially transparent pixels. The Transparent option is suitable for truly transparent surfaces such as windows, glass, and some types of plastic. When specular is used with a transparent material, it is applied over the whole surface, making it unsuitable for cutout effects. Comparison of Standard Shader transparency types The Fade Rendering Mode is the best option for our dustCloud material as we want the cloud objects to be cutout so that the edges of the quad where the material is applied to is not visible. We want the surface to be partially transparent so that other dustcloud quads are visible behind them, blending the effect. At the top of the material properties in the Inspector, click on the Rendering Mode drop-down menu and set it to Fade: Transparent dustCloud material applied The dust clouds should now be visible with their opacity reading correctly as shown in the preceding figure. 
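The walkthrough above switches the Rendering Mode from the Inspector. If you ever need to do the same thing from a script at runtime, the Standard Shader can be pushed into Fade mode with the commonly used combination of blend states and keywords sketched below. This is not code from the book, so treat the property and keyword names as assumptions to verify against your Unity version:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class FadeModeSwitcher
{
    // Switches a Standard Shader material into Fade rendering mode at runtime,
    // mirroring what the Inspector's Rendering Mode drop-down does.
    public static void SetFadeMode(Material material)
    {
        material.SetFloat("_Mode", 2f);                       // 2 = Fade in the Standard Shader
        material.SetOverrideTag("RenderType", "Transparent");
        material.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
        material.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
        material.SetInt("_ZWrite", 0);                        // no depth writes for blended surfaces
        material.DisableKeyword("_ALPHATEST_ON");
        material.EnableKeyword("_ALPHABLEND_ON");
        material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
        material.renderQueue = (int)RenderQueue.Transparent;  // 3000
    }
}
```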
In the next step, we will add some further environmental effects to the scene. Adding fog to the scene In this step, we will add fog to the scene. Fog can be set to fade out distant background elements to reduce the amount of scenery that needs to be rendered. It can be colored, allowing us to blend elements together and give our scene some depth. If the Lighting tab is not already visible in the Unity project, activate it from the menu bar by navigating to Windows | Lighting. Dock the Lighting panel if necessary. Scroll to the bottom to locate the Fog properties group. Check the checkbox next to Fog to enable it. You will see that fog is added to the environment in the Scene view as shown in the following figure. The default values do not quite match to what we need in the planet surface environment: Unity's default fog effect Click within the color swatch next to Fog Color to define the color value. When the color picker appears over the main Unity interface, type the hexcode E8BE80FF into the Hex Color field near the bottom as shown in the following screenshot: Fog effect color selection This will define the  yellow orange color that is appropriate for our planet's atmosphere. Set the Fog Mode to Exponential Squared to allow it to give the appearance of becoming thicker in the distance. Increase the fog by increasing the End value to 0.05: Adjusted fog blended with dust cloud transparencies Our dust cloud objects are being blended with the fog as shown in the preceding image. Summary In this article, we took a closer look at material Rendering Modes and how transparent effects can be implemented in a scene. We further explored the real-time environmental effects by creating dust clouds that fade in and out using atlassed textures. We then set up an environmental fog effect using Unity's built-in tools. For more information on Unity shaders and effects, you can refer to the following books: Unity 5.x Animation Cookbook: https://www.packtpub.com/game-development/unity-5x-animation-cookbook Unity 5.x Shaders and Effects Cookbook: https://www.packtpub.com/game-development/unity-5x-shaders-and-effects-cookbook Unity Shaders and Effects Cookbook: https://www.packtpub.com/game-development/unity-shaders-and-effects-cookbook Resources for Article: Further resources on this subject: Looking Good – The Graphical Interface [article] Build a First Person Shooter [article] The Vertex Functio [article]
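The fog values above were set through the Lighting panel. If you would rather drive them from a script, for example to ramp the storm up and down at runtime, the same settings are exposed on RenderSettings. This is a sketch under the assumption that the 0.05 figure maps to the density used by the Exponential Squared mode:

```csharp
using UnityEngine;

public class StormFog : MonoBehaviour
{
    void Start()
    {
        RenderSettings.fog = true;
        // Hex E8BE80FF from the walkthrough, expressed as a Color32
        RenderSettings.fogColor = new Color32(0xE8, 0xBE, 0x80, 0xFF);
        RenderSettings.fogMode = FogMode.ExponentialSquared;
        RenderSettings.fogDensity = 0.05f;   // fog thickens with distance
    }
}
```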
Basic Website using Node.js and MySQL database

Packt
14 Jul 2016
5 min read
In this article by Fernando Monteiro author of the book Node.JS 6.x Blueprints we will understand some basic concepts of a Node.js application using a relational database (Mysql) and also try to look at some differences between Object Document Mapper (ODM) from MongoDB and Object Relational Mapper (ORM) used by Sequelize and Mysql. For this we will create a simple application and use the resources we have available as sequelize is a powerful middleware for creation of models and mapping database. We will also use another engine template called Swig and demonstrate how we can add the template engine manually. (For more resources related to this topic, see here.) Creating the baseline applications The first step is to create another directory, I'll use the root folder. Create a folder called chapter-02. Open your terminal/shell on this folder and type the express command: express –-git Note that we are using only the –-git flag this time, we will use another template engine but we will install it manually. Installing Swig template Engine The first step to do is change the default express template engine to use Swig, a pretty simple template engine very flexible and stable, also offers us a syntax very similar to Angular which is denoting expressions just by using double curly brackets {{ variableName }}. More information about Swig can be found on the official website at: http://paularmstrong.github.io/swig/docs/ Open the package.json file and replace the jade line for the following: "swig": "^1.4.2" Open your terminal/shell on project folder and type: npm install Before we proceed let's make some adjust to app.js, we need to add the swig module. Open app.js and add the following code, right after the var bodyParser = require('body-parser'); line: var swig = require('swig'); Replace the default jade template engine line for the following code: var swig = new swig.Swig(); app.engine('html', swig.renderFile); app.set('view engine', 'html'); Refactoring the views folder Let's change the views folder to the following new structure: views pages/ partials/ Remove the default jade files form views. Create a file called layout.html inside pages folder and place the following code: <!DOCTYPE html> <html> <head> </head> <body> {% block content %} {% endblock %} </body> </html> Create a index.html inside the views/pages folder and place the following code: {% extends 'layout.html' %} {% block title %}{% endblock %} {% block content %} <h1>{{ title }}</h1> Welcome to {{ title }} {% endblock %} Create a error.html page inside the views/pages folder and place the following code: {% extends 'layout.html' %} {% block title %}{% endblock %} {% block content %} <div class="container"> <h1>{{ message }}</h1> <h2>{{ error.status }}</h2> <pre>{{ error.stack }}</pre> </div> {% endblock %} We need to adjust the views path on app.js, replace the code on line 14 for the following code: // view engine setup app.set('views', path.join(__dirname, 'views/pages')); At this time we completed the first step to start our MVC application. In this example we will use the MVC pattern in its full meaning, Model, View, Controller. Creating controllers folder Create a folder called controllers inside the root project folder. 
Create a index.js inside the controllers folder and place the following code: // Index controller exports.show = function(req, res) { // Show index content res.render('index', { title: 'Express' }); }; Edit the app.js file and replace the original index route app.use('/', routes); with the following code: app.get('/', index.show); Add the controller path to app.js on line 9, replace the original code, with the following code: // Inject index controller var index = require('./controllers/index'); Now it's time to get if all goes as expected, we run the application and check the result. Type on your terminal/shell the following command: npm start Check with the following URL: http://localhost:3000, you'll see the welcome message of express framework. Removing the default routes folder Remove the routes folder and its content. Remove the user route from the app.js, after the index controller and on line 31. Adding partials files for head and footer Inside views/partials create a new file called head.html and place the following code: <meta charset="utf-8"> <title>{{ title }}</title> <link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/css/bootstrap.min.css'> <link rel="stylesheet" href="/stylesheets/style.css"> Inside views/partials create a file called footer.html and place the following code: <script src='https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js'></script> <script src='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.0.0-alpha.2/js/bootstrap.min.js'></script> Now is time to add the partials file to layout.html page using the include tag. Open layout.html and add the following highlighted code: <!DOCTYPE html> <html> <head> {% include "../partials/head.html" %} </head> <body> {% block content %} {% endblock %} {% include "../partials/footer.html" %} </body> </html> Finally we are prepared to continue with our project, this time our directories structure looks like the following image: Folder structure Summaray In this article, we are discussing the basic concept of Node.js and Mysql database and we also saw how to refactor express engine template and use another resource like Swig template library to build a basic website. Resources for Article: Further resources on this subject: Exception Handling in MySQL for Python [article] Python Scripting Essentials [article] Splunk's Input Methods and Data Feeds [article]
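The excerpt stops before it reaches the MySQL layer that the introduction promises, so here is a minimal, hypothetical Sequelize sketch of what a model and query could look like in a project like this one. The database name, credentials, and model fields are assumptions, not code from the book:

```javascript
// models/band.js : a hypothetical Sequelize model backed by MySQL
const Sequelize = require('sequelize');

// Assumed connection details; replace with your own MySQL settings
const sequelize = new Sequelize('nodejs_mysql_demo', 'root', 'secret', {
  host: 'localhost',
  dialect: 'mysql'
});

// A simple model: Sequelize maps this definition to a `Bands` table
const Band = sequelize.define('Band', {
  name:  { type: Sequelize.STRING, allowNull: false },
  genre: { type: Sequelize.STRING }
});

// Create the table (if needed) and run a query
sequelize.sync()
  .then(() => Band.findAll())
  .then(bands => console.log(JSON.stringify(bands)))
  .catch(err => console.error(err));
```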
Web Typography

Packt
13 Jul 2016
14 min read
This article by Dario Calonaci, author of Practical Responsive Typography teaches you about typography: it's fascinating mysteries, sensual shapes, and everything else you wanted to know about it; this article is about to reveal everything on the subject for you!Every letter, every curve, and every shape in the written form conveys feelings; so it's important to learn everything about it if you want to be a better designer. You also need to know how readable your text is, therefore you have to set it up following some natural constraints our eyes and minds have built in, how white space influences your message, how every form should be taken into consideration in the writing of a textand this article will tell you exactly that! Plus a little more! You will also learn how to approach all of the above in today number one medium, the World Wide Web. Since 95 percent of the Web is made of typography, according toOliver Reichenstein, it's only logical that if you want to approach the Web you surely need to understand it better. Through this article, you'll learn all the basics of typography and will be introduced to it core features, such as: Anatomy Line Height Families Kerning (For more resources related to this topic, see here.) Note that typography, the art of drawing with words, is really ancient, as much as 3200 years prior to the mythological appearance of Christ and the very first book on this matter is the splendid Manuale Tipograficofrom Giambattista Bodoni, which he self-published in 1818. Taking into consideration all the old data, and the new knowledge, everything started from back then and every rule that has been born in print is still valid today, even for the different medium that the Web is. Typefaces classification The most commonly used type classification is based on the technical style and as such it's the one we are going to analyze and use. They are as follows: Serifs Serifs are referred to as such because of the small details that extend from the ending shapes of the characters; the origin of the word itself is obscure, various explanations have been given but none has been accepted as resolute. Their origin can be traced back to the Latin alphabetsof Roman times, probably because of the flares of the brush marks in corners, which were later chiseled in stone by the carvers. They generally give better readability in print than on a screen, probably because of the better definition and evolution of the former in hundreds of years, while the latter technology is, on an evolutionary path, a newborn. With the latest technologies and the high definition monitors that can rival the print definition, multiple scientific studies have been found inconclusive, showing that there is no discernible difference in readability between sans and serifs on the screen and as of today they are both used on the Web. Within this general definition, there are multiples sub-families, as Old Style or Humanist. Old Style or Humanist The oldest ones, dating as far back as the mid 1400s are recognized for the diagonal guide on which the characters are built on; these are clearly visible for example on the e and o of Adobe Jenson. Transitional Serifs They are neither antique nor modern and they date back to the 1700s and are generally numerous. They tend to abandon some of the diagonal stress, but not all of them, especially keeping the o. Georgia and Baskerville are some well-known examples. 
Modern Serifs Modern Serifs tend to rely on the contrast between thick and thin strokes, abandon diagonal for vertical stress, and on more straight serifs. They appeared in the late 1700s. Bodoni and Didot are certainly the most famous typefaces in this family. Slab Serifs Slab Serifs have little to no contrast between strokes, thick serifs, and sometimes appear with fixed widths, the underlying base resembles one of the sansmore. American Typewriter is the most famous typefaces in this familyas shown in the following image: Sans Serifs They are named sodue to the loss of the decorative serifs, in French "sans" stands for "without". Sans Serif isa more recent invention, since it was born in the late 18th century. They are divided into the following four sub-families: Grotesque Sans It is the earliest of the bunch; its appearance is similar to the serif with contrasted strokesbut without serifsand with angled terminals Franklin Gothic is one of the most famous typefaces in this family. Neo-Grotesque Sans It is plain looking with little to no contrast, small apertures, and horizontal terminals. They are one of the most common font styles ranging from Arial and Helvetica to Universe. Humanist font They have a friendly tone due to the calligraphic stylewith a mixture of different widths characters and, most of the times, contrasted strokes. Gill Sans being the flag-carrier. Geometric font Based on the geometric and rigorous shapes, they are more modern and are used less for body copy. They have a general simplicity but readability of their charactersis difficult. Futura is certainly the most famous geometric font. Script typefaces They are usually classified into two sub-familiesbased upon the handwriting, with cursive aspect and connected letterforms. They are as follows: Formal script Casual script Monospaced typefaces Display typefaces Formal script They are reminiscent of the handwritten letterforms common in the 17th and 18th centuries, sometimes they are also based on handwritings offamous people. They are commonly used for elevated and highly elegant designs and are certainly unusable for long body copy. Kunstler Script is a relatively recent formal script. Casual script This is less precise and tends to resemble a more modern and fast handwriting. They are as recent as the mid-twentieth century. Mistral is certainly the most famous casual script. Monospaced typefaces Almost all the aforementioned families are proportional in their style, (each character takes up space that is proportional to its width). This sub-family addresses each character width as the same, with narrower ones, such as i,just gain white space around them, sometimesresulting in weird appearances. Hence,Due to their nature and their spacing, they aren’t advised as copy typefaces, since their mono spacing can bring unwanted visual imbalance to the text. Courier is certainly the most known monospaced typeface. Display typefaces They are the broadest category and are aimed at small copy to draw attention and rarely follow rules, spreading from every one of the above families and expressing every mood. Recently even Blackletters (the very first fonts designed with the very first, physical printing machines) are being named under this category. For example, Danube and Val are just two of the multitude thatare out there: Expressing different moods In conjunction with the division of typography families, it's also really importantfor every project, both in print and web, to know what they express and why. 
It takes years of experience to understand those characteristics and the methodto use them correctly; here we are just addressing a very basic distinction to help you start with. Remember that in typography and type design, every curve conveys a different mood, so just be patient while studying and designing. Serifs vs Sans Serifs, through their decorations, their widths, and in and out of their every sub-family convey old and antique/traditional serious feelings, even when more modern ones are used; they certainly convey a more formal appearance. On the other hand, sans serifare aimed at a more modern and up-to-date world, conveying technological advancement, rationality, usually but not always,and less of a human feeling. They're more mechanical and colder than a serif, unless the author voluntarily designed them to be more friendly than the standard ones.. Scripts vs scripts As said, they are of two types, and as the name suggests, the division is straightforward. Vladimir is elegant, refined, upper class looking, and expressesfeelings such as respect. Arizonia on the other hand is not completely informal but is still a schizophrenic mess of strokes and a conclusionless expression of feeling; I'm not sure whether I feel amused or offended for its exaggerated confidentiality. Displaytypefaces Since theyare different in aspect from each other and the fact that there is no general rule that surrounds and defines the Display family, they can express the whole range of emotions.They can go from apathy to depression, from a complete childish involvement and joy to some suited, scary seriousness business feeling (the latter definition is usually expression of some monospaced typefaces). Like every other typeface, more specifically here, every change in weight and style brings in a new sentiment to the table: use it in bold and your content will be strong, fierce; change it to a lighter italic and it will look like its moving, ready to exit from the page. As such, they take years to master and we advice not to use them on your first web work, unless you are completely sure of what you are doing. Every font communicates differently, on a conscious as well as on a subconscious level; even within the same typeface,it all comes down to what we are accustomed to. In the case of font color, what a script does and feel in the European culture can drastically change if the same is used for advertising in the Asian market. Always do your research first. Combining typefaces Combining typefaces is a vital aspect of your projects but it's a tool that is hard to master. Generally,it is said that you should use no more than two fonts in your design. It is a good rule; but let me explain it—or better—enlarge it. While working with text for an informational text block, similar tothe one you are reading now, stick to it. You will express enough contrast and interest while stayingbalanced and the reader willnot get distracted. They will follow the flow and understand the hierarchy of what they are reading. However, as a designer, while typesetting you're not always working on a pure text block: you could be working with words on a packaging or on the web. However, if you know enough about typography and your eyes are well trained (usually after years of visual research and of designing with attention) you can break the rules. You get energy only when mixing contrasting fonts, so why not add a third one to bring in a better balance between the two? 
As a rule, you can combine fonts when: They are not in the same classification. You mix fonts to add contrast and energy and to inject interest and readability in your document and this is why the clash between serif and sans has been proven timeless.Working with two serifs/sans together instead works only with extensive trial and error and you should choose two fonts that carry enough differences. You can usually combine different subfamilies, for example a slab serif with a modern one or a geometric sans with a grotesque. If your scope is readability, find the same structure.A similar height and similar width works easily when choosing two classifications; but if your scope is aesthetic for small portions of text, you can try completely different structures, such as a slab serif with a geometric sans. You willsee that sometimes it does the job! Go extreme!This requires more experience to balance it out, but if you're working with display or script typefaces, it's almost impossible to find something similar without being boring or unreadable. Try to mix them with more simplistic typefaces if the starting point has a lot of decorations; you won't regret the trial! Typography properties Now that you know the families, you need to know the general rules that will make your text and their usage flow like a springtime breeze. Kerning Is the adjusting of space between two characters to achieve a visually balanced word trough anda visually equal distribution of white space. The word originates from the Latin wordcardo meaning Hinge.When letters were made of metal on wooden blocks, parts of them were built to hang off the base, thus giving space for the next character to sit closer. Tracking It is also as called letter-spacingand it is concerned with the entire word—not single characters or the whole text block—to change the density and texture in a text and to affect its readability. The word originates from the metal tracks where the wooden blocks with the characters were moved horizontally. Tracking request careful settings: too much white space and the words won't appear as single coherent blocks anymore –reduce the white space between the letters drastically and the letters themselves won't be readable. As a rule, you want your lines of text to be made of 50 to 75 characters, including dots and spaces, to achieve better readability. Some will ask you to stop your typing as soon as approximately 39 characters are reached, but I tend to differ. Ligatures According to kerning, especially on serifs, two or three character can clash together. Ligatures are born to avoid this; they are stylistic characters that combine two or three letters into one letter: Standard ligatures are naturally and functionally the most common ones and are made between fi, fl, and other letters when placed next to an f. They should be used, as they tend to make the script more legible. Discretionary ligatures are not functional, they just serve a decorative purpose. They are commonly found and designed between Th and st;as mentioned above, you should use them at your discretion. Leading Leading is the space between the baselines of your text, while line-height adds to the notions and also to the height of ascenders and descenders.The name came to be because in the ancient times, stripes of lead were used to add white space between two lines of text. There are many rules in typesetting (none of which came out as a perfect winner) and everything changes according to the typeface you're using. 
Mechanical print tends to add 2 points to the current measure being used, while a basic rule for digital is to scale the line-spacing as much as 120 percent of your x-height, which is called single spacing. As a rule of thumb, scale in between 120 and 180 percent and youare good to go (of course with the latter being used for typefaces with a major x-height). Just remember, the descenders should never touch the next line ascenders, otherwise the eye will perceive the text as crumpled and you will have difficulties to understand where one line ends and the other start. Summary The preceding text covers the basics of typography, which you should study and know in order to make the text in your assignment flow better. Now, you have a greater understanding of typography: what it is; what it's made of; what are its characteristics; what the brain search for and process in a text; the lengths it will go to understand it; and the alignments, spacing, and other issues that revolve around this beautiful subject. The most important rule to remember is that text is used to express something. It may be an informative reading, may be the expression of a feeling, such as a poem, or it can be something to make you feel something specifically. Every text has a feeling, every text has an inner tone of voice that can be expressed visually through typography. Usually it’s the text itself that dictates its feeling – and help you decide which and how to express it. All the preceding rules, properties, and knowledgeare means for you to express it and there's a large range of properties on the Web for you to use them. There is almost as much variety available in print with properties for leading, kerning, tracking, and typographical hierarchy all built in your browsers. Resources for Article: Further resources on this subject: Exploring Themes [article] A look into responsive design frameworks [article] Joomla! Template System [article]
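As a closing, practical note: most of the properties discussed above have direct CSS counterparts on the Web. The following is a small sketch only; the font names and values are illustrative assumptions rather than recommendations from the article:

```css
/* Body copy: a serif stack with a comfortable measure and leading */
.article-body {
  font-family: Georgia, "Times New Roman", serif;
  font-size: 1rem;
  line-height: 1.5;            /* within the 120-180% range discussed above */
  max-width: 34em;             /* keeps lines near the 50-75 character measure */
  font-kerning: normal;        /* use the kerning data built into the font */
  font-variant-ligatures: common-ligatures;  /* standard (functional) ligatures */
}

/* Headings: a contrasting sans serif with slightly tightened tracking */
.article-heading {
  font-family: "Franklin Gothic Medium", Arial, sans-serif;
  letter-spacing: -0.02em;     /* tracking: negative values tighten the word */
}
```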
Responsive Applications with Asynchronous Programming

Packt
13 Jul 2016
9 min read
In this article by Dirk Strauss, author of the book C# Programming Cookbook, he sheds some light on how to handle events, exceptions and tasks in asynchronous programming, making your application responsive. (For more resources related to this topic, see here.) Handling tasks in asynchronous programming Task-Based Asynchronous Pattern (TAP) is now the recommended method to create asynchronous code. It executes asynchronously on a thread from the thread pool and does not execute synchronously on the main thread of your application. It allows us to check the task's state by calling the Status property. Getting ready We will create a task to read a very large text file. This will be accomplished using an asynchronous Task. How to do it… Create a large text file (we called ours taskFile.txt) and place it in your C:temp folder: In the AsyncDemo class, create a method called ReadBigFile() that returns a Task<TResult> type, which will be used to return an integer of bytes read from our big text file: public Task<int> ReadBigFile() { } Add the following code to open and read the file bytes. You will see that we are using the ReadAsync() method that asynchronously reads a sequence of bytes from the stream and advances the position in that stream by the number of bytes read from that stream. You will also notice that we are using a buffer to read those bytes. public Task<int> ReadBigFile() { var bigFile = File.OpenRead(@"C:temptaskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, " (int)bigFile.Length); return readBytes; } Exceptions you can expect to handle from the ReadAsync() method are ArgumentNullException, ArgumentOutOfRangeException, ArgumentException, NotSupportedException, ObjectDisposedException and InvalidOperatorException. Finally, add the final section of code just after the var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); line that uses a lambda expression to specify the work that the task needs to perform. In this case, it is to read the bytes in the file: public Task<int> ReadBigFile() { var bigFile = File.OpenRead(@"C:temptaskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, (int)bigFile.Length); readBytes.ContinueWith(task => { if (task.Status == TaskStatus.Running) Console.WriteLine("Running"); else if (task.Status == TaskStatus.RanToCompletion) Console.WriteLine("RanToCompletion"); else if (task.Status == TaskStatus.Faulted) Console.WriteLine("Faulted"); bigFile.Dispose(); }); return readBytes; } If not done so in the previous section, add a button to your Windows Forms application's Form designer. On the winformAsync form designer, open Toolbox and select the Button control, which is found under the All Windows Forms node: Drag the button control onto the Form1 designer: With the button control selected, double-click the control to create the click event in the code behind. Visual Studio will insert the event code for you: namespace winformAsync { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { } } } Change the button1_Click event and add the async keyword to the click event. This is an example of a void returning asynchronous method: private async void button1_Click(object sender, EventArgs e) { } Now, make sure that you add code to call the AsyncDemo class's ReadBigFile() method asynchronously. 
Remember to read the result from the method (which are the bytes read) into an integer variable: private async void button1_Click(object sender, EventArgs e) { Console.WriteLine("Start file read"); Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo(); int readResult = await oAsync.ReadBigFile(); Console.WriteLine("Bytes read = " + readResult); } Running your application will display the Windows Forms application: Before clicking on the button1 button, ensure that the Output window is visible: From the View menu, click on the Output menu item or type Ctrl + Alt + to display the Output window. This will allow us to see the Console.Writeline() outputs as we have added them to the code in the Chapter6 class and in the Windows application. Clicking on the button1 button will display the outputs to our Output window. Throughout this code execution, the form remains responsive. Take note though that the information displayed in your Output window will differ from the screenshot. This is because the file you used is different from mine. How it works… The task is executed on a separate thread from the thread pool. This allows the application to remain responsive while the large file is being processed. Tasks can be used in multiple ways to improve your code. This recipe is but one example. Exception handling in asynchronous programming Exception handling in asynchronous programming has always been a challenge. This was especially true in the catch blocks. As of C# 6, you are now allowed to write asynchronous code inside the catch and finally block of your exception handlers. Getting ready The application will simulate the action of reading a logfile. Assume that a third-party system always makes a backup of the logfile before processing it in another application. While this processing is happening, the logfile is deleted and recreated. Our application, however, needs to read this logfile on a periodic basis. We, therefore, need to be prepared for the case where the file does not exist in the location we expect it in. Therefore, we will purposely omit the main logfile, so that we can force an error. How to do it… Create a text file and two folders to contain the logfiles. We will, however, only create a single logfile in the BackupLog folder. 
The MainLog folder will remain empty: In our AsyncDemo class, write a method to read the main logfile in the MainLog folder: private async Task<int> ReadMainLog() { var bigFile = " File.OpenRead(@"C:tempLogMainLogtaskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, " (int)bigFile.Length); await readBytes.ContinueWith(task => { if (task.Status == TaskStatus.RanToCompletion) Console.WriteLine("Main Log RanToCompletion"); else if (task.Status == TaskStatus.Faulted) Console.WriteLine("Main Log Faulted"); bigFile.Dispose(); }); return await readBytes; } Create a second method to read the backup file in the BackupLog folder: private async Task<int> ReadBackupLog() { var bigFile = " File.OpenRead(@"C:tempLogBackupLogtaskFile.txt"); var bigFileBuffer = new byte[bigFile.Length]; var readBytes = bigFile.ReadAsync(bigFileBuffer, 0, " (int)bigFile.Length); await readBytes.ContinueWith(task => { if (task.Status == TaskStatus.RanToCompletion) Console.WriteLine("Backup Log " RanToCompletion"); else if (task.Status == TaskStatus.Faulted) Console.WriteLine("Backup Log Faulted"); bigFile.Dispose(); }); return await readBytes; } In actual fact, we would probably only create a single method to read the logfiles, passing only the path as a parameter. In a production application, creating a class and overriding a method to read the different logfile locations would be a better approach. For the purposes of this recipe, however, we specifically wanted to create two separate methods so that the different calls to the asynchronous methods are clearly visible in the code. We will then create a main ReadLogFile() method that tries to read the main logfile. As we have not created the logfile in the MainLog folder, the code will throw a FileNotFoundException. It will then run the asynchronous method and await that in the catch block of the ReadLogFile() method (something which was impossible in the previous versions of C#), returning the bytes read to the calling code: public async Task<int> ReadLogFile() { int returnBytes = -1; try { Task<int> intBytesRead = ReadMainLog(); returnBytes = await ReadMainLog(); } catch (Exception ex) { try { returnBytes = await ReadBackupLog(); } catch (Exception) { throw; } } return returnBytes; } If not done so in the previous recipe, add a button to your Windows Forms application's Form designer. On the winformAsync form designer, open Toolbox and select the Button control, which is found under the All Windows Forms node: Drag the button control onto the Form1 designer: With the button control selected, double-click on the control to create the click event in the code behind. Visual Studio will insert the event code for you: namespace winformAsync { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { } } } Change the button1_Click event and add the async keyword to the click event. This is an example of a void returning an asynchronous method: private async void button1_Click(object sender, EventArgs e) { } Next, we will write the code to create a new instance of the AsyncDemo class and attempt to read the main logfile. 
In a real-world example, it is at this point that the code does not know that the main logfile does not exist: private async void button1_Click(object sender, EventArgs "e) { Console.WriteLine("Read backup file"); Chapter6.AsyncDemo oAsync = new Chapter6.AsyncDemo(); int readResult = await oAsync.ReadLogFile(); Console.WriteLine("Bytes read = " + readResult); } Running your application will display the Windows Forms application: Before clicking on the button1 button, ensure that the Output window is visible: From the View menu, click on the Output menu item or type Ctrl + Alt + O to display the Output window. This will allow us to see the Console.Writeline() outputs as we have added them to the code in the Chapter6 class and in the Windows application. To simulate a file not found exception, we deleted the file from the MainLog folder. You will see that the exception is thrown, and the catch block runs the code to read the backup logfile instead: How it works… The fact that we can await in catch and finally blocks allows developers much more flexibility because asynchronous results can consistently be awaited throughout the application. As you can see from the code we wrote, as soon as the exception was thrown, we asynchronously read the file read method for the backup file. Summary In this article we looked at how TAP is now the recommended method to create asynchronous code. How tasks can be used in multiple ways to improve your code. This allows the application to remain responsive while the large file is being processed also how exception handling in asynchronous programming has always been a challenge and how to use catch and finally block to handle exceptions. Resources for Article: Further resources on this subject: Functional Programming in C#[article] Creating a sample C#.NET application[article] Creating a sample C#.NET application[article]
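To isolate the language feature this recipe leans on, here is a stripped-down sketch, not code from the book and with placeholder file paths, showing an await inside a catch block, which is only legal from C# 6 onwards:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class LogReader
{
    // Falls back to the backup log asynchronously; awaiting inside the
    // catch block requires C# 6 or later.
    public static async Task<string> ReadLogAsync()
    {
        try
        {
            using (var reader = new StreamReader(@"C:\temp\Log\MainLog\taskFile.txt"))
                return await reader.ReadToEndAsync();
        }
        catch (FileNotFoundException)
        {
            using (var reader = new StreamReader(@"C:\temp\Log\BackupLog\taskFile.txt"))
                return await reader.ReadToEndAsync();
        }
    }
}
```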
Hacking Android Apps Using the Xposed Framework

Packt
13 Jul 2016
6 min read
In this article by Srinivasa Rao Kotipalli and Mohammed A. Imran, authors of Hacking Android, we will discuss Android security, which is one of the most prominent emerging topics today. Attacks on mobile devices can be categorized into various categories, such as exploiting vulnerabilities in the kernel, attacking vulnerable apps, tricking users to download and run malware and thus stealing personal data from the device, and running misconfigured services on the device. OWASP has also released the Mobile top 10 list, helping the community better understand mobile security as a whole. Although it is hard to cover a lot in a single article, let's look at an interesting topic: the the runtime manipulation of Android applications. Runtime manipulation is controlling application flow at runtime. There are multiple tools and techniques out there to perform runtime manipulation on Android. This article discusses using the Xposed framework to hook onto Android apps. (For more resources related to this topic, see here.) Let's begin! Xposed is a framework that enables developers to write custom modules for hooking onto Android apps and thus modifying their flow at runtime. It was released by rovo89 in 2012. It works by placing the app_process binary in /system/bin/ directory, replacing the original app_process binary. app_process is the binary responsible for starting the zygote process. Basically, when an Android phone is booted, init runs /system/bin/app_process and gives the resulting process the name Zygote. We can hook onto any process that is forked from the Zygote process using the Xposed framework. To demonstrate the capabilities of Xposed framework, I have developed a custom vulnerable application. The package name of the vulnerable app is com.androidpentesting.hackingandroidvulnapp1. The code in the following screenshot shows how the vulnerable application works: This code has a method, setOutput, that is called when the button is clicked. When setOutput is called, the value of i is passed to it as an argument. If you notice, the value of i is initialized to 0. Inside the setOutput function, there is a check to see whether the value of i is equal to 1. If it is, this application will display the text Cracked. But since the initialized value is 0, this app always displays the text You cant crack it. Running the application in an emulator looks like this: Now, our goal is to write an Xposed module to modify the functionality of this app at runtime and thus printing the text Cracked. First, download and install the Xposed APK file in your emulator. Xposed can be downloaded from the following link: http://dl-xda.xposed.info/modules/de.robv.android.xposed.installer_v32_de4f0d.apk Install this downloaded APK file using the following command: adb install [file name].apk Once you've installed this app, launch it, and you should see the following screen: At this stage, make sure that you have everything set up before you proceed. Once you are done with the setup, navigate to the Modules tab, where we can see all the installed Xposed modules. The following figure shows that we currently don't have any modules installed: We will now create a new module to achieve the goal of printing the text Cracked in the target application shown earlier. We use Android Studio to develop this custom module. Here is the step-by-step procedure to simplify the process: The first step is to create a new project in Android Studio by choosing the Add No Actvity option, as shown in the following screenshot. 
I named it XposedModule.

2. The next step is to add the XposedBridgeAPI library so that we can use Xposed-specific methods within the module. Download the library from the following link:

http://forum.xda-developers.com/attachment.php?attachmentid=2748878&d=1400342298

Create a folder called provided within the app directory and place this library inside the provided directory.

3. Now, create a folder called assets inside the app/src/main/ directory, and create a new file called xposed_init. We will add contents to this file in a later step. After completing the first 3 steps, our project directory structure should look like this:

4. Now, open the build.gradle file under the app folder, and add the following line under the dependencies section:

provided files('provided/[file name of the Xposed library.jar]')

In my case, it looks like this:

5. Create a new class and name it XposedClass, as shown here:

After you're done creating the new class, the project structure should look as shown in the following screenshot:

6. Now, open the xposed_init file that we created earlier, and place the following content in it:

com.androidpentesting.xposedmodule.XposedClass

This looks like the following screenshot:

7. Now, let's provide some information about the module by adding the following content to AndroidManifest.xml:

<meta-data android:name="xposedmodule" android:value="true" />
<meta-data android:name="xposeddescription" android:value="xposed module to bypass the validation" />
<meta-data android:name="xposedminversion" android:value="54" />

Make sure that you add this content to the application section, as shown here:

8. Finally, write the actual code within the XposedClass to add a hook. Here is the piece of code that actually bypasses the validation being done in the target application:

Here's what we have done in the previous code:

- Firstly, our class implements IXposedHookLoadPackage.
- We wrote the implementation for the handleLoadPackage method; this is mandatory when we implement IXposedHookLoadPackage.
- We set up the string values for classToHook and functionToHook.
- An if condition checks whether the package name equals the target package name.
- If the package name matches, the custom code provided inside beforeHookedMethod is executed.
- Within beforeHookedMethod, we set the value of i to 1; thus, when the button is clicked, the value of i is treated as 1, and the text Cracked is displayed as a toast message.

Compile and run this application just like any other Android app, and then check the Modules section of the Xposed application. You should see a new module with the name XposedModule, as shown here:

Select the module and reboot the emulator. Once the emulator has restarted, run the target application and click on the Crack Me button. As you can see in the screenshot, we have modified the application's functionality at runtime without actually modifying its original code.

We can also see the logs by tapping on the Logs section. You can observe the XposedBridge.log method in the source code shown previously. This is the method used to log the data shown:

Summary

Xposed is without a doubt one of the best frameworks available out there. Understanding frameworks such as Xposed is essential to understanding Android application security. This article demonstrated the capabilities of the Xposed framework to manipulate apps at runtime. A lot of other interesting things can be done using Xposed, such as bypassing root detection and SSL pinning.
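For reference, since the hook code in the walkthrough above appears only as a screenshot, here is a minimal sketch of what such a module class might look like. The activity class name (MainActivity) and the int parameter type of setOutput are assumptions made for illustration; the original listing is not reproduced here, so treat this as a sketch rather than the authors' exact code:

import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class XposedClass implements IXposedHookLoadPackage {

    // Target application described in the article; the activity class name is assumed
    private static final String packageToHook = "com.androidpentesting.hackingandroidvulnapp1";
    private static final String classToHook = "com.androidpentesting.hackingandroidvulnapp1.MainActivity";
    private static final String functionToHook = "setOutput";

    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) throws Throwable {
        // Only act when the vulnerable app's process is loaded
        if (!lpparam.packageName.equals(packageToHook)) {
            return;
        }
        XposedBridge.log("Loaded target package: " + lpparam.packageName);

        // Hook setOutput(int) and overwrite its argument before the original method runs
        XposedHelpers.findAndHookMethod(classToHook, lpparam.classLoader,
                functionToHook, int.class, new XC_MethodHook() {
                    @Override
                    protected void beforeHookedMethod(MethodHookParam param) throws Throwable {
                        param.args[0] = 1; // force the check inside setOutput to pass
                        XposedBridge.log("Replaced the value of i with 1");
                    }
                });
    }
}

A module like this still needs the xposed_init entry and the AndroidManifest.xml metadata shown in the steps above before the Xposed installer will pick it up.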
Further resources on this subject:

- Speeding up Gradle builds for Android [article]
- https://www.packtpub.com/books/content/incident-response-and-live-analysis [article]
- Mobile Forensics [article]

Deploying a Docker Container to the Cloud, Part 2

Darwin Corn
13 Jul 2016
3 min read
I previously wrote about app containerization using Docker, and if you're unfamiliar with that concept, please read that post first. In this post, I'm going to pick up where I left off, with a fully containerized frontend Ember application showcasing my music that I now want to share with the world.

Speaking of that app in part 1—provided you don't have a firewall blocking port 80 inbound—if you've come straight over from the previous post, you're serving a web app to everyone on your internal network right now. You should, of course, map it to only allow 127.0.0.1 on port 80 instead of 0.0.0.0 (everyone).

In this post I am going to focus on my mainstream cloud platform of choice, Google Cloud Platform (GCP). It will only cost ~$5/month, with room to house more similarly simple apps—MVPs, proofs of concept and the like. Go ahead and sign up for the free GCP trial, and create a project. Templates are useful for rapid scaling and minimizing the learning curve; but for the purpose of learning how this actually works, and for minimizing financial impact, they're next to useless.

First, you need to get the container into the private registry that comes with every GCP project. Okay, let's get started. You need to tag the image so that Google Cloud Platform knows where to put it. Then you're going to use the gcloud command-line tool to push it to that cloud registry.

$ docker tag docker-demo us.gcr.io/[YOUR PROJECT ID HERE]/docker-demo
$ gcloud docker push us.gcr.io/[YOUR PROJECT ID HERE]/docker-demo

Congratulations, you have your first container in the cloud! Now let's deploy it. We're going to use Google's Compute Engine, not their Container Engine (besides the registry, but no cluster templates for us). Refer to this article, and if you're using your own app, you'll have to write up a container manifest. If you're using the docker-demo app from the first article, make sure to run a git pull to get an up-to-date version of the repo and notice that a containers.yaml manifest file has been added to the root of the application.

containers.yaml

apiVersion: v1
kind: Pod
metadata:
  name: docker-demo
spec:
  containers:
  - name: docker-demo
    image: us.gcr.io/[YOUR PROJECT ID HERE]/docker-demo
    imagePullPolicy: Always
    ports:
    - containerPort: 80
      hostPort: 80

That file instructs the container-vm-based VM (purpose-built for running containers) we're about to create to pull the image and run it. Now let's run the gcloud commands to create the VM in the cloud that will host the image, telling it to use the manifest.

$ gcloud config set project [YOUR PROJECT ID HERE]
$ gcloud compute instances create docker-demo --image container-vm --metadata-from-file google-container-manifest=containers.yaml --zone us-central1-a --machine-type f1-micro

Launch the GCP Developer Console and set the firewall on your shiny new VM to 'Allow HTTP traffic'. Or run the following command:

$ gcloud compute instances add-tags docker-demo --tags http-server --zone us-central1-a

Either way, the previous gcloud compute instances create command should've given you the External (Public) IP of the VM, and navigating there from your browser will show the app. Congrats, you've now deployed a fully containerized web application to the cloud! If you're leaving this up, remember to reserve a static IP for your VM. I recommend consulting some of the documentation I've referenced here to monitor VM and container health as well.

About the Author

Darwin Corn is a systems analyst for the Consumer Direct Care Network.
He is a mid-level professional with diverse experience in the information technology world.


Working with Spring Tag Libraries

Packt
13 Jul 2016
26 min read
In this article by Amuthan G, the author of the book Spring MVC Beginners Guide - Second Edition, you are going to learn more about the various tags that are available as part of the Spring tag libraries. (For more resources related to this topic, see here.)

After reading this article, you will have a good idea about the following topics:

- JavaServer Pages Standard Tag Library (JSTL)
- Serving and processing web forms
- Form binding and whitelisting
- Spring tag libraries

JavaServer Pages Standard Tag Library

JavaServer Pages (JSP) is a technology that lets you embed Java code inside HTML pages. This code can be inserted by means of <% %> blocks or by means of JSTL tags. To insert Java code into JSP, the JSTL tags are generally preferred, since tags blend better with the tag-based representation of HTML, so your JSP pages will look more readable. JSP even lets you define your own tags; you must write the code that actually implements the logic of your own tags in Java. JSTL is just a standard tag library provided by Oracle. We can add a reference to the JSTL tag library in our JSP pages as follows:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>

Similarly, Spring MVC also provides its own tag library to develop Spring JSP views easily and effectively. These tags provide a lot of useful common functionality such as form binding, evaluating errors and outputting messages, and more when we work with Spring MVC. In order to use these tags in our JSP pages, we must add a reference to the Spring tag library as follows:

<%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<%@taglib prefix="spring" uri="http://www.springframework.org/tags" %>

These taglib directives declare that our JSP page uses a set of custom tags related to Spring and identify the location of the library. They also provide a means to identify the custom tags in our JSP page. In the taglib directive, the uri attribute value resolves to a location that the servlet container understands, and the prefix attribute indicates which bits of markup are custom actions.

Serving and processing forms

In Spring MVC, the process of putting an HTML form element's values into model data is called form binding. The following line is a typical example of how we put data into the Model from the Controller:

model.addAttribute("greeting", "Welcome");

Similarly, the next line shows how we retrieve that data in the View using a JSTL expression:

<p> ${greeting} </p>

But what if we want to put data into the Model from the View? How do we retrieve that data in the Controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling out and submitting an HTML form. How can we collect the values filled out in the HTML form elements and process them in the Controller? This is where the Spring tag library tags help us to bind the HTML tag element's values to a form backing bean in the Model. Later, the Controller can retrieve the form backing bean from the Model using the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation.

The form backing bean (sometimes called the form bean) is used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields in the form and the properties in our domain object. Another approach is creating separate classes for form beans, which are sometimes called Data Transfer Objects (DTOs).
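To make that distinction concrete, here is a minimal sketch of what a separate form-backing (DTO) class might look like. Note that ProductForm is a hypothetical class invented for illustration; it is not part of the book's code, and the choice of BigDecimal for the price is also an assumption:

import java.math.BigDecimal;

// Hypothetical form-backing bean holding only the fields we expose on the web form.
// Each field name must match the path attribute used in the corresponding <form:input> tag.
public class ProductForm {

    private String productId;
    private String name;
    private BigDecimal unitPrice;
    private String description;

    // Getters and setters are required so that Spring MVC can bind
    // the submitted request parameters onto these properties.
    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public BigDecimal getUnitPrice() { return unitPrice; }
    public void setUnitPrice(BigDecimal unitPrice) { this.unitPrice = unitPrice; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
}

With this approach, the Controller copies the submitted values onto the Product domain object before handing it to the service layer, which keeps request parameters from being bound directly onto the domain model.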
Time for action – serving and processing forms

The Spring tag library provides some special <form> and <input> tags, which are more or less similar to the HTML form and input tags, but have some special attributes to bind form elements' data with the form backing bean. Let's create a Spring web form in our application to add new products to our product list:

1. Open our ProductRepository interface and add one more method declaration to it as follows:

void addProduct(Product product);

2. Add an implementation for this method in the InMemoryProductRepository class as follows:

@Override
public void addProduct(Product product) {
   String SQL = "INSERT INTO PRODUCTS (ID, "
         + "NAME,"
         + "DESCRIPTION,"
         + "UNIT_PRICE,"
         + "MANUFACTURER,"
         + "CATEGORY,"
         + "CONDITION,"
         + "UNITS_IN_STOCK,"
         + "UNITS_IN_ORDER,"
         + "DISCONTINUED) "
         + "VALUES (:id, :name, :desc, :price, :manufacturer, :category, :condition, :inStock, :inOrder, :discontinued)";
   Map<String, Object> params = new HashMap<>();
   params.put("id", product.getProductId());
   params.put("name", product.getName());
   params.put("desc", product.getDescription());
   params.put("price", product.getUnitPrice());
   params.put("manufacturer", product.getManufacturer());
   params.put("category", product.getCategory());
   params.put("condition", product.getCondition());
   params.put("inStock", product.getUnitsInStock());
   params.put("inOrder", product.getUnitsInOrder());
   params.put("discontinued", product.isDiscontinued());
   jdbcTempleate.update(SQL, params);
}

3. Open our ProductService interface and add one more method declaration to it as follows:

void addProduct(Product product);

4. Add an implementation for this method in the ProductServiceImpl class as follows:

@Override
public void addProduct(Product product) {
   productRepository.addProduct(product);
}

5. Open our ProductController class and add two more request mapping methods as follows:

@RequestMapping(value = "/products/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
   Product newProduct = new Product();
   model.addAttribute("newProduct", newProduct);
   return "addProduct";
}

@RequestMapping(value = "/products/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
   productService.addProduct(newProduct);
   return "redirect:/market/products";
}

6. Finally, add one more JSP View file called addProduct.jsp under the src/main/webapp/WEB-INF/views/ directory and add the following tag reference declarations as the very first lines in it:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

Now add the following code snippet under the tag declaration lines and save addProduct.jsp.
Note that I skipped some <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields while you are trying out this exercise:

<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
  <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css">
  <title>Products</title>
</head>
<body>
  <section>
    <div class="jumbotron">
      <div class="container">
        <h1>Products</h1>
        <p>Add products</p>
      </div>
    </div>
  </section>
  <section class="container">
    <form:form method="POST" modelAttribute="newProduct" class="form-horizontal">
      <fieldset>
        <legend>Add new product</legend>
        <div class="form-group">
          <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>
          <div class="col-lg-10">
            <form:input id="productId" path="productId" type="text" class="form:input-large"/>
          </div>
        </div>
        <!-- Similarly bind <form:input> tag for name, unitPrice, manufacturer, category, unitsInStock and unitsInOrder fields-->
        <div class="form-group">
          <label class="control-label col-lg-2" for="description">Description</label>
          <div class="col-lg-10">
            <form:textarea id="description" path="description" rows="2"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
          <div class="col-lg-10">
            <form:checkbox id="discontinued" path="discontinued"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="condition">Condition</label>
          <div class="col-lg-10">
            <form:radiobutton path="condition" value="New" />New
            <form:radiobutton path="condition" value="Old" />Old
            <form:radiobutton path="condition" value="Refurbished" />Refurbished
          </div>
        </div>
        <div class="form-group">
          <div class="col-lg-offset-2 col-lg-10">
            <input type="submit" id="btnAdd" class="btn btn-primary" value="Add"/>
          </div>
        </div>
      </fieldset>
    </form:form>
  </section>
</body>
</html>

Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add product information, as shown in the following screenshot:

Add a products web form

Now enter all the information related to the new product that you want to add and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products.

What just happened?

In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. Everything prior to step 5 should already be familiar to you. Anyhow, I will give you a brief note on what we did in steps 1 to 4.

In step 1, we just created an addProduct method declaration in our ProductRepository interface to add new products. And in step 2, we implemented the addProduct method in our InMemoryProductRepository class. Steps 3 and 4 are just a Service layer extension for ProductRepository. In step 3, we declared a similar method addProduct in our ProductService and implemented it in step 4 to add products to the repository via the productRepository reference.
Okay, coming back to the important step; what we did in step 5 was nothing but adding two request mapping methods, namely getAddNewProductForm and processAddNewProductForm:

@RequestMapping(value = "/products/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
   Product newProduct = new Product();
   model.addAttribute("newProduct", newProduct);
   return "addProduct";
}

@RequestMapping(value = "/products/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
   productService.addProduct(productToBeAdded);
   return "redirect:/market/products";
}

If you observe those methods carefully, you will notice a peculiar thing: both methods have the same URL mapping value in their @RequestMapping annotations (value = "/products/add"). So if we enter the URL http://localhost:8080/webstore/market/products/add in the browser, which method will Spring MVC map that request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). Yes, if you look again, even though both methods have the same URL mapping, they differ in the request method. So what is happening behind the screen is that when we enter the URL http://localhost:8080/webstore/market/products/add in the browser, it is considered a GET request, so Spring MVC will map that request to the getAddNewProductForm method. Within that method, we simply attach a new empty Product domain object to the model, under the attribute name newProduct. So in the addProduct.jsp View, we can access that newProduct Model object:

Product newProduct = new Product();
model.addAttribute("newProduct", newProduct);

Before jumping into the processAddNewProductForm method, let's review the addProduct.jsp View file for some time, so that you understand the form processing flow without confusion. In addProduct.jsp, we just added a <form:form> tag from Spring's tag library:

<form:form modelAttribute="newProduct" class="form-horizontal">

Since this special <form:form> tag comes from a Spring tag library, we need to add a reference to that tag library in our JSP file; that's why we added the following line at the top of the addProduct.jsp file in step 6:

<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of the modelAttribute in the <form:form> tag. If you remember correctly, this value of the modelAttribute and the attribute name we used to store the newProduct object in the Model from our getAddNewProductForm method are the same. So the newProduct object that we attached to the model from the Controller method (getAddNewProductForm) is now bound to the form. This object is called the form backing bean in Spring MVC.

Okay, now you should look at every <form:input> tag inside the <form:form> tag. You can observe a common attribute in every tag. That attribute is path:

<form:input id="productId" path="productId" type="text" class="form:input-large"/>

The path attribute just indicates the field name relative to the form backing bean. So the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean.

Okay, now it's time to come back and review our processAddNewProductForm method. When will this method be invoked?
This method will be invoked once we press the submit button on our form. Yes, since every form submission is considered a POST request, this time the browser will send a POST request to the same URL, http://localhost:8080/webstore/market/products/add. So this time the processAddNewProductForm method will get invoked, since it is a POST request. Inside the processAddNewProductForm method, we simply call the addProduct service method to add the new product to the repository:

productService.addProduct(productToBeAdded);

But the interesting question here is how the productToBeAdded object is populated with the data that we entered in the form. The answer lies in the @ModelAttribute (org.springframework.web.bind.annotation.ModelAttribute) annotation. Notice the method signature of the processAddNewProductForm method:

public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded)

Here, if you look at the value attribute of the @ModelAttribute annotation, you can observe a pattern. Yes, the @ModelAttribute annotation's value and the value of the modelAttribute from the <form:form> tag are the same. So Spring MVC knows that it should assign the form-bound newProduct object to the processAddNewProductForm method's parameter productToBeAdded.

The @ModelAttribute annotation is not only used to retrieve an object from the Model; if we want, we can even use the @ModelAttribute annotation to add objects to the Model. For instance, we can rewrite our getAddNewProductForm method to something like the following using the @ModelAttribute annotation:

@RequestMapping(value = "/products/add", method = RequestMethod.GET)
public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
   return "addProduct";
}

You can see that we haven't created a new empty Product domain object and attached it to the model. All we did was add a parameter of the type Product and annotate it with the @ModelAttribute annotation, so Spring MVC will know that it should create an object of Product and attach it to the model under the name newProduct.

One more thing that needs to be observed in the processAddNewProductForm method is the logical View name it returns: redirect:/market/products. So what are we trying to tell Spring MVC by returning the string redirect:/market/products? To get the answer, observe the logical View name string carefully; if we split this string with the ":" (colon) symbol, we will get two parts. The first part is the prefix redirect and the second part is something that looks like a request path: /market/products. So, instead of returning a View name, we are simply instructing Spring to issue a redirect request to the request path /market/products, which is the request path for the list method of our ProductController. So after submitting the form, we list the products using the list method of ProductController.

As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring will use a special View object called RedirectView (org.springframework.web.servlet.view.RedirectView) to issue the redirect command behind the screen. Instead of landing on a web page after the successful submission of a web form, we are spawning a new request to the request path /market/products with the help of RedirectView. This pattern is called redirect-after-post, which is a common pattern to use with web-based forms. We are using this pattern to avoid double submission of the same form.
Sometimes after submitting the form, if we press the browser's refresh button or back button, there is a chance of resubmitting the same form. This behavior is called double submission.

Have a go hero – customer registration form

It is great that we created a web form to add new products to our web application under the URL http://localhost:8080/webstore/market/products/add. Why don't you create a customer registration form to register a new customer in our application? Try to create it under the URL http://localhost:8080/webstore/customers/add.

Customizing data binding

In the last section, you saw how to bind data submitted by an HTML form to a form backing bean. In order to do the binding, Spring MVC internally uses a special binding object called WebDataBinder (org.springframework.web.bind.WebDataBinder). WebDataBinder extracts the data out of the HttpServletRequest object, converts it to a proper data format, loads it into a form backing bean, and validates it. To customize the behavior of data binding, we can initialize and configure the WebDataBinder object in our Controller. The @InitBinder (org.springframework.web.bind.annotation.InitBinder) annotation helps us to do that. The @InitBinder annotation designates a method to initialize WebDataBinder.

Let's look at a practical use of customizing WebDataBinder. Since we are using the actual domain object itself as the form backing bean, during form submission there is a chance for security vulnerabilities. Because Spring automatically binds HTTP parameters to form bean properties, an attacker could bind a suitably-named HTTP parameter with form properties that weren't intended for binding. To address this problem, we can explicitly tell Spring which fields are allowed for form binding. Technically speaking, the process of explicitly telling which fields are allowed for binding is called whitelisting binding in Spring MVC; we can do whitelisting binding using WebDataBinder.

Time for action – whitelisting form fields for binding

In the previous exercise, while adding a new product, we bound every field of the Product domain object in the form, but it is meaningless to specify unitsInOrder and discontinued values during the addition of a new product, because nobody can place an order before the product is added to the store, and similarly, discontinued products need not be added to our product list. So we should not allow these fields to be bound with the form bean while adding a new product to our store. However, we still want all the other fields of the Product domain object to be bound.
Let's see how to do this with the following steps:

1. Open our ProductController class and add a method as follows:

@InitBinder
public void initialiseBinder(WebDataBinder binder) {
   binder.setAllowedFields("productId", "name", "unitPrice", "description", "manufacturer", "category", "unitsInStock", "condition");
}

2. Add an extra parameter of the type BindingResult (org.springframework.validation.BindingResult) to the processAddNewProductForm method as follows:

public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded, BindingResult result)

3. In the same processAddNewProductForm method, add the following condition just before the line saving the productToBeAdded object:

String[] suppressedFields = result.getSuppressedFields();
if (suppressedFields.length > 0) {
   throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields));
}

4. Now run our application and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add new product information. Fill out all the fields, particularly Units in order and Discontinued.

5. Now press the Add button and you will see an HTTP status 500 error on the web page, as shown in the following image:

The add product page showing an error for disallowed fields

6. Now open addProduct.jsp from /Webshop/src/main/webapp/WEB-INF/views/ in your project and remove the input tags that are related to the Units in order and Discontinued fields. Basically, you need to remove the following block of code:

<div class="form-group">
  <label class="control-label col-lg-2" for="unitsInOrder">Units In Order</label>
  <div class="col-lg-10">
    <form:input id="unitsInOrder" path="unitsInOrder" type="text" class="form:input-large"/>
  </div>
</div>
<div class="form-group">
  <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
  <div class="col-lg-10">
    <form:checkbox id="discontinued" path="discontinued"/>
  </div>
</div>

7. Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see a web page showing a web form to add a new product, but this time without the Units in order and Discontinued fields. Now enter all information related to the new product and click on the Add button. You will see the new product added in the product listing page under the URL http://localhost:8080/webstore/market/products.

What just happened?

Our intention was to put some restrictions on binding HTTP parameters to the form backing bean. As we already discussed, the automatic binding feature of Spring could lead to a potential security vulnerability if we use a domain object itself as the form bean. So we have to explicitly tell Spring MVC which fields are allowed for binding. That's what we are doing in step 1.

The @InitBinder annotation designates a Controller method as a hook method to do some custom configuration regarding data binding on the WebDataBinder. And WebDataBinder is the object that does the data binding at runtime, so we need to tell WebDataBinder which fields are allowed to be bound. If you observe our initialiseBinder method from ProductController, it has a parameter called binder, which is of the type WebDataBinder. We are simply calling the setAllowedFields method on the binder object and passing the field names that are allowed for binding. Spring MVC will call this method to initialize WebDataBinder before doing the binding, since it has the @InitBinder annotation.
WebDataBinder also has a method called setDisallowedFields to strictly specify which fields are disallowed for binding. If you use this method, Spring MVC allows any HTTP request parameter to be bound except those field names specified in the setDisallowedFields method. This is called blacklisting binding.

Okay, we configured which fields are allowed for binding, but we need to verify whether any fields other than those allowed are bound with the form backing bean. That's what we are doing in steps 2 and 3. We changed processAddNewProductForm by adding one extra parameter called result, which is of the type BindingResult. Spring MVC will fill this object with the result of the binding. If any attempt is made to bind any fields other than the allowed fields, the BindingResult object will have a getSuppressedFields count greater than zero. That's why we were checking the suppressed field count and throwing a RuntimeException:

if (suppressedFields.length > 0) {
   throw new RuntimeException("Attempting to bind disallowed fields: " + StringUtils.arrayToCommaDelimitedString(suppressedFields));
}

Here, the StringUtils class comes from org.springframework.util.StringUtils.

We wanted to ensure that our binding configuration is working—that's why we ran our application without changing the View file addProduct.jsp in step 4. And as expected, we got the HTTP status 500 error saying Attempting to bind disallowed fields when we submitted the Add products form with the unitsInOrder and discontinued fields filled out. Now that we know our binder configuration is working, we can change our View file so as not to bind the disallowed fields—that's what we were doing in step 6: just removing the input field elements that are related to the disallowed fields from the addProduct.jsp file. After that, our add new products page works fine, as expected. If any outside attacker tries to tamper with the POST request and attach an HTTP parameter with the same field name as the form backing bean, they will get a RuntimeException.

The whitelisting is just an example of how we can customize the binding with the help of WebDataBinder. By using WebDataBinder, we can perform many more types of binding customization as well. For example, WebDataBinder internally uses many PropertyEditor (java.beans.PropertyEditor) implementations to convert the HTTP request parameters to the target field of the form backing bean. We can even register custom PropertyEditor objects with WebDataBinder to convert more complex data types. For instance, look at the following code snippet that shows how to register a custom PropertyEditor to convert a Date class:

@InitBinder
public void initialiseBinder(WebDataBinder binder) {
   DateFormat dateFormat = new SimpleDateFormat("MMM d, yyyy");
   CustomDateEditor orderDateEditor = new CustomDateEditor(dateFormat, true);
   binder.registerCustomEditor(Date.class, orderDateEditor);
}

There are many advanced configurations we can make with WebDataBinder in terms of data binding, but for a beginner level, we don't need to go that deep.
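To make the blacklisting alternative mentioned above concrete, here is a minimal sketch of how the same initialiseBinder method could use setDisallowedFields instead, assuming we only want to block the unitsInOrder and discontinued fields and allow everything else (this is an illustrative variant, not a step from the exercise):

// Blacklisting variant: every request parameter is bound except the ones listed here.
// This would replace the whitelisting version of initialiseBinder in ProductController.
@InitBinder
public void initialiseBinder(WebDataBinder binder) {
   binder.setDisallowedFields("unitsInOrder", "discontinued");
}

Whitelisting with setAllowedFields is generally the safer choice, because any field added to the domain object later stays unbindable until it is explicitly allowed, whereas a blacklist silently starts binding it.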
Pop quiz – data binding

Consider the following data binding customization and identify the possible matching field bindings:

@InitBinder
public void initialiseBinder(WebDataBinder binder) {
   binder.setAllowedFields("unit*");
}

- NoOfUnit
- unitPrice
- priceUnit
- united

Externalizing text messages

So far, in all our View files, we have hardcoded text values for all the labels; for instance, take our addProduct.jsp file—for the productId input tag, we have a label tag with the hardcoded text value Product Id:

<label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>

Externalizing these texts from a View file into a properties file will help us to have a single centralized control for all label messages. Moreover, it will help us to make our web pages ready for internationalization. But in order to perform internationalization, we need to externalize the label messages first. So now you are going to see how to externalize locale-sensitive text messages from a web page to a property file.

Time for action – externalizing messages

Let's externalize the label texts in our addProduct.jsp:

1. Open our addProduct.jsp file and add the following taglib reference at the top:

<%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

2. Change the product ID <label> tag's value to <spring:message code="addProduct.form.productId.label"/>. After changing your product ID <label> tag's value, it should look as follows:

<label class="control-label col-lg-2 col-lg-2" for="productId">
  <spring:message code="addProduct.form.productId.label"/>
</label>

3. Create a file called messages.properties under /src/main/resources in your project and add the following line to it:

addProduct.form.productId.label = New Product ID

4. Now open our web application context configuration file WebApplicationContextConfig.java and add the following bean definition to it:

@Bean
public MessageSource messageSource() {
   ResourceBundleMessageSource resource = new ResourceBundleMessageSource();
   resource.setBasename("messages");
   return resource;
}

5. Now run our application again and enter the URL http://localhost:8080/webstore/market/products/add. You will be able to see the add product page with the product ID label showing as New Product ID.

What just happened?

Spring MVC has a special tag called <spring:message> to externalize texts from JSP files. In order to use this tag, we need to add a reference to a Spring tag library—that's what we did in step 1. We just added a reference to the Spring tag library in our addProduct.jsp file:

<%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

In step 2, we just used that tag to externalize the label text of the product ID input tag:

<label class="control-label col-lg-2 col-lg-2" for="productId">
  <spring:message code="addProduct.form.productId.label"/>
</label>

Here, an important thing you need to remember is the code attribute of the <spring:message> tag; we have assigned the value addProduct.form.productId.label as the code for this <spring:message> tag. This code attribute is a kind of key; at runtime, Spring will try to read the corresponding value for the given key (code) from a message source property file.

We said that Spring will read the message's value from a message source property file, so we need to create that property file. That's what we did in step 3. We just created a property file with the name messages.properties under the resource directory.
Inside that file, we just assigned the label text value to the message tag code:

addProduct.form.productId.label = New Product ID

Remember, for demonstration purposes I just externalized a single label, but a typical web application will have externalized messages for almost all tags; in that case, the messages.properties file will have many code-value pair entries.

Okay, we created a message source property file and added the <spring:message> tag in our JSP file, but to connect these two, we need to create one more Spring bean in our web application context for the org.springframework.context.support.ResourceBundleMessageSource class with the name messageSource—we did that in step 4:

@Bean
public MessageSource messageSource() {
   ResourceBundleMessageSource resource = new ResourceBundleMessageSource();
   resource.setBasename("messages");
   return resource;
}

One important property you need to notice here is the basename property; we assigned the value messages for that property. If you remember, this is the name of the property file that we created in step 3. That is all we did to enable the externalizing of messages in a JSP file. Now if we run the application and open up the Add products page, you can see that the product ID label will have the same text as we assigned to the addProduct.form.productId.label code in the messages.properties file.

Have a go hero – externalize all the labels from all the pages

I just showed you how to externalize the message for a single label; you can now do that for every single label available in all the pages.

Summary

At the start of this article, you saw how to serve and process forms, and you learned how to bind form data with a form backing bean. You also learned how to read a bean in the Controller. After that, we went a little deeper into form bean binding and configured the binder in our Controller to restrict which POST parameters can be bound to the form bean. Finally, you saw how to use one more special Spring tag, <spring:message>, to externalize the messages in a JSP file.

Resources for Article:

Further resources on this subject:

- Designing your very own ASP.NET MVC Application [article]
- Mixing ASP.NET Webforms and ASP.NET MVC [article]
- ASP.NET MVC Framework [article]