Building Solutions Using Patterns

Packt
16 Sep 2015
6 min read
In this article by Mark Brummel, the author of Learning Dynamics NAV Patterns, we will learn how to create an application using Dynamics NAV. While creating the application, we apply patterns and coding concepts to a new module so that users recognize it as a Microsoft Dynamics NAV application, and so that other developers find it easy to understand and maintain. The solution we will build is for a small bed and breakfast (B&B), allowing them to manage their rooms and reservations, and it can be integrated into the financial part of Dynamics NAV.

It is not the intention of this article to make a full-featured, finished product. We will discuss the basic design principles and the decision-making processes, so we simplify the functional process. One of the restrictions in our application is that we rent rooms per night.

This article will cover the following topics:

  Building blocks
  Creating the Table objects

Building blocks

We borrow the term classes from object-oriented programming as a collection of things that belong together. In Microsoft Dynamics NAV, classes can be tables or codeunits. The first step in our process is to define the classes. These will be created as tables or codeunits, following the patterns that we have learned:

  Setup: This is the generic set of parameters for the application.
  Guest: This is the person who stays at our B&B. This can be one or two persons, or a group (family).
  Room: Our B&B has a number of rooms with different attributes that, together with the season, determine the price.
  Season: This is the time of the year.
  Price: This is the price for one night in a room.
  Reservation: Rooms can be reserved on a daily basis with a starting and an ending date.
  Stay: This is the set of one or more consecutive nights at our B&B.
  Check-In: This is the start of a stay, checking in for a reservation.
  Check-Out: At the end of a stay, we would like to send a bill.
  Clean: Whenever a room is cleaned, we would like to register this.
  Evaluation: Each stay can be evaluated by the guest.
  Invoice: This generates a Sales Invoice for a Stay.

Apply Architectural Patterns

The second step is to decide, per class, which Architectural Patterns we can use. In some special cases, we might need to write down new patterns, based on data structures that are not used in the standard application.

Setup: For the application setup, we will use the Singleton pattern. This allows us to define a single set of values for the entire application that is kept in memory during the lifetime of the system.

Guest: To register our guests, we will use the standard Customer table in Dynamics NAV. This has pros and cons. The good thing about doing this is the ability to use all the standard analysis options in the application for our customers without reinventing the wheel. Some B&B users might decide to also sell souvenirs or local products, so they can use items and the standard trade part of Dynamics NAV. We can also use the campaigns in the Relationship Management module. The bad part, or challenge, is upgradability. If we add fields to the Customer table, or modify the standard page elements, we will have to merge these into the application each time we get a new version of the product, which is once per month. We will use the new delta files, as well as the testability framework, to meet this challenge.

Room: The architectural pattern for a room is a tough decision. Most users of our system run a small B&B, so we can consider rooms to be setup data. Number Series is not a required pattern. We will therefore decide to implement a Supplemental Table.

Season: Each B&B can set up its own seasons. Seasons are used to determine the price, but the system also has to work when they are not used. We implement a Supplemental Table here too.

Price: Rooms can have a default price, or a price per season and per guest.
Based on this requirement, we will implement the Rules Pattern, which allows us a complex array of setup values.

Reservation: We want to carefully trace reservations and cancellations per room and per guest, and we would like to analyze the data based on the season. For this feature, we will implement the Journal-Batch-Line pattern and introduce an Entry table that is managed by the Journal.

Stay: We would like to register each unique stay in our system rather than individual nights. This allows us to easily combine parameters and generate a total price. We will implement this as Master Data, based on the requirement to be able to use number series. The Stay does not require a lines table, nor does it represent a document in our organization.

Check-In: When a guest checks in to the bed and breakfast, we can check a reservation and apply it to the Stay.

Check-Out: When a guest leaves, we would like to set up the final bill and ask the guest to evaluate the stay. This process will be a method on the Stay class with encapsulated functions, creating the sales invoice and generating an evaluation document.

Clean: Rooms have to be cleaned each day when a guest stays, and at least once a week when the room is empty. We will use the Entry pattern without a journal. Clean will be a method on the Room class. Each day, we will generate entries using the Job Queue Entry pattern. The Room will also have a method that indicates whether a room has been cleaned.

Evaluation: A Stay in our B&B can be evaluated by our guests. Each evaluation has different criteria. We will use the Document pattern.

Invoice: We can create this method as an encapsulated method of the Stay class. In order to link the Sales Invoice to the Stay, we will add the Stay No. field to the Sales Header, Sales Invoice Header, and Sales Cr.Memo Header tables.
Creating the Table Objects

Based on the Architectural Patterns, we can define a set of objects that we can start working with. Object names are limited to 30 characters, which makes naming them challenging; the Bed and Breakfast name illustrates this challenge. Only use abbreviations when the length limitation is a problem.

Summary

In this article, you learned how to define classes for building an application. You have also learned about the kinds of architectural patterns that will be involved in creating the classes in your application.

Further resources on this subject:

  Performance by Design
  Advanced Data Access Patterns
  Formatting Report Items and Placeholders
Virtualization

Packt
16 Sep 2015
16 min read
This article by Skanda Bhargav, the author of Troubleshooting Ubuntu Server, deals with virtualization techniques: why virtualization is important and how administrators can install and serve users with services via virtualization. We will learn about KVM, Xen, and Qemu. So sit back and let's take a spin into the virtual world of Ubuntu.

What is virtualization?

Virtualization is a technique by which you can convert a set of files into a live running machine with an OS. It is easy to set up one machine, and much easier to clone and replicate the same machine across hardware. Also, each of the clones can be customized based on requirements. We will look at setting up a virtual machine using Kernel-based Virtual Machine, Xen, and Qemu in the sections that follow.

Today, people are using the power of virtualization in different situations and environments. Developers use virtualization in order to have an independent environment in which to safely test and develop applications without affecting other working environments. Administrators use virtualization to separate services, and to commission or decommission services as and when required.

By default, Ubuntu supports the Kernel-based Virtual Machine (KVM), which has built-in extensions for AMD and Intel-based processors. Xen and Qemu are the suggested options where your hardware does not have virtualization extensions.

libvirt

The libvirt library is an open source library that is helpful for interfacing with different virtualization technologies. One small task before starting with libvirt is to check your hardware support extensions for KVM. The command to do so is as follows:

  kvm-ok

You will see a message stating whether or not your CPU supports hardware virtualization. An additional task is to verify that virtualization is activated in the BIOS settings.
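If kvm-ok is not available yet, a rough stand-in (a sketch, Linux-only) is to count the vmx/svm CPU flags in /proc/cpuinfo; a count of zero suggests the extensions are missing or disabled in the BIOS:

```shell
#!/bin/sh
# Rough stand-in for kvm-ok: count lines carrying the vmx (Intel) or
# svm (AMD) flag in /proc/cpuinfo. Zero means no usable hardware
# virtualization was advertised by the CPU.
COUNT=$(grep -E -c 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
COUNT=${COUNT:-0}
if [ "$COUNT" -gt 0 ]; then
  echo "CPU reports virtualization extensions on $COUNT logical CPU(s)"
else
  echo "No vmx/svm flags found; check BIOS settings or hardware"
fi
```

Unlike kvm-ok, this does not check whether the BIOS has disabled the feature at runtime, so treat a positive count as necessary but not sufficient.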
Installation

Use the following command to install the package for libvirt:

  sudo apt-get install kvm libvirt-bin

Next, you will need to add the user to the libvirtd group. This will ensure that the user gets additional options for networking. The command is as follows:

  sudo adduser $USER libvirtd

We are now ready to install a guest OS. Its installation is very similar to that of installing a normal OS on the hardware. If your virtual machine needs a graphical user interface (GUI), you can make use of the virt-viewer application and connect to the virtual machine's console using VNC. We will discuss virt-viewer and its uses in the later sections of this article.

virt-install

virt-install is a part of the python-virtinst package. The command to install this package is as follows:

  sudo apt-get install python-virtinst

One of the ways of using virt-install is as follows:

  sudo virt-install -n new_my_vm -r 256 -f new_my_vm.img -s 4 -c jeos.iso --accelerate --connect=qemu:///system --vnc --noautoconsole -v

Let's understand the preceding command part by part:

  -n: This specifies the name of the virtual machine that will be created
  -r: This specifies the RAM amount in MBs
  -f: This is the path for the virtual disk
  -s: This specifies the size of the virtual disk
  -c: This is the file to be used as a virtual CD; it can be an .iso file as well
  --accelerate: This makes use of kernel acceleration technologies
  --vnc: This exports the guest console via VNC
  --noautoconsole: This disables autoconnect for the virtual machine console
  -v: This creates a fully virtualized guest

Once virt-install is launched, you may connect to the console with the virt-viewer utility from remote connections, or locally using the GUI.

virt-clone

One of the applications used to clone one virtual machine to another is virt-clone. Cloning is the process of creating an exact replica of the virtual machine that you currently have.
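Long invocations like the virt-install command in this section are easier to keep readable when assembled from named variables. The following sketch reuses the illustrative values from above and only prints the command it would run (remove the echo to execute it):

```shell
#!/bin/sh
# Dry-run sketch: build the virt-install command from variables.
VM_NAME=new_my_vm       # -n: virtual machine name
RAM_MB=256              # -r: RAM in MB
DISK_IMG=new_my_vm.img  # -f: path of the virtual disk
DISK_GB=4               # -s: size of the virtual disk in GB
ISO=jeos.iso            # -c: virtual CD / .iso image
CMD="sudo virt-install -n $VM_NAME -r $RAM_MB -f $DISK_IMG -s $DISK_GB \
-c $ISO --accelerate --connect=qemu:///system --vnc --noautoconsole -v"
echo "$CMD"
```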
Cloning is helpful when you need a lot of virtual machines with the same configuration. Here is an example of cloning a virtual machine:

  sudo virt-clone -o my_vm -n new_vm_clone -f /path/to/new_vm_clone.img --connect=qemu:///system

Let's understand the preceding command part by part:

  -o: This is the original virtual machine that you want to clone
  -n: This is the new virtual machine name
  -f: This is the new virtual machine's file path
  --connect: This specifies the hypervisor to be used

Managing the virtual machine

Let's see how to manage the virtual machine we installed using virt.

virsh

Numerous utilities are available for managing virtual machines and libvirt; virsh is one such utility that can be used via the command line. Here are a few examples:

The following command lists the running virtual machines:

  virsh -c qemu:///system list

The following command starts a virtual machine:

  virsh -c qemu:///system start my_new_vm

The following command starts a virtual machine at boot:

  virsh -c qemu:///system autostart my_new_vm

The following command restarts a virtual machine:

  virsh -c qemu:///system reboot my_new_vm

You can save the state of a virtual machine to a file, and it can be restored later. Note that once you save the virtual machine, it will not be running anymore. The following command saves the state of the virtual machine:

  virsh -c qemu:///system save my_new_vm my_new_vm-290615.state

The following command restores a virtual machine from the saved state:

  virsh -c qemu:///system restore my_new_vm-290615.state

The following command shuts down a virtual machine:

  virsh -c qemu:///system shutdown my_new_vm

The following command mounts a CD-ROM in the virtual machine:

  virsh -c qemu:///system attach-disk my_new_vm /dev/cdrom /media/cdrom

The virtual machine manager

A GUI-type utility for managing virtual machines is virt-manager. You can manage both local and remote virtual machines.
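Since every virsh call above repeats -c qemu:///system, a tiny wrapper function avoids the retyping. This sketch is a dry run that only prints the command it would execute; drop the echo to run virsh for real:

```shell
#!/bin/sh
# Dry-run wrapper: prefix every virsh call with the connection URI
# and print the resulting command instead of executing it.
vm() {
  echo virsh -c qemu:///system "$@"
}

vm list
vm start my_new_vm
vm shutdown my_new_vm
```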
The command to install the package is as follows:

  sudo apt-get install virt-manager

virt-manager works in a GUI environment. Hence, it is advisable to install it on a machine other than the production cluster, as the production cluster should be used for the main tasks. The command to connect virt-manager to a local server running libvirt is as follows:

  virt-manager -c qemu:///system

If you want to connect virt-manager from a different machine, you first need to have SSH connectivity. This is required as libvirt will ask for a password on the machine. Once you have set up passwordless authentication, use the following command to connect the manager to the server:

  virt-manager -c qemu+ssh://virtnode1.ubuntuserver.com/system

Here, the virtualization server is identified with the hostname ubuntuserver.com.

The virtual machine viewer

A utility for connecting to your virtual machine's console is virt-viewer. This requires a GUI to work with the virtual machine. Use the following command to install virt-viewer:

  sudo apt-get install virt-viewer

Now, connect to your virtual machine console from your workstation using the following command:

  virt-viewer -c qemu:///system my_new_vm

You may also connect to a remote host using SSH passwordless authentication with the following command:

  virt-viewer -c qemu+ssh://virtnode4.ubuntuserver.com/system my_new_vm

JeOS

JeOS, short for Just Enough Operating System, is pronounced "juice" and is an operating system in the Ubuntu flavor. It is specially built for running virtual applications. JeOS is no longer available as a downloadable ISO CD-ROM; however, you can pick either of the following approaches:

  Get a server ISO of the Ubuntu OS. While installing, hit F4 on your keyboard; from the list of items that appears, select the one that reads Minimal installation. This will install the JeOS variant.
  Build your own copy with vmbuilder from Ubuntu.

The kernel of JeOS is specifically tuned to run in virtual environments.
It is stripped of the unwanted packages and has only the base ones. JeOS takes advantage of the technological advancement in VMware products. A powerful combination of limited size and performance optimization is what makes JeOS a preferred OS over a full server OS in a large virtual installation. Also, with this OS being so light, the updates and security patches will be small and limited to this variant, so users who run their virtual applications on JeOS will have less maintenance to worry about than with a full server OS installation.

vmbuilder

The second way of getting JeOS is by building your own copy of Ubuntu; you need not download any ISO from the Internet. The beauty of vmbuilder is that it will get the packages and tools based on your requirements and build a virtual machine with them; the whole process is quick and easy. Essentially, vmbuilder is a script that automates the process of creating a virtual machine, which can be easily deployed. Currently, the virtual machines built with vmbuilder are supported on the KVM and Xen hypervisors.

Using command-line arguments, you can specify which additional packages you require, remove the ones that you feel aren't necessary for your needs, select the Ubuntu version, and do much more. Some developers and admins contributed to vmbuilder and changed the design specifics, but kept the commands the same. Some of the goals were as follows:

  Reusability by other distributions
  A plugin feature for interactions, so people can add logic for other environments
  A web interface along with the CLI for easy access and maintenance

Setup

Firstly, we will need to set up libvirt and KVM before we use vmbuilder. libvirt was covered in the previous section. Let's now look at setting up KVM on your server. We will install some additional packages along with the KVM package, one of which enables X server on the machine.
The command that you will need to run on your Ubuntu server is as follows:

  sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

Let's look at what each of the packages means:

  libvirt-bin: This is used by libvirtd for administration of KVM and Qemu
  qemu-kvm: This runs in the background
  ubuntu-vm-builder: This is a tool for building virtual machines from the command line
  bridge-utils: This enables networking for the various virtual machines

Adding users to groups

You will have to add the user to the libvirtd group; this will enable them to run virtual machines. The command to add the current user is as follows:

  sudo adduser `id -un` libvirtd

Installing vmbuilder

Download the latest vmbuilder, called python-vm-builder. You may also use the older ubuntu-vm-builder, but there are slight differences in the syntax. The command to install python-vm-builder is as follows:

  sudo apt-get install python-vm-builder

Defining the virtual machine

While defining the virtual machine that you want to build, you need to take care of the following two important points:

  Do not assume that the end user will know how to extend the disk size of the virtual machine if the need arises. Either have a large virtual disk so that the application can grow, or document the process to do so. However, it would be better to have your data stored in an external storage device.
  Allocating RAM is fairly simple, but remember that you should allocate your virtual machine an amount of RAM that is safe to run your application.

To check the list of parameters that vmbuilder provides, use the following command:

  vmbuilder --help

The two main parameters are the virtualization technology, also known as the hypervisor, and the targeted distribution. The distribution we are using is Ubuntu 14.04, also known as trusty after its codename.
The command to check the release version is as follows:

  lsb_release -a

Let's build a virtual machine on the same version of Ubuntu. Here's an example of building a virtual machine with vmbuilder:

  sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system

Now, let's discuss what the parameters mean:

  --suite: This specifies which Ubuntu release we want the virtual machine built on
  --flavour: This specifies which virtual kernel to use to build the JeOS image
  --arch: This specifies the processor architecture (64-bit or 32-bit)
  -o: This overwrites the previous version of the virtual machine image
  --libvirt: This adds the virtual machine to the list of available virtual machines

Now that we have created a virtual machine, let's look at the next steps.

JeOS installation

We will examine the settings that are required to get our virtual machine up and running.

IP address

A good practice when assigning IP addresses to virtual machines is to set a fixed IP address, usually from the private pool, and then include this information as part of the documentation. We define an IP address with the following parameters:

  --ip (address): This is the IP address in dotted form
  --mask (value): This is the IP mask in dotted form (default is 255.255.255.0)
  --net (value): This is the IP net address (default is X.X.X.0)
  --bcast (value): This is the IP broadcast (default is X.X.X.255)
  --gw (address): This is the gateway address (default is X.X.X.1)
  --dns (address): This is the name server address (default is X.X.X.1)

Our command looks like this now:

  sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10

You may have noticed that we have assigned only the IP; all the others will take their default values.

Enabling the bridge

We will have to enable the bridge for our virtual machines, as various remote hosts will have to access the applications.
We will configure libvirt and modify the vmbuilder template to do so. First, create the template hierarchy and copy the default template into this folder:

  mkdir -p VMBuilder/plugins/libvirt/templates
  cp /etc/vmbuilder/libvirt/* VMBuilder/plugins/libvirt/templates/

Use your favorite editor and find the following lines in the VMBuilder/plugins/libvirt/templates/libvirtxml.tmpl file:

  <interface type='network'>
    <source network='default'/>
  </interface>

Replace these lines with the following lines:

  <interface type='bridge'>
    <source bridge='br0'/>
  </interface>

Partitions

You have to allocate partitions to applications for their data storage and working. It is normal to have a separate storage space for each application in /var. The command provided by vmbuilder for this is --part:

  --part PATH

vmbuilder will read the file given by the PATH parameter and consider each line as a separate partition. Each line has two entries, mountpoint and size, where size is defined in MBs and is the maximum limit defined for that mountpoint. For this particular exercise, we will create a new file named vmbuilder.partition and enter the following lines to create the partitions:

  root 6000
  swap 4000
  ---
  /var 16000

Note that different disks are separated by the --- delimiter.

Now, the command should look like this:

  sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition

Setting the user and password

We have to define a user and a password so that the user can log in to the virtual machine after startup. For now, let's use a generic user identified as user and the password password. We can ask the user to change the password after the first login.
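As an aside, the vmbuilder.partition file shown earlier can be created directly from the shell with a heredoc; the mountpoints and MB sizes below are the ones used in this exercise:

```shell
#!/bin/sh
# Write the vmbuilder.partition file used in this exercise.
# Sizes are in MB; the '---' line starts a second virtual disk.
cat > vmbuilder.partition <<'EOF'
root 6000
swap 4000
---
/var 16000
EOF
cat vmbuilder.partition
```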
The following parameters are used to set the username and password:

  --user (username): This sets the username (default is ubuntu)
  --name (fullname): This sets a name for the user (default is ubuntu)
  --pass (password): This sets the password for the user (default is ubuntu)

So, now our command will be as follows:

  sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password

Final steps in the installation – first boot

Certain things need to be done at the first boot of a machine. We will install openssh-server at first boot. This ensures that each virtual machine gets a unique key; if we had done this earlier, in the setup phase, all the virtual machines would have been given the same key, which might have posed a security issue. Let's create a script called first_boot.sh and run it at the first boot of every new virtual machine:

  # This script will run the first time the virtual machine boots
  # It is run as root
  apt-get update
  apt-get install -qqy --force-yes openssh-server

Then, add the following option to the command line:

  --firstboot first_boot.sh

Final steps in the installation – first login

Remember that we specified a default password for the virtual machine. This means that all the machines where this image is used for installation will have the same password. We will prompt the user to change the password at first login. For this, we will use a shell script named first_login.sh. Add the following lines to the file:

  # This script is run the first time a user logs in.
  echo "Almost at the end of setting up your machine"
  echo "As a security precaution, please change your password"
  passwd

Then, add the parameter to your command line:

  --firstlogin first_login.sh

Auto updates

You can make your virtual machine update itself at regular intervals.
To enable this feature, add the package named unattended-upgrades to the command line:

  --addpkg unattended-upgrades

ACPI handling

ACPI handling will enable your virtual machine to take care of shutdown and restart events that are received from a remote machine. We will install the acpid package for this:

  --addpkg acpid

The complete command

So, the final command with the parameters that we discussed previously looks like this:

  sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password --firstboot first_boot.sh --firstlogin first_login.sh --addpkg unattended-upgrades --addpkg acpid

Summary

In this article, we discussed various virtualization techniques, along with the tools and packages that help in creating and running a virtual machine. You also learned about the ways we can view, manage, connect to, and make use of the applications running on a virtual machine. Then, we saw the lightweight version of Ubuntu that is fine-tuned to run virtualization and applications on a virtual platform. In the later stages of this article, we covered how to build a virtual machine from the command line, how to add packages, how to set up user profiles, and the steps for first boot and first login.

Further resources on this subject:

  Introduction to OpenVPN
  Speeding up Gradle builds for Android
  Installing Red Hat CloudForms on Red Hat OpenStack
Identifying the Best Places

Packt
16 Sep 2015
9 min read
In this article by Ben Mearns, author of the book QGIS Blueprints, we will take a look at how raster data can be analyzed, enhanced, and used for map production. Specifically, you will learn to produce a grid of suitable locations, based on the criteria values in other grids, using raster analysis and map algebra. Then, using the grid, we will produce a simple click-based map. The end result will be a site suitability web application with click-based discovery capabilities. We'll be looking at suitability for farmland preservation selection.

In this article, we will cover the following topics:

  Vector data ETL for raster analysis
  Batch processing
  Leaflet map application publication with QGIS2leaf

Vector data Extract, Transform, and Load

Our suitability analysis uses map algebra and criteria grids to give us a single value for the suitability for some activity in every place. This requires that the data be expressed in the raster (grid) format. So, let's perform the other necessary ETL steps and then convert our vector data to raster. We will perform the following actions:

  Ensure that our data has identical spatial reference systems. For example, we may be using a layer of roads maintained by the state department of transportation and a layer of land use maintained by the department of natural resources. These layers must have identical spatial reference systems or be transformed to have identical systems.
  Extract geographic objects according to their classes, as defined in some attribute table field, if we want to operate on them while they're still in vector form.
  If no further analysis is necessary, convert to raster.

Loading the data and establishing CRS conformity

It is important for the layers in this project to be transformed or projected into the same geographic or projected coordinate system. This is necessary for an accurate analysis and for publication to web formats.
Perform the following steps for this:

  Disable 'on the fly' projection if it is turned on; otherwise, 'on the fly' will automatically reproject your data to display it with the layers that are already in the Canvas. Navigate to Settings | Options to do so.
  Add the project layers. Navigate to Layer | Add Layer | Vector Layer and add the following layers from within c2/data: Applicants, County, Easements, Landuse, and Roads. You can select multiple layers to add by pressing Shift and clicking on contiguous files, or pressing Ctrl and clicking on noncontiguous files.
  Import the Digital Elevation Model from c2/data/dem/dem.tif. Navigate to Layer | Add Layer | Raster Layer, select dem.tif from the dem directory, and then click on Open.

Even though the layers are in different CRSs, QGIS does not warn us in this case. You must discover the issue by checking each layer individually. Check the CRS of the county layer and one other layer: highlight the county layer in the Layers panel and navigate to Layer | Properties. The CRS is displayed under the General tab, in the Coordinate reference system section. Note that the county layer is in EPSG:26957, while the others are in EPSG:2776.

We will transform the county layer from EPSG:26957 to EPSG:2776. Navigate to Layer | Save As | Select CRS. We will save all the output from this article in c2/output.

To prepare the layers for conversion to raster, we will add a new generic column, populated with the number 1, to all the layers. This will be translated into a Boolean-type raster, where the presence of the object that the raster represents (for example, roads) is indicated by a cell value of 1 and all others by zero. Follow these steps for the applicants, easements, and roads layers:

  Navigate to Layer | Toggle Editing. Then, navigate to Layer | Open Attribute Table.
  Add a column with the button at the top of the Attribute table dialog.
  Use value as the name for the new column.
  Select the new column from the dropdown in the Attribute table and enter 1 in the value box.
  Click on Update All.
  Navigate to Layer | Toggle Editing. Finally, save.

Extracting (filtering) features

Suppose that our criteria include only a subset of the features in our roads layer: major unlimited-access roads (but not freeways), as determined by a classification code (CFCC). To temporarily extract this subset, we will do a layer query by performing the following steps:

  Filter the major roads from the roads layer. Highlight the roads layer and navigate to Layer | Query.
  Double-click on CFCC to add it to the expression.
  Click on the = operator to add it to the expression.
  Under the Values section, click on All to view all the unique values in the CFCC field.
  Double-click on A21 to add this to the expression. Do this for all the codes less than A36. Include A63 for highway on-ramps. Your selection code will look similar to this:

    "CFCC" = 'A21' OR "CFCC" = 'A25' OR "CFCC" = 'A31' OR "CFCC" = 'A35' OR "CFCC" = 'A63'

  Click on OK.
  Create a new c2/output directory. Save the roads layer as a new layer with only the selected features (major_roads) in this directory.
  To clear a layer filter, return to the query dialog on the applied layer (highlight it in the Layers pane, navigate to Layer | Query, and click on Clear).
  Repeat these steps for the developed (LULC1 = 1) and agriculture (LULC1 = 2) land uses (separately) from the landuse layer.

Converting to raster

In this section, we will convert all the needed vector layers to raster. We will be doing this in batch, which allows us to repeat the same operation over multiple layers.

Doing more at once – working in batch

The QGIS Processing Framework provides capabilities to run the same operation many times on different data. This is called batch processing.
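The same run-one-operation-per-layer idea can be sketched outside QGIS with a shell loop over gdal_rasterize, the GDAL command-line tool for burning vector attributes into a grid. The layer names, the value attribute, and the 30-map-unit resolution mirror this exercise; the paths are illustrative, and the loop below only prints the commands it would run:

```shell
#!/bin/sh
# Dry-run batch rasterization: one gdal_rasterize call per vector
# layer, burning the 'value' attribute into a 30 x 30 Int16 grid.
# The commands are printed, not executed.
CMDS=""
for layer in applicants easements major_roads; do
  CMD="gdal_rasterize -a value -tr 30 30 -ot Int16 \
c2/output/$layer.shp c2/output/$layer.tif"
  CMDS="$CMDS$CMD
"
  echo "$CMD"
done
```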
A batch process is invoked from an operation's context menu in the Processing Toolbox. The batch dialog requires that the parameters be populated for every iteration, one layer per row.

Convert the vector layers to raster:

1. Navigate to the Processing Toolbox.
2. Select Advanced Interface from the dropdown at the bottom of the Processing Toolbox (if it is not selected, it will show as Simple Interface).
3. Type rasterize to search for the Rasterize tool.
4. Right-click on the Rasterize tool and select Execute as batch process.
5. Fill in the Batch Processing dialog, making sure to specify the parameters as follows:

   - Input layer: (for example, roads)
   - Attribute field: value
   - Output raster size: Output resolution in map units per pixel
   - Horizontal: 30
   - Vertical: 30
   - Raster type: Int16
   - Output layer: (for example, roads)

   The following images show how this will look in QGIS. Scroll to the right to complete the entry of parameter values.

6. Organize the new layers (optional step). Batch sometimes gives unfriendly names, owing to a bug in the dialog box. Change the layer names by doing the following for each layer created by batch:
   - Highlight the layer.
   - Navigate to Layer | Properties.
   - Change the layer name to the name of the vector layer from which it was created (for example, applicants). You should be able to find a hint for this value in the layer properties, under the layer source (the name of the .tif file).
7. Group the layers: press Shift + click on all the layers created by batch and the previous roads raster, then right-click and select Group Selected.

Publishing the results as a web application

Now that we have completed our modeling for the site selection of farmland for conservation, let's take steps to publish this for the Web.

QGIS2leaf

QGIS2leaf allows us to export our QGIS map to web map formats (JavaScript, HTML, and CSS) using the Leaflet map API. Leaflet is a very lightweight, extensible, and responsive (and trendy) web mapping interface.
QGIS2Leaf converts all our vector layers to GeoJSON, which is the most common textual way to express geographic JavaScript objects. As our operational layer is in GeoJSON, Leaflet's click interaction is supported, and we can access the information in the layers by clicking. The export is a fully editable HTML and JavaScript file; you can customize it and upload it to an accessible web location.

QGIS2leaf is very simple to use, as long as the layers have been prepared properly (for example, with respect to CRS) up to this point. It is also very powerful in creating a good starting application, including the GeoJSON, HTML, and JavaScript for our Leaflet web map.

1. Make sure to install the QGIS2leaf plugin if you haven't already.
2. Navigate to Web | QGIS2leaf | Exports a QGIS Project to a working Leaflet webmap.
3. Click on the Get Layers button to add the currently displayed layers to the set that QGIS2leaf will export.
4. Choose a basemap and enter the additional details if so desired.
5. Select Encode to JSON.

These steps will produce a map application similar to the following one. We'll take a look at how to restore the labels:

Summary

In this article, using the site selection example, we covered basic vector data ETL, raster analysis, and web map creation. We started with vector data, and after unifying the CRS, we prepared the attribute tables. We then filtered the data and converted it to raster grids using batch processing. Finally, we published the prepared vector output with QGIS2Leaf as a simple Leaflet web map application, with a strong foundation for extension.

Resources for Article:

Further resources on this subject:
- Style Management in QGIS [article]
- Preparing to Build Your Own GIS Application [article]
- Geocoding Address-based Data [article]

Packt
16 Sep 2015
18 min read

Straight into Blender!

In this article by Romain Caudron and Pierre-Armand Nicq, the authors of Blender 3D By Example, you will start getting familiar with Blender. (For more resources related to this topic, see here.)

Here, navigation within the interface will be presented. Its approach is atypical in comparison to other 3D software, such as Autodesk Maya® or Autodesk 3DS Max®, but once you get used to it, it is extremely effective. If you have had the opportunity to use Blender before, it is important to note that the interface went through changes during the evolution of the software (especially since version 2.5). We will give you an idea of the possibilities that this wonderful free and open source software offers by presenting different workflows. You will learn some vocabulary and key concepts of 3D creation so that you do not get lost during your learning. Finally, you will have a brief introduction to the projects that we will carry out throughout this book. Let's dive into the third dimension!

The following topics will be covered in this article:

- Learning some theory and vocabulary
- Navigating the 3D viewport
- How to set up preferences
- Using keyboard shortcuts to save time

An overview of the 3D workflow

Before learning how to navigate the Blender interface, we will give you a short introduction to the 3D workflow.

An anatomy of a 3D scene

To start learning about Blender, you need to understand some basic concepts. Don't worry, there is no need for special knowledge of mathematics or programming to create beautiful 3D objects; it only requires curiosity. Some artistic notions are a plus.

All the 3D elements that you will handle evolve in a scene. This is a three-dimensional space with a coordinate system composed of three axes. In Blender, the x axis represents the width, the y axis the depth, and the z axis the height. Some software packages use a different convention and swap the y and z axes.
These axes are color-coded, and we advise you to remember them: the x axis in red, the y axis in green, and the z axis in blue. A scene may have any scale you want, and you can adjust it according to your needs. It looks like a film set for a movie. A scene can be populated by one or more cameras, lights, models, rigs, and many other elements, and you will have control over their placement and setup.

A 3D scene looks like a film set.

A mesh is made of vertices, edges, and faces. The vertices are points in the scene space that are placed at the ends of the edges. They can be thought of as 3D points in space, and the edges connect them. Connected together, the edges and the vertices form a face, also called a polygon. This is a geometric plane that has several sides, as its name suggests. In 3D software, a polygon is constituted of at least three sides. It is often essential to favor four-sided polygons during modeling for a better result. You will have an opportunity to see this in more detail later. Your actors and environments will be made of polygonal objects, more commonly called meshes. If you have played old 3D games, you've probably noticed the very angular outline of the characters; it was, in fact, due to a low polygon count.

We must clarify that the orientation of the faces is important for your polygonal object to be illuminated. Each face has a normal: a perpendicular vector that indicates the direction the polygon is facing. In order for a surface to be visible, its normals must point to the outside of the model, except in special cases where the interior of a polygonal object is empty and invisible. You will be able to create your actors and environments as if you were handling virtual clay to give them the desired shape.

Anatomy of a 3D Mesh

To make your characters presentable, you will have to create their textures, which are 2D images that will be mapped onto the 3D object.
UV coordinates are necessary in order to project the texture onto the mesh. Imagine an origami paper cube that you are going to unfold; this is roughly the same idea. These details are contained in a square space holding a representation of the mesh laid flat. You can paint the texture of your model in your favorite software, or even in Blender itself.

This is a representation of the UV mapping process. The texture on the left is projected onto the 3D model on the right.

After this, you can give the illusion of life to your virtual actors by animating them. For this, you will place animation keys spaced along the timeline. If you change the state of the object between two keyframes, you get the illusion of movement: animation. To move the characters, there is a very interesting process that uses a bone system, mimicking the mechanism of a real skeleton. Your polygonal object is attached to the skeleton, with a weight assigned to the vertices for each bone, so when you animate the bones, the mesh components follow them.

Once your characters, props, or environment are ready, you can choose a focal length and an adequate framing for your camera. In order to light your scene, the choice of the render engine is important for the kind of lamps to use, but there are usually three types of lamps, as used in cinema productions, and you will have to place them carefully. There are directional lights, which behave like the sun and produce hard shadows. There are omnidirectional lights, which allow you to simulate diffuse light, illuminating everything around them and casting soft shadows. There are also spots, which simulate a conical beam. As in the film industry and other image-creation fields, good lighting is a must-have in order to sell the final picture. Lighting is an expressive and narrative element that can magnify your models, or make them irrelevant.

Once everything is in place, you are going to make a render.
You will have a choice between a still image and an animated sequence. All the given parameters for the lights and materials will be calculated by the render engine. Some render engines offer an approach based on physics, with rays that are launched from the camera; Cycles is a good example of this kind of engine and succeeds in producing very realistic renders. Others take a much simpler, though no less technical, approach based on the elements visible from the camera. All of this is an overview of what you will be able to achieve while reading this book and following along with Blender.

What can you do with Blender?

In addition to being completely free and open source, Blender is a powerful, stable tool with an integrated workflow that will ease your learning of 3D creation. Software updates are very frequent; they fix bugs and, more importantly, add new features. You will not feel alone, as Blender has an active and passionate community around it. There are many sites providing tutorials, and official documentation detailing the features of Blender.

You will be able to carry out everything you need in Blender, including things that are unusual for a 3D package, such as concept art creation, sculpting, or digital postproduction, which we have not yet discussed, including compositing and video editing. This is particularly interesting in order to push the aesthetics of your future images and movies to another level. It is also possible to make video games. Note that the Blender game engine is still largely unknown and underestimated. Although this aspect of the software is not as developed as other specialized game engines, it is possible to make good quality games without switching to another software package. You will realize that the possibilities are enormous, and you will be able to adjust your workflow to suit your needs and desires.
Software of this type can scare you with its unusual handling and its complexity, but you'll realize that once you have learned the basics, it is really intuitive in many ways.

Getting used to the navigation in Blender

Now that you have been introduced to the 3D workflow, you will learn how to navigate the Blender interface, starting with the 3D viewport.

An introduction to the navigation of the 3D Viewport

It is time to learn how to navigate in the Blender viewport. The viewport represents the 3D space, in which you will spend most of your time. As we previously said, it is defined by three axes (x, y, and z). Its main goal is to display the 3D scene from a certain point of view while you're working on it.

The Blender 3D Viewport

When you navigate through it, it is as if you were a movie director, but with special powers that allow you to film from any point of view. The navigation is defined by three main actions: pan, orbit, and zoom.

The pan action means that you move horizontally or vertically according to your current point of view. To connect this to our cameraman metaphor, it's as if you were moving laterally to the left or to the right, or moving up or down with a camera crane. By default in Blender, the shortcut to pan around is to press the Shift key and the Middle Mouse Button (MMB) and drag the mouse.

The orbit action means that you rotate around the point that you are focusing on. For instance, imagine that you are filming a romantic scene with two actors and you rotate around them in a circular manner; in this case, the couple is the main focus. In a 3D scene, your main focus would be a 3D character, a light, or any other 3D object. To orbit around in the Blender viewport, the default shortcut is to press the MMB and drag the mouse.

The last action that we mentioned is zoom. The zoom action is straightforward.
It is the action of moving our point of view closer to an element or further away from it. In Blender, you can zoom in by scrolling your mouse wheel up and zoom out by scrolling your mouse wheel down.

To gain time and precision, Blender proposes some predefined points of view. For instance, you can quickly switch to a top view by pressing numpad 7, a front view by pressing numpad 1, and a side view by pressing numpad 3. Last but not least, numpad 0 takes you into the Camera view, which represents the final render point of view of your scene. You can also press numpad 5 to activate or deactivate the orthographic mode. The orthographic mode removes perspective; it is very useful if you want to be precise, and it feels as if you were manipulating a blueprint of the 3D scene.

The difference between Perspective (left) and Orthographic (right)

If you are lost, you can always look at the top left corner of the viewport to see which view you are in, and whether the orthographic mode is on or off. Try to learn all these shortcuts by heart; you will use them a lot, and with repetition they will become a habit.

What are editors?

In Blender, the interface is divided into subpanels that we call editors; even the menu bar where you save your file is an editor. Each editor gives you access to tools categorized by their functionality. You have already used an editor: the 3D view. Now it's time to learn more about the editor's anatomy.

In this picture, you can see how Blender is divided into editors

The anatomy of an editor

There are 17 different editors in Blender, and they all share the same base. An editor is composed of a Header, which is a menu that groups different options related to the editor. The first button of the header is for switching between editors. For instance, you can replace the 3D view with the UV Image Editor by clicking on it.
You can easily change the header's place by right-clicking on it in an empty space and choosing the Flip to Top/Bottom option. The header can be hidden by selecting its top edge and pulling it down. If you want to bring it back, press the little plus sign at the far right.

The header of the 3D viewport. The first button is for switching between editors; also, we can choose between different options in the menu.

In some editors, you can get access to hidden panels that give you other options. For instance, in the 3D view you can press the T key or the N key to toggle them on or off. As with the header, if a subpanel of an editor is hidden, you can click on the little plus sign to display it again.

Split, Join, and Detach

Blender offers you the possibility of creating editors where you want. To do this, you need to right-click on the border of an editor and select Split Area in order to choose where to separate them.

Right-click on the border of an editor to split it into two editors

The current editor will then be split into two editors. Now you can switch to any other editor that you desire by clicking on the first button of the header bar. If you want to merge two editors into one, you can right-click on the border that separates them and select the Join Area button. You will then have to click on the editor that you want to erase by pointing the arrow at it.

Use the Join Area option to join two editors together

We are going to see another nice method of splitting editors: you can drag the top right corner of an editor and another editor will magically appear! If you want to join two editors back together, drag the top right corner in the direction of the editor that you want to remove. This last manipulation can be tricky at first, but with a little bit of practice, you will be able to do it with your eyes closed!
The top right corner of an editor

If you have multiple monitors, it can be a great idea to detach some editors into a separate window. With this, you gain space and won't be overwhelmed by a condensed interface. In order to do this, press the Shift key and drag the top right corner of the editor with the Left Mouse Button (LMB).

Some useful layout presets

Blender offers you many predefined layouts that depend on the context of your creation. For instance, you can select the Animation preset in order to have all the major animation tools, or you can use the UV Editing preset in order to prepare your texturing. To switch between the presets, go to the top of the interface (in the Info editor, near the Help menu) and click on the drop-down menu. If you want, you can add new presets by clicking on the plus sign or delete presets by clicking on the X button. If you want to rename a preset, simply enter a new name in the corresponding text field. The following screenshot shows the Layout presets drop-down menu:

The layout presets drop-down menu

Setting up your preferences

When we start learning new software, it's good to know how to set up our preferences. Blender has a large number of options, but we will show you just the basic ones, in order to change the default navigation style or to add new tools, which are called add-ons in Blender.

An introduction to the Preferences window

The preferences window can be opened by navigating to the File menu and selecting the User Preferences option. If you want, you can use the Ctrl + Alt + U shortcut, or the Cmd and comma keys on a Mac system. There are seven tabs in this window, as shown here:

The different tabs that compose the Preferences window

A nice thing that Blender offers is the ability to change its default theme. For this, you can go to the Themes tab and choose between different presets, or even change the aspect of each interface element.
Another useful setting to change is the number of undo steps, which is 32 by default. To change this number, go to the Editing tab and, under the Undo label, slide Steps to the desired value.

Customizing the default navigation style

We will now show you how to use a different style of navigation in the viewport. In many other 3D programs, such as Autodesk Maya®, you can use the Alt key in order to navigate in the 3D view. To activate this in Blender, navigate to the Input tab and, under the Mouse section, check the Emulate 3 Button Mouse option. Now if you want to use this navigation style in the viewport, you can press Alt and the LMB to orbit around, Ctrl + Alt and the LMB to zoom, and Alt + Shift and the LMB to pan. Remember these shortcuts, as they will be very useful when we enter sculpting mode while using a pen tablet. The Emulate 3 Button Mouse checkbox is shown as follows:

The Emulate 3 Button Mouse option will be very useful when sculpting using a pen tablet

Another useful setting is Emulate Numpad. It allows you to use the numeric keys above the QWERTY keys in addition to the numpad keys. This is very useful for changing views if you have a laptop without a numpad, or if you want to improve your workflow speed.

The Emulate Numpad option allows you to use the numeric keys above the QWERTY keys in order to switch views or toggle the perspective on or off

Improving Blender with add-ons

If you want even more tools, you can install what are called add-ons in your copy of Blender. Add-ons, also called plugins or scripts, are Python files with the .py extension. By default, Blender comes with many disabled add-ons, ordered by category. We will now activate two very useful add-ons that will improve our speed while modeling.

First, go to the Add-ons tab and click on the Mesh button in the category list at the left. Here, you will see all the default mesh add-ons available.
Click on the checkboxes at the left of the Mesh: F2 and Mesh: LoopTools subpanels in order to activate these add-ons. If you know the name of the add-on you want to activate, you can try to find it by typing its name in the search bar. There are many websites where you can download free add-ons, starting with the official Blender website. If you want to install a script, click on the Install from File button and you will be asked to select the corresponding Python file.

The official Blender Add-ons Catalog can be found at http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts.

The following screenshot shows the steps for activating the add-ons:

Steps for add-on activation

Where are the add-ons on the hard disk? All the scripts are placed in the addons folder that is located wherever you have installed Blender on your hard disk. This folder will usually be at Your Installation Path\Blender Foundation\Blender\2.VersionNumber\scripts\addons. If you find it easier, you can drop the Python files here instead of using the standard installation. Don't forget to click on the Save User Settings button in order to save all your changes!

Summary

In this article, you learned the steps behind 3D creations. You know what a mesh is and what it is composed of. Then you were introduced to navigation in Blender by manipulating the 3D viewport and going through the user preferences menu. In the later sections, you configured some preferences and extended Blender by activating some add-ons.

Resources for Article:

Further resources on this subject:
- Editing the UV islands [article]
- Working with Blender [article]
- Designing Objects for 3D Printing [article]

Liz Tom
16 Sep 2015
6 min read

How to Deploy a Simple Django App Using AWS

So you've written your first Django app, and now you want to show the world your awesome To Do List. If you're like me, your first Django app was built from the awesome Django tutorial on their site. You may have heard of AWS. What exactly is it, and how does it pertain to getting your app out there?

AWS is Amazon Web Services. They have many different products, but we're just going to focus on one today: Elastic Compute Cloud (EC2), which provides scalable virtual private servers.

So you have your Django app and it runs beautifully locally. The goal is to reproduce everything, but on Amazon's servers. Note: there are many different ways to set up your servers; this is just one way. You can and should experiment to see what works best for you.

Application Server

First up, we're going to need to spin up a server to host your application. Let's back up, since the very first step would actually be to sign up for an AWS account; please make sure to do that first. Now that we're back on track, log into your account and go to your management dashboard. Click on EC2 under Compute, then click "Launch Instance".

Now choose your operating system. I use Ubuntu because that's what we use at work. Basically, you should choose an operating system that is as close as possible to the one you develop on.

Step 2 has you choosing an instance type. Since this is a small app and I want to stay in the free tier, the t2.micro will do. When you have a production-ready app, you can read up more on EC2 instance types here. Basically, you can add more power to your EC2 instance as you move up.

Step 3: Click Next: Configure Instance Details. For a simple app, we don't need to change anything on this page. One thing to note is the Purchasing option: there are three different EC2 purchasing options, Spot Instances, Reserved Instances, and Dedicated Instances. Since we're still on the free tier, let's not worry about this for now.
Step 4: Click Next: Add Storage. You don't need to change anything here.

Step 5: Click Next: Tag Instance. You also don't need to change anything here, but if you're managing a lot of EC2 instances, it's probably a good idea to tag them.

Step 6: Click Next: Configure Security Group. Under Type, select HTTP, and the rest should autofill. Otherwise, you will spend hours wondering why Nginx hates you and doesn't want to work.

Finally, click Launch. A modal should pop up prompting you to select an existing key pair or create a new key pair. Unless you already have an existing key pair, select Create a new key pair and give it a name. You have to download this file; make sure to keep it somewhere safe and somewhere you will remember. You won't be able to download this file again, but you can always spin up another EC2 instance and create a new key. Click Launch Instances! You did it! You launched an EC2 instance!

Configuring your EC2 Instance

I'm sorry to tell you that your journey is not over. You'll still need to configure your server with everything it needs to run your Django app. Click View Instances. This should bring you to a panel that shows whether your instance is running. You'll need to grab your public IP address from here.

Remember that private key you downloaded? You'll be needing it for this step. Open your terminal:

cd path/to/your/secret/key
chmod 400 your_key-pair_name.pem

The chmod 400 command sets the permissions on the key so that only you can read it. Now let's SSH into your instance:

ssh -i path/to/your/secret/key/your_key-pair_name.pem ubuntu@IP-ADDRESS

Since we're running Ubuntu and will be using apt, we need to make sure that apt is up to date:

sudo apt-get update

Then you need your web server (nginx):

sudo apt-get install nginx

Since we installed Ubuntu 14.04, Nginx starts up automatically.
You should be able to visit your public IP address and see a screen that says Welcome to nginx! Great: nginx was installed correctly and is all booted up. Let's get your app on there!

Since this is a Django project, you'll need to install Django on your server:

sudo apt-get install python-pip
sudo pip install virtualenv
sudo apt-get install git

Pull your project down from GitHub:

git clone my-git-hub-url

In your project's root directory, make sure you have at minimum a requirements.txt file with the following:

django
gunicorn

Side note: gunicorn is a Python WSGI HTTP server for UNIX. You can find out more here.

Make a virtualenv and install your pip requirements using:

pip install -r requirements.txt

Now you should have django and gunicorn installed. Since nginx starts automatically, you'll want to shut it down:

sudo service nginx stop

Now turn on gunicorn by running:

gunicorn app-name.wsgi

Now that gunicorn is up and running, it's time to turn nginx back on:

cd /etc/nginx
sudo vi nginx.conf

Within the http block, either at the top or the bottom, insert this block:

server {
    listen 80;
    server_name public-ip-address;
    access_log /var/log/nginx-access.log;
    error_log /var/log/nginx-error.log;
    root /home/ubuntu/project-root;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Now start up nginx again:

sudo service nginx start

Go to your public IP address and you should see your lovely app on the Internet.

The End

Congratulations! You did it. You just deployed your awesome Django app using AWS. Do a little dance, pat yourself on the back, and feel good about what you just accomplished! One note, though: as soon as you close your connection and terminate gunicorn, your app will no longer be running. You'll need to set up something like Upstart to keep your app running all the time. Hope you had fun!
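As a closing aside on gunicorn: the app-name.wsgi module it loads is just a Python module exposing a WSGI callable named application (Django generates one for you via get_wsgi_application()). The following is a minimal hand-written stand-in, useful for verifying the gunicorn and nginx plumbing before wiring in Django itself; the module name you'd point gunicorn at is hypothetical:

```python
# A minimal WSGI application of the kind gunicorn serves.
# Django's generated wsgi.py exposes an `application` callable too;
# this stand-in just returns plain text.
def application(environ, start_response):
    body = b"Hello from gunicorn!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Save it as, say, hello.py and run gunicorn hello to serve it on 127.0.0.1:8000, the address the nginx proxy_pass above points at.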
About the author Liz Tom is a Creative Technologist at iStrategyLabs in Washington D.C. Liz’s passion for full stack development and digital media makes her a natural fit at ISL. Before joining iStrategyLabs, she worked in the film industry doing everything from mopping blood off of floors to managing budgets. When she’s not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.

Packt
16 Sep 2015
7 min read

Configuring and Securing a Virtual Private Cloud

In this article by Aurobindo Sarkar and Sekhar Reddy, authors of the book Amazon EC2 Cookbook, we will cover recipes for:

- Configuring VPC DHCP options
- Configuring networking connections between two VPCs (VPC peering)

(For more resources related to this topic, see here.)

In this article, we will focus on recipes to configure AWS VPC (Virtual Private Cloud) against typical network infrastructure requirements. VPCs help you isolate AWS EC2 resources, and this feature is available in all AWS regions. A VPC can span multiple availability zones in a region. AWS VPC also helps you run hybrid applications on AWS by extending your existing data center into the public cloud. Disaster recovery is another common use case for AWS VPC.

You can create subnets, routing tables, and internet gateways in a VPC. By creating public and private subnets, you can put your web and frontend services in a public subnet, and your application databases and backend services in a private subnet. Using a VPN, you can extend your on-premises data center. Another option for extending your on-premises data center is AWS Direct Connect, which is a private network connection between AWS and your on-premises data center. In a VPC, EC2 resources get static private IP addresses that persist across reboots, which works in the same way as a DHCP reservation. You can also assign multiple IP addresses and Elastic Network Interfaces. You can have a private ELB accessible only within your VPC. You can use CloudFormation to automate the VPC creation process. Defining appropriate tags can help you manage your VPC resources more efficiently.

Configuring VPC DHCP options

DHCP option sets are associated with your AWS account, so they can be used across all your VPCs. You can assign your own domain name to your instances by specifying a set of DHCP options for your VPC. However, only one DHCP option set can be associated with a VPC at a time. Also, you can't modify a DHCP option set after it is created.
In case you want to use a different set of DHCP options, you will need to create a new DHCP option set and associate it with your VPC. There is no need to restart or relaunch the instances in the VPC after associating the new DHCP option set, as they automatically pick up the changes.

How to Do It…

In this section, we will create a DHCP option set and then associate it with our VPC.

1. Create a DHCP option set with a specific domain name and domain name servers. In our example, we specify the domain name testdomain.com and the DNS servers 10.2.5.1 and 10.2.5.2 as our DHCP options:

$ aws ec2 create-dhcp-options --dhcp-configurations Key=domain-name,Values=testdomain.com Key=domain-name-servers,Values=10.2.5.1,10.2.5.2

2. Associate the DHCP option set with your VPC (vpc-bb936ede):

$ aws ec2 associate-dhcp-options --dhcp-options-id dopt-dc7d65be --vpc-id vpc-bb936ede

How it works…

DHCP provides a standard for passing configuration information to hosts in a network. The DHCP message contains an options field in which parameters such as the domain name and the domain name servers can be specified. By default, instances in AWS are assigned an unresolvable host name, hence we need to assign our own domain name and use our own DNS servers. The DHCP option sets are associated with the AWS account and can be used across our VPCs.

First, we create a DHCP option set. In this step, we specify the DHCP configuration parameters as key/value pairs, where commas separate the values and multiple pairs are separated by spaces. In our example, we specify two domain name servers and a domain name. We can use up to four DNS servers. Next, we associate the DHCP option set with our VPC to ensure that all existing and new instances launched in our VPC will use this DHCP option set.
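The same options can be expressed from Python rather than the CLI. The sketch below builds the list-of-records structure that boto3's EC2 create_dhcp_options call takes as its DhcpConfigurations argument; the dhcp_configurations helper is ours, and the final call is shown only as a comment, so treat the exact shape as an assumption to verify against the boto3 documentation:

```python
def dhcp_configurations(domain_name, dns_servers):
    # One record per DHCP option: a Key plus a list of Values
    # (up to four DNS servers, per the recipe above).
    return [
        {"Key": "domain-name", "Values": [domain_name]},
        {"Key": "domain-name-servers", "Values": list(dns_servers)},
    ]

configs = dhcp_configurations("testdomain.com", ["10.2.5.1", "10.2.5.2"])
# e.g. ec2_client.create_dhcp_options(DhcpConfigurations=configs)
```

Keeping the option building separate from the API call makes the structure easy to inspect and test before anything touches your account.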
Note that if you want to use a different set of DHCP options, you will need to create a new set and associate it with your VPC, as modifications to a set of DHCP options are not allowed. In addition, you can let the instances pick up the changes automatically or explicitly renew the DHCP lease. However, in all cases, only one set of DHCP options can be associated with a VPC at any given time. As a good practice, delete a DHCP option set when none of your VPCs are using it and you don't need it any longer.

Configuring networking connections between two VPCs (VPC peering)

In this recipe, we will configure VPC peering. VPC peering helps you connect instances in two different VPCs using their private IP addresses. VPC peering is limited to within a region. However, you can create a VPC peering connection between VPCs that belong to different AWS accounts. The two VPCs that participate in VPC peering must not have matching or overlapping CIDR addresses. To create a VPC peering connection, the owner of the local VPC has to send a request to the owner of the peer VPC, located in the same account or a different account. Once the owner of the peer VPC accepts the request, the VPC peering connection is activated. You will need to update the routes in your route table to send traffic to the peer VPC and vice versa. You will also need to update your instance security groups to allow traffic from and to the peer VPC.

How to Do It…

Here, we present the commands for creating a VPC peering connection, accepting a peering request, and adding the appropriate route in your routing table.

Create a VPC peering connection between two VPCs with IDs vpc-9c19a3f4 and vpc-0214e967. Record the VpcPeeringConnectionId for further use:

$ aws ec2 create-vpc-peering-connection --vpc-id vpc-9c19a3f4 --peer-vpc-id vpc-0214e967

Accept the VPC peering connection. Here, we will accept the VPC peering connection request with ID pcx-cf6aa4a6:
$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-cf6aa4a6

Add a route in the route table for the VPC peering connection. The following command creates a route with the destination CIDR (172.31.16.0/20) and VPC peering connection ID (pcx-0e6ba567) in route table rtb-7f1bda1a:

$ aws ec2 create-route --route-table-id rtb-7f1bda1a --destination-cidr-block 172.31.16.0/20 --vpc-peering-connection-id pcx-0e6ba567

How it works…

First, we request a VPC peering connection between two VPCs: a requester VPC that we own (vpc-9c19a3f4) and a peer VPC with which we want to create a connection (vpc-0214e967). Note that the peering connection request expires after 7 days. In order to activate the VPC peering connection, the owner of the peer VPC must accept the request. In our recipe, as the owner of the peer VPC, we accept the VPC peering connection request. However, note that the owner of the peer VPC may be a person other than you. You can use the describe-vpc-peering-connections command to view your outstanding peering connection requests. The VPC peering connection should be in the pending-acceptance state for you to accept the request.

After creating the VPC peering connection, we created a route in our local VPC subnet's route table to direct traffic to the peer VPC. You can also create peering connections between two or more VPCs to provide full access to resources, or peer one VPC to access centralized resources. In addition, peering can be implemented between a VPC and specific subnets, or between instances in one VPC and instances in another VPC. Refer to the Amazon VPC documentation to set up the most appropriate peering connections for your specific requirements.

Summary

In this article, you learned about configuring VPC DHCP options as well as configuring networking connections between two VPCs. The book Amazon EC2 Cookbook covers recipes that relate to designing, developing, and deploying scalable, highly available, and secure applications on the AWS platform.
By following the steps in our recipes, you will be able to effectively and systematically resolve issues related to development, deployment, and infrastructure for enterprise-grade cloud applications or products.

Resources for Article:

Further resources on this subject:
Hands-on Tutorial for Getting Started with Amazon SimpleDB [article]
Amazon SimpleDB versus RDBMS [article]
Amazon DynamoDB - Modelling relationships, Error handling [article]
Remote Desktop to Your Pi from Everywhere

Packt
16 Sep 2015
6 min read
In this article by Gökhan Kurt, author of the book Raspberry Pi Android Projects, we will make a gentle introduction to both the Pi and Android platforms to warm us up. Many users of the Pi face similar problems when they wish to administer it: you have to be near your Pi and connect a screen and a keyboard to it. We will solve this everyday problem by remotely connecting to our Pi desktop interface. The article covers the following topics:

Installing necessary components in the Pi and Android
Connecting the Pi and Android

(For more resources related to this topic, see here.)

Installing necessary components in the Pi and Android

The following image shows you that the LXDE desktop manager comes with an initial setup and a few preinstalled programs:

LXDE desktop management environment

By clicking on the screen image on the tab bar located at the top, you will be able to open a terminal screen that we will use to send commands to the Pi. The next step is to install a component called x11vnc. This is a VNC server for X, the window management component of Linux. Issue the following command on the terminal:

sudo apt-get install x11vnc

This will download and install x11vnc on the Pi. We can even set a password to be used by VNC clients that will remote desktop to this Pi, using the following command and providing a password to be used later on:

x11vnc -storepasswd

Next, we can get the x11vnc server running whenever the Pi is rebooted and the LXDE desktop manager starts. This can be done through the following steps:

Go into the .config directory in the Pi user's home directory located at /home/pi:

cd /home/pi/.config

Make a subdirectory here named autostart:

mkdir autostart

Go into the autostart directory:

cd autostart

Start editing a file named x11vnc.desktop.
As a terminal editor, I am using nano, which is the easiest one to use on the Pi for novice users, but there are more exciting alternatives, such as vi:

nano x11vnc.desktop

Add the following content to this file:

[Desktop Entry]
Encoding=UTF-8
Type=Application
Name=X11VNC
Comment=
Exec=x11vnc -forever -usepw -display :0 -ultrafilexfer
StartupNotify=false
Terminal=false
Hidden=false

Save and exit (Ctrl+X, Y, Enter) if you are using nano as the editor of your choice. Now you should reboot the Pi to get the server running, using the following command:

sudo reboot

After rebooting, we can find out what IP address our Pi has been given by issuing the ifconfig command in the terminal window. The IP address assigned to your Pi is found under the eth0 entry and is given after the inet addr keyword. Write this address down:

Example output from the ifconfig command

The next step is to download a VNC client to your Android device. In this project, we will use a freely available client for Android, namely androidVNC, or as it is named in the Play Store, VNC Viewer for Android by androidVNC team + antlersoft. The latest version in use at the time of writing this book was 0.5.0. Note that in order to be able to connect your Android VNC client to the Pi, both the Pi and the Android device should be connected to the same network: Android through Wi-Fi and the Pi through its Ethernet port.

Connecting the Pi and Android

Install and open androidVNC on your device. You will be presented with a first activity user interface asking for the details of the connection. Here, you should provide a Nickname for the connection, the Password you entered when you ran the x11vnc -storepasswd command, and the IP Address of the Pi that you found using the ifconfig command. Initiate the connection by pressing the Connect button, and you should now be able to see the Pi desktop on your Android device.
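The autostart steps above can be collected into a single script run once on the Pi. This is a rough sketch; the TARGET override is our own addition, handy for previewing the generated file somewhere other than the real autostart directory:

```shell
#!/bin/sh
# Sketch: recreate the autostart entry from the steps above in one shot.
# TARGET defaults to the pi user's LXDE autostart directory; override it
# to preview the generated file elsewhere.
TARGET="${TARGET:-$HOME/.config/autostart}"
mkdir -p "$TARGET"

# Write the same desktop entry shown above via a heredoc.
cat > "$TARGET/x11vnc.desktop" <<'EOF'
[Desktop Entry]
Encoding=UTF-8
Type=Application
Name=X11VNC
Comment=
Exec=x11vnc -forever -usepw -display :0 -ultrafilexfer
StartupNotify=false
Terminal=false
Hidden=false
EOF

echo "Wrote $TARGET/x11vnc.desktop - reboot for LXDE to pick it up"
```

Running it as the pi user has the same effect as the manual cd/mkdir/nano sequence, and it is repeatable if you ever reflash the SD card.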
In androidVNC, you should be able to move the mouse pointer by clicking on the screen, and under the options menu in the androidVNC app, you will find out how to send text and keys to the Pi with the help of Enter and Backspace. You may even find it convenient to connect to the Pi from another computer. I recommend using RealVNC for this purpose, which is available on Windows, Linux, and Mac OS.

What if I want to use Wi-Fi on the Pi?

In order to use a Wi-Fi dongle on the Pi, first of all, open the wpa_supplicant configuration file using the nano editor with the following command:

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Add the following to the end of this file:

network={
ssid="THE ID OF THE NETWORK YOU WANT TO CONNECT"
psk="PASSWORD OF YOUR WIFI"
}

I assume that you have set up your wireless home network to use WPA-PSK as the authentication mechanism. If you have another mechanism, you should refer to the wpa_supplicant documentation. LXDE provides even better ways to connect to Wi-Fi networks through a GUI. It can be found in the upper-right corner of the desktop environment on the Pi.

Connecting from everywhere

Now, we have connected to the Pi from our device, which needs to be connected to the same network as the Pi. However, most of us would like to connect to the Pi from around the world as well. To do this, first of all, we need to know the IP address of the home network assigned to us by our network provider. By going to the http://whatismyipaddress.com URL, we can figure out what our home network's IP address is. The next step is to log in to our router and open up requests to the Pi from around the world. For this purpose, we will use a functionality found on most modern routers called port forwarding.

Be aware of the risks involved in port forwarding. You are opening up access to your Pi from all around the world, even to malicious users. I strongly recommend that you change the default password of the user pi before performing this step.
You can change passwords using the passwd command. By logging on to the router's management portal and navigating to the Port Forwarding tab, we can open up requests to the Pi's internal network IP address, which we figured out previously, and the default port of the VNC server, which is 5900. Now, we can provide our external IP address to androidVNC from anywhere around the world, instead of the internal IP address that works only if we are on the same network as the Pi.

Port forwarding settings on the Netgear router administration page

Refer to your router's user manual to see how to change the port forwarding settings. Most routers require you to connect through the Ethernet port, instead of Wi-Fi, in order to access the management portal.

Summary

In this article, we installed Raspbian, warmed up with the Pi, and connected to the Pi using an Android device.

Resources for Article:

Further resources on this subject:
Raspberry Pi LED Blueprints [article]
Color and motion finding [article]
From Code to the Real World [article]
Implementing Microsoft Dynamics AX

Packt
16 Sep 2015
6 min read
In this article by Yogesh Kasat and JJ Yadav, authors of the book Microsoft Dynamics AX Implementation Guide, you will learn one of the important topics in the Microsoft Dynamics AX implementation process: configuration data management.

(For more resources related to this topic, see here.)

The configuration of an ERP system is one of the most important parts of the process. Configuration means setting up the base data and parameters to enable product features such as financials, shipping, sales tax, and so on. Microsoft Dynamics AX has been developed based on the generic requirements of various organizations and contains business processes belonging to diverse business segments. It is a very configurable product that allows the implementation team to configure features based on specific business needs. During the project, the implementation team identifies the relevant components of the system and sets up and aligns these components to meet the specific business requirements. This process starts in the analysis phase of the project, carrying on through the design, development, and deployment phases.

Configuration management is different from data migration. Data migration broadly covers the transactional data of the legacy system and core data such as opening balances, open AR, open AP, customers, vendors, and so on. When we talk about configuration management, we are referring to items like fiscal years and periods, chart of accounts, segments and their applicable rules, journal types, customer groups, terms of payment, module-based parameters, workflows, number sequences, and the like. In a broader sense, configuration covers the basic parameters, setup data, and reference data that you configure for the different modules in Dynamics AX. The following diagram shows the different phases of configuration management:

In any ERP implementation project, you deal with multiple environments.
For example, you start with CRP; after the development, you move to the test environment, and then training, UAT, and production, as shown in the following diagram:

One of the biggest challenges that an implementation team faces is moving the configuration from one environment to another. If configurations keep changing in every environment, it becomes more difficult to manage them. Similar to code promotion and release management across environments, configuration changes need to be tracked through a change-control process across environments to ensure that you are testing with a consistent set of configurations. The objective is to keep track of all the configuration changes and make sure that they make it to the final cut in the production environment. The following sections outline some approaches used for configuration data management in Dynamics AX projects.

The golden environment

The golden environment is a pristine environment without any transactions; it is sometimes referred to as a stage or pre-prod environment. Create the configurations from scratch and/or use various tools to create and update the configuration data. Develop a process to update the configuration in the golden environment once it has been changed and approved in the test environments. The golden environment can be turned into the production environment, or its data can be copied over to the production environment using a database restore.

The golden environment database can be used as a starting point for every run of data migration. For example, if you are preparing for UAT, use the golden environment database as a starting point: copy it to UAT and perform data migration in your UAT environment. This would ensure that you are testing with the golden configurations every time (if a configuration is missing in the golden environment, you will be able to catch it during testing and fix both your UAT environment and the golden environment).
The pros of the golden environment are given as follows:

The golden environment is a single environment for controlling the configuration data
It uses all the tools available for the initial configuration
There is less chance of corruption of the configuration data

The cons of the golden environment are given as follows:

There is a risk of missing configuration updates due to not following the processes (as the configuration updates are made directly in the testing and UAT environments).
There is a chance of migrating revision data into the production environment, such as workflow history, address revisions, and policy versions.
There is a risk of migrating environment-specific data from the golden environment to the production environment.
It is not useful for a project going live in multiple phases, as you will not be able to transfer the incremental configuration data using a database restore.
You must keep the environment in sync with the latest code.

Copying the template company

In this approach, the implementation team typically defines a template legal entity and configures the template company from scratch. Once completed, the template company's configuration data is copied over to the actual legal entity using the data export/import process. This approach is useful for projects going live in multiple phases, where a global template is created and used across different legal entities. However, in AX 2012, a lot of configuration data is shared, which makes it almost impossible to copy the company data.

Building configuration templates

In this approach, the implementation team typically builds a repository of all the configurations in a file, imports them in each subsequent environment, and finally, in the production environment.

The pros of building configuration templates are as follows:

It is a clean approach.
You can version-control the configuration file.
This approach is very useful for projects going live in multiple phases, as you can import the incremental configuration data in the subsequent releases.

On the downside, this approach may need significant development effort to create the X++ scripts or DIXF custom entities to import all the required configurations.

Summary

Clearly, there are several options to choose from for configuration data management, but each has its own pros and cons. While building configuration templates is the ideal solution for configuration data management, it could be costly, as it may need significant development effort to build custom entities to export and import data across environments. The golden environment process is widely used on implementation projects, as it is easy to manage and requires minimal development team involvement.

Resources for Article:

Further resources on this subject:
Web Services and Forms [article]
Setting Up and Managing E-mails and Batch Processing [article]
Integrating Microsoft Dynamics GP Business Application Fundamentals [article]
Recommender Systems

Packt
16 Sep 2015
6 min read
In this article by Suresh K Gorakala and Michele Usuelli, authors of the book Building a Recommendation System with R, we will learn how to prepare relevant data by covering the following topics:

Selecting the most relevant data
Exploring the most relevant data
Normalizing the data
Binarizing the data

(For more resources related to this topic, see here.)

Data preparation

Here, we show how to prepare the data to be used in recommender models. These are the steps:

Select the relevant data.
Normalize the data.

Selecting the most relevant data

On exploring the data, you will notice that the table contains:

Movies that have been viewed only a few times; their ratings might be biased because of lack of data
Users that rated only a few movies; their ratings might be biased

We need to determine the minimum number of users per movie and vice versa. The correct solution comes from an iteration of the entire process of preparing the data, building a recommendation model, and validating it. Since we are implementing the model for the first time, we can use a rule of thumb. After having built the models, we can come back and modify the data preparation. We define ratings_movies as the matrix that we will use. It takes the following into account:

Users who have rated at least 50 movies
Movies that have been watched at least 100 times

The following code shows this:

ratings_movies <- MovieLense[rowCounts(MovieLense) > 50, colCounts(MovieLense) > 100]
ratings_movies
## 560 x 332 rating matrix of class 'realRatingMatrix' with 55298 ratings.

ratings_movies contains about half the number of users and a fifth of the number of movies that MovieLense has.
Exploring the most relevant data

Let's visualize the top 2 percent of users and movies in the new matrix:

# visualize the top matrix
min_movies <- quantile(rowCounts(ratings_movies), 0.98)
min_users <- quantile(colCounts(ratings_movies), 0.98)

Let's build the heat-map:

image(ratings_movies[rowCounts(ratings_movies) > min_movies, colCounts(ratings_movies) > min_users], main = "Heatmap of the top users and movies")

As you have already noticed, some rows are darker than others. This might mean that some users give higher ratings to all the movies. However, we have visualized the top movies only. In order to have an overview of all the users, let's take a look at the distribution of the average rating by user:

average_ratings_per_user <- rowMeans(ratings_movies)

Let's visualize the distribution:

qplot(average_ratings_per_user) + stat_bin(binwidth = 0.1) + ggtitle("Distribution of the average rating per user")

As suspected, the average rating varies a lot across different users.

Normalizing the data

Users that give high (or low) ratings to all their movies might bias the results. We can remove this effect by normalizing the data in such a way that the average rating of each user is 0. The prebuilt normalize function does it automatically:

ratings_movies_norm <- normalize(ratings_movies)

Let's take a look at the average rating by user:

sum(rowMeans(ratings_movies_norm) > 0.00001)
## [1] 0

As expected, the mean rating of each user is 0 (apart from the approximation error). We can visualize the new matrix using an image. Let's build the heat-map:

# visualize the normalised matrix
image(ratings_movies_norm[rowCounts(ratings_movies_norm) > min_movies, colCounts(ratings_movies_norm) > min_users], main = "Heatmap of the top users and movies")

The first difference that we can notice is the colors, and that's because the data is continuous. Previously, the rating was an integer number between 1 and 5. After normalization, the rating can be any number between -5 and 5.
There are still some lines that are more blue and some that are more red. The reason is that we are visualizing only the top movies. We already checked that the average rating is 0 for each user.

Binarizing the data

A few recommendation models work on binary data, so we might want to binarize our data, that is, define a table containing only 0s and 1s. The 0s will be treated as either missing values or bad ratings. In our case, we can do either of the following:

Define a matrix that has 1 if the user rated the movie and 0 otherwise. In this case, we lose the information about the rating.
Define a matrix that has 1 if the rating is more than or equal to a definite threshold (for example, 3) and 0 otherwise. In this case, giving a bad rating to a movie is equivalent to not rating it.

Depending on the context, one choice is more appropriate than the other. The function to binarize the data is binarize. Let's apply it to our data. First, let's define a matrix equal to 1 if the movie has been watched, that is, if its rating is at least 1:

ratings_movies_watched <- binarize(ratings_movies, minRating = 1)

Let's take a look at the results. In this case, we will have black-and-white charts, so we can visualize a bigger portion of users and movies, for example, 5 percent. Similar to what we did earlier, let's select the 5 percent using quantile. The row and column counts are the same as the original matrix, so we can still apply rowCounts and colCounts on ratings_movies:

min_movies_binary <- quantile(rowCounts(ratings_movies), 0.95)
min_users_binary <- quantile(colCounts(ratings_movies), 0.95)

Let's build the heat-map:

image(ratings_movies_watched[rowCounts(ratings_movies) > min_movies_binary, colCounts(ratings_movies) > min_users_binary], main = "Heatmap of the top users and movies")

Only a few cells contain non-watched movies. This is just because we selected the top users and movies. Let's use the same approach to compute and visualize the other binary matrix.
Now, each cell is 1 if the rating is above a threshold, for example 3, and 0 otherwise:

ratings_movies_good <- binarize(ratings_movies, minRating = 3)

Let's build the heat-map:

image(ratings_movies_good[rowCounts(ratings_movies) > min_movies_binary, colCounts(ratings_movies) > min_users_binary], main = "Heatmap of the top users and movies")

As expected, we have more white cells now. Depending on the model, we can leave the rating matrix as it is or normalize/binarize it.

Summary

In this article, you learned about data preparation and how you should select, explore, normalize, and binarize the data.

Resources for Article:

Further resources on this subject:
Structural Equation Modeling and Confirmatory Factor Analysis [article]
Warming Up [article]
https://www.packtpub.com/books/content/supervised-learning [article]
CRUD Operations in REST

Packt
16 Sep 2015
11 min read
In this article by Ludovic Dewailly, the author of Building a RESTful Web Service with Spring, we will learn how requests to retrieve data from a RESTful endpoint, created to access the rooms in a sample property management system, are typically mapped to the HTTP GET method in RESTful web services. We will expand on this by implementing some of the endpoints to support all the CRUD (Create, Read, Update, Delete) operations. In this article, we will cover the following topics:

Mapping the CRUD operations to the HTTP methods
Creating resources
Updating resources
Deleting resources
Testing the RESTful operations
Emulating the PUT and DELETE methods

(For more resources related to this topic, see here.)

Mapping the CRUD operations to HTTP methods

The HTTP 1.1 specification defines the following methods:

OPTIONS: This method represents a request for information about the communication options available for the requested URI. This is, typically, not directly leveraged with REST. However, this method can be used as a part of the underlying communication. For example, this method may be used when consuming web services from a web page (as a part of the cross-origin resource sharing mechanism).
GET: This method retrieves the information identified by the request URI. In the context of RESTful web services, this method is used to retrieve resources. This is the method used for read operations (the R in CRUD).
HEAD: HEAD requests are semantically identical to GET requests, except the body of the response is not transmitted. This method is useful for obtaining meta-information about resources. Similar to the OPTIONS method, this method is not typically used directly in REST web services.
POST: This method is used to instruct the server to accept the entity enclosed in the request as a new resource. Create operations are typically mapped to this HTTP method.
PUT: This method requests the server to store the enclosed entity under the request URI. To support the updating of REST resources, this method can be leveraged. As per the HTTP specification, the server can create the resource if the entity does not exist. It is up to the web service designer to decide whether this behavior should be implemented or resource creation should only be handled by POST requests.
DELETE: The last operation not yet mapped is the deletion of resources. The HTTP specification defines a DELETE method that is semantically aligned with the deletion of RESTful resources.
TRACE: This method is used to perform actions on web servers. These actions are often aimed at aiding the development and testing of HTTP applications. TRACE requests aren't usually mapped to any particular RESTful operations.
CONNECT: This HTTP method is defined to support HTTP tunneling through a proxy server. Since it deals with transport layer concerns, this method has no natural semantic mapping to RESTful operations.

The RESTful architecture does not mandate the use of HTTP as a communication protocol. Furthermore, even if HTTP is selected as the underlying transport, no provisions are made regarding the mapping of RESTful operations to HTTP methods. Developers could feasibly support all operations through POST requests. This being said, the following CRUD to HTTP method mapping is commonly used in REST web services:

Operation | HTTP method
Create    | POST
Read      | GET
Update    | PUT
Delete    | DELETE

Our sample web service will use these HTTP methods to support CRUD operations. The rest of this article will illustrate how to build such operations.

Creating resources

The inventory component of our sample property management system deals with rooms. We have already built an endpoint to access the rooms.
Let's take a look at how to define an endpoint to create new resources:

@RestController
@RequestMapping("/rooms")
public class RoomsResource {
    @RequestMapping(method = RequestMethod.POST)
    public ApiResponse addRoom(@RequestBody RoomDTO room) {
        Room newRoom = createRoom(room);
        return new ApiResponse(Status.OK, new RoomDTO(newRoom));
    }
}

We've added a new method to our RoomsResource class to handle the creation of new rooms. @RequestMapping is used to map requests to the Java method. Here, we map POST requests to addRoom(). Not specifying a value (that is, a path) in @RequestMapping is equivalent to using "/". We pass the new room as @RequestBody. This annotation instructs Spring to map the body of the incoming web request to the method parameter. Jackson is used here to convert the JSON request body to a Java object. With this new method, POSTing requests to http://localhost:8080/rooms with the following JSON body will result in the creation of a new room:

{
    "name": "Cool Room",
    "description": "A room that is very cool indeed",
    "room_category_id": 1
}

Our new method will return the newly created room:

{
    "status": "OK",
    "data": {
        "id": 2,
        "name": "Cool Room",
        "room_category_id": 1,
        "description": "A room that is very cool indeed"
    }
}

We could decide to return only the ID of the new resource in response to the resource creation. However, since we may sanitize or otherwise manipulate the data that was sent over, it is a good practice to return the full resource.

Quickly testing endpoints

For the purpose of quickly testing our newly created endpoint, let's look at testing the new rooms created using Postman. Postman (https://www.getpostman.com) is a Google Chrome extension that provides tools to build and test web APIs.
The following screenshot illustrates how Postman can be used to test this endpoint:

In Postman, we specify the URL to send the POST request to, http://localhost:8080/rooms, with the application/json content type header and the body of the request. Sending this request will result in a new room being created and returned, as shown in the following:

We have successfully added a room to our inventory service using Postman. It is equally easy to create incomplete requests to ensure our endpoint performs any necessary sanity checks before persisting data into the database.

JSON versus form data

Posting forms is the traditional way of creating new entities on the web and could easily be used to create new RESTful resources. We can change our method to the following:

@RequestMapping(method = RequestMethod.POST, consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public ApiResponse addRoom(String name, String description, long roomCategoryId) {
    Room room = createRoom(name, description, roomCategoryId);
    return new ApiResponse(Status.OK, new RoomDTO(room));
}

The main difference from the previous method is that we tell Spring to map form requests (that is, with the application/x-www-form-urlencoded content type) instead of JSON requests. In addition, rather than expecting an object as a parameter, we receive each field individually. By default, Spring will use the Java method attribute names to map incoming form inputs. Developers can change this behavior by annotating an attribute with @RequestParam("…") to specify the input name. In situations where the main web service consumer is a web application, using form requests may be more applicable. In most cases, however, the former approach is more in line with RESTful principles and should be favored. Besides, when complex resources are handled, form requests will prove cumbersome to use. From a developer's standpoint, it is easier to delegate object mapping to a third-party library such as Jackson.
Now that we have created a new resource, let's see how we can update it. Updating resources Choosing URI formats is an important part of designing RESTful APIs. As seen previously, rooms are accessed using the /rooms/{roomId} path and created under /rooms. You may recall that as per the HTTP specification, PUT requests can result in creation of entities, if they do not exist. The decision to create new resources on update requests is up to the service designer. It does, however, affect the choice of path to be used for such requests. Semantically, PUT requests update entities stored under the supplied request URI. This means the update requests should use the same URI as the GET requests: /rooms/{roomId}. However, this approach hinders the ability to support resource creation on update since no room identifier will be available. The alternative path we can use is /rooms with the room identifier passed in the body of the request. With this approach, the PUT requests can be treated as POST requests when the resource does not contain an identifier. Given the first approach is semantically more accurate, we will choose not to support resource creation on update, and we will use the following path for the PUT requests: /rooms/{roomId} Update endpoint The following method provides the necessary endpoint to modify the rooms: @RequestMapping(value = "/{roomId}", method = RequestMethod.PUT) public ApiResponse updateRoom(@PathVariable long roomId, @RequestBody RoomDTO updatedRoom) { try { Room room = updateRoom(updatedRoom); return new ApiResponse(Status.OK, new RoomDTO(room)); } catch (RecordNotFoundException e) { return new ApiResponse(Status.ERROR, null, new ApiError(999, "No room with ID " + roomId)); } } As discussed in the beginning of this article, we map update requests to the HTTP PUT verb. Annotating this method with @RequestMapping(value = "/{roomId}", method = RequestMethod.PUT) instructs Spring to direct the PUT requests here. 
The room identifier is part of the path and mapped to the first method parameter. In a fashion similar to the resource creation requests, we map the body to our second parameter with the use of @RequestBody. Testing update requests With Postman, we can quickly create a test case to update the room we created. To do so, we send a PUT request with the following body: { "id": 2, "name": "Cool Room", "description": "A room that is really very cool indeed", "room_category_id": 1 } The resulting response will be the updated room, as shown here: { "status": "OK", "data": { "id": 2, "name": "Cool Room", "room_category_id": 1, "description": "A room that is really very cool indeed." } } Should we attempt to update a nonexistent room, the server will generate the following response: { "status": "ERROR", "error": { "error_code": 999, "description": "No room with ID 3" } } Since we do not support resource creation on update, the server returns an error indicating that the resource cannot be found. Deleting resources It will come as no surprise that we will use the DELETE verb to delete REST resources. Similarly, the reader will have already figured out that the path for delete requests will be /rooms/{roomId}. The Java method that deals with room deletion is as follows: @RequestMapping(value = "/{roomId}", method = RequestMethod.DELETE) public ApiResponse deleteRoom(@PathVariable long roomId) { try { Room room = inventoryService.getRoom(roomId); inventoryService.deleteRoom(room.getId()); return new ApiResponse(Status.OK, null); } catch (RecordNotFoundException e) { return new ApiResponse(Status.ERROR, null, new ApiError( 999, "No room with ID " + roomId)); } } By declaring the request mapping method to be RequestMethod.DELETE, Spring will make this method handle the DELETE requests. Since the resource is deleted, returning it in the response would not make a lot of sense. 
Service designers may choose to return a boolean flag to indicate the resource was successfully deleted. In our case, we leverage the status element of our response to carry this information back to the consumer. The response to deleting a room will be as follows: { "status": "OK" } With this operation, we now have a full-fledged CRUD API for our Inventory Service. Before we conclude this article, let's discuss how REST developers can deal with situations where not all HTTP verbs can be utilized. HTTP method override In certain situations (for example, when the service or its consumers are behind an overzealous corporate firewall, or if the main consumer is a web page), only the GET and POST HTTP methods might be available. In such cases, it is possible to emulate the missing verbs by passing a custom header in the requests. For example, resource updates can be handled using POST requests by setting a custom header (for example, X-HTTP-Method-Override) to PUT to indicate that we are emulating a PUT request via a POST request. The following method will handle this scenario: @RequestMapping(value = "/{roomId}", method = RequestMethod.POST, headers = {"X-HTTP-Method-Override=PUT"}) public ApiResponse updateRoomAsPost(@PathVariable("roomId") long id, @RequestBody RoomDTO updatedRoom) { return updateRoom(id, updatedRoom); } By setting the headers attribute on the mapping annotation, Spring request routing will intercept the POST requests with our custom header and invoke this method. Normal POST requests will still map to the Java method we had put together to create new rooms. Summary In this article, we've completed the implementation of our sample RESTful web service by adding all the CRUD operations necessary to manage the room resources. We've discussed how to organize URIs to best embody the REST principles and looked at how to quickly test endpoints using Postman. 
Now that we have a fully working component of our system, we can take some time to discuss performance. Resources for Article: Further resources on this subject: Introduction to Spring Web Application in No Time[article] Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging[article] Time Travelling with Spring[article]
Prerequisites for a Map Application

Packt
16 Sep 2015
10 min read
In this article by Raj Amal, author of the book Learning Android Google Maps, we will cover the following topics: Generating an SHA1 fingerprint in Windows, Linux, and Mac OS X Registering our application in the Google Developer Console Configuring Google Play services with our application Adding permissions and defining an API key Generating the SHA1 fingerprint Let's learn about generating the SHA1 fingerprint on each platform, one by one. Windows The keytool usually comes with the JDK package. We use the keytool to generate the SHA1 fingerprint. Navigate to the bin directory in your default JDK installation location, which is what you configured in the JAVA_HOME variable, for example, C:\Program Files\Java\jdk1.7.0_71. Then, navigate to File | Open command prompt. Now, the command prompt window will open. Enter the following command, and then hit the Enter key: keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android You will see output similar to what is shown here: Valid from: Sun Nov 02 16:49:26 IST 2014 until: Tue Oct 25 16:49:26 IST 2044 Certificate fingerprints: MD5: 55:66:D0:61:60:4D:66:B3:69:39:23:DB:84:15:AE:17 SHA1: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33 In the preceding output, note down the SHA1 value that is required to register our application with the Google Developer Console: The preceding screenshot is representative of the typical output that is shown when the preceding command is executed. Linux We are going to obtain the SHA1 fingerprint from the debug.keystore file, which is present in the .android folder in your home directory. If you installed Java directly from a PPA, open the terminal and enter the following command: keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android This will return an output similar to the one we've obtained in Windows. 
Note down the SHA1 fingerprint, which we will use later. If you've installed Java manually, you'll need to run keytool from its installation location. You can export the Java JDK path as follows: export JAVA_HOME={PATH to JDK} After exporting the path, run the keytool as follows: $JAVA_HOME/bin/keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android The output of the preceding command is shown as follows: Mac OS X Generating the SHA1 fingerprint in Mac OS X is similar to what you performed in Linux. Open the terminal and enter the command. It will show output similar to what we obtained in Linux. Note down the SHA1 fingerprint, which we will use later: keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android Registering your application with the Google Developer Console This is one of the most important steps in our process. Our application will not function without obtaining an API key from the Google Developer Console. Follow these steps one by one to obtain the API key: Open the Google Developer Console by visiting https://console.developers.google.com and click on the CREATE PROJECT button. A new dialog box appears. Give your project a name and a unique project ID. Then, click on Create: As soon as your project is created, you will be redirected to the Project dashboard. On the left-hand side, under the APIs & auth section, select APIs: Then, scroll down and enable Google Maps Android API v2: Next, under the same APIs & auth section, select Credentials. Select Create new Key under Public API access, and then select Android key in the following dialog: In the next window, enter the SHA1 fingerprint we noted in our previous section followed by a semicolon and the package name of the Android application we wish to register. 
For example, my SHA1 fingerprint value is C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33, and the package name of the app I wish to create is com.raj.map; so, I need to enter the following: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33;com.raj.map You need to enter the value shown in the following screen: Finally, click on Create. Now our Android application will be registered with the Google Developer Console and it will display a screen similar to the following one: Note down the API key from the screen, which will be similar to this: AIzaSyAdJdnEG5vfo925VV2T9sNrPQ_rGgIGnEU Configuring Google Play services Google Play services includes the classes required for our map application, so it must be set up properly. The setup differs between Eclipse with the ADT plugin and Gradle-based Android Studio. Let's see how to configure Google Play services for each separately; it is relatively simple. Android Studio Configuring Google Play services with Android Studio is very simple. You need to add a line of code to your build.gradle file, which contains the Gradle build script required to build our project. There are two build.gradle files. You must add the code to the inner app's build.gradle file. The following screenshot shows the structure of the project: The code should be added to the second Gradle build file, which contains our app module's configuration. Add the following code to the dependencies section in the Gradle build file: compile 'com.google.android.gms:play-services:7.5.0' The structure should be similar to the following code: dependencies { compile 'com.google.android.gms:play-services:7.5.0' compile 'com.android.support:appcompat-v7:21.0.3' } The 7.5.0 in the code is the version number of Google Play services. Change the version number according to your current version. The current version can be found from the values.xml file present in the res/values directory of the Google Play services library project. 
The newest version of Google Play services can be found at https://developers.google.com/android/guides/setup. That's it. Now resync your project. You can sync by navigating to Tools | Android | Sync Project with Gradle Files. Now, Google Play services will be integrated with your project. Eclipse Let's take a look at how to configure Google Play services in Eclipse with the ADT plugin. First, we need to import Google Play services into the workspace. Navigate to File | Import and the following window will appear: In the preceding window, navigate to Android | Existing Android Code Into Workspace. Then click on Next. In the next window, browse to the sdk/extras/google/google_play_services/libproject/google-play-services_lib directory as shown in the following screenshot: Finally, click on Finish. Now, google-play-services_lib will be added to your workspace. Next, let's take a look at how to configure Google Play services with our application project. Select your project, right-click on it, and select Properties. In the Library section, click on Add and choose google-play-services_lib. Then, click on OK. Now, google-play-services_lib will be added as a library to our application project as shown in the following screenshot: In the next section, we will see how to configure the API key and add permissions that will help us to deploy our application. Adding permissions and defining the API key The permissions and API key must be defined in the AndroidManifest.xml file, which provides essential information about the application to the operating system. The manifest file must also specify the OpenGL ES version required to render the map, as well as the Google Play services version. Adding permissions Four permissions are required for our map application to work properly. The permissions should be added inside the <manifest> element. 
The four permissions are as follows: INTERNET ACCESS_NETWORK_STATE WRITE_EXTERNAL_STORAGE READ_GSERVICES Let's take a look at what these permissions are for. INTERNET This permission is required for our application to gain access to the Internet. Since Google Maps relies on real-time Internet access, this permission is essential. ACCESS_NETWORK_STATE This permission gives information about a network and whether we are connected to a particular network or not. WRITE_EXTERNAL_STORAGE This permission is required to write data to external storage. In our application, it is required to cache map data to external storage. READ_GSERVICES This permission allows you to read Google services. The permissions are added to AndroidManifest.xml as follows: <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" /> There are some more permissions that are available but not currently required. Specifying the Google Play services version The Google Play services version must be specified in the manifest file for maps to function. It must be within the <application> element. Add the following code to AndroidManifest.xml: <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" />   Specifying the OpenGL ES version 2 Google Maps on Android uses OpenGL to render the map. Google Maps will not work on devices that do not support version 2 of OpenGL. Hence, it is necessary to specify the version in the manifest file. It must be added within the <manifest> element, similar to permissions. 
Add the following code to AndroidManifest.xml: <uses-feature android:glEsVersion="0x00020000" android:required="true"/> The preceding code specifies that version 2 of OpenGL is required for the functioning of our application. Defining the API key The Google Maps API key is required to provide authorization to the Google Maps service. It must be specified within the <application> element. Add the following code to AndroidManifest.xml: <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="API_KEY"/> The API_KEY value must be replaced with the API key we noted earlier from the Google Developer Console. The complete AndroidManifest structure after adding permissions, specifying OpenGL, the Google Play services version, and defining the API key is as follows: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.raj.sampleapplication" android:versionCode="1" android:versionName="1.0" > <uses-feature android:glEsVersion="0x00020000" android:required="true"/> <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" /> <application> <meta-data android:name="com.google.android.gms.version" android:value="@integer/google_play_services_version" /> <meta-data android:name="com.google.android.maps.v2.API_KEY" android:value="AIzaSyBVMWTLk4uKcXSHBJTzrxsrPNSjfL18lk0"/> </application> </manifest>   Summary In this article, we learned how to generate the SHA1 fingerprint on different platforms, register our application in the Google Developer Console, and generate an API key. We also configured Google Play services in Android Studio and Eclipse, and added permissions and other data to the manifest file that are essential to create a map application. 
Resources for Article: Further resources on this subject: Testing with the Android SDK [article] Signing an application in Android using Maven [article] Code Sharing Between iOS and Android [article]

Writing Custom Spring Boot Starters

Packt
16 Sep 2015
10 min read
In this article by Alex Antonov, author of the book Spring Boot Cookbook, we will cover the following topics: Understanding Spring Boot autoconfiguration Creating a custom Spring Boot autoconfiguration starter (For more resources related to this topic, see here.) Introduction It's time to take a look behind the scenes, find out the magic behind Spring Boot autoconfiguration, and write some starters of our own as well. This is a very useful capability to possess, especially for large software enterprises where the presence of proprietary code is inevitable and it is very helpful to be able to create internal custom starters that automatically add some of the configuration or functionality to applications. Some likely candidates are custom configuration systems, libraries, and configurations that deal with connecting to databases, using custom connection pools, HTTP clients, servers, and so on. We will go through the internals of Spring Boot autoconfiguration, take a look at how new starters are created, explore conditional initialization and wiring of beans based on various rules, and see that annotations can be a powerful tool that provides the consumers of the starters more control over dictating what configurations should be used and where. Understanding Spring Boot autoconfiguration Spring Boot has a lot of power when it comes to bootstrapping an application and configuring it with exactly the things that are needed, all without much of the glue code that is required of us, the developers. The secret behind this power actually comes from Spring itself or rather from the Java Configuration functionality that it provides. As we add more starters as dependencies, more and more classes will appear in our classpath. 
Spring Boot detects the presence or absence of specific classes and based on this information, makes some decisions, which are fairly complicated at times, and automatically creates and wires the necessary beans to the application context. Sounds simple, right? How to do it… Conveniently, Spring Boot provides us with an ability to get the AUTO-CONFIGURATION REPORT by simply starting the application with the debug flag. This can be passed to the application either as an environment variable, DEBUG, as a system property, -Ddebug, or as an application property, --debug. Start the application by running DEBUG=true ./gradlew clean bootRun. Now, if you look at the console logs, you will see a lot more information printed there that is marked with the DEBUG level log. At the end of the startup log sequence, we will see the AUTO-CONFIGURATION REPORT as follows: ========================= AUTO-CONFIGURATION REPORT ========================= Positive matches: ----------------- … DataSourceAutoConfiguration - @ConditionalOnClass classes found: javax.sql.DataSource,org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType (OnClassCondition) … Negative matches: ----------------- … GsonAutoConfiguration - required @ConditionalOnClass classes not found: com.google.gson.Gson (OnClassCondition) … How it works… As you can see, the amount of information that is printed in the debug mode can be somewhat overwhelming; so I've selected only one example of positive and negative matches each. For each line of the report, Spring Boot tells us why certain configurations have been selected to be included, what they have been positively matched on, or, for the negative matches, what was missing that prevented a particular configuration to be included in the mix. 
Let's look at the positive match for DataSourceAutoConfiguration: The @ConditionalOnClass classes found tells us that Spring Boot has detected the presence of a particular class, specifically two classes in our case: javax.sql.DataSource and org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType. The OnClassCondition indicates the kind of matching that was used. This is supported by the @ConditionalOnClass and @ConditionalOnMissingClass annotations. While OnClassCondition is the most common kind of detection, Spring Boot also uses many other conditions. For example, OnBeanCondition is used to check the presence or absence of specific bean instances, OnPropertyCondition is used to check the presence, absence, or a specific value of a property as well as any number of the custom conditions that can be defined using the @Conditional annotation and Condition interface implementations. The negative matches show us a list of configurations that Spring Boot has evaluated, which means that they do exist in the classpath and were scanned by Spring Boot but didn't pass the conditions required for their inclusion. GsonAutoConfiguration, while available in the classpath as it is a part of the imported spring-boot-autoconfigure artifact, was not included because the required com.google.gson.Gson class was not detected as present in the classpath, thus failing the OnClassCondition. The implementation of the GsonAutoConfiguration file looks as follows: @Configuration @ConditionalOnClass(Gson.class) public class GsonAutoConfiguration { @Bean @ConditionalOnMissingBean public Gson gson() { return new Gson(); } } After looking at the code, it is very easy to make the connection between the conditional annotations and report information that is provided by Spring Boot at the start time. 
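The essence of OnClassCondition — is a given class loadable from the classpath? — can be illustrated in plain Java. This is only a simplified sketch; Spring Boot's real condition evaluation reads class metadata rather than loading classes, and is considerably more involved:

```java
// Simplified illustration of the check behind @ConditionalOnClass: a
// configuration "matches" only when the named class is on the classpath.
// This is not Spring Boot's actual implementation.
public class OnClassConditionSketch {

    static boolean classPresent(String className) {
        try {
            // Resolve without initializing the class, as a condition check would.
            Class.forName(className, false, OnClassConditionSketch.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // javax.sql.DataSource ships with the JDK, so this condition matches...
        System.out.println(classPresent("javax.sql.DataSource"));
        // ...while Gson is absent here, so GsonAutoConfiguration would be skipped.
        System.out.println(classPresent("com.google.gson.Gson"));
    }
}
```

Running this without Gson on the classpath prints true and then false, mirroring the positive and negative matches in the report above.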
Creating a custom Spring Boot autoconfiguration starter We have a high-level idea of the process by which Spring Boot decides which configurations to include in the formation of the application context. Now, let's take a stab at creating our own Spring Boot starter artifact, which we can include as an autoconfigurable dependency in our build. Let's build a simple starter that will create another CommandLineRunner that will take the collection of all the Repository instances and print out the count of the total entries for each. We will start by adding a child Gradle project to our existing project that will house the codebase for the starter artifact. We will call it db-count-starter. How to do it… We will start by creating a new directory named db-count-starter in the root of our project. As our project has now become what is known as a multiproject build, we will need to create a settings.gradle configuration file in the root of our project with the following content: include 'db-count-starter' We should also create a separate build.gradle configuration file for our subproject in the db-count-starter directory in the root of our project with the following content: apply plugin: 'java' repositories { mavenCentral() maven { url "https://repo.spring.io/snapshot" } maven { url "https://repo.spring.io/milestone" } } dependencies { compile("org.springframework.boot:spring-boot:1.2.3.RELEASE") compile("org.springframework.data:spring-data-commons:1.9.2.RELEASE") } Now we are ready to start coding. So, the first thing is to create the directory structure, src/main/java/org/test/bookpubstarter/dbcount, in the db-count-starter directory in the root of our project. 
In the newly created directory, let's add our implementation of the CommandLineRunner file named DbCountRunner.java with the following content: public class DbCountRunner implements CommandLineRunner { protected final Log logger = LogFactory.getLog(getClass()); private Collection<CrudRepository> repositories; public DbCountRunner(Collection<CrudRepository> repositories) { this.repositories = repositories; } @Override public void run(String... args) throws Exception { repositories.forEach(crudRepository -> logger.info(String.format("%s has %s entries", getRepositoryName(crudRepository.getClass()), crudRepository.count()))); } private static String getRepositoryName(Class crudRepositoryClass) { for(Class repositoryInterface : crudRepositoryClass.getInterfaces()) { if (repositoryInterface.getName(). startsWith("org.test.bookpub.repository")) { return repositoryInterface.getSimpleName(); } } return "UnknownRepository"; } } With the actual implementation of DbCountRunner in place, we will now need to create the configuration object that will declaratively create an instance during the configuration phase. So, let's create a new class file called DbCountAutoConfiguration.java with the following content: @Configuration public class DbCountAutoConfiguration { @Bean public DbCountRunner dbCountRunner(Collection<CrudRepository> repositories) { return new DbCountRunner(repositories); } } We will also need to tell Spring Boot that our newly created JAR artifact contains the autoconfiguration classes. For this, we will need to create a resources/META-INF directory in the db-count-starter/src/main directory in the root of our project. 
In this newly created directory, we will place the file named spring.factories with the following content: org.springframework.boot.autoconfigure.EnableAutoConfiguration=org.test.bookpubstarter.dbcount.DbCountAutoConfiguration For the purpose of our demo, we will add the dependency to our starter artifact in the main project's build.gradle by adding the following entry in the dependencies section: compile project(':db-count-starter') Start the application by running ./gradlew clean bootRun. Once the application is compiled and has started, we should see the following in the console logs: 2015-04-05 INFO org.test.bookpub.StartupRunner : Welcome to the Book Catalog System! 2015-04-05 INFO o.t.b.dbcount.DbCountRunner : AuthorRepository has 1 entries 2015-04-05 INFO o.t.b.dbcount.DbCountRunner : PublisherRepository has 1 entries 2015-04-05 INFO o.t.b.dbcount.DbCountRunner : BookRepository has 1 entries 2015-04-05 INFO o.t.b.dbcount.DbCountRunner : ReviewerRepository has 0 entries 2015-04-05 INFO org.test.bookpub.BookPubApplication : Started BookPubApplication in 8.528 seconds (JVM running for 9.002) 2015-04-05 INFO org.test.bookpub.StartupRunner           : Number of books: 1 How it works… Congratulations! You have now built your very own Spring Boot autoconfiguration starter. First, let's quickly walk through the changes that we made to our Gradle build configuration and then we will examine the starter setup in detail. As the Spring Boot starter is a separate, independent artifact, just adding more classes to our existing project source tree would not really demonstrate much. To make this separate artifact, we had a few choices: making a separate Gradle configuration in our existing project or creating a completely separate project altogether. The ideal solution, however, was to just convert our build to a Gradle Multi-Project Build by adding a nested project directory and subproject dependency to build.gradle of the root project. 
By doing this, Gradle actually creates a separate artifact JAR for us but we don't have to publish it anywhere, only include it as a compile project(':db-count-starter') dependency. For more information about Gradle multi-project builds, you can check out the manual at http://gradle.org/docs/current/userguide/multi_project_builds.html. A Spring Boot autoconfiguration starter is nothing more than a regular Spring Java Configuration class annotated with the @Configuration annotation, plus the presence of spring.factories in the classpath in the META-INF directory with the appropriate configuration entries. During the application startup, Spring Boot uses SpringFactoriesLoader, which is a part of Spring Core, in order to get a list of the Spring Java Configurations that are configured for the org.springframework.boot.autoconfigure.EnableAutoConfiguration property key. Under the hood, this call collects all the spring.factories files located in the META-INF directory from all the jars or other entries in the classpath and builds a composite list to be added as application context configurations. In addition to the EnableAutoConfiguration key, we can declare other automatically initializable startup implementations in a similar fashion: org.springframework.context.ApplicationContextInitializer org.springframework.context.ApplicationListener org.springframework.boot.SpringApplicationRunListener org.springframework.boot.env.PropertySourceLoader org.springframework.boot.autoconfigure.template.TemplateAvailabilityProvider org.springframework.test.context.TestExecutionListener Ironically enough, a Spring Boot starter does not need to depend on the Spring Boot library as its compile-time dependency. If we look at the list of class imports in the DbCountAutoConfiguration class, we will not see anything from the org.springframework.boot package. 
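As an aside, the spring.factories lookup described above boils down to reading a properties file and splitting a comma-separated list of class names. The following sketch imitates that step in plain Java; it is an illustration only, not SpringFactoriesLoader's actual code, which also merges entries from every jar on the classpath:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

// Illustrative sketch of the spring.factories lookup: load the file as
// properties and split the comma-separated class names under a given key.
public class FactoriesLookupSketch {

    static List<String> configurationsFor(String factoriesContent, String key) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(factoriesContent));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        String value = props.getProperty(key, "");
        if (value.isEmpty()) {
            return List.of();
        }
        return Arrays.stream(value.split(","))
                .map(String::trim)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String factories =
            "org.springframework.boot.autoconfigure.EnableAutoConfiguration=" +
            "org.test.bookpubstarter.dbcount.DbCountAutoConfiguration";
        System.out.println(configurationsFor(factories,
            "org.springframework.boot.autoconfigure.EnableAutoConfiguration"));
    }
}
```

With the spring.factories content from our starter, this prints a single-element list containing DbCountAutoConfiguration's fully qualified name.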
The only reason that we have a dependency declared on Spring Boot is because our implementation of DbCountRunner implements the org.springframework.boot.CommandLineRunner interface. Summary In this article, we explored the internals of Spring Boot autoconfiguration and built a custom starter that registers its configuration automatically through a spring.factories entry. Resources for Article: Further resources on this subject: Welcome to the Spring Framework[article] Time Travelling with Spring[article] Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging[article]

Creating a Video Streaming Site

Packt
16 Sep 2015
16 min read
In this article by Rachel McCollin, the author of WordPress 4.0 Site Blueprints Second Edition, you'll learn how to stream video from YouTube to your own video sharing site, meaning that you can add more than just the videos to your site and have complete control over how your videos are shown. We'll create a channel on YouTube and then set up a WordPress site with a theme and plugin to help us stream video from that channel. WordPress is the world's most popular Content Management System (CMS) and you can use it to create any kind of site you or your clients need. Using free plugins and themes for WordPress, you can create a store, a social media site, a review site, a video site, a network of sites, or a community site, and more. WordPress makes it easy for you to create a site that you can update and add to over time, letting you add posts, pages, and more without having to write code. WordPress makes your job of creating your own website simple and hassle-free! (For more resources related to this topic, see here.) Planning your video streaming site The first step is to plan how you want to use your video site. Ask yourself a few questions:

- Will I be streaming all my video from YouTube?
- Will I be uploading any video manually?
- Will I be streaming from multiple sources?
- What kind of design do I want?
- Will I include any other types of content on my site?
- How will I record and upload my videos?
- Who is my target audience and how will I reach them?
- Do I want to make money from my videos?
- How often will I create videos and what will my recording and editing process be?
- What software and hardware will I need for recording and editing videos?

It's beyond the scope of this article to answer all of these questions, but it's worth taking some time before you start to consider how you're going to be using your video site, what you'll be adding to it, and what your objectives are. Streaming from YouTube or uploading videos directly? 
WordPress lets you upload your videos directly to your site using the Add Media button, the same button you use to insert images. This can seem like the simplest way of doing things as you only need to work in one place. However, I would strongly recommend using a third-party video service instead, for the following reasons:

- It saves on storage space in your site.
- It ensures your videos will play on any device people choose to view your site from.
- It keeps the formats your video is played in up to date so that you don't have to re-upload them when things change.
- It can have massive SEO benefits, especially if you use YouTube. YouTube is owned by Google and has excellent search engine rankings. You'll find that videos streamed via YouTube get better Google rankings than any videos you upload directly to your site.

In this article, the focus will be on creating a YouTube channel and streaming video from it to your website. We'll set things up so that when you add new videos to your channel, they'll be automatically streamed to your site. To do that, we'll use a plugin.

Understanding copyright considerations

Before you start uploading video to YouTube, you need to understand what you're allowed to add, and how copyright affects your videos. You can find plenty of information on YouTube's copyright rules and processes at https://www.youtube.com/yt/copyright/, but it can quite easily be summarized as this: if you created the video, or it was created by someone who has given you explicit permission to use it and publish it online, then you can upload it. If you've recorded a video from the TV or the Web that you didn't make and don't have permission to reproduce (or if you've added copyrighted music to your own videos without permission), then you can't upload it.
It may seem tempting to ignore copyright and upload anything you're able to find and record (and you'll find plenty of examples of people who've done just that), but you are running a risk of being prosecuted for copyright infringement and being forced to pay a huge fine. I'd also suggest that if you can create and publish original video content rather than copying someone else's, you'll find an audience of fans for that content, and it will be a much more enjoyable process.

If your videos involve screen capture of you using software or playing games, you'll need to check the license for that software or game to be sure that you're entitled to publish video of you interacting with it. Most software and games developers have no problem with this as it provides free advertising for them, but you should check with the software provider and the YouTube copyright advice. Movies and music generally have stricter rules than games do, however. If you upload videos containing someone else's video or music content that's copyrighted and you haven't got permission to reproduce it, then you will find yourself in violation of YouTube's rules and possibly in legal trouble too.

Creating a YouTube channel and uploading videos

So, you've planned your channel and you have some videos you want to share with the world. You'll need a YouTube channel so you can upload your videos.

Creating your YouTube channel

Let's create a YouTube channel by following these steps:

1. If you don't already have one, create a Google account for yourself at https://accounts.google.com/SignUp.
2. Head over to YouTube at https://www.youtube.com and sign in. You'll have an account with YouTube because it's part of Google, but you won't have a channel yet.
3. Go to https://www.youtube.com/channel_switcher.
4. Click on the Create a new channel button.
5. Follow the instructions onscreen to create your channel.
Customize your channel, uploading images to your profile photo or channel art and adding a description using the About tab. Here's my channel:

It can take a while for artwork from Google+ to show up on your channel, so don't worry if you don't see it straight away.

Uploading videos

The next step is to upload some videos. YouTube accepts videos in the following formats:

- .MOV
- .MPEG4
- .AVI
- .WMV
- .MPEGPS
- .FLV
- 3GPP
- WebM

Depending on the video software you've used to record, your video may already be in one of these formats, or you may need to export it to one of these and save it before you can upload it. If you're not sure how to convert your file to one of the supported formats, you'll find advice at https://support.google.com/youtube/troubleshooter/2888402 to help you do it.

You can also upload videos to YouTube directly from your phone or tablet. On an Android device, you'll need to use the YouTube app, while on an iOS device you can log in to YouTube on the device and upload from the camera app. For detailed instructions and advice for other devices, refer to https://support.google.com/youtube/answer/57407.

If you're uploading directly to the YouTube website, simply click on the Upload a video button when viewing your channel and follow the onscreen instructions. Make sure you add your video to a playlist by clicking on the +Add to playlist button on the right-hand side while you're setting up the video, as this will help you categorize the videos in your site later.

Now when you open your channel page and click on the Videos tab, you'll see all the videos you uploaded. When you click on the Playlists tab, you'll see your new playlist.

So you now have some videos and a playlist set up in YouTube. It's time to set up your WordPress site for streaming those videos.
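Since uploads only work in one of the containers listed above, it can be handy to check a file's extension before you try. The following Python sketch is purely illustrative and is not part of any WordPress or YouTube tooling; the extension set mirrors the list above, plus .mp4, the usual container for MPEG4 video:

```python
import os

# Video containers YouTube accepts (mirroring the list above, plus .mp4).
SUPPORTED = {".mov", ".mpeg4", ".mp4", ".avi", ".wmv",
             ".mpegps", ".flv", ".3gpp", ".webm"}

def needs_conversion(filename):
    """Return True if the file's extension isn't one YouTube accepts."""
    ext = os.path.splitext(filename.lower())[1]
    return ext not in SUPPORTED

print(needs_conversion("holiday.mkv"))  # True: convert before uploading
print(needs_conversion("holiday.mov"))  # False: upload as-is
```

A check like this is only a first pass on the container name; the codecs inside the file still have to be ones YouTube supports.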
Installing and configuring the YouTube plugin

Now that you have your videos and playlists set up, it's time to add a plugin to your site that will automatically add new videos to your site when you upload them to YouTube. Because I've created a playlist, I'm going to use a category in my site for the playlist and automatically add new videos to that category as posts. If you prefer, you can use different channels for each category, or you can just use one video category and link your channel to that. The latter is useful if your site will contain other content as well, such as photos or blog posts.

Note that you don't need a plugin to stream YouTube videos to your site. You can simply paste the URL for a video into the editing pane when you're creating a post or page in your site, and WordPress will automatically stream the video. You don't even need to add an embed code, just the URL. But if you want to automate the process of streaming all of the videos in your channel to your site, this plugin will make that process easy.

Installing the Automatic YouTube Video Posts plugin

The Automatic YouTube Video Posts plugin lets you link your site to any YouTube channel or playlist and automatically adds each new video to your site as a post. Let's start by installing it. I'm working with a fresh WordPress installation, but you can also do this on your existing site if that's what you're working with. Follow these steps:

1. In the WordPress admin, go to Plugins | Add New.
2. In the Search box, type Automatic Youtube. The plugins that meet the search criteria will be displayed.
3. Select the Automatic YouTube Video Posts plugin and then install and activate it.

For the plugin to work, you'll need to configure its settings and add one or more channels or playlists.

Configuring the plugin settings

Let's start with the plugin settings screen. You do this via the Youtube Posts menu, which the plugin has added to your admin menu. Go to Youtube Posts | Settings.
Edit the settings as follows:

- Automatically publish posts: Set this to Yes
- Display YouTube video meta: Set this to Yes
- Number of words and Video dimensions: Leave these at the default values
- Display related videos: Set this to No
- Display videos in post lists: Set this to Yes
- Import the latest videos every: Set this to 1 hour (note that the updates will happen every hour if someone visits the site, but not if the site isn't visited)

Click on the Save changes button. The settings screen will look similar to the following screenshot:

Adding a YouTube channel or playlist

The next step is to add a YouTube channel and/or playlist so that the plugin will create posts from your videos. I'm going to add the "Dizzy" playlist I created earlier on. But first, I'll create a category for all my videos from that playlist.

Creating a category for a playlist

Create a category for your playlist in the normal way:

1. In the WordPress admin, go to Posts | Categories.
2. Add the category name and slug or description if you want to (if you don't, WordPress will automatically create a slug).
3. Click on the Add New Category button.

Adding your channel or playlist to the plugin

Now you need to configure the plugin so that it creates posts in the category you've just created:

1. In the WordPress admin, go to Youtube Posts | Channels/Playlists.
2. Click on the Add New button.
3. Add the details of your channel or playlist, as shown in the next screenshot. In my case, the details are as follows:
   - Name: Dizzy
   - Channel/playlist: This is the ID of my playlist. To find this, open the playlist in YouTube and then copy the last part of its URL from your browser. The URL for my playlist is https://www.youtube.com/watch?v=vd128vVQc6Y&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv and the playlist ID is after the &list= text, so it's PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv. If you want to add a channel, add its unique name.
   - Type: Select Channel or Playlist; I'm selecting Playlist.
- Add videos from this channel/playlist to the following categories: Select the category you just created.
- Attribute videos from this channel to what author: Select the author you want to attribute videos to, if your site has more than one author.

Finally, click on the Add Channel button. Once you click on it, you'll be taken back to the Channels/Playlists screen, where you'll see your playlist or channel added. If you like, you can add more channels or playlists and more categories.

Now go to the Posts listing screen in your WordPress admin, and you'll see that the plugin has created posts for each of the videos in your playlist.

Installing and configuring a suitable theme

You'll need a suitable theme in your site to make your videos stand out. I'm going to use the Keratin theme, which is grid-based with a right-hand sidebar. A grid-based theme works well as people can see your videos on your home page and category pages.

Installing the theme

Let's install the theme:

1. Go to Appearance | Themes.
2. Click on the Add New button.
3. In the search box, type Keratin. The theme will be listed. Click on the Install button.
4. When prompted, click on the Activate button.

The theme will now be displayed in your admin screen as active.

Creating a navigation menu

Now that you've activated a new theme, you'll need to make sure your navigation menu is configured so that it's in the theme's primary menu slot, or if you haven't created a menu yet, you'll need to create one. Follow these steps:

1. Go to Appearance | Menus.
2. If you don't already have a menu, click on the Create Menu button and name your new menu.
3. Add your home page to the menu along with any category pages you've created by clicking on the Categories metabox on the left-hand side.
4. Once everything is in the right place in your menu, click on the Save Menu button.
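As an aside, the playlist ID we pasted into the plugin was copied from the watch URL by hand. The plugin does not need any code for this, but as a purely illustrative Python sketch, here is where the ID sits in the URL: it is the value of the list query parameter.

```python
from urllib.parse import urlparse, parse_qs

def playlist_id(url):
    """Return the YouTube playlist ID from a watch URL, or None."""
    query = parse_qs(urlparse(url).query)
    # The playlist ID travels in the "list" query parameter.
    return query.get("list", [None])[0]

url = ("https://www.youtube.com/watch?v=vd128vVQc6Y"
       "&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv")
print(playlist_id(url))  # PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv
```

A helper like this is only useful if you are scripting around many playlists; for a single playlist, copying the ID from the browser's address bar is quicker.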
Your Menus screen will look something like this:

Now that you have a menu, let's take a look at the site. That's looking good, but I'd like to add some text in the sidebar instead of the default content.

Adding a text widget to the sidebar

Let's add a text widget with some information about the site:

1. In the WordPress admin, go to Appearance | Widgets.
2. Find the text widget on the left-hand side and drag it into the widget area for the main sidebar.
3. Give the widget a title.
4. Type the following text into the widget's contents, replacing the link I've added here with a link to your own channel:

Welcome to this video site. To see my videos on YouTube, visit <a href="https://www.youtube.com/channel/UC5NPnKZOjCxhPBLZn_DHOMw">my channel</a>.

Text widgets accept text and HTML. Here we've used HTML to create a link. For more on HTML links, visit http://www.w3schools.com/html/html_links.asp. Alternatively, if you'd rather create a widget that gives you an editing pane like the one you use for creating posts, you can install the TinyMCE Widget plugin from https://wordpress.org/plugins/black-studio-tinymce-widget/screenshots/. This gives you a widget that lets you create links and format your text just as you would when creating a post.

Now go back to your live site to see how things are looking. It's looking much better! If you click on one of these videos, you're taken to the post for that video. Your site is now ready.

Managing and updating your videos

The great thing about using this plugin is that once you've set it up, you'll never have to do anything in your website to add new videos. All you need to do is upload them to YouTube and add them to the playlist you've linked to, and they'll automatically be added to your site. If you want to add extra content to the posts holding your videos, you can do so.
Just edit the posts in the normal way, adding text, images, and anything you want. These will be displayed as well as the videos. If you want to create new playlists in future, you just do this in YouTube and then create a new category on your site and add the playlist in the settings for the plugin, assigning the new playlist to the relevant category.

You can upload your videos to YouTube in a variety of ways: via the YouTube website, or directly from the device or software you use to record and/or edit them. Most phones allow you to sign in to your YouTube account via the video or YouTube app and directly upload videos, and video editing software will often let you do the same.

Good luck with your video site, I hope it gets you lots of views!

Summary

In this article, you learned how to create a WordPress site for streaming video from YouTube. You created a YouTube channel, added videos and playlists to it, and then set up your site to automatically create a new post each time you add a new video, using a plugin. Finally, you installed a suitable theme and configured it, creating categories for your channels and adding these to your navigation menu.

Resources for Article:

Further resources on this subject:

- Adding Geographic Capabilities via the GeoPlaces Theme [article]
- Adding Flash to your WordPress Theme [article]
Groovy Closures

Packt
16 Sep 2015
9 min read
In this article by Fergal Dearle, the author of the book Groovy for Domain-Specific Languages - Second Edition, we will focus exclusively on closures. We will take a close look at them from every angle. Closures are the single most important feature of the Groovy language. Closures are the special seasoning that helps Groovy stand out from Java. They are also the single most powerful feature that we will use when implementing DSLs. In the article, we will discuss the following topics:

- We will start by explaining just what a closure is and how we can define some simple closures in our Groovy code
- We will look at how many of the built-in collection methods make use of closures for applying iteration logic, and see how this is implemented by passing a closure as a method parameter
- We will look at the various mechanisms for calling closures

A handy reference that you might want to consider having at hand while you read this article is the GDK Javadocs, which will give you full class descriptions of all of the Groovy built-in classes, but of particular interest here is groovy.lang.Closure.

(For more resources related to this topic, see here.)

What is a closure

Closures are such an unfamiliar concept to begin with that they can be hard to grasp initially. Closures have characteristics that make them look like a method in so far as we can pass parameters to them and they can return a value. However, unlike methods, closures are anonymous. A closure is just a snippet of code that can be assigned to a variable and executed later:

def flintstones = ["Fred","Barney"]
def greeter = { println "Hello, ${it}" }
flintstones.each( greeter )
greeter "Wilma"
greeter = { }
flintstones.each( greeter )
greeter "Wilma"

Because closures are anonymous, they can easily be lost or overwritten. In the preceding example, we defined a variable greeter to contain a closure that prints a greeting. After greeter is overwritten with an empty closure, any reference to the original closure is lost.
It's important to remember that greeter is not the closure. It is a variable that contains a closure, so it can be supplanted at any time. Because greeter has a dynamic type, we could have assigned any other object to it. All closures are a subclass of the type groovy.lang.Closure. Because groovy.lang is automatically imported, we can refer to Closure as a type within our code. By declaring our closures explicitly as Closure, we cannot accidentally assign a non-closure to them:

Closure greeter = { println it }

For each closure that is declared in our code, Groovy generates a Closure class for us, which is a subclass of groovy.lang.Closure. Our closure object is an instance of this class. Although we cannot predict what exact type of closure is generated, we can rely on it being a subtype of groovy.lang.Closure.

Closures and collection methods

We will encounter Groovy lists and see some of the iteration functions, such as the each method:

def flintstones = ["Fred","Barney"]
flintstones.each { println "Hello, ${it}" }

This looks like it could be a specialized control loop similar to a while loop. In fact, it is a call to the each method of Object. The each method takes a closure as one of its parameters, and everything between the curly braces {} defines another anonymous closure. Closures defined in this way can look quite similar to code blocks, but they are not the same. Code defined in a regular Java or Groovy style code block is executed as soon as it is encountered. With closures, the block of code defined in the curly braces is not executed until the call() method of the closure is made:

println "one"
def two = { println "two" }
println "three"
two.call()
println "four"

This will print the following:

one
three
two
four

Let's dig a bit deeper into the structure of each of the calls shown in the preceding code. I refer to each as a call because that's what it is: a method call. Groovy augments the standard JDK with numerous helper methods.
This new and improved JDK is referred to as the Groovy JDK, or GDK for short. In the GDK, Groovy adds the each method to the java.lang.Object class. The signature of the each method is as follows:

Object each(Closure closure)

The java.lang.Object class has a number of similar methods, such as each, find, every, any, and so on. Because these methods are defined as part of Object, you can call them on any Groovy or Java object. They make little sense on most objects, but they do something sensible, if not very useful:

given: "an Integer"
def number = 1

when: "we call the each method on it"
number.each { println it }

then: "just the object itself gets passed into the Closure"
"1" == output()

These methods all have specific implementations for all of the collection types, including arrays, lists, ranges, and maps. So, what is actually happening when we see the call to flintstones.each is that we are calling the list's implementation of the each method. Because each takes a Closure as its last and only parameter, the following code block is interpreted by Groovy as an anonymous Closure object to be passed to the method. The actual call to the closure passed to each is deferred until the body of the each method itself is called. The closure may be called multiple times, once for every element in the collection.

Closures as method parameters

We already know that parentheses around method parameters are optional, so the previous call to each can also be considered equivalent to:

flintstones.each({ println "Hello, ${it}" })

Groovy has a special handling for methods whose last parameter is a closure. When invoking these methods, the closure can be defined anonymously after the method call parenthesis.
So, yet another legitimate way to call the preceding line is:

flintstones.each() { println "hello, ${it}" }

The general convention is not to use parentheses unless there are parameters in addition to the closure:

given:
def flintstones = ["Fred", "Barney", "Wilma"]

when: "we call findIndexOf passing int and a Closure"
def result = flintstones.findIndexOf(0) { it == 'Wilma' }

then:
result == 2

The signature of the GDK findIndexOf method is:

int findIndexOf(int, Closure)

We can define our own methods that accept closures as parameters. The simplest case is a method that accepts only a single closure as a parameter:

def closureMethod(Closure c) {
    c.call()
}

when: "we invoke a method that accepts a closure"
closureMethod {
    println "Closure called"
}

then: "the Closure passed in was executed"
"Closure called" == output()

Method parameters as DSL

This is an extremely useful construct when we want to wrap a closure in some other code. Suppose we have some locking and unlocking that needs to occur around the execution of a closure. Rather than relying on the writer of the code to do the locking via a locking API call, we can implement the locking within a locked method that accepts the closure:

def locked(Closure c) {
    callToLockingMethod()
    c.call()
    callToUnLockingMethod()
}

The effect of this is that whenever we need to execute a locked segment of code, we simply wrap the segment in a locked closure block, as follows:

locked {
    println "Closure called"
}

In a small way, we are already writing a mini DSL when we use these types of constructs. This call to the locked method looks, to all intents and purposes, like a new language construct, that is, a block of code defining the scope of a locking operation. When writing methods that take other parameters in addition to a closure, we generally leave the Closure argument to last.
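The same wrap-a-block idiom exists in any language with first-class functions. For comparison only, here is a Python sketch of the locked pattern, with list appends standing in for the real lock and unlock calls (the names events, locked, and block are invented for this illustration and are not part of the Groovy example):

```python
events = []  # records call order, standing in for real lock/unlock calls

def locked(block):
    """Run a callable between stand-in lock and unlock steps."""
    events.append("lock")
    try:
        block()  # the wrapped "closure"
    finally:
        events.append("unlock")  # released even if the block raises

locked(lambda: events.append("closure called"))
print(events)  # ['lock', 'closure called', 'unlock']
```

Python lacks Groovy's trailing-closure syntax, so the callable is passed inside the parentheses, but the structure is identical: the wrapper controls what happens before and after the deferred block runs.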
As already mentioned in the previous section, Groovy has a special syntax handling for these methods, and allows the closure to be defined as a block after the parameter list when calling the method:

def closureMethodInteger(Integer i, Closure c) {
    println "Line $i"
    c.call()
}

when: "we invoke a method that accepts an Integer and a Closure"
closureMethodInteger(1) {
    println "Line 2"
}

then: "the Closure passed in was executed with the parameter"
"""Line 1
Line 2""" == output()

Forwarding parameters

Parameters passed to the method may have no impact on the closure itself, or they may be passed to the closure as a parameter. Methods can accept multiple parameters in addition to the closure. Some may be passed to the closure, while others may not:

def closureMethodString(String s, Closure c) {
    println "Greet someone"
    c.call(s)
}

when: "we invoke a method that accepts a String and a Closure"
closureMethodString("Dolly") { name ->
    println "Hello, $name"
}

then: "the Closure passed in was executed with the parameter"
"""Greet someone
Hello, Dolly""" == output()

This construct can be used in circumstances where we have look-up code that needs to be executed before we have access to an object.
Say we have customer records that need to be retrieved from a database before we can use them:

def withCustomer(id, Closure c) {
    def cust = getCustomerRecord(id)
    c.call(cust)
}

withCustomer(12345) { customer ->
    println "Found customer ${customer.name}"
}

We can write an update method that reports on the customer record before and after the closure is invoked, and amend our locked method to implement transaction isolation on the database, as follows:

class Customer {
    String name
}

def locked(Closure c) {
    println "Transaction lock"
    c.call()
    println "Transaction release"
}

def update(customer, Closure c) {
    println "Customer name was ${customer.name}"
    c.call(customer)
    println "Customer name is now ${customer.name}"
}

def customer = new Customer(name: "Fred")

At this point, we can write code that nests the two method calls by calling update as follows:

locked {
    update(customer) { cust ->
        cust.name = "Barney"
    }
}

This outputs the following result, showing how the closure that changes the name is wrapped by the update method, and how the whole operation is wrapped by locked, which includes everything within a transaction:

Transaction lock
Customer name was Fred
Customer name is now Barney
Transaction release

Summary

In this article, we covered closures in some depth. We explored the various ways to call a closure and the means of passing parameters. We saw how we can pass closures as parameters to methods, and how this construct can allow us to appear to add mini DSL syntax to our code. Closures are the real "power" feature of Groovy, and they form the basis of most of the DSLs.

Resources for Article:

Further resources on this subject:

- Using Groovy Closures Instead of Template Method [article]
- Metaprogramming and the Groovy MOP [article]
- Clojure for Domain-specific Languages - Design Concepts with Clojure [article]


Raspberry Pi LED Blueprints

Packt
16 Sep 2015
5 min read
Blinking LEDs is a popular application in the field of embedded development. In Raspberry Pi LED Blueprints by Agus Kurniawan, we are going to design, build, and test LED-based projects using the Raspberry Pi. To implement real LED-based projects for Raspberry Pi, we need to learn how to interface various LED modules, such as LEDs, 7-segment, 4-digit 7-segment, and dot matrix displays, to Raspberry Pi. We will get hands-on experience by exploring real-time LEDs with this project-based book.

(For more resources related to this topic, see here.)

Why Raspberry Pi?

The Raspberry Pi was designed by the Raspberry Pi Foundation in the UK, initially to help schoolkids learn basic computer science knowledge. The Raspberry Pi uses Linux as its basic operating system, and the Foundation may come up with its own platform that fits this technology better sometime in the future. Although the Raspberry Pi is as small as a credit card, it works like a normal computer at a relatively low price. A Raspberry Pi can easily control an LED, which is a simple actuator device that displays lighting. This book will provide you with the ability to control LEDs using Raspberry Pi.

What this article covers

This article covers an introduction to Raspberry Pi GPIO. In it, we will learn how to use different libraries to access Raspberry Pi GPIO. The step-by-step procedure to install them is also provided, along with the Python commands.

Introducing Raspberry Pi GPIO

General-purpose input/output (GPIO) is a generic pin on the Raspberry Pi, which can be used to interact with external devices, for instance, sensor and actuator devices. In general, you can see the Raspberry Pi GPIO pinouts in the following figure:

To access Raspberry Pi GPIO, we can use several GPIO libraries. If you are working with Python, Raspbian has already installed the RPi.GPIO library to access Raspberry Pi GPIO. You can read more about RPi.GPIO at https://pypi.python.org/pypi/RPi.GPIO.
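To give a feel for the RPi.GPIO call pattern before installing anything, here is a Python sketch of the classic LED blink. Because RPi.GPIO itself only runs on a Pi, the sketch uses a small stand-in class with the same method names so it can run anywhere; on real hardware you would delete the stand-in, write import RPi.GPIO as GPIO instead, and add time.sleep() calls between the writes. The pin number 18 is just an example; wire it to suit your board.

```python
class FakeGPIO:
    """Stand-in mimicking the RPi.GPIO call pattern, for illustration only."""
    BCM, OUT, HIGH, LOW = "BCM", "OUT", 1, 0

    def __init__(self):
        self.writes = []  # record every call so we can inspect the sequence

    def setmode(self, mode):
        self.mode = mode

    def setup(self, pin, direction):
        self.writes.append(("setup", pin, direction))

    def output(self, pin, level):
        self.writes.append(("output", pin, level))

    def cleanup(self):
        self.writes.append(("cleanup",))

GPIO = FakeGPIO()  # on a real Pi: import RPi.GPIO as GPIO

LED_PIN = 18  # example BCM pin number

GPIO.setmode(GPIO.BCM)        # use BCM (Broadcom) pin numbering
GPIO.setup(LED_PIN, GPIO.OUT) # configure the pin as an output
for _ in range(3):            # blink three times
    GPIO.output(LED_PIN, GPIO.HIGH)  # time.sleep(0.5) here on real hardware
    GPIO.output(LED_PIN, GPIO.LOW)   # time.sleep(0.5) here on real hardware
GPIO.cleanup()                # release the pins when done

levels = [w[2] for w in GPIO.writes if w[0] == "output"]
print(levels)  # [1, 0, 1, 0, 1, 0]
```

The setmode/setup/output/cleanup sequence is the shape every RPi.GPIO program in this book follows; only the pins and the timing change between projects.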
You can verify the RPi.GPIO library from a Python terminal by importing the RPi.GPIO module. If you don't find this library in Python at runtime, or you get the error message ImportError: No module named RPi.GPIO, you can install it by compiling from the source code. For instance, if we want to install RPi.GPIO 0.5.11, type the following commands:

wget https://pypi.python.org/packages/source/R/RPi.GPIO/RPi.GPIO-0.5.11.tar.gz
tar -xvzf RPi.GPIO-0.5.11.tar.gz
cd RPi.GPIO-0.5.11/
sudo python setup.py install

To install and update through the apt command, your Raspberry Pi must be connected to the Internet.

Another way to access Raspberry Pi GPIO is to use WiringPi. It is a library written in C for Raspberry Pi to access GPIO pins. You can read more about WiringPi at http://wiringpi.com/. To install WiringPi, you can type the following commands:

sudo apt-get update
sudo apt-get install git-core
git clone git://git.drogon.net/wiringPi
cd wiringPi
sudo ./build

Please make sure that your Pi network does not block the git protocol for git://git.drogon.net/wiringPi. You can browse https://git.drogon.net/?p=wiringPi;a=summary for this code.

The next step is to install the WiringPi interface for Python, so you can access Raspberry Pi GPIO from a Python program. Type the following commands:

sudo apt-get install python-dev python-setuptools
git clone https://github.com/Gadgetoid/WiringPi2-Python.git
cd WiringPi2-Python
sudo python setup.py install

When finished, you can verify it by showing the GPIO map from the Raspberry Pi board using the following gpio tool:

gpio readall

You should see the GPIO map from the Raspberry Pi board on the terminal. You can also see values in the wPi column, which will be used in WiringPi programs as GPIO value parameters. In this book, you can find more information about how to use the WiringPi library.

What you need for this book

We are going to use the Raspberry Pi 2 board Model B.
To make Raspberry Pi work, we need an OS that acts as a bridge between the hardware and the user. There are many OS options that you can use for Raspberry Pi. This book uses Raspbian as the OS platform for Raspberry Pi. To deploy Raspbian on Raspberry Pi 2 Model B, we need a microSD card of at least 4 GB in size.

Who this book is written for

This book is for those who want to learn how to build Raspberry Pi projects using LEDs, 7-segment, 4-digit 7-segment, and dot matrix modules. You will also learn to implement those modules in real applications, including interfacing with wireless modules and the Android mobile app. However, you don't need to have any previous experience with the Raspberry Pi or Android platforms.

Summary

In this article, we learned different techniques to install Raspberry Pi GPIO libraries. Read Raspberry Pi LED Blueprints to start designing and implementing several projects based on LEDs, such as 7-segments, 4-digit 7-segment, and dot matrix displays. Other related titles are:

- Raspberry Pi Blueprints
- Raspberry Pi Super Cluster
- Learning Raspberry Pi
- Raspberry Pi Robotic Projects

Resources for Article:

Further resources on this subject:

- Color and motion finding [article]
- Basic Image Processing [article]
- Develop a Digital Clock [article]