
How-To Tutorials


New functionality in OpenCV 3.0

Packt
25 Aug 2014
5 min read
In this article by Oscar Deniz Suarez, coauthor of the book OpenCV Essentials, we will cover the forthcoming version 3.0, which represents a major evolution of the OpenCV library for Computer Vision. The development branch already includes several new techniques that are not available in the latest official release (2.4.9). The new functionality can already be used by downloading and compiling the latest development version from the official repository. This article provides an overview of some of the new techniques implemented. The numerous other lower-level changes in the forthcoming version 3.0 (updated module structure, C++ API changes, transparent API for GPU acceleration, and so on) are not discussed.

Line Segment Detector

OpenCV users have had the Hough transform-based straight line detector available in previous versions. An improved method called Line Segment Detector (LSD) is now available. LSD is based on the algorithm described at http://dx.doi.org/10.5201/ipol.2012.gjmr-lsd. This method has been shown to be more robust and faster than the best previous Hough-based detector (the Progressive Probabilistic Hough Transform). The detector is now part of the imgproc module. OpenCV provides a short sample ([opencv_source_code]/samples/cpp/lsd_lines.cpp), which shows how to use the LineSegmentDetector class. The following table shows the main components of the class:

<constructor>: The constructor allows you to enter the parameters of the algorithm; in particular, the level of refinement we want in the result
detect: This method detects line segments in the image
drawSegments: This method draws the segments in a given image
compareSegments: This method draws two sets of segments in a given image. The two sets are drawn with blue and red color lines
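For a quick taste of the API, the following is a minimal sketch along the lines of the bundled lsd_lines.cpp sample; the wrapper function and the window name are ours, and the detector parameters are left at their defaults:

    #include <opencv2/opencv.hpp>

    // Detect and display line segments in a grayscale image
    void detectSegments(const cv::Mat& gray)
    {
        // create the detector with the standard refinement level
        cv::Ptr<cv::LineSegmentDetector> lsd =
            cv::createLineSegmentDetector(cv::LSD_REFINE_STD);
        std::vector<cv::Vec4f> lines;     // each segment is (x1, y1, x2, y2)
        lsd->detect(gray, lines);         // run the detection
        cv::Mat drawn = gray.clone();
        lsd->drawSegments(drawn, lines);  // overlay the segments on a copy
        cv::imshow("segments", drawn);
        cv::waitKey(0);
    }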
Connected components

Previous versions of OpenCV have included functions for working with image contours. Contours are the external limits of connected components (that is, regions of connected pixels in a binary image). The new functions, connectedComponents and connectedComponentsWithStats, retrieve connected components as such. The connected components are retrieved as a labeled image with the same dimensions as the input image. This allows drawing the components on the original image easily. The connectedComponentsWithStats function retrieves useful statistics about each component, shown in the following table:

CC_STAT_LEFT: The leftmost (x) coordinate, which is the inclusive start of the bounding box in the horizontal direction
CC_STAT_TOP: The topmost (y) coordinate, which is the inclusive start of the bounding box in the vertical direction
CC_STAT_WIDTH: The horizontal size of the bounding box
CC_STAT_HEIGHT: The vertical size of the bounding box
CC_STAT_AREA: The total area (in pixels) of the connected component
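As a hedged sketch of how these functions might be used (the input is assumed to be a binary image; the loop starts at 1 because label 0 is the background):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    // Label the connected components of a binary image and print their statistics
    void labelComponents(const cv::Mat& binary)
    {
        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(binary, labels, stats, centroids);
        for (int i = 1; i < n; i++) {   // label 0 is the background
            std::cout << "component " << i
                      << ": x=" << stats.at<int>(i, cv::CC_STAT_LEFT)
                      << " y="  << stats.at<int>(i, cv::CC_STAT_TOP)
                      << " w="  << stats.at<int>(i, cv::CC_STAT_WIDTH)
                      << " h="  << stats.at<int>(i, cv::CC_STAT_HEIGHT)
                      << " area=" << stats.at<int>(i, cv::CC_STAT_AREA) << "\n";
        }
    }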
Scene text detection

Text recognition is a classic problem in computer vision, and Optical Character Recognition (OCR) is now routinely used in our society. In OCR, the input image is expected to depict typewriter-style black text over a white background. In recent years, researchers have aimed at the more challenging problem of recognizing text "in the wild": text on street signs and indoor signs, with diverse backgrounds, fonts, colors, and so on. The following figure shows an example of the difference between the two scenarios. In the "in the wild" scenario, OCR cannot be applied directly to the input images. Consequently, text recognition is accomplished in two steps: the text is first localized in the image, and then character or word recognition is performed on the cropped region. OpenCV now provides a scene text detector based on the algorithm described in Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 (Providence, Rhode Island, USA). The OpenCV implementation makes use of additional improvements found at http://158.109.8.37/files/GoK2013.pdf. OpenCV includes an example ([opencv_source_code]/samples/cpp/textdetection.cpp) that detects and draws text regions in an input image.
The KAZE and AKAZE features

Several 2D features have been proposed in the computer vision literature. Generally, the two most important aspects of feature extraction algorithms are computational efficiency and robustness. One of the latest contenders is the pair formed by KAZE (a Japanese word meaning "wind") and Accelerated-KAZE (AKAZE). There are reports showing that KAZE features are both robust and efficient compared with other widely known features (BRISK, FREAK, and so on). The underlying algorithm is described in KAZE Features, Pablo F. Alcantarilla, Adrien Bartoli, and Andrew J. Davison, in European Conference on Computer Vision (ECCV), Florence, Italy, October 2012. As with other keypoint detectors in OpenCV, the KAZE implementation allows you to retrieve both keypoints and descriptors (that is, a feature vector computed around the keypoint neighborhood). The detector follows the same framework used in OpenCV for other detectors, so drawing methods are also available.
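The article does not show the detector API itself, so the following sketch follows the interface that eventually shipped in OpenCV 3.0; the development branch described here may differ slightly. AKAZE descriptors are binary, hence the Hamming-distance matcher:

    #include <opencv2/opencv.hpp>

    // Detect AKAZE keypoints in two images and match their descriptors
    void matchAkaze(const cv::Mat& img1, const cv::Mat& img2)
    {
        cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
        std::vector<cv::KeyPoint> kpts1, kpts2;
        cv::Mat desc1, desc2;
        // detect keypoints and compute descriptors in one call
        akaze->detectAndCompute(img1, cv::noArray(), kpts1, desc1);
        akaze->detectAndCompute(img2, cv::noArray(), kpts2, desc2);
        // binary descriptors are compared with the Hamming distance
        cv::BFMatcher matcher(cv::NORM_HAMMING);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);
    }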
Computational photography

One of the modules with the most improvements in the forthcoming version 3.0 is the computational photography module (photo). The new techniques include the functionalities mentioned in the following table:

HDR imaging: Functions for handling high-dynamic-range images (tone mapping, exposure alignment, camera calibration with multiple exposures, and exposure fusion)
Seamless cloning: Functions for realistically inserting one image into another image, with an arbitrarily shaped region of interest
Non-photorealistic rendering: This technique includes non-photorealistic filters (such as a pencil-like drawing effect) and edge-preserving smoothing filters (similar to the bilateral filter)
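As an illustrative sketch of two of these additions (the parameter values passed to pencilSketch are the library defaults, not recommendations):

    #include <opencv2/opencv.hpp>  // pulls in the photo module

    // Apply edge-preserving smoothing and a pencil-like rendering
    void stylize(const cv::Mat& src)
    {
        cv::Mat smoothed;
        cv::edgePreservingFilter(src, smoothed, cv::RECURS_FILTER);
        cv::Mat pencilGray, pencilColor;  // grayscale and color outputs
        cv::pencilSketch(src, pencilGray, pencilColor, 60.0f, 0.07f, 0.02f);
    }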
New modules

Finally, here is a list of the new modules in development for version 3.0:

videostab: Global motion estimation, Fast Marching method
softcascade: Implements a stageless variant of the cascade detector, which is considered more accurate
shape: Shape matching and retrieval. Shape context descriptor and matching algorithm, Hausdorff distance, and Thin-Plate Splines
cuda<X>: Several modules with CUDA-accelerated implementations of other functions in the library

Summary

In this article, we learned about the different functionalities in OpenCV 3.0 and its different components.


Report Data Filtering

Packt
25 Aug 2014
13 min read
In this article, written by Yoav Yahav, author of the book SAP BusinessObjects Reporting Cookbook, we will cover the following recipes:

- Applying a simple filter
- Working with the filter bar
- Using input controls
- Working with an element link

Filtering data can be done in several ways. We can filter the results at the query level when there is a requirement to use a mandatory filter or set of filters that will fetch only the specific types of rows that correspond to the business question; otherwise, the report won't be accurate or useful. The other level of filtering is performed at the report level. This level of filtering interacts with the data that was retrieved by the user and enables us to eliminate irrelevant rows. The main question that arises when using a report-level filter is: why shouldn't we implement the filter at the query level? The answer has various reasons, which are as follows:

- We need to compare and analyze just a part of the entire data that the query retrieved (for example, filtering the first quarter's data out of the current year's entire dataset)
- We need to view the data separately; for example, each tab can be filtered by a different value (each report tab can display a different region's data)
- We need to filter measure objects at a level different from the aggregative level of the query; for example, we have retrieved a well-detailed query displaying sales of various products at the customer level, but we also need to display, in another report tab, only the products that had an income of more than one million dollars
- The business user requires interactive functionality from the filter: a drop-down box, a checklist, a spinner, or a slider; capabilities that can't be provided by a query filter
- We need to perform additional calculations on a variable in the report and apply a filter to it

In this article, we will explore the different types of filters that can be applied in reports: simple ones, interactive ones, and filters that combine interactivity with a custom look and feel adjusted by the business user.

Applying a simple filter

The first type of filter is a basic one that enables us to implement quick and simple filter logic, similar to the way we build it on the query panel.

Getting ready

We have created a query that retrieves a dataset displaying the Net Sales by Product, Line, and Year. Using a simple filter, we would like to filter only the year 2008 records as well as the Sports Line.

How to do it...

Perform the following steps to apply a simple filter:

1. We will navigate to the Analysis toolbar, and in the Filters tab, click on the Filter icon and choose Add Filter, as shown in the following screenshot:
2. In the Report Filter window, as shown in the following screenshot, we will be able to add filters, edit them, and apply them to a specific table or to the entire report tab:
3. By clicking on the Add filter icon located in the top-left corner, we will be able to add a condition. Clicking on this button will open the list of existing objects in the report; by choosing the Year object, we will add our first filter, as shown in the following screenshot:
4. After we choose the Year object, a filter condition structure will appear in the top-right corner of the window, enabling us to pick an operator and a value, similar to the way we establish query filters, as shown in the following screenshot:
5. We will add a second filter as well, using the Add filter button and adding the Line object to the filter area. The AND operator will appear between the two filters, establishing an intersection relationship between them. This operator can easily be changed to the OR operator by clicking on it. The table will be affected accordingly and will display only the year 2008 and the Sports Line records, as shown:
6. In order to edit the filter, we can either access it through the Analysis toolbar or mark one of the filtered columns, which enables easier editing through the toolbar or the right-click menu, as shown in the following screenshot:

How it works...

The report filter simply corresponds to the values defined in the filtered set of conditions, which are established by simple and easy use of the filter panel. The filters can be applied to any type of data display, table, or chart. Like the query filters, the report filters use the AND/OR operator logic, which can be switched by clicking on the operator name. In order to view the filters that have been applied to the report tabs and tables, you can navigate to the document structure and filters panel on the left-hand side and click on the Filter button.

There's more...

Filters can be applied to a specific table in the report or to the entire report. To switch between these options when you create the filter, mark the report area to create a report-level filter, or mark a specific table to filter only that table in the report tab.

Working with the filter bar

Another great piece of functionality that filters provide is interaction with the report data. There are cases when we are required to perform quick filtering as well as switch dynamically between values, as we need to analyze different filtered datasets. Working with the filter bar addresses these requirements simply and easily.

Getting ready

We want to perform quick dynamic filtering on our existing table by adding the Country dimension object to the filter bar.

How to do it...

Perform the following steps:

1. By navigating to the Analysis toolbar and then to the Interact tab, we will click on the Filter Bar icon:
2. By doing so, a gray filter pane area will appear under the formula bar with a guiding message saying Drop objects here to add report filters, as shown in the following screenshot:
3. In order to create a new filter, we can either drag an object directly to the filter pane area from the Available Objects pane or use the quick add icon located in the filter bar on the left-hand side of the message. In our scenario, we will use the Available Objects pane and drag the Country dimension object directly to the filter bar:
4. By adding the Country object to the filter bar, a quick drop-down list filter will be created, enabling us to filter the table data by choosing any country value:

This filter enables us to quickly create filtered sets of data using the drop-down list, an interactive component that doesn't have to be part of the table itself.

How it works...
The filter bar is an interactive component that enables us to create as many dynamic filters as we need and to locate them in a single filtering area for easy control of the data; the filtered objects aren't even required to appear in the table itself. The filter bar is restricted to filtering single values; in order to filter several values, we need to either use a different type of filter, such as an input control, or create grouped-values logic.

There's more...

When we drop several dimension objects onto the filter pane, they will be displayed accordingly; however, a cascading effect between the filters (picking a specific country in the first filter and seeing only that country's values in the second) will be supported only if hierarchies have been defined in the universe.

Using input controls

Input controls are another type of filter that enables us to interact with the report data. An input control performs as an interactive left-hand side panel, and it can be created in various types that have a different look and feel as well as different functionality. We can use an input control to address the following tasks:

- Applying a different look and feel to the filter, making filters more intuitive and easier to operate (using radio buttons, combo boxes, and other filter types)
- Applying multiple values
- Applying dynamic filters to measure values using input control components, such as spinners and sliders
- Enabling a default value option and a custom list of values

Getting ready

In this example, we will filter several components in the report area, a chart and a table, using the Region dimension object. We will use the multiple-value option to enhance the filter's functionality. First, we will navigate to the input control panel, located in the left-hand side area as the third option from the top, and click on the New option.

How to do it...

Perform the following steps:

1. We will choose the object that we need to filter the table and the chart with, as shown in the following screenshot:
2. After choosing the Region object, we will move on to the Define Input Control window. Input controls enable multiple-value functionality, and in the Choose Control Type window, we will choose the Check boxes input control type, as shown in the following screenshot:
3. In the input control properties located at the right-hand side, we can also add a description to the input control, set a default value, and customize the list of values if we need specific values.
4. After choosing the Check boxes component, we will advance to the next window, choosing the data elements we want to apply the control to. We will tick both of the components, the chart and the table, in order to affect all the data components using a single control, as shown in the following screenshot:
5. By clicking on the Finish button, the input control will appear at the left-hand side:
6. We can easily change the selected values to all values (Select All), one value, or several values, filtering both of the tables as shown:

How it works...

As we have seen, input controls act as special interactive filters, used by picking from the input control templates the type that is most suitable for filtering the data in the report. Our main consideration when choosing an input control is the type of list we need to pick from: single or multiple.
The second consideration is the interactive functionality that we need from such a control: a simple value pick, or perhaps an arithmetic operator, such as greater than or less than, which can be applied to a measure object.

There's more...

An input control can also be created using the Analysis toolbar and the Filter tab. In order to edit an existing input control, we can access the mini toolbar above the input control. Here, we will be able to edit the control, show its dependencies (the elements that are affected by it), or delete it, as shown in the following screenshot:

We can also display a single informative cell describing which table and report a filter has been applied to. This useful option can be applied by navigating to the Report Element toolbar, choosing Report Filter Summary from the Cell subtoolbar, and dragging it to the report area, as shown in the following screenshot:

By clicking on the Map button, we will switch to a graphical tree view of the input control, showing the values that were picked in the filter as well as its dependencies.

If you need to display the values of the input control in the report area, simply drag the object that you used in the control to the report area, turn it into a horizontal table, and then edit the dependencies of the control so that it will be applied to the new table as well.

Working with an element link

An element link is a feature designed to pass a value from an existing table or chart to another data component in the report area. Element links transform the values in a table or chart into dynamic values that can filter other data elements. The main difference between element links and other types of filtering is that when using an element link, we are actually using and working within a table, keeping its structure and passing a parameter from it to another structure. This feature is great to work with when we are using a detailed table and want to use its values to filter another chart that visualizes the summarized data, and vice versa.

How to do it...

Perform the following steps:

1. We will pass the Country value from the detailed table to the line quantity sales pie chart, enabling the business user to filter the pie dynamically while working with the detailed table. By clicking on the Country column, we will navigate in the speed menu to Linking | Add Element Link, as shown in the following screenshot:
2. In the next window, we will choose the passing method and decide whether to pass the entire row of values or a Single object value. In our example, we will use the Single object option, as shown in the following screenshot:
3. In the next screen, we will be able to add a description of our choice to the element link.
4. Finally, we will define the dependencies, that is, the report elements that we want to pass the country value to, as shown in the following screenshot:
5. By clicking on the Finish button, we will switch to the report view, and by marking the Country column or any other column, we will be able to pass the Country value to the pie chart, as shown in the following screenshot:
6. By clicking on a different Country value, such as Colombia, we will be able to pass it to the pie chart and filter the results accordingly. Notice that the pie chart results have changed and that the country value is marked in bold inside the tooltip box, showing the column that was actually used to pass the value.

How it works...

The element link simply links the tables and other data components.
It is actually a type of input control designed to work directly from a table rather than from a filter component panel or a bar. By clicking on any country value, we simply pass it to the dependency component, which uses the value as an input in order to present the relevant data.

There's more...

An element link can be edited and adjusted in a way similar to the way an input control is edited. By right-clicking on the Element Link icon located on the header of the rightmost column, we will be able to edit it, as shown in the following screenshot:

Another good way to view the element link status and edit it is to switch to the Input Controls panel, where you can view it as well, as shown in the following screenshot:

Summary

In this article, we got to know the filtering techniques that we can apply to report tables and charts.


Camera Calibration

Packt
25 Aug 2014
18 min read
This article by Robert Laganière, author of OpenCV Computer Vision Application Programming Cookbook Second Edition, explains that images are generally produced using a digital camera, which captures a scene by projecting light going through its lens onto an image sensor. The fact that an image is formed by the projection of a 3D scene onto a 2D plane implies the existence of important relationships between a scene and its image, and between different images of the same scene. Projective geometry is the tool that is used to describe and characterize, in mathematical terms, the process of image formation. In this article, we will introduce you to some of the fundamental projective relations that exist in multiview imagery and explain how these can be used in computer vision programming. You will learn how matching can be made more accurate through the use of projective constraints, and how a mosaic from multiple images can be composited using two-view relations. Before we start the recipe, let's explore the basic concepts related to scene projection and image formation.

Image formation

Fundamentally, the process used to produce images has not changed since the beginning of photography. The light coming from an observed scene is captured by a camera through a frontal aperture; the captured light rays hit an image plane (or an image sensor) located at the back of the camera. Additionally, a lens is used to concentrate the rays coming from the different scene elements. In the figure that illustrates this process, do is the distance from the lens to the observed object, di is the distance from the lens to the image plane, and f is the focal length of the lens. These quantities are related by the so-called thin lens equation:

1/do + 1/di = 1/f

In computer vision, this camera model can be simplified in a number of ways. First, we can neglect the effect of the lens by considering that we have a camera with an infinitesimal aperture since, in theory, this does not change the image appearance. (However, by doing so, we ignore the focusing effect by creating an image with an infinite depth of field.) In this case, therefore, only the central ray is considered. Second, since most of the time we have do >> di, we can assume that the image plane is located at the focal distance. Finally, we can note from the geometry of the system that the image on the plane is inverted. We can obtain an identical but upright image by simply positioning the image plane in front of the lens. Obviously, this is not physically feasible, but from a mathematical point of view, this is completely equivalent. This simplified model is often referred to as the pin-hole camera model. From this model, and using the law of similar triangles, we can easily derive the basic projective equation that relates a pictured object with its image:

hi = f * ho / do

The size (hi) of the image of an object (of height ho) is therefore inversely proportional to its distance (do) from the camera, which is naturally true. In general, this relation describes where a 3D scene point will be projected on the image plane, given the geometry of the camera.
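To make these relations concrete, here is a small worked example under assumed numbers (a 50 mm lens looking at a 500 mm object 2 m away); all values are illustrative only:

    #include <cstdio>

    int main()
    {
        double f    = 50.0;    // focal length in mm (assumed)
        double dObj = 2000.0;  // object distance in mm (assumed)
        // thin lens equation: 1/dObj + 1/dImg = 1/f
        double dImg = 1.0 / (1.0 / f - 1.0 / dObj);  // about 51.28 mm
        // similar triangles: hImg = hObj * dImg / dObj
        // (close to f*hObj/dObj, since dObj >> di puts the plane near f)
        double hObj = 500.0;   // object height in mm (assumed)
        double hImg = hObj * dImg / dObj;            // about 12.8 mm
        std::printf("dImg = %.2f mm, hImg = %.2f mm\n", dImg, hImg);
        return 0;
    }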
Calibrating a camera

From the introduction of this article, we learned that, under the pin-hole model, the essential parameters of a camera are its focal length and the size of the image plane (which defines the field of view of the camera). Also, since we are dealing with digital images, the number of pixels on the image plane (its resolution) is another important characteristic of a camera. Finally, in order to be able to compute the position of an image's scene point in pixel coordinates, we need one additional piece of information. Considering the line coming from the focal point that is orthogonal to the image plane, we need to know at which pixel position this line pierces the image plane. This point is called the principal point. It might be logical to assume that this principal point is at the center of the image plane, but in practice, this point might be off by a few pixels, depending on the precision with which the camera has been manufactured.

Camera calibration is the process by which the different camera parameters are obtained. One can obviously use the specifications provided by the camera manufacturer, but for some tasks, such as 3D reconstruction, these specifications are not accurate enough. Camera calibration proceeds by showing known patterns to the camera and analyzing the obtained images. An optimization process then determines the optimal parameter values that explain the observations. This is a complex process that has been made easy by the availability of OpenCV calibration functions.

How to do it...

To calibrate a camera, the idea is to show it a set of scene points whose 3D positions are known. Then, you need to observe where these points project on the image. With the knowledge of a sufficient number of 3D points and associated 2D image points, the exact camera parameters can be inferred from the projective equation. Obviously, for accurate results, we need to observe as many points as possible. One way to achieve this would be to take one picture of a scene with many known 3D points, but in practice, this is rarely feasible. A more convenient way is to take several images of a set of some 3D points from different viewpoints. This approach is simpler but requires you to compute the position of each camera view in addition to the internal camera parameters, which fortunately is feasible.

OpenCV proposes that you use a chessboard pattern to generate the set of 3D scene points required for calibration. This pattern creates points at the corners of each square, and since the pattern is flat, we can freely assume that the board is located at Z=0, with the X and Y axes well aligned with the grid. In this case, the calibration process simply consists of showing the chessboard pattern to the camera from different viewpoints. Here is one example of a 6x4 calibration pattern image:

The good thing is that OpenCV has a function that automatically detects the corners of this chessboard pattern. You simply provide an image and the size of the chessboard used (the number of horizontal and vertical inner corner points). The function will return the position of these chessboard corners on the image. If the function fails to find the pattern, it simply returns false:

    // output vectors of image points
    std::vector<cv::Point2f> imageCorners;
    // number of inner corners on the chessboard
    cv::Size boardSize(6,4);
    // Get the chessboard corners
    bool found = cv::findChessboardCorners(image, boardSize, imageCorners);

The output parameter, imageCorners, will simply contain the pixel coordinates of the detected inner corners of the shown pattern. Note that this function accepts additional parameters if you need to tune the algorithm, which are not discussed here.
There is also a special function that draws the detected corners on the chessboard image, with lines connecting them in a sequence:

    // Draw the corners
    cv::drawChessboardCorners(image, boardSize, imageCorners, found); // corners have been found

In the image that is obtained, the lines that connect the points show the order in which the points are listed in the vector of detected image points.

To perform a calibration, we now need to specify the corresponding 3D points. You can specify these points in the units of your choice (for example, in centimeters or in inches); however, the simplest approach is to assume that each square represents one unit. In that case, the coordinates of the first point would be (0,0,0) (assuming that the board is located at a depth of Z=0), the coordinates of the second point would be (1,0,0), and so on, the last point being located at (5,3,0). There are a total of 24 points in this pattern, which is too few to obtain an accurate calibration. To get more points, you need to show more images of the same calibration pattern from various points of view. To do so, you can either move the pattern in front of the camera or move the camera around the board; from a mathematical point of view, this is completely equivalent. The OpenCV calibration function assumes that the reference frame is fixed on the calibration pattern and will calculate the rotation and translation of the camera with respect to the reference frame.

Let's now encapsulate the calibration process in a CameraCalibrator class. The attributes of this class are as follows:

    class CameraCalibrator {
        // input points:
        // the points in world coordinates
        std::vector<std::vector<cv::Point3f>> objectPoints;
        // the point positions in pixels
        std::vector<std::vector<cv::Point2f>> imagePoints;
        // output Matrices
        cv::Mat cameraMatrix;
        cv::Mat distCoeffs;
        // flag to specify how calibration is done
        int flag;

Note that the input vectors of the scene and image points are in fact made of std::vector of point instances; each vector element is a vector of the points from one view.
Here, we decided to add the calibration points by specifying a vector of the chessboard image filenames as input:

    // Open chessboard images and extract corner points
    int CameraCalibrator::addChessboardPoints(
        const std::vector<std::string>& filelist, cv::Size & boardSize) {

        // the points on the chessboard
        std::vector<cv::Point2f> imageCorners;
        std::vector<cv::Point3f> objectCorners;

        // 3D Scene Points:
        // Initialize the chessboard corners
        // in the chessboard reference frame
        // The corners are at 3D location (X,Y,Z) = (i,j,0)
        for (int i=0; i<boardSize.height; i++) {
            for (int j=0; j<boardSize.width; j++) {
                objectCorners.push_back(cv::Point3f(i, j, 0.0f));
            }
        }

        // 2D Image points:
        cv::Mat image; // to contain chessboard image
        int successes = 0;
        // for all viewpoints
        for (int i=0; i<filelist.size(); i++) {
            // Open the image
            image = cv::imread(filelist[i],0);
            // Get the chessboard corners
            bool found = cv::findChessboardCorners(image, boardSize, imageCorners);
            // Get subpixel accuracy on the corners
            cv::cornerSubPix(image, imageCorners,
                cv::Size(5,5), cv::Size(-1,-1),
                cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
                    30,    // max number of iterations
                    0.1)); // min accuracy
            // If we have a good board, add it to our data
            if (imageCorners.size() == boardSize.area()) {
                // Add image and scene points from one view
                addPoints(imageCorners, objectCorners);
                successes++;
            }
        }
        return successes;
    }

The first loop inputs the 3D coordinates of the chessboard, and the corresponding image points are the ones provided by the cv::findChessboardCorners function. This is done for all the available viewpoints. Moreover, in order to obtain a more accurate image point location, the cv::cornerSubPix function can be used, and as the name suggests, the image points will then be localized at subpixel accuracy. The termination criterion that is specified by the cv::TermCriteria object defines the maximum number of iterations and the minimum accuracy in subpixel coordinates. The first of these two conditions to be reached stops the corner refinement process.

When a set of chessboard corners has been successfully detected, these points are added to our vectors of image and scene points using our addPoints method. Once a sufficient number of chessboard images have been processed (and consequently, a large number of 3D scene point / 2D image point correspondences are available), we can initiate the computation of the calibration parameters as follows:

    // Calibrate the camera
    // returns the re-projection error
    double CameraCalibrator::calibrate(cv::Size &imageSize) {
        // Output rotations and translations
        std::vector<cv::Mat> rvecs, tvecs;
        // start calibration
        return calibrateCamera(objectPoints, // the 3D points
                      imagePoints,  // the image points
                      imageSize,    // image size
                      cameraMatrix, // output camera matrix
                      distCoeffs,   // output distortion matrix
                      rvecs, tvecs, // Rs, Ts
                      flag);        // set options
    }

In practice, 10 to 20 chessboard images are sufficient, but these must be taken from different viewpoints at different depths. The two important outputs of this function are the camera matrix and the distortion parameters. These will be described in the next section.
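Putting the pieces together, a hypothetical driver might look as follows; the filenames, board size, and image resolution are assumptions for illustration (includes for <string> and <vector> are assumed from the code above):

    // Hypothetical usage of the CameraCalibrator class sketched above
    std::vector<std::string> filelist;
    for (int i = 1; i <= 20; i++)   // 20 views, per the advice above
        filelist.push_back("chessboard" + std::to_string(i) + ".jpg");
    cv::Size boardSize(6, 4);       // inner corners of the pattern
    CameraCalibrator calibrator;
    int views = calibrator.addChessboardPoints(filelist, boardSize);
    cv::Size imageSize(640, 480);   // assumed camera resolution
    double reprojError = calibrator.calibrate(imageSize);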
How it works...

In order to explain the result of the calibration, we need to go back to the figure in the introduction, which describes the pin-hole camera model. More specifically, we want to demonstrate the relationship between a point in 3D at the position (X,Y,Z) and its image (x,y) on a camera, specified in pixel coordinates. Let's redraw this figure by adding a reference frame positioned at the center of projection. Note that the y axis points downward to get a coordinate system compatible with the usual convention, which places the image origin at the upper-left corner.

We learned previously that the point (X,Y,Z) will be projected onto the image plane at (fX/Z, fY/Z). Now, if we want to translate this coordinate into pixels, we need to divide the 2D image position by the pixel's width (px) and height (py), respectively. Note that by dividing the focal length given in world units (generally given in millimeters) by px, we obtain the focal length expressed in (horizontal) pixels. Let's then define this term as fx. Similarly, fy = f/py is defined as the focal length expressed in vertical pixel units. The complete projective equation is therefore:

x = fx * X/Z + u0
y = fy * Y/Z + v0

Recall that (u0,v0) is the principal point, which is added to the result in order to move the origin to the upper-left corner of the image. These equations can be rewritten in matrix form through the introduction of homogeneous coordinates, in which 2D points are represented by 3-vectors and 3D points are represented by 4-vectors (the extra coordinate is simply an arbitrary scale factor, S, that needs to be removed when a 2D coordinate is extracted from a homogeneous 3-vector). Here is the rewritten projective equation:

S [x y 1]^T = [fx 0 u0; 0 fy v0; 0 0 1] [1 0 0 0; 0 1 0 0; 0 0 1 0] [X Y Z 1]^T

The second matrix is a simple projection matrix. The first matrix includes all of the camera parameters, which are called the intrinsic parameters of the camera. This 3x3 matrix is one of the output matrices returned by the cv::calibrateCamera function. There is also a function called cv::calibrationMatrixValues that returns the values of the intrinsic parameters given a calibration matrix.

More generally, when the reference frame is not at the projection center of the camera, we need to add a rotation (a 3x3 matrix) and a translation vector (a 3x1 matrix). These two matrices describe the rigid transformation that must be applied to the 3D points in order to bring them back into the camera reference frame. Therefore, we can rewrite the projection equation in its most general form:

S [x y 1]^T = [fx 0 u0; 0 fy v0; 0 0 1] [r1 r2 r3 t1; r4 r5 r6 t2; r7 r8 r9 t3] [X Y Z 1]^T

Remember that in our calibration example, the reference frame was placed on the chessboard. Therefore, there is a rigid transformation (made of a rotation component represented by the matrix entries r1 to r9 and a translation represented by t1, t2, and t3) that must be computed for each view. These are in the output parameter list of the cv::calibrateCamera function. The rotation and translation components are often called the extrinsic parameters of the calibration, and they are different for each view. The intrinsic parameters remain constant for a given camera/lens system. The intrinsic parameters of our test camera, obtained from a calibration based on 20 chessboard images, are fx=167, fy=178, u0=156, and v0=119. These results are obtained by cv::calibrateCamera through an optimization process aimed at finding the intrinsic and extrinsic parameters that minimize the difference between the predicted image point positions, as computed from the projection of the 3D scene points, and the actual image point positions, as observed on the image. The sum of this difference for all the points specified during the calibration is called the re-projection error.
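As a quick numerical check, here is the projective equation applied to a hypothetical scene point using the intrinsic values reported above; the point coordinates are made up for illustration:

    // Project a hypothetical 3D point (in camera coordinates) to pixels
    void projectExample()
    {
        double fx = 167, fy = 178, u0 = 156, v0 = 119; // intrinsics from the text
        double X = 2.0, Y = 1.0, Z = 10.0;             // assumed scene point
        double x = fx * X / Z + u0;                    // = 189.4 pixels
        double y = fy * Y / Z + v0;                    // = 136.8 pixels
    }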
Let's now turn our attention to the distortion parameters. So far, we have mentioned that under the pin-hole camera model, we can neglect the effect of the lens. However, this is only possible if the lens used to capture an image does not introduce important optical distortions. Unfortunately, this is not the case with lower-quality lenses or with lenses that have a very short focal length. You may have already noted that the chessboard pattern shown in the image that we used for our example is clearly distorted: the edges of the rectangular board are curved in the image. Also, note that this distortion becomes more important as we move away from the center of the image. This is a typical distortion observed with a fish-eye lens, and it is called radial distortion. The lenses used in common digital cameras usually do not exhibit such a high degree of distortion, but in the case of the lens used here, these distortions certainly cannot be ignored.

It is possible to compensate for these deformations by introducing an appropriate distortion model. The idea is to represent the distortions induced by a lens by a set of mathematical equations. Once established, these equations can then be reverted in order to undo the distortions visible in the image. Fortunately, the exact parameters of the transformation that will correct the distortions can be obtained, together with the other camera parameters, during the calibration phase. Once this is done, any image from the newly calibrated camera can be undistorted. Therefore, we have added an additional method to our calibration class:

    // remove distortion in an image (after calibration)
    cv::Mat CameraCalibrator::remap(const cv::Mat &image) {
        cv::Mat undistorted;
        if (mustInitUndistort) { // called once per calibration
            cv::initUndistortRectifyMap(
                cameraMatrix, // computed camera matrix
                distCoeffs,   // computed distortion matrix
                cv::Mat(),    // optional rectification (none)
                cv::Mat(),    // camera matrix to generate undistorted
                image.size(), // size of undistorted
                CV_32FC1,     // type of output map
                map1, map2);  // the x and y mapping functions
            mustInitUndistort = false;
        }
        // Apply mapping functions
        cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR); // interpolation type
        return undistorted;
    }

Running this code produces an undistorted image: once the image is undistorted, we obtain a regular perspective image.

To correct the distortion, OpenCV uses a polynomial function that is applied to the image points in order to move them to their undistorted positions. By default, five coefficients are used; a model made of eight coefficients is also available. Once these coefficients are obtained, it is possible to compute two cv::Mat mapping functions (one for the x coordinate and one for the y coordinate) that will give the new undistorted position of an image point on a distorted image. This is computed by the cv::initUndistortRectifyMap function, and the cv::remap function remaps all the points of an input image to a new image. Note that because of the nonlinear transformation, some pixels of the input image now fall outside the boundary of the output image. You can expand the size of the output image to compensate for this loss of pixels, but you will then obtain output pixels that have no values in the input image (they will be displayed as black pixels).
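Using this method is then a one-liner; the calibrator and image variables below are assumed to exist, with calibrate() already called. For completeness, OpenCV also provides cv::undistort, which performs both steps in a single call:

    // Hypothetical usage of the remap method above
    cv::Mat undistorted = calibrator.remap(image);

    // Equivalent one-call helper, using the matrices computed by calibration
    cv::Mat output;
    cv::undistort(image, output, cameraMatrix, distCoeffs);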
There's more...

More options are available when it comes to camera calibration.

Calibration with known intrinsic parameters

When a good estimate of the camera's intrinsic parameters is known, it can be advantageous to input them into the cv::calibrateCamera function. They will then be used as initial values in the optimization process. To do so, you just need to add the CV_CALIB_USE_INTRINSIC_GUESS flag and input these values in the calibration matrix parameter. It is also possible to impose a fixed value for the principal point (CV_CALIB_FIX_PRINCIPAL_POINT), which can often be assumed to be the central pixel. You can also impose a fixed ratio for the focal lengths fx and fy (CV_CALIB_FIX_RATIO), in which case you assume square-shaped pixels.

Using a grid of circles for calibration

Instead of the usual chessboard pattern, OpenCV also offers the possibility of calibrating a camera using a grid of circles. In this case, the centers of the circles are used as calibration points. The corresponding function is very similar to the one we used to locate the chessboard corners:

    cv::Size boardSize(7,7);
    std::vector<cv::Point2f> centers;
    bool found = cv::findCirclesGrid(image, boardSize, centers);

See also

The article A flexible new technique for camera calibration by Z. Zhang, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000, is a classic paper on the problem of camera calibration.

Summary

In this article, we explored the projective relations that exist between two images of the same scene.


The NServiceBus Architecture

Packt
25 Aug 2014
11 min read
In this article by Rich Helton, the author of Mastering NServiceBus and Persistence, we will focus on the NServiceBus (NSB) architecture. We will discuss the different message and storage types supported in NSB. This discussion will include an introduction to some of the tools and advantages of using NSB. We will conceptually look at how some of the pieces fit together while backing up the discussions with code examples.

NSB is the cornerstone of automation. As an Enterprise Service Bus (ESB), NSB is the most popular C# ESB solution. NSB is a framework that is used to provide many of the benefits of implementing a service-oriented architecture (SOA). It uses an IBus and its ESB bus to handle messages between NSB services, without having to create custom interaction. This type of messaging between endpoints creates the bus. The services, which are autonomous Windows processes, use both Windows and NSB hosting services. NSB hosting services provide extra functionality, such as creating endpoints; setting up Microsoft Message Queuing (MSMQ), DTC for transactions across queues, subscription storage for publish/subscribe message information, and NSB sagas; and much more. Deploying these pieces for messaging manually can lead to errors, and a lot of work is involved in getting it correct. NSB takes care of provisioning the pieces it needs.

NSB is not a frontend framework, such as Microsoft's Model-View-Controller (MVC). It is not used as an Object-to-Relationship Mapper (ORM), such as Microsoft's Entity Framework, to map objects to SQL Server tables. It is also not a web service framework, such as Microsoft's Windows Communication Foundation (WCF). NSB is a framework that provides the communication and support for services to communicate with each other, and it provides an end-to-end workflow to process all of these pieces.

Benefits of NSB

NSB provides many components needed for automation that are only found in ESBs. ESBs provide the following:

- Separation of duties: From the frontend to the backend, by allowing the frontend to fire a message to a service and continue with its processing, not worrying about the results until it needs an update. Also, you can separate workflow responsibilities by separating NSB services. One service could be used to send payments to a bank, and another service could provide feedback on the current status of the payment to the MVC-EF database so that a user may see the status of their payment.
- Message durability: Messages are saved in queues between services so that if the services are stopped, they can start from the messages saved in the queues when they are restarted. This is done so that the messages will persist, until told otherwise.
- Workflow retries: Messages, or endpoints, can be told to retry a number of times until they completely fail and send an error. The error is automatically routed to an error queue. For instance, a web service message can be sent to a bank, and it can be set to retry the web service every 5 minutes for 20 minutes before giving up completely. This is useful while fixing any network or server issues.
- Monitoring: NSB's ServicePulse can keep a check on the heartbeat of its services. Other monitoring checks can easily be performed on NSB queues to report the number of messages.
- Encryption: Messages between services and endpoints can be easily encrypted.
- High availability: Multiple services, or subscribers, could be processing the same or similar messages from various services that live on different servers. When one server or service goes down, others that are already running can be made available to take over.

More on endpoints

In service-to-service interaction, messages are transmitted in the form of XML through queues that are normally part of Microsoft Server, such as MSMQ; part of SQL Server, such as SQL queuing; or even part of Microsoft Azure queues for cloud computing. There are other endpoints that services use to process resources that are not part of service-to-service communication. These endpoints are used to process commands and messages as well, for instance, sending a file to non-NSB-hosted services, sending SFTP files to non-NSB-hosted services, or sending web services, such as payments, to non-NSB services. Although the other end of these communications is a non-NSB-hosted service, NSB offers a lot of integrity by checking how these endpoints were processed. NSB provides information on whether a web service was processed or not, with or without errors; it provides feedback and monitoring, and maintains the records through queues. It also provides saga patterns to give feedback to the originating NSB services on the outcome, while storing everything that has happened in the exchange between the NSB services. In many NSB services, an audit queue is used to keep a backup of each message that was processed successfully, and the error queue is used to keep track of any message that was not processed successfully.

The application security perspective

From the application security perspective, OWASP's top ten list of concerns, available at https://www.owasp.org/index.php/Top_10_2013-Top_10, seems to always revolve around injection (such as SQL injection), broken authentication, and cross-site scripting (XSS). Once an organization puts a product in production, it usually has policies in place for the company's security personnel to scan the product at will. Not all organizations have these policies in place, but once an organization attaches its product to the Internet, there are armies of hackers who may try various methods to attack the site, depending on whether there is money to be gained or not. Money comes in a new economy these days: a site may be used as a proxy to stage other attacks, or usernames and passwords that a user may have for a different system may be grabbed in order to acquire the user's identity or financial information. Many companies have suffered bankruptcy over the last decades thinking that they were secure.

NSB offers processing pieces at the backend that would normally sit behind a firewall, which provides some protection. Firewalls provide some protection, as do Intrusion Detection Systems (IDSes), but there is so much white noise from viruses and scans that many real attacks may go unnoticed, except by very skilled antihackers. NSB offers additional layers of security by using queuing and messaging. The messages can be encrypted, and the queues may be set up with limited authorization by production administrators.

NSB hosting versus self-hosting

NServiceBus.Host is an executable that deploys the NSB service. When the NSB service is compiled, it turns into a Windows DLL that may contain all the configuration settings for the IBus.
If additional settings are needed for the endpoint's configuration that are not coded in the IBus's configuration, they can be supplied in the Host command. However, NServiceBus.Host need not be used to create the program that is used in NServiceBus. As a developer, you can create a console program that is run by the Windows Task Scheduler, or even create your own services that run the NSB IBus code as an endpoint. Not using the NSB hosting engine is normally referred to as self-hosting.

The NServiceBus host streamlines service development and deployment, allows you to change technologies without code, and is administrator friendly when setting permissions and accounts. It deploys your application as an NSB-hosted solution. It can also add configurations to your program at the NServiceBus.Host.exe command line. If you develop a program with the NServiceBus.Host reference, you can use EndpointConfig.cs to define your IBus configuration in code, or add it as part of the command line, instead of creating your own Program.cs that would do a lot of the same work with more code. When debugging with the NServiceBus.Host reference, the Visual Studio project creates a Windows DLL program that is run by the NServiceBus.Host.exe command.

The NServiceBus.Host.exe command line has support for deploying Windows services as NSB-hosted services. These command-line configurations are typically referred to as the profile that the service will run under. Here are some of the common profiles:

- MultiSite: This turns on the gateway.
- Master: This makes the endpoint a "master node endpoint". This means that it runs the gateway for multisite interaction, the timeout manager, and the distributor. It also starts a worker that is enlisted with the distributor. It cannot be combined with the Worker or Distributor profiles.
- Worker: This makes the current endpoint enlist as a worker with its distributor running on the master node. It cannot be combined with the Master or Distributor profiles.
- Distributor: This starts the endpoint only as a distributor. This means that the endpoint does no actual work and only distributes the load among its enlisted workers. It cannot be combined with the Master and Worker profiles.
- Performance counters: This turns on the NServiceBus-specific performance counters. Performance counters are installed by default when you run a Production profile.
- Lite: This keeps everything in memory with the most detailed logging.
- Integration: This uses technologies closer to production, but without a scale-out option and with less logging. It is used in testing.
- Production: This uses scale-out-friendly technologies and minimal file-based logging. It is used in production.

Using PowerShell commands

Many items can be managed in the Package Manager console program of Visual Studio 2012. Just as we add commands to the NServiceBus.Host.exe file to extend profiles and configurations, we may also use the VS2012 Package Manager to extend some of the functionality while debugging and testing. We will use the ScaleOut solution discussed later, just to double-check that the performance counters are installed correctly. We need to make sure that the PowerShell commandlets are installed correctly first.
We do this by using Package Manager:

1. Install the NServiceBus.PowerShell package.
2. Import the module from .\packages\NServiceBus.PowerShell.4.3.0\lib\net40\NServiceBus.PowerShell.dll.
3. Run Test-NServiceBusPerformanceCountersInstallation.

The "Import module" step depends on where NServiceBus.PowerShell.dll was installed during the "Install package" process. The Install-Package command adds the DLL into a package directory related to the solution. We can find out more about the PowerShell commandlets at http://docs.particular.net/nservicebus/managing-nservicebus-using-powershell, and even by reviewing the help section of Package Manager. There, we see that we can insert configurations into App.config when we look at the help section, PM> get-help about_NServiceBus.

Message exchange patterns

Let's discuss the various exchange patterns now.

The publish/subscribe pattern

One of the biggest benefits of using ESB technology is the publish/subscribe message pattern; refer to http://en.wikipedia.org/wiki/Publish-subscribe_pattern. The publish/subscribe pattern has a publisher that sends messages to a queue, say an MSMQ MyPublisher queue. Subscribers, say Subscriber1 and Subscriber2, will listen for messages on the queue that they are defined to take from it. If MyPublisher cannot process the messages, it will return them to the queue or to an error queue, based on the reasons why it could not process them. The definitions of which messages the subscribers look for on the queue are called endpoint mappings. The publisher endpoint mapping is usually based, by default, on the project's name. This concept is the cornerstone of understanding NSB and ESBs. No messages will be removed unless a service explicitly tells the bus to remove them. Therefore, no messages will be lost, and all are accounted for by the services. The configuration data is saved to the database. Also, the subscribers can respond back to MyPublisher through the queue with messages indicating whether everything was alright or not. So why is this important? It's because all the messages can then be accounted for, and feedback can be provided to all the services.

A service is a Windows service that is created and hosted by the NSB host program. It could also be a Windows command console program or even an MVC program, but the service program is always up and running on the server, continuously checking queues and messages that are sent to it from other endpoints. These messages could be commands, such as instructions to go and look at a remote server to see whether it is still running, or data messages, such as sending a particular payment to the bank through a web service. For NSB, we formalize that events are used in publish/subscribe, and commands are used in a request-response message exchange pattern. Windows Server could host many services, so some of these services could just be standing by, waiting to take over if one service is not responding, or processing messages simultaneously. This provides very high availability.


Building a Simple Boat

Packt
25 Aug 2014
15 min read
It's time to get out your hammers, saws, and tape measures, and start building something. In this article, by Gordon Fisher, the author of Blender 3D Basics Beginner's Guide Second Edition, you're going to put your knowledge of building objects to practical use, as well as your knowledge of using the 3D View, to build a boat. It's a simple but good-looking and watertight craft that has three seats, as shown in the next screenshot. You will learn about the following topics:

- Using box modeling to convert a cube into a boat
- Employing box modeling's power methods: extrusion and subdividing edges
- Joining objects together into a single object
- Adding materials to an object
- Using a texture for greater detail

Turning a cube into a boat with box modeling

You are going to turn the default Blender cube into an attractive boat, similar to the one shown in the following screenshot. First, you should know a little bit about boats. The front is called the bow, pronounced the same as bowing to the Queen. The rear is called the stern or the aft. The main body of the boat is the hull, and the top of the hull is the gunwale, pronounced "gunnel". You will be using a technique called box modeling to make the boat. Box modeling is a very standard method of modeling. As you might expect from the name, you start out with a box and sculpt it like a piece of clay to make whatever you want. Three methods are used in most instances of box modeling: extrusion, subdividing edges, and moving, or translating, vertices, edges, and faces.

Using extrusion, the most powerful tool for box modeling

Extrusion is similar to turning dough into noodles by pushing it through a die. While extruding an edge, Blender pushes out the edge and connects it to the old edge with a face. While extruding a face, the face gets pushed out and is connected to the old edges by new faces.

Time for action – extruding to make the inside of the hull

The first step here is to create an inside for the hull. You will extrude the face without moving it, and shrink it a bit. This will create the basis for the gunwale:

1. Create a new file and zoom into the default cube. Select Wireframe from the Viewport Shading menu on the header.
2. Press the Tab key to go to Edit Mode. Choose Face Selection mode from the header. It is the orange parallelogram.
3. Select the top face with the RMB.
4. Press the E key to extrude the face, then immediately press Enter. Move the mouse away from the cube.
5. Press the S key to scale the face with the mouse. While you are scaling it, press Shift + Ctrl, and scale it to 0.9. Watch the scaling readout in the 3D View header.
6. Press the NumPad 1 key to change to the Front view, and press the 5 key on the NumPad to change to the Ortho view.
7. Move the cursor to a place a little above the top of the cube. Press E, and Blender will create a new face and let you move it up or down. Move it down. When you are close to the bottom, press the Ctrl + Shift buttons, and move it down until the readout on the 3D View header is 1.9. Click the LMB to release the face. It will look like the following screenshot:

What just happened?

You just created a simple hull for your boat. It's going to look better, but at least you got the thickness of the hull established. Pressing the E key extrudes the face, making a new face and sides that connect the new face with the edges used by the old face. You pressed Enter immediately after the E key the first time so that the new face wouldn't get moved.
Then, you scaled it down a little to establish the thickness of the hull. Next, you extruded the face again. As you watched the readout, did you notice that it said D: -1.900 (1.900) normal? When you extrude a face, Blender is automatically set up to move the face along its normal, so that you can move it in or out, and keep it parallel with the original location. For your reference, the 4909_05_making the hull1.blend file, which has been included in the download pack, has the first extrusion. The 4909_05_making the hull2.blend file has the extrusion moved down. The 4909_05_making the hull3.blend file has the bottom and sides evened out. Using normals in 3D modeling What is a normal? The normal is an unseen ray that is perpendicular to a face. This is illustrated in the following image by the red line: Blender has many uses for the normal: It lets Blender extrude a face and keep the extruded face in the same orientation as the face it was extruded from This also keeps the sides straight and tells Blender in which direction a face is pointing Blender can also use the normal to calculate how much light a particular face receives from a given lamp, and in which direction lights are pointed Modeling tip If you create a 3D model and it seems perfect except that there is this unexplained hole where a face should have been, you may have a normal that faces backwards. To help you, Blender can display the normals for you. Time for action – displaying normals Displaying the normal does not affect the model, but sometimes it can help you in your modeling to see which way your faces are pointing: Press Ctrl + MMB and use the mouse to zoom out so that you can see the whole cube. In the 3D View, press N to get the Properties Panel. Scroll down in the Properties Panel until you get to the Mesh Display subpanel. Go down to where it says Normals. There are two buttons like the edge select and face select buttons in the 3D View header. Click on the button with a cube and an orange rhomboid, as outlined in the next screenshot, the Face Select button, to choose selecting the normals of the faces. Beside the Face Select button, there is a place where you can adjust the displayed size of the normal, as shown in the following screenshot. The displayed normals are the blue lines. Set Normals Size to 0.8. In the following image, I used the cube as it was just before you made the last extrusion so that it displays the normals a little better. Press the MMB, use the mouse to rotate your view of the cube, and look at the normals. Click on the Face Select button in the Mesh Display subpanel again to turn off the normals display. What just happened? To see the normals, you opened up the Properties Panel and instructed Blender to display them. They are displayed as little blue lines, and you can create them in whatever size that works best for you. Normals, themselves, have no length, just a direction. So, changing this setting does not affect the model. It's there for your use when you need to analyze the problems with the appearance of your model. Once you saw them, you turned them off. For your reference, the 4909_05_displaying normals.blend file has been included in the download pack. It has the cube with the first extrusion, and the normal display turned on. Planning what you are going to make It always helps to have an idea in mind of what you want to build. 
You don't have to get out caliper micrometers and measure every last little detail of something you want to model, but you should at least have some pictures as reference, or an idea of the actual dimensions of the object that you are trying to model. There are many ways to get these dimensions, and we are going to use several of these as we build our boats. Choosing which units to model in I went on the Internet and found the dimensions of a small jon boat for fishing. You are not going to copy it exactly, but knowing what size it should be will make the proportions that you choose more convincing. As it happened, it was an American boat, and the size was given in feet and inches. Blender supports three kinds of units for measuring distance: Blender units, Metric units, and Imperial units. Blender units are not tied to any specific measurement in the real world as Metric and Imperial units are. To change the units of measurement, go to the Properties window, to the right of the 3D View window, as shown in the following image, and choose the Scene button. It shows a light, a sphere, and a cylinder. In the following image, it's highlighted in blue. In the second subpanel, the Units subpanel lets you select which units you prefer. However, rather than choosing between Metric or Imperial, I decided to leave the default settings as they were. As the measurements that I found were Imperial measurements, I decided to interpret the Imperial measurements as Blender measurements, equating 1 foot to 1 Blender unit, and each inch as 0.083 Blender units. If I have an Imperial measurement that is expressed in inches, I just divide it by 12 to get the correct number in Blender units. The boat I found on the Internet is 9 feet and 10 inches long, 56 inches wide at the top, 44 inches wide at the bottom, and 18 inches high. I converted them to decimal Blender units or 9.830 long, 4.666 wide at the top, 3.666 wide at the bottom, and 1.500 high. Time for action – making reference objects One of the simplest ways to see what size your boat should be is to have boxes of the proper size to use as guides. So now, you will make some of these boxes: In the 3D View window, press the Tab key to get into Object Mode. Press A to deselect the boat. Press the NumPad 3 key to get the side view. Make sure you are in Ortho view. Press the 5 key on the NumPad if needed. Press Shift + A and choose Mesh and then Cube from the menu. You will use this as a reference block for the size of the boat. In the 3D View window Properties Panel, in the Transform subpanel, at the top, click on the Dimensions button, and change the dimensions for the reference block to 4.666 in the X direction, 9.83 in the Y direction, and 1.5 in the Z direction. You can use the Tab key to go from X to Y to Z, and press Enter when you are done. Move the mouse over the 3D View window, and press Shift + D to duplicate the block. Then press Enter. Press the NumPad 1 key to get the front view. Press G and then Z to move this block down, so its top is in the lower half of the first one. Press S, then X, then the number 0.79, and then Enter. This will scale it to 79 percent along the X axis. Look at the readout. It will show you what is happening. This will represent the width of the boat at the bottom of the hull. Press the MMB and rotate the view to see what it looks like. What just happened? To make accurate models, it helps to have references. 
For this boat that you are building, you don't need to copy another boat exactly, and the basic dimensions are enough. You got out of Edit Mode, and deselected the boat so that you could work on something else, without affecting the boat. Then, you made a cube, and scaled it to the dimensions of the boat, at the top of the hull, to use as a reference block. You then copied the reference block, and scaled the copy down in X for the width of the boat at the bottom of the hull as shown in the following image: Reference objects, like reference blocks and reference spheres, are handy tools. They are easy to make and have a lot of uses. For your reference, the 4909_05_making reference objects.blend file has been included in the download pack. It has the cube and the two reference blocks. Sizing the boat to the reference blocks Now that the reference blocks have been made, you can use them to guide you when making the boat. Time for action – making the boat the proper length Now that you've made the reference blocks the right size, it's time to make the boat the same dimensions as the blocks: Change to the side view by pressing the NumPad 3 key. Press Ctrl + MMB and the mouse to zoom in, until the reference blocks fill almost all of the 3D View. Press Shift + MMB and the mouse to re-center the reference blocks. Select the boat with the RMB. Press the Tab key to go into Edit Mode, and then choose the Vertex Select mode button from the 3D View header. Press A to deselect all vertices. Then, select the boat's vertices on the right-hand side of the 3D View. Press B to use the border select, or press C to use the circle select mode, or press Ctrl + LMB for the lasso select. When the vertices are selected, press G and then Y, and move the vertices to the right with the mouse until they are lined up with the right-hand side of the reference blocks. Press the LMB to drop the vertices in place. Press A to deselect all the vertices, select the boat's vertices on the left-hand side of the 3D View, and move them to the left until they are lined up with the left-hand side of the reference blocks, as shown in the following image: What just happened? You made sure that the screen was properly set up for working by getting into the side view in the Ortho mode. Next, you selected the boat, got into Edit Mode, and got ready to move the vertices. Then, you made the boat the proper length, by moving the vertices so that they lined up with the reference blocks. For your reference, the 4909_05_proper length.blend file has been included in the download pack. It has the bow and stern properly sized. Time for action – making the boat the proper width and height Making the boat the right length was pretty easy. Setting the width and height requires a few more steps, but the method is very similar: Press the NumPad 1 key to change to the front view. Use Ctrl + MMB to zoom into the reference blocks. Use Shift + MMB to re-center the boat so that you can see all of it. Press A to deselect all the vertices, and using any method select all of the vertices on the left of the 3D View. Press G and then X to move the left-side vertices in X, until they line up with the wider reference block, as shown in the following image. Press the LMB to release the vertices. Press A to deselect all the vertices. Select only the right-hand vertices with a method different from the one you used to select the left-hand vertices. Then, press G and then X to move them in X, until they line up with the right side of the wider reference block. 
Press the LMB when they are in place. Deselect all the vertices. Select only the top vertices, and press G and then Z to move them in the Z direction, until they line up with the top of the wider reference block. Deselect all the vertices. Now, select only the bottom vertices, and press G and then Z to move them in the Z direction, until they line up with the bottom of the wider reference block, as shown in the following image: Deselect all the vertices. Next, select only the bottom vertices on the left. Press G and then X to move them in X, until they line up with the narrower reference block. Then, press the LMB. Finally, deselect all the vertices, and select only the bottom vertices on the right. Press G and then X to move them in the X axis, until they line up with the narrower reference block, as shown in the following image. Press the LMB to release them: Press the NumPad 3 key to switch to the Side view again. Use Ctrl + MMB to zoom out if you need to. Press A to deselect all the vertices. Select only the bottom vertices on the right, as in the following illustration. You are going to make this the stern end of the boat. Press G and then Y to move them left in the Y axis just a little bit, so that the stern is not completely straight up and down. Press the LMB to release them. Now, select only the bottom vertices on the left, as highlighted in the following illustration. Make this the bow end of the boat. Move them right in the Y axis just a little bit. Go a bit further than the stern, so that the angle is similar to the right side, as shown here, maybe about 1.3 or 1.4. It's your call. What just happened? You used the reference blocks to guide yourself in moving the vertices into the shape of a boat. You adjusted the width and the height, and angled the hull. Finally, you angled the stern and the bow. It floats, but it's still a bit boxy. For your reference, the 4909_05_proper width and height1.blend file has been included in the download pack. It has both sides aligned with the wider reference block. The 4909_05_proper width and height2.blend file has the bottom vertices aligned to the narrower reference block. The 4909_05_proper width and height3.blend file has the bow and stern finished.
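This whole workflow is driven from the UI, but the same extrude-and-scale operations can be scripted with Blender's Python API. The following is a rough sketch only: the bpy operator calls mirror the first Time for action section, and it assumes the default cube is active with its top face already selected in Edit Mode before the script runs.

import bpy

# Assumption: default scene, cube active, top face selected beforehand.
bpy.ops.object.mode_set(mode='EDIT')

# E, Enter: extrude in place; then S, 0.9: shrink to form the gunwale.
bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value": (0, 0, 0)})
bpy.ops.transform.resize(value=(0.9, 0.9, 0.9))

# Second extrusion: push the new face down 1.9 units for the hull floor.
bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value": (0, 0, -1.9)})

bpy.ops.object.mode_set(mode='OBJECT')

Running this from the Python Console or a text block reproduces the hull-blocking steps; the later work with reference blocks and vertex alignment is easier to do interactively.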
Solving problems – closest good restaurant

Packt
25 Aug 2014
16 min read
In this article by Steven F. Lott, author of Python for Secret Agents, we will use Python to meet our secret informant at a good restaurant that's a reasonable distance from our base. In order to locate a good restaurant, we need to gather some additional information. In this case, good means a passing grade from the health inspectors. Before we can even have a meeting, we'll need to use basic espionage skills to locate the health code survey results for local restaurants.

(For more resources related to this topic, see here.)

We'll create a Python application to combine many things to sort through the results. We'll perform the following steps:

1. We'll start with the restaurant health score information.
2. We need to geocode the restaurant addresses if it hasn't been done already. In some cases, geocoding is done for us. In other cases, we'll be using a web service for this.
3. We need to filter and organize restaurants by good scores.
4. We'll also need to use our haversine() function to compute the distance from our base.
5. Finally, we need to communicate this to our network, ideally using a short NAC code embedded within an image that we post to a social media site.

In many cities, the health code data is available online. A careful search will reveal a useful dataset. In other cities, the health inspection data isn't readily available online. We might have to dig considerably deep to track down even a few restaurants near our base of operations. Some cities use Yelp to publicize restaurant health code inspection data. We can read about the Yelp API to search for restaurants on the following link:

http://www.yelp.com/developers/documentation

We might also find some useful data on InfoChimps at http://www.infochimps.com/tags/restaurant.

One complexity we often encounter is the use of HTML-based APIs for this kind of information. This is not intentional obfuscation, but the use of HTML complicates analysis of the data. Parsing HTML to extract meaningful information isn't easy; we'll need an extra library to handle this. We'll look at two approaches: good, clean data and more complex HTML data parsing. In both cases, we need to create a Python object that acts as a container for a collection of attributes. First, we'll divert to look at the SimpleNamespace class. Then, we'll use this to collect information.

Creating simple Python objects

We have a wide variety of ways to define our own Python objects. We can use the central built-in types such as dict to define an object that has a collection of attribute values. When looking at information for a restaurant, we could use something like this:

some_place = { 'name': 'Secret Base', 'address': '333 Waterside Drive' }

Since this is a mutable object, we can add attribute values and change the values of the existing attributes. The syntax is a bit clunky, though. Here's what an update to this object looks like:

some_place['lat']= 36.844305
some_place['lng']= -76.29112

One common solution is to use a proper class definition. The syntax looks like this:

class Restaurant:
    def __init__(self, name, address):
        self.name= name
        self.address= address

We've defined a class with an initialization method, __init__(). The name of the initialization method is special, and only this name can be used. When the object is built, the initialization method is evaluated to assign initial values to the attributes of the object.
This allows us to create an object more succinctly:

some_place= Restaurant( name='Secret Base', address='333 Waterside Drive' )

We've used explicit keyword arguments. The use of name= and address= isn't required. However, as class definitions become more complex, it's often more flexible and more clear to use keyword argument values. We can update the object nicely too, as follows:

some_place.lat= 36.844305
some_place.lng= -76.29112

This works out best when we have a lot of unique processing that is bound to each object. In this case, we don't actually have any processing to associate with the attributes; we just want to collect those attributes in a tidy capsule. The formal class definition is too much overhead for such a simple problem.

Python also gives us a very flexible structure called a namespace. This is a mutable object that we can access using simple attribute names, as shown in the following code:

from types import SimpleNamespace
some_place= SimpleNamespace( name='Secret Base', address='333 Waterside Drive' )

The syntax to create a namespace must use keyword arguments (name='The Name'). Once we've created this object, we can update it using a pleasant attribute access, as shown in the following snippet:

some_place.lat= 36.844305
some_place.lng= -76.29112

The SimpleNamespace class gives us a way to build an object that contains a number of individual attribute values. We can also create a namespace from a dictionary using Python's ** notation. Here's an example:

>>> SimpleNamespace( **{'name': 'Secret Base', 'address': '333 Waterside Drive'} )
namespace(address='333 Waterside Drive', name='Secret Base')

The ** notation tells Python that a dictionary object contains keyword arguments for the function. The dictionary keys are the parameter names. This allows us to build a dictionary object and then use it as the arguments to a function. Recall that JSON tends to encode complex data structures as a dictionary. Using this ** technique, we can transform a JSON dictionary into a SimpleNamespace, and replace the clunky object['key'] notation with a cleaner object.key notation.

Working with HTML web services – tools

In some cases, the data we want is tied up in HTML websites. The City of Norfolk, for example, relies on the State of Virginia's VDH health portal to store its restaurant health code inspection data. In order to make sense of the intelligence encoded in the HTML notation on the WWW, we need to be able to parse the HTML markup that surrounds the data. Our job is greatly simplified by the use of special higher-powered weaponry; in this case, BeautifulSoup.

Start with https://pypi.python.org/pypi/beautifulsoup4/4.3.2 or http://www.crummy.com/software/BeautifulSoup/. If we have Easy Install (or PIP), we can use these tools to install BeautifulSoup. We can use Easy Install to install BeautifulSoup like this:

sudo easy_install-3.3 beautifulsoup4

Mac OS X and GNU/Linux users will need to use the sudo command. Windows users won't use the sudo command.

Once we have BeautifulSoup, we can use it to parse the HTML code looking for specific facts buried in an otherwise cryptic jumble of HTML tags. Before we can go on, you'll need to read the quickstart documentation and bring yourself up to speed on BeautifulSoup. Once you've done that, we'll move on to extracting data from HTML web pages. Start with http://www.crummy.com/software/BeautifulSoup/bs4/doc/#quick-start.

An alternative tool is scrapy. For information, see http://scrapy.org.
Also, read Instant Scrapy Web Mining and Scraping, Travis Briggs, Packt Publishing, for details on using this tool. Unfortunately, as of this writing, scrapy is focused on Python 2, not Python 3.

Working with HTML web services – getting the page

In the case of VDH health data for the City of Norfolk, the HTML scraping is reasonably simple. We can leverage the strengths of BeautifulSoup to dig into the HTML page very nicely. Once we've created a BeautifulSoup object from the HTML page, we will have an elegant technique to navigate down through the hierarchy of the HTML tags. Each HTML tag name (html, body, and so on) is also a BeautifulSoup query that locates the first instance of that tag. An expression such as soup.html.body.table can locate the first <table> in the HTML <body> tag. In the case of the VDH restaurant data, that's precisely the data we want. Once we've found the table, we need to extract the rows. The HTML tag for each row is <tr>, and we can use the BeautifulSoup table.find_all("tr") expression to locate all rows within a given <table> tag. Each tag's text is an attribute, .text. If the tag has attributes, we can treat the tag as if it's a dictionary to extract the attribute values.

We'll break down the processing of the VDH restaurant data into two parts: the web services query that builds Soup from HTML, and the HTML parsing to gather restaurant information. Here's the first part, which is getting the raw BeautifulSoup object:

scheme_host= "http://healthspace.com"

def get_food_list_by_name():
    path= "/Clients/VDH/Norfolk/Norolk_Website.nsf/Food-List-ByName"
    form = {
        "OpenView": "",
        "RestrictToCategory": "FAA4E68B1BBBB48F008D02BF09DD656F",
        "count": "400",
        "start": "1",
    }
    query= urllib.parse.urlencode( form )
    with urllib.request.urlopen(scheme_host + path + "?" + query) as data:
        soup= BeautifulSoup( data.read() )
    return soup

This repeats the web services queries we've seen before. We've separated three things here: the scheme_host string, the path string, and query. The reason for this is that our overall script will be using the scheme_host with other paths, and we'll be plugging in lots of different query data. For this basic food_list_by_name query, we've built a form that will get 400 restaurant inspections.

The RestrictToCategory field in the form has a magical key that we must provide to get the Norfolk restaurants. We found this via a basic web espionage technique: we poked around on the website and checked the URLs used when we clicked on each of the links. We also used the Developer mode of Safari to explore the page source. In the long run, we want all of the inspections. To get started, we've limited ourselves to 400 so that we don't spend too long waiting to run a test of our script.

The response object was used by BeautifulSoup to create an internal representation of the web page. We assigned this to the soup variable and returned it as the result of the function. In addition to returning the soup object, it can also be instructive to print it. It's quite a big pile of HTML. We'll need to parse this to get the interesting details away from the markup.

Working with HTML web services – parsing a table

Once we have a page of HTML information parsed into a BeautifulSoup object, we can examine the details of that page. Here's a function that will locate the table of restaurant inspection details buried inside the page.
We'll use a generator function to yield each individual row of the table, as shown in the following code:

def food_table_iter( soup ):
    """Columns are 'Name', '', 'Facility Location', 'Last Inspection',
    plus an unnamed column with a RestrictToCategory key.
    """
    table= soup.html.body.table
    for row in table.find_all("tr"):
        columns = [ td.text.strip() for td in row.find_all("td") ]
        for td in row.find_all("td"):
            if td.a:
                url= urllib.parse.urlparse( td.a["href"] )
                form= urllib.parse.parse_qs( url.query )
                columns.append( form['RestrictToCategory'][0] )
        yield columns

Notice that this function begins with a triple-quoted string. This is a docstring, and it provides documentation about the function. Good Python style insists on a docstring in every function; the Python help system will display the docstrings for functions, modules, and classes. Elsewhere, we've omitted them to save space. Here, we included it because the results of this particular iterator can be quite confusing.

This function requires a parsed Soup object. The function uses simple tag navigation to locate the first <table> tag in the HTML <body> tag. It then uses the table's find_all() method to locate all of the rows within that table.

For each row, there are two pieces of processing. First, a list comprehension is used to find all the <td> tags within that row. Each <td> tag's text is stripped of excess white space, and the collection forms a list of cell values. In some cases, this kind of processing is sufficient.

In this case, however, we also need to decode an HTML <a> tag, which has a reference to the details for a given restaurant. We use a second find_all("td") expression to examine each column again. Within each column, we check for the presence of an <a> tag using a simple if td.a: test. If there is an <a> tag, we can get the value of the href attribute on that tag. When looking at the source HTML, this is the value inside the quotes of <a href="">.

This value of an HTML href attribute is a URL. We don't actually need the whole URL; we only need the query string within the URL. We've used the urllib.parse.urlparse() function to extract the various bits and pieces of the URL. The value of the url.query attribute is just the query string, after the ?. It turns out we don't even want the entire query string; we only want the value for the key RestrictToCategory. We can parse the query string with urllib.parse.parse_qs() to get a form-like dictionary, which we assigned to the variable form. This function is the inverse of urllib.parse.urlencode(). The dictionary built by the parse_qs() function associates each key with a list of values. We only want the first value, so we use form['RestrictToCategory'][0] to get the key required for a restaurant.

Since the food_table_iter() function is a generator, it must be used with a for statement or another generator function. We can use this function with a for statement as follows:

for row in food_table_iter(get_food_list_by_name()):
    print(row)

This prints each row of data from the HTML table. It starts like this:

['Name', '', 'Facility Location', 'Last Inspection']
["Todd's Refresher", '', '150 W. Main St #100', '6-May-2014', '43F6BE8576FFC376852574CF005E3FC0']
["'Chick-fil-A", '', '1205 N Military Highway', '13-Jun-2014', '5BDECD68B879FA8C8525784E005B9926']

This goes on for 400 locations. The results are unsatisfying because each row is a flat list of attributes. The name is in row[0] and the address in row[2]. This kind of reference to columns by position can be obscure.
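To see concretely what the href handling in food_table_iter() does, here's a small standalone sketch. The href value below is made up, but it has the same shape as the links in the VDH table, and the category key is the one from the sample output above:

from urllib.parse import urlparse, parse_qs

# A hypothetical detail link of the kind found in each row's <a> tag.
href = ("/Clients/VDH/Norfolk/Norolk_Website.nsf/Food-Details"
        "?OpenDocument&RestrictToCategory=43F6BE8576FFC376852574CF005E3FC0")

url = urlparse(href)          # split the URL into components
form = parse_qs(url.query)    # decode the query string into a dict of lists
print(form['RestrictToCategory'][0])
# prints: 43F6BE8576FFC376852574CF005E3FC0

Either way, the rows produced by the iterator remain flat, positional lists.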
It would be much nicer to have named attributes. If we convert the results to a SimpleNamespace object, we can then use the row.name and row.address syntax.

Making a simple Python object from columns of data

We really want to work with an object that has easy-to-remember attribute names and not a sequence of anonymous column names. Here's a generator function that will build a SimpleNamespace object from a sequence of values produced by a function such as the food_table_iter() function:

def food_row_iter( table_iter ):
    heading= next(table_iter)
    assert ['Name', '', 'Facility Location', 'Last Inspection'] == heading
    for row in table_iter:
        yield SimpleNamespace(
            name= row[0],
            address= row[2],
            last_inspection= row[3],
            category= row[4]
        )

This function's argument must be an iterator like food_table_iter(get_food_list_by_name()). The function uses next(table_iter) to grab the first row, since that's only going to be a bunch of column titles. We'll assert that the column titles really are the standard column titles in the VDH data. If the assertion ever fails, it's a hint that the VDH web data has changed.

For every row after the first row, we build a SimpleNamespace object by taking the specific columns from each row and assigning them nice names. We can use this function as follows:

soup= get_food_list_by_name()
raw_columns= food_table_iter(soup)
for business in food_row_iter( raw_columns ):
    print( business.name, business.address )

The processing can now use nice attribute names, for example, business.name, to refer to the data we extracted from the HTML page. This makes the rest of the programming meaningful and clear.

What's also important is that we've combined two generator functions. The food_table_iter() function will yield small lists built from HTML table rows. The food_row_iter() function expects a sequence of lists that can be iterated, and will build SimpleNamespace objects from that sequence of lists. This defines a kind of composite processing pipeline built from smaller steps. Each row of the HTML table that starts in food_table_iter() is touched by food_row_iter() and winds up being processed by the print() function.

Continuing down this path

The next steps are also excellent examples of the strengths of Python for espionage purposes. We need to geocode the restaurant addresses if it hasn't been done already. In some cases, geocoding is done for us; in other cases, we'll be using a web service for this. It varies from city to city whether or not the data is geocoded. One popular geocoding service (Google) can be accessed using Python's http.client and json modules. In a few lines of code, we can extract the location of an address.

We'll also need to implement the haversine formula for computing the distance between two points on the globe. This is not only easy, but the code is available on the web as a tidy example of good Python programming. Well worth an agent's time to search for this code.

Once we have the raw data on good restaurants close to our secret lair, we still need to filter and make the final decision. Given the work done in the previous steps, it's a short, clear Python loop that will show a list of restaurants with top health scores within short distances of our lair.

As we noted above, we'll also need to communicate this. We can use steganography to encode a message into an image file. In addition to data scraping from the web and using web services, Python is also suitable for this kind of bit-and-byte-level fiddling with the internals of a TIFF image.
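Before wrapping up, here's a minimal sketch of that haversine() function, since the distance filter depends on it. The mean Earth radius of 3,959 miles and the second coordinate pair are illustrative choices; the first pair is our base's location from earlier in this article:

from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lng1, lat2, lng2, radius=3959.0):
    """Great-circle distance in miles between two (lat, lng) points."""
    d_lat = radians(lat2 - lat1)
    d_lng = radians(lng2 - lng1)
    a = sin(d_lat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(d_lng/2)**2
    return 2 * radius * asin(sqrt(a))

print(haversine(36.844305, -76.29112, 36.8529, -76.2859))
# roughly 0.66 miles

Swapping in a radius of 6371.0 gives distances in kilometers instead.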
Every secret agent can leverage Python for gathering, analyzing, and distributing information.

Summary

In this article, we used Python to track down restaurant health-inspection data, parsed the HTML pages with BeautifulSoup, and packaged the extracted rows as SimpleNamespace objects, laying the groundwork for picking a good restaurant near our base.

Resources for Article:

Further resources on this subject:
Getting Started with Python 2.6 Text Processing [article]
Python 3: Building a Wiki Application [article]
Python 3: Designing a Tasklist Application [article]
Various subsystem configurations

Packt
24 Aug 2014
17 min read
This article by Arnold Johansson and Anders Welen, the authors of WildFly Performance Tuning, talks about the various subsystem configurations available for WildFly. In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as gatekeepers, hindering the underlying system from being overloaded: client calls are prevented from reaching their target once a limit has been reached. In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools.

The thread pool executor subsystem

The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems.

The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use the subsystem, setting up four different pools:

<subsystem>
  <thread-factory name="infinispan-factory" priority="1"/>
  <bounded-queue-thread-pool name="infinispan-transport">
    <core-threads count="1"/>
    <queue-length count="100000"/>
    <max-threads count="25"/>
    <thread-factory name="infinispan-factory"/>
  </bounded-queue-thread-pool>
  <bounded-queue-thread-pool name="infinispan-listener">
    <core-threads count="1"/>
    <queue-length count="100000"/>
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </bounded-queue-thread-pool>
  <scheduled-thread-pool name="infinispan-eviction">
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </scheduled-thread-pool>
  <scheduled-thread-pool name="infinispan-repl-queue">
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </scheduled-thread-pool>
</subsystem>

...

<cache-container name="web" default-cache="repl"
    listener-executor="infinispan-listener"
    eviction-executor="infinispan-eviction"
    replication-queue-executor="infinispan-repl-queue">
  <transport executor="infinispan-transport"/>
  <replicated-cache name="repl" mode="ASYNC" batching="true">
    <locking isolation="REPEATABLE_READ"/>
    <file-store/>
  </replicated-cache>
</cache-container>

The following thread pools are available:

• unbounded-queue-thread-pool
• bounded-queue-thread-pool
• blocking-bounded-queue-thread-pool
• queueless-thread-pool
• blocking-queueless-thread-pool
• scheduled-thread-pool

The details of these thread pools are described in the following sections.

unbounded-queue-thread-pool

The unbounded-queue-thread-pool thread pool executor has a maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow infinitely. The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue's maximum size has been reached and the maximum number of threads hasn't been reached, a new thread is also created. If max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of threads that are allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated in the event that a task cannot be accepted.
allow-core-timeout: This specifies whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: This specifies the thread factory to use to create worker threads.

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task is put in the queue. If the queue's maximum size has been reached but the maximum number of threads hasn't been, a new thread is created. If max-threads has been reached as well, the call is blocked. The configuration properties are shown in the following table:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of simultaneous threads allowed to run.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
allow-core-timeout: This specifies whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: This specifies the thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool is a thread pool executor without any queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded. The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated in the event that a task cannot be accepted.
thread-factory: This specifies the thread factory to use to create worker threads.

blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked. The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

scheduled-thread-pool

The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time. The configuration properties are shown in the following table:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

Monitoring

All of the pools just mentioned can be administered and monitored using both the CLI and JMX (the Admin Console can be used to administer the pools, but not to see any live data). The following example and screenshots show the access to an unbounded-queue-thread-pool called test.

Using the CLI, run the following command:

/subsystem=threads/unbounded-queue-thread-pool=test:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{
    "outcome" => "success",
    "result" => {
        "active-count" => 0,
        "completed-task-count" => 0L,
        "current-thread-count" => 0,
        "keepalive-time" => undefined,
        "largest-thread-count" => 0,
        "max-threads" => 100,
        "name" => "test",
        "queue-size" => 0,
        "rejected-count" => 0,
        "task-count" => 0L,
        "thread-factory" => undefined
    }
}

Using JMX (query and result in the JConsole UI), run the following code:

jboss.as:subsystem=threads,unbounded-queue-thread-pool=test

An example thread pool by JMX is shown in the following screenshot:

An example thread pool by JMX

The following screenshot shows the corresponding information in the Admin Console:

Example thread pool—Admin Console

The future of the thread subsystem

According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain whether all subprojects will adhere to this. The actual configuration will then be moved out to the subsystems themselves. This seems to be the way the general architecture of WildFly is moving in terms of pools: away from generic ones and toward subsystem-specific ones. The different types of pools described here are still valid, though.

Note that, contrary to previous releases, stateless EJBs are no longer pooled by default. More information on this is available in the JIRA case WFLY-1383, which can be found at https://issues.jboss.org/browse/WFLY-1383.

Java EE Connector Architecture and resource adapters

The Java EE Connector Architecture (JCA) defines a contract for Enterprise Information Systems (EIS) to use when integrating with the application server. EIS includes databases, messaging systems, and other servers/systems external to an application server.
The purpose is to provide a standardized API for developers and integration with various application server services, such as transaction handling.

The EIS provides a so-called Resource Adapter (RA) that is deployed in WildFly and configured in the resource-adapters subsystem. The RA is normally realized as one or more Java classes with configuration files stored in a Resource Archive (RAR) file. This file has the same characteristics as a regular Java Archive (JAR) file, but with the rar suffix.

The following code is a dummy example of how a JCA connection pool setup may appear in a WildFly configuration file:

<subsystem>
  <resource-adapters>
    <resource-adapter>
      <archive>eisExample.rar</archive>
      <!-- Resource adapter level config-property -->
      <config-property name="Server">localhost</config-property>
      <config-property name="Port">6666</config-property>
      <transaction-support>LocalTransaction</transaction-support>
      <connection-definitions>
        <connection-definition class-name="ManagedConnectionFactory"
            jndi-name="java:/eisExample/ConnectionFactory"
            pool-name="EISExampleConnectionPool">
          <pool>
            <min-pool-size>10</min-pool-size>
            <max-pool-size>100</max-pool-size>
            <prefill>true</prefill>
          </pool>
        </connection-definition>
      </connection-definitions>
    </resource-adapter>
  </resource-adapters>
</subsystem>

By default in WildFly, these pools will not be populated until used for the first time. By setting prefill to true, the pool will be populated during deployment.

Retrieving and using a connection as a developer is easy: just perform a JNDI lookup for the factory at java:/eisExample/ConnectionFactory and then get a connection from that factory. Other usages that will be running for a long time will not benefit from pooling and will create their connection directly from the RA. An example of this is a Message-Driven Bean (MDB) that listens on an RA for messages.

The settings for this connection pool can be fetched at runtime by running the following command in the CLI:

/subsystem=resource-adapters/resource-adapter=eisExample.rar/connection-definitions=EISExampleConnectionPool:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{
    "outcome" => "success",
    "result" => {
        "allocation-retry" => undefined,
        "allocation-retry-wait-millis" => undefined,
        "background-validation" => false,
        "background-validation-millis" => undefined,
        "blocking-timeout-wait-millis" => undefined,
        "capacity-decrementer-class" => undefined,
        "capacity-decrementer-properties" => undefined,
        "capacity-incrementer-class" => undefined,
        "capacity-incrementer-properties" => undefined,
        "class-name" => "ManagedConnectionFactory",
        "enabled" => true,
        "enlistment" => true,
        "flush-strategy" => "FailingConnectionOnly",
        "idle-timeout-minutes" => undefined,
        "initial-pool-size" => undefined,
        "interleaving" => false,
        "jndi-name" => "java:/eisExample/ConnectionFactory",
        "max-pool-size" => 100,
        "min-pool-size" => 10,
        "no-recovery" => false,
        "no-tx-separate-pool" => false,
        "pad-xid" => false,
        "pool-prefill" => false,
        "pool-use-strict-min" => false,
        "recovery-password" => undefined,
        "recovery-plugin-class-name" => undefined,
        "recovery-plugin-properties" => undefined,
        "recovery-security-domain" => undefined,
        "recovery-username" => undefined,
        "same-rm-override" => undefined,
        "security-application" => false,
        "security-domain" => undefined,
        "security-domain-and-application" => undefined,
        "sharable" => true,
        "use-ccm" => true,
        "use-fast-fail" => false,
        "use-java-context" => true,
        "use-try-lock" => undefined,
        "wrap-xa-resource" => true,
        "xa-resource-timeout" => undefined,
        "config-properties" => undefined
    }
}

Using JMX (URI and result in the JConsole UI):

jboss.as:subsystem=resource-adapters,resource-adapter=eisExample.rar,connection-definitions=EISExampleConnectionPool

An example connection pool for an RA is shown in the following screenshot:

An example connection pool for an RA

Besides the connection pool, the JCA subsystem in WildFly uses two internal thread pools:

• short-running-threads
• long-running-threads

These thread pools are of the type blocking-bounded-queue-thread-pool, and the behavior of this type is described earlier in The thread pool executor subsystem section. The following command is an example of a CLI command to change queue-length for the short-running-threads pool:

/subsystem=jca/workmanager=default/short-running-threads=default:write-attribute(name=queue-length, value=100)

These pools can all be administered and monitored using both the CLI and JMX.
The following example and screenshot show the access to the short-running-threads pool.

Using the CLI, run the following command:

/subsystem=jca/workmanager=default/short-running-threads=default:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{
    "outcome" => "success",
    "result" => {
        "allow-core-timeout" => false,
        "core-threads" => 50,
        "current-thread-count" => 0,
        "handoff-executor" => undefined,
        "keepalive-time" => {
            "time" => 10L,
            "unit" => "SECONDS"
        },
        "largest-thread-count" => 0,
        "max-threads" => 50,
        "name" => "default",
        "queue-length" => 50,
        "queue-size" => 0,
        "rejected-count" => 0,
        "thread-factory" => undefined
    }
}

Using JMX (URI and result in the JConsole UI):

jboss.as:subsystem=jca,workmanager=default,short-running-threads=default

The JCA thread pool can be seen in the following screenshot:

The JCA thread pool

If your application depends heavily on JCA, these pools should be monitored, and perhaps tuned as needed, to provide improved performance.

The Batch API subsystem

The Batch API is new in Java EE 7 and is implemented in WildFly by the Batch subsystem. Internally, it uses an unbounded-queue-thread-pool (see the description earlier in this article). If the application uses the Batch API extensively, the pool settings may need adjustment.

The configuration can be fetched using the CLI or JMX. Using the CLI, run the following command:

/subsystem=batch/thread-pool=batch:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{
    "outcome" => "success",
    "result" => {
        "keepalive-time" => {
            "time" => 100L,
            "unit" => "MILLISECONDS"
        },
        "max-threads" => 10,
        "name" => "batch",
        "thread-factory" => undefined
    }
}

Using JMX (URI and result in the JConsole UI):

jboss.as:subsystem=batch,thread-pool=batch

The Batch API thread pool is shown in the following screenshot:

The Batch API thread pool

The Remoting subsystem

The Remoting subsystem exposes a connector that allows inbound communications with JNDI, JMX, and the EJB subsystem through multiplexing over the HTTP port (default 8080). What happens is that the web container (the Undertow subsystem in WildFly) uses a mechanism called HTTP Upgrade to redirect, for example, EJB3 calls to the Remoting subsystem, if applicable. This new feature in WildFly makes life easier for administrators, as all the scattered ports from earlier versions are now narrowed down to two: one for the application (8080) and one for management (9990). All this is based on the Java NIO API and utilizes a framework called XNIO (http://www.jboss.org/xnio).

The XNIO-based implementation uses a bounded-queue-thread-pool (see the description earlier in this article) with the following attributes:

task-core-threads: This specifies the number of core threads for the Remoting worker task thread pool.
task-max-threads: This specifies the maximum number of threads for the Remoting worker task thread pool.
task-keepalive: This specifies the number of milliseconds to keep noncore Remoting worker task threads alive.
task-limit: This specifies the maximum number of Remoting worker tasks to allow before rejecting.

The settings can be managed using the CLI by running the following command:

/subsystem=remoting:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{
    "outcome" => "success",
    "result" => {
        "worker-read-threads" => 1,
        "worker-task-core-threads" => 4,
        "worker-task-keepalive" => 60,
        "worker-task-limit" => 16384,
        "worker-task-max-threads" => 8,
        "worker-write-threads" => 1,
        "connector" => undefined,
        "http-connector" => {"http-remoting-connector" => undefined},
        "local-outbound-connection" => undefined,
        "outbound-connection" => undefined,
        "remote-outbound-connection" => undefined
    }
}

The Transactions subsystem

The Transactions subsystem has a fail-safe transaction log. It will, by default, store data on disk at ${jboss.server.data.dir}/tx-object-store. For a standalone server instance, this will point to the $WILDFLY_HOME/standalone/data/tx-object-store/ directory. The disk you choose to store your transaction log on must give high performance and must be reliable. A good choice would be a local RAID configured with a write-through cache. Even if remote disk storage is possible, the network overhead can be a performance bottleneck.

One way to point out another path for this object storage is to use the following CLI commands, specifying an absolute path:

/subsystem=transactions:write-attribute(name=object-store-path,value="/mount/diskForTx")
reload

XA – Two-Phase Commit (2PC)

The use of XA is somewhat costly, and it shouldn't be used unless distributed transactions between two or more resources (often databases, but also such things as JMS) are actually necessary. If they are, we strongly recommend using XA instead of trying to build something yourself, such as compensating transactions, to guarantee consistency between the resources. Such solutions can very quickly become quite advanced, and the result will probably not outperform the XA protocol anyway.

Even though WildFly supports Last Resource Commit Optimization (LRCO), it shouldn't be used for performance optimization. It is only intended as a workaround to provide limited support for using one non-XA resource within an XA transaction.

These were the various configurations possible in WildFly.
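As a closing example, every CLI read shown in this article can be scripted, which makes it easy to poll these pools over time. The following is a minimal sketch only; it assumes jboss-cli.sh is on the PATH, the server is running locally with the default management settings, and the named pools (including the test pool from the Monitoring section) actually exist in your configuration:

import subprocess

def cli(command, cli_path="jboss-cli.sh"):
    """Run one WildFly CLI command and return its raw text output."""
    return subprocess.check_output(
        [cli_path, "--connect", "--command=" + command],
        universal_newlines=True)

# Poll the runtime state of the pools discussed in this article.
for resource in (
        "/subsystem=threads/unbounded-queue-thread-pool=test",
        "/subsystem=jca/workmanager=default/short-running-threads=default",
        "/subsystem=batch/thread-pool=batch"):
    print(cli(resource + ":read-resource(include-runtime=true)"))

Values such as rejected-count and largest-thread-count, read at intervals, are the quickest way to spot a pool that needs tuning.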
Configuring organization network services

Packt
22 Aug 2014
9 min read
This article by Lipika Pal, the author of the book VMware vCloud Director Essentials, teaches you to configure organization network services. Edge devices can be used as DNS relay hosts owing to the release of vCloud Networking and Security suite 5.1. However, before we jump onto how to do it and why you should do it, let us discuss the DNS relay host technology itself. (For more resources related to this topic, see here.) If your client machines want to send their DNS queries, they contact DNS relay, which is nothing but a host. The queries are sent by the relay host to the provider's DNS server or any other entity specified using the Edge device settings. The answer received by the Edge device is then sent back to the machines. The Edge device also stores the answer for a short period of time, so any other machine in your network searching for the same address receives the answer directly from the Edge device without having to ask internet servers again. In other words, the Edge device has this tiny memory called DNS cache that remembers the queries. The following diagram illustrates one of the setups and its workings: In this example, you see an external interface configured on Edge to act as a DNS relay interface. On the client side, we configured Client1 VM such that it uses the internal IP of the Edge device (192.168.1.1) as a DNS server entry. In this setup, Client1 requests DNS resolution (step 1) for the external host, google.com, from Edge's gateway internal IP. To resolve google.com, the Edge device will query its configured DNS servers (step 2) and return that resolution to Client1 (step 3). Typical uses of this feature are as follows: DMZ environment Multi-tenant environment Accelerated resolution time Configuring DNS relay To configure DNS relay in a vShield Edge device, perform the following steps. Configure DNS relay when creating an Edge device or when there is an Edge device available. This is an option for an organization gateway and not for a vApp or Org network. Now, let's develop an Edge gateway in an organization vDC while enabling DNS relay by executing the following steps: Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud. Log in to the cloud as the administrator. You will be presented with the Home screen. Click on the Organization VDCs link and on the right-hand side, you will see some organization vDCs created. Click on any organization vDC. Doing this will take you to the vDC page. Click on the Administration page and double-click on Virtual Datacenter. Then click on the Edge Gateways tab. Click on the green-colored + sign as shown in the following screenshot: On the Configure Edge Gateway screen, click on the Configure IP Settings section. Use the other default settings and click on Next. On the Configure External Networks screen, select the external network and click on Add. You will see a checkbox on this same screen. Use the default gateway for DNS relay. Once you do, select it and click on Next, as shown in the following screenshot: Select the default value on the Configure IP Settings page and click on Next. Specify a name for this Edge gateway and click on Next. Review the information and click on Finish. Let's look an alternative way to configure this, assuming you already have an Edge gateway and are trying to configure DNS Relay. Execute the following steps to configure it: Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud. Log in to the cloud as the administrator. 
You will be presented with the Home screen. On the Home screen, click on Edge Gateways. Select an appropriate Edge gateway, right-click, and select Properties, as shown in the following screenshot: Click on the Configure External Networks tab. Scroll down and select the Use default gateway for DNS Relay. checkbox, as shown in the following screenshot: Click on OK. In this section, we learned to configure DNS relay. In the next section, we discuss the configuration of a DHCP service in vCloud Director. DHCP services in vCloud Director vShield Edge devices support IP address pooling using the DHCP service. vShield Edge DHCP service listens on the vShield Edge internal interface for DHCP discovery. It uses the internal interface's IP address on vShield Edge as the default gateway address for all clients. The broadcast and subnet mask values of the internal interface are used for the container network. However, when you translate this with vCloud, not all types of networks support DHCP. That said, the Direct Connect network does not support DHCP. So, only routed and isolated networks support the vCNS DHCP service. The following diagram illustrates a routed organization vCD network: In the preceding diagram, the DHCP service provides an IP address from the Edge gateway to the Org networks connected to it. The following diagram shows how vApp is connected to a routed external network and gets a DHCP service: The following diagram shows a vApp network and a vApp connected to it, and DHCP IP address being obtained from the vShield Edge device: Configuring DHCP pools in vCloud Director The following actions are required to set up Edge DHCP: Add DHCP IP pools Enable Edge DHCP services As a prerequisite, you should know which Edge device is connected to which Org vDC network. Execute the following steps to configure DHCP pool: Open up a supported browser. Go to the URL of the vCD server; for example, https://serverFQDN/cloud. Log in to vCD by typing an administrator user ID and password. Click on the Edge Gateways link. Select the appropriate gateway, right-click on it, and select Edge Gateway Services, as shown in the following screenshot: The first service is DHCP, as shown in the following screenshot: Click on Add. From the drop-down combobox, select the network that you want the DHCP to applied be on. Specify the IP range. Select Enable Pool and click on OK, as shown in the following screenshot: Click on the Enable DHCP checkbox and then on OK. In this section, we learned about the DHCP pool, its functionality, and how to configure it. Understanding VPN tunnels in vCloud Director It's imperative that we first understand the basics of CloudVPN tunnels and then move on to a use case. We can then learn to configure a VPN tunnel. A VPN tunnel is an encrypted or more precisely, encapsulated network path on a public network. This is often used to connect two different corporate sites via the Internet. In vCloud Director, you can connect two organizations through an external network, which can also be used by other organizations. The VPN tunnel prevents users in other organizations from being able to monitor or intercept communications. VPNs must be anchored at both ends by some kind of firewall or VPN device. In vCD, the VPNs are facilitated by vShield Edge devices. When two systems are connected by a VPN tunnel, they communicate like they are on the same network. 
Let's have a look at the different types of VPN tunnels you can create in vCloud Director:

- VPN tunnels between two organization networks in the same organization
- VPN tunnels between two organization networks in two different organizations
- VPN tunnels between an organization network and a remote network outside of VMware vCloud

While only a system administrator can create an organization network, organization administrators have the ability to connect organization networks using VPN tunnels. If the VPN tunnel connects two different organizations, the organization administrator of each organization must enable the connection. A VPN cannot be established between two different organizations without the authorization of either both organization administrators or the system administrator. It is also possible to connect VPN tunnels between two different organizations in two different instances of vCloud Director.

The following is a diagram of a VPN connection between two different organization networks in a single organization:

The following diagram shows a VPN tunnel between two organizations. The basic principles are exactly the same.

vCloud Director can also connect VPN tunnels to remote devices outside of vCloud. These devices must be IPSec-enabled and can be network switches, routers, firewalls, or individual computer systems. The ability to establish a VPN tunnel to a device outside of vCD can significantly increase the flexibility of vCloud communications. The following diagram illustrates a VPN tunnel to a remote network:

Configuring a virtual private network

To configure an organization-to-organization VPN tunnel in vCloud Director, execute the following steps:

1. Start a browser and enter the URL of the vCD server, for example, https://serverFQDN/cloud.
2. Log in to vCD using the administrator user ID and password.
3. Click on the Manage & Monitor tab.
4. Click on the Edge Gateways link in the panel on the left-hand side.
5. Select an appropriate gateway, right-click, and select Edge Gateway Services.
6. Click on the VPN tab.
7. Click on Configure Public IPs.
8. Specify a public IP and click on OK, as shown in the following screenshot:
9. Click on Add to add the VPN endpoint.
10. Click on Establish VPN to and specify an appropriate VPN type (in this example, it is the first option), as shown in the following screenshot:
11. If this VPN is within the same organization, select the Peer Edge Gateway option from the dropdown. Then, select the local and peer networks.
12. Select the local and peer endpoints, and click on OK.
13. Click on Enable VPN and then on OK.

This section assumes that either the firewall service is disabled or the default rule is set to accept all on both sides. In this section, we learned what a VPN is and how to configure it within a vCloud Director environment. In the next section, we discuss static routing, its various use cases, and its implementation.

Apache Maven and m2eclipse

Packt
22 Aug 2014
8 min read
In this article by Sanjay Shah, author of the book Maven for Eclipse, we will learn the following topics:

- The Maven project structure
- Downloading Maven
- Maven versus Ant
- Creating a Maven project
- Checking out and importing a Maven project
- The Maven project build architecture
- POM (Project Object Model)
- POM relationships
- Project dependencies
- Dependency scopes
- Plugins and goals
- Installing Maven
- Writing unit tests
- Generating site documentation and HTML reports
- m2eclipse preferences

(For more resources related to this topic, see here.)

The Maven project structure

Maven, as stated in earlier chapters, follows convention over configuration.

Downloading Maven

To download Maven, please visit http://maven.apache.org/download.cgi. Click on the latest version, apache-maven-x.x.x-bin.zip; at the time of writing this, the current version is apache-maven-3.2.1-bin.zip. Download the latest version as shown in the following screenshot:

Maven versus Ant

Before the emergence of Maven, Ant was the most widely used build tool across Java projects. Ant evolved from the concept of make files in C/C++ programming into a platform-independent build tool. Ant uses XML files to define the build process and its corresponding dependencies.

Creating a Maven project

m2eclipse makes the creation of Maven projects simple. Maven projects can be created in the following two ways:

- Using an archetype
- Without using an archetype

Using an archetype

An archetype is a plugin that allows a user to create Maven projects using a defined template known as an archetype. There are different archetypes for different types of projects. Archetypes are primarily available to create the following:

- Maven plugins
- Simple web applications
- Simple projects

Checking out a Maven project

Checking out a Maven project means checking it out from a source code versioning system. Before we proceed, we need to make sure we have the Maven connector installed for the corresponding SCM we plan to use.

Importing a Maven project

Importing a Maven project is like importing any other Java project. The steps to import a Maven project are as follows:

1. From the File menu, click on Import.
2. In the Import source window that appears, expand Maven and click on Existing Maven Projects as shown in the following screenshot:
3. In the next wizard, we have to choose the Maven project's location. Navigate to the corresponding location using the Browse... button, and click on Finish to finish the import as shown in the following screenshot; the project will be imported into the workspace:

The Maven project build architecture

The following figure shows the common build architecture for Maven projects. Essentially, every Maven project contains a POM file that defines every aspect of the project's essentials. Maven uses the POM details to decide upon different actions and artifact generation.

POM (Project Object Model)

POM stands for Project Object Model. It is primarily an XML representation of a project in a file named pom.xml. POM is the identity of a Maven project and without it, the project has no existence. It is analogous to a Makefile or a build.xml file of Ant. In a nutshell, the contents of a POM fall under the following four categories:

- Project information: This provides general information about the project such as the project name, URL, organization, list of developers and contributors, license, and so on.
- POM relationships: Only in rare cases is a project a single entity that does not depend on other projects.
This category provides information about the project's dependencies, inheritance from a parent project, its submodules, and so on.
- Build settings: These settings provide information about the build configuration of Maven. Usually, behavior customizations such as the location of the sources and tests, report generation, and build plugins are done here.
- Build environment: This specifies and activates the build settings for different environments. It uses profiles to differentiate between development, testing, and production environments.

POM relationships

POM relationships identify the relationship a POM possesses with respect to other modules, projects, and other POMs. This relationship could be in the form of dependencies, multimodule projects, parent-child relationships (also known as inheritance), and aggregation.

A Maven repository can be one of the following types:

- Local
- Central
- Remote

The local repository

A local repository is one that resides on the same machine where a Maven build runs.

The central repository

The central repository is the repository provided by the Maven community. It contains a large collection of commonly used libraries. This repository comes into play when Maven does not find libraries in the local repository. The central repository can be found at http://search.maven.org/#browse.

The remote repository

Enterprises usually maintain their own repositories for the libraries that are being used in their projects. These differ from the local repository; a remote repository is maintained on a separate server, different from the developer's machine, and is accessible within the organization.

Project dependencies

A powerful feature of Maven is its dependency management for any project. Dependencies may be external libraries or internal (in-house) libraries/projects.

Dependency scopes

Dependency scopes control the availability of dependencies in a classpath and determine whether they are packaged along with an application.

Plugins and goals

Maven, essentially, is a plugin framework where every action is the result of some plugin. Each plugin consists of goals (also called Mojos) that define the action to be taken. To put it in simple words, a goal is a unit of work. For example, the compiler plugin has compile as the goal that compiles the source of the project.

Installing Maven

Maven's installation is a simple two-step process:

1. Setting up the Maven home, that is, the M2_HOME variable
2. Adding the Maven home to the PATH variable

Writing unit tests

Writing unit tests is part of good practice in software development. Maven's test phase executes unit tests and generates the corresponding report. In this section, we will learn about writing a simple unit test for our utility class ConversionUtil, and in the next section, we will see how to execute it and generate reports.

Generating site documentation

One of the integral features of Maven is that it eases artifact and site documentation generation. To generate site documentation, add the corresponding dependency in the pom file.

Generating unit tests – HTML reports

In the preceding section, we ran the unit tests, and the results were generated in the txt and xml formats. Often, developers need to generate more readable reports. Also, as a matter of fact, the reports should be a part of the site documentation for better collaboration, with the information available in one place.
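Before moving on to the m2eclipse tooling, it may help to see several of these pieces together: project coordinates, a test-scoped dependency, a build plugin whose goals are bound to the lifecycle, and a reporting plugin for the site documentation. The following is a minimal, illustrative pom.xml sketch; the group ID, artifact ID, and version numbers are assumptions chosen for demonstration, not values taken from the book's project:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
        <modelVersion>4.0.0</modelVersion>

        <!-- Project coordinates (illustrative placeholders) -->
        <groupId>com.example</groupId>
        <artifactId>conversion-demo</artifactId>
        <version>1.0-SNAPSHOT</version>
        <packaging>jar</packaging>

        <dependencies>
            <!-- A test-scoped dependency: available only on the test classpath
                 and never packaged with the application -->
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.11</version>
                <scope>test</scope>
            </dependency>
        </dependencies>

        <build>
            <plugins>
                <!-- A plugin whose goals (units of work) run during the build -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.1</version>
                    <configuration>
                        <source>1.7</source>
                        <target>1.7</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>

        <reporting>
            <plugins>
                <!-- Renders the unit test results as an HTML report
                     when the site documentation is generated -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-surefire-report-plugin</artifactId>
                    <version>2.17</version>
                </plugin>
            </plugins>
        </reporting>
    </project>

With a file like this in place, mvn test runs the unit tests and mvn site produces the HTML documentation, including the test report.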
Other features in m2eclipse

The available features are as follows:

- Add Dependency
- Add Plugin
- New Maven Module Project
- Download JavaDoc
- Download Sources
- Update Project
- Disable Workspace Resolution
- Disable Maven Nature

Add Dependency

This allows us to add dependencies to the Maven project. Up until now, we have been editing the pom.xml file by hand to add dependencies to it.

A form-based POM editor

m2eclipse provides the option of editing the pom file using a form-based POM editor. In earlier chapters, we worked with the XML tags and edited the pom file directly. While editing an XML file directly, knowledge of the tags is required, and there is a high chance that the user will make an error. A form-based editor reduces the chance of simple errors and eases the editing of a pom file with minimal or no XML knowledge required behind the scenes.

Analyzing project dependencies

The POM editor has a Dependencies tab that provides a glance at the dependencies and an option to manage the dependencies of the project.

Working with repositories

To browse through the repositories, navigate to Window | Show View and click on Other... as follows:

The Maven Repositories view consists of the following types:

- Local Repositories
- Global Repositories
- Project Repositories
- Custom Repositories

m2eclipse preferences

To open the m2eclipse preferences, navigate to Window | Preferences. In the Preferences window, search for maven in the filter textbox as follows:

The available Maven preferences are as follows:

- Maven: This allows us to set various options for Maven, such as Offline, Debug Output, Download artifact sources, Download artifact JavaDoc, and so on.
- Archetypes: This allows us to add, remove, and edit the Maven archetype catalogs.
- Discovery: This is used to discover the m2e connectors available for use.
- Installations: This shows the Maven installations and allows us to choose which Maven installation to use.
- Lifecycle Mappings: This allows us to customize the project build lifecycle used by m2eclipse for Maven projects.
- Templates: This shows the list of all the templates used by Maven. It also provides options to add new templates and to edit, remove, import, and export templates.
- User Interface and User Settings: User Interface allows us to set XML file options, and User Settings allows us to use a custom settings file and reindex the local repository.
- Warnings: This allows us to enable/disable the warning for duplicate groupId and version across parent-child POMs.

Summary

In this article, we studied the Maven project structure, how to download Maven and import a Maven project using m2eclipse, and the available Maven preferences.

Resources for Article:

Further resources on this subject:

- JSON to POJO Using Gson in Android Studio [article]
- Unit Testing Apps with Android Studio [article]
- Things to Consider When Migrating to the Cloud [article]

Using Canvas and D3

Packt
21 Aug 2014
9 min read
This article by Pablo Navarro Castillo, author of Mastering D3.js, explains that most of the time, we use D3 to create SVG-based charts and visualizations. If the number of elements to render is huge, or if we need to render raster images, it can be more convenient to render our visualizations using the HTML5 canvas element. In this article, we will learn how to use the force layout of D3 and the canvas element to create an animated network visualization with random data.

(For more resources related to this topic, see here.)

Creating figures with canvas

The HTML canvas element allows you to create raster graphics using JavaScript. It was first introduced in HTML5. It enjoys more widespread support than SVG and can be used as a fallback option. Before diving deeper into integrating canvas and D3, we will construct a small example with canvas.

The canvas element should have the width and height attributes. This alone will create an invisible figure of the specified size:

    <!-- Canvas Element -->
    <canvas id="canvas-demo" width="650px" height="60px"></canvas>

If the browser supports the canvas element, it will ignore any element inside the canvas tags. On the other hand, if the browser doesn't support canvas, it will ignore the canvas tags, but it will interpret the content of the element. This behavior provides a quick way to handle the lack of canvas support:

    <!-- Canvas Element -->
    <canvas id="canvas-demo" width="650px" height="60px">
        <!-- Fallback image -->
        <img src="img/fallback-img.png" width="650" height="60"></img>
    </canvas>

If the browser doesn't support canvas, the fallback image will be displayed. Note that unlike the <img> element, the canvas closing tag (</canvas>) is mandatory. To create figures with canvas, we don't need special libraries; we can create the shapes using the canvas API:

    <script>
        // Graphic Variables
        var barw = 65,
            barh = 60;

        // Get the canvas element from the DOM.
        var canvas = document.getElementById('canvas-demo');

        // Get the rendering context.
        var context = canvas.getContext('2d');

        // Array with colors, to have one rectangle of each color.
        var color = ['#5c3566', '#6c475b', '#7c584f', '#8c6a44',
                     '#9c7c39', '#ad8d2d', '#bd9f22', '#cdb117',
                     '#ddc20b', '#edd400'];

        // Set the fill color and render ten rectangles.
        for (var k = 0; k < 10; k += 1) {
            // Set the fill color.
            context.fillStyle = color[k];
            // Create a rectangle in incremental positions.
            context.fillRect(k * barw, 0, barw, barh);
        }
    </script>

We use the DOM API to access the canvas element with the canvas-demo ID and to get the rendering context. Then we set the color using the fillStyle method, and use the fillRect canvas method to create a small rectangle. Note that we need to change fillStyle every time, or all the following shapes will have the same color. The script renders a series of rectangles, each one filled with a different color, shown as follows:

A graphic created with canvas

Canvas uses the same coordinate system as SVG, with the origin in the top-left corner, the horizontal axis augmenting to the right, and the vertical axis augmenting to the bottom. Instead of using the DOM API to get the canvas node, we could have used D3 to create the node, set its attributes, and created scales for the color and position of the shapes. Note that the shapes drawn with canvas don't exist in the DOM tree; so, we can't use the usual D3 pattern of creating a selection, binding the data items, and appending the elements if we are using canvas.
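As mentioned in the preceding paragraph, D3 could create the canvas node and build the scales for us instead of the plain DOM API. The following is a minimal sketch of that variant, using the D3 v3 API used throughout this article; the container selector and the scale domains are illustrative assumptions, not code from the book:

    // Append a canvas element with D3 and get the underlying node;
    // node() is needed because a D3 selection has no canvas API methods.
    var canvas = d3.select('#canvas-demo-d3').append('canvas')
        .attr('width', 650)
        .attr('height', 60)
        .node();

    var context = canvas.getContext('2d');

    // A linear scale for the horizontal position of each bar...
    var xScale = d3.scale.linear().domain([0, 10]).range([0, 650]);

    // ...and a linear scale that interpolates the bar colors
    // between two endpoint colors (D3 interpolates colors natively).
    var colorScale = d3.scale.linear()
        .domain([0, 9])
        .range(['#5c3566', '#edd400']);

    for (var k = 0; k < 10; k += 1) {
        context.fillStyle = colorScale(k);
        context.fillRect(xScale(k), 0, 65, 60);
    }

The drawing calls are unchanged; D3 only replaces the node creation and the by-hand position and color arithmetic.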
Creating shapes

Canvas has fewer primitives than SVG. In fact, almost all shapes must be drawn with paths, and more steps are needed to create a path. To create a shape, we need to open the path, move the cursor to the desired location, create the shape, and close the path. Then, we can draw the path by filling the shape or rendering the outline. For instance, to draw a red semicircle centered at (325, 30) with a radius of 20, write the following code:

    // Create a red semicircle.
    context.beginPath();
    context.fillStyle = '#ff0000';
    context.moveTo(325, 30);
    context.arc(325, 30, 20, Math.PI / 2, 3 * Math.PI / 2);
    context.fill();

The moveTo method is a bit redundant here, because the arc method moves the cursor implicitly. The arguments of the arc method are the x and y coordinates of the arc center, the radius, and the starting and ending angles of the arc. There is also an optional Boolean argument to indicate whether the arc should be drawn counterclockwise. A basic shape created with the canvas API is shown in the following screenshot:

Integrating canvas and D3

We will create a small network chart using the force layout of D3 and canvas instead of SVG. To make the graph look more interesting, we will randomly generate the data. We will generate 250 sparsely connected nodes. The nodes and links will be stored as attributes of the data object:

    // Number of Nodes
    var nNodes = 250,
        createLink = false;

    // Dataset Structure
    var data = {nodes: [], links: []};

We will append nodes and links to our dataset. We will create nodes with a radius attribute, randomly assigning it a value of either 2 or 4, as follows:

    // Iterate in the nodes
    for (var k = 0; k < nNodes; k += 1) {
        // Create a node with a random radius.
        data.nodes.push({radius: (Math.random() > 0.3) ? 2 : 4});
        // Create random links between the nodes.
    }

We will create a link with a probability of 0.1 only if the difference between the source and target indexes is less than 8. The idea behind this way of creating links is to have only a few connections between the nodes:

    // Create random links between the nodes.
    for (var j = k + 1; j < nNodes; j += 1) {
        // Create a link with probability 0.1
        createLink = (Math.random() < 0.1) && (Math.abs(k - j) < 8);
        if (createLink) {
            // Append a link with variable distance between the nodes
            data.links.push({
                source: k,
                target: j,
                dist: 2 * Math.abs(k - j) + 10
            });
        }
    }

We will use the radius attribute to set the size of the nodes. The links will contain the distance between the nodes and the indexes of the source and target nodes. We will create variables to set the width and height of the figure:

    // Figure width and height
    var width = 650,
        height = 300;

We can now create and configure the force layout. As we did in the previous section, we will set the charge strength to be proportional to the area of each node. This time, we will also set the distance between the links, using the linkDistance method of the layout:

    // Create and configure the force layout
    var force = d3.layout.force()
        .size([width, height])
        .nodes(data.nodes)
        .links(data.links)
        .charge(function(d) { return -1.2 * d.radius * d.radius; })
        .linkDistance(function(d) { return d.dist; })
        .start();

We can create a canvas element now. Note that we should use the node method to get the canvas element, because the append and attr methods both return a selection, which doesn't have the canvas API methods:

    // Create a canvas element and set its size.
    var canvas = d3.select('div#canvas-force').append('canvas')
        .attr('width', width + 'px')
        .attr('height', height + 'px')
        .node();

We get the rendering context. Each canvas element has its own rendering context. We will use the '2d' context to draw two-dimensional figures. At the time of writing this, some browsers support the webgl context; more details are available at https://developer.mozilla.org/en-US/docs/Web/WebGL/Getting_started_with_WebGL. Refer to the following '2d' context:

    // Get the canvas context.
    var context = canvas.getContext('2d');

We register an event listener for the force layout's tick event. As canvas doesn't remember previously created shapes, we need to clear the figure and redraw all the elements on each tick:

    force.on('tick', function() {
        // Clear the complete figure.
        context.clearRect(0, 0, width, height);

        // Draw the links ...
        // Draw the nodes ...
    });

The clearRect method cleans the figure under the specified rectangle. In this case, we clean the entire canvas. We can draw the links using the lineTo method. We iterate through the links, beginning a new path for each link, moving the cursor to the position of the source node, and creating a line towards the target node. We draw the line with the stroke method:

    // Draw the links
    data.links.forEach(function(d) {
        // Draw a line from source to target.
        context.beginPath();
        context.moveTo(d.source.x, d.source.y);
        context.lineTo(d.target.x, d.target.y);
        context.stroke();
    });

We iterate through the nodes and draw each one. We use the arc method to represent each node with a black circle:

    // Draw the nodes
    data.nodes.forEach(function(d, i) {
        // Draws a complete arc for each node.
        context.beginPath();
        context.arc(d.x, d.y, d.radius, 0, 2 * Math.PI, true);
        context.fill();
    });

We obtain a constellation of disconnected network graphs, slowly gravitating towards the center of the figure. Using the force layout and canvas to create a network chart is shown in the following screenshot:

One might think that erasing all the shapes and redrawing them again and again would have a negative impact on performance. In fact, it is sometimes faster to draw the figures using canvas, because this way the browser doesn't have to manage the DOM tree of the SVG elements (though it still has to redraw them if the SVG elements change).

Summary

In this article, we learned how to use the force layout of D3 and the canvas element to create an animated network visualization with random data.

Resources for Article:

Further resources on this subject:

- Interacting with Data for Dashboards [Article]
- Kendo UI DataViz – Advance Charting [Article]
- Visualizing my Social Graph with d3.js [Article]

Quizzes and Interactions in Camtasia Studio

Packt
21 Aug 2014
12 min read
This article by David B. Demyan, the author of the book eLearning with Camtasia Studio, covers the different types of interactions, a description of how interactions are created and how they function, and the quiz feature. In this article, we will cover the following specific topics:

- The types of interactions available in Camtasia Studio
- Video player requirements
- Creating simple action hotspots
- Using the quiz feature

(For more resources related to this topic, see here.)

Why include learner interactions?

Interactions in e-learning support cognitive learning, the application of behavioral psychology to teaching. Students learn a lot when they perform an action based on the information they are presented. Without exhausting the volumes written about this subject, your own background has probably prepared you for creating effective materials that support cognitive learning. To boil it down for our purposes, you present information in chunks and ask learners to demonstrate whether they have received the signal. In the classroom, this is immortalized as a teacher presenting a lecture and asking questions, a basic educational model. In another scenario, it might be an instructor showing a student how to perform a mechanical task and then asking the student to repeat the same task.

We know from experience that learners struggle with concepts if you present too much information too rapidly without checking whether they understand it. In e-learning, the most effective ways to prevent confusion involve chunking information into small, digestible bites and mapping them into an overall program that allows the learner to progress in a logical fashion, all the while interacting and demonstrating comprehension. Interaction is vital to keep your students awake and aware. Interaction, or two-way communication, can take your e-learning video to the next level: a true cognitive learning experience.

Interaction types

While Camtasia Studio does not pretend to be a full-featured interactive authoring tool, it does contain some features that allow you to build interactions and quizzes. This section defines the features that let learners take action while viewing an e-learning video when you prompt them to interact. There are three types of interactions available in Camtasia Studio:

- Simple action hotspots
- Branching hotspots
- Quizzes

You are probably already thinking of ways these techniques can help support cognitive learning.

Simple action hotspots

Hotspots are click areas. You indicate where the hotspot is using a visual cue, such as a callout. Camtasia allows you to designate the area covered by the callout as a hotspot and define the action to take when it is clicked. An example is to take the learner to another time in the video when the hotspot is clicked. Another click could take the learner back to the original place in the video.

Quizzes

Quizzes are simple questions you can insert in the video, created and implemented to conform to your testing strategy. The question types available are as follows:

- Multiple choice
- Fill in the blanks
- Short answers
- True/false

Video player requirements

Before we learn how to create interactions in Camtasia Studio, you should know about some special video player requirements. A simple video file playing on a computer cannot be interactive by itself. A video created and produced in Camtasia Studio without some additional program elements cannot react when you click on it, except for what the video player tells it to do.
For example, the default player for YouTube videos stops and starts the video when you click anywhere in the video space. Videos created with Camtasia are able to recognize where clicks occur and what actions to take. You provide the click instructions when you set up the interaction. These instructions are required, for example, to intercept the clicking action, determine where exactly the click occurred, and link that spot with a command and destination. These click instructions may be any combination of HyperText Markup Language (HTML), HTML5, JavaScript, and Flash ActionScript. Camtasia takes care of creating the coding behind the scenes, associated with the video player being used. In the case of videos produced with Camtasia Studio, to implement any form of interactivity, you need to select the default Smart Player output options when producing the video.

Creating simple hotspots

The most basic interaction is clicking a hotspot layered over the video. You can create an interactive hotspot for many purposes, including the following:

- Taking learners to a specific marker or frame within the video, as determined on the timeline
- Allowing learners to replay a section of the video
- Directing learners to a website or document to view reference material
- Showing a pop up with additional information, such as a phone number or web link

Try it – creating a hotspot

If you are building the exercise project featured in this book, let's use it to create an interactive hotspot. The task in this exercise is to pause the video and add a Replay button to allow viewers to review a task. After the replay, a prompt will be added to resume the video from where it was paused.

Inserting the Replay/Continue buttons

The first step is to insert a Replay button to allow viewers to review what they just saw, or to continue without reviewing. This involves adding two hotspot buttons on the timeline, which can be done by performing the following steps:

1. Open your exercise project in Camtasia Studio, or one of your own projects where you can practice.
2. Position the play head right after the part where text is shown being pasted into the CuePrompter window.
3. From the Properties area, select Callouts from the task tabs above the timeline.
4. In the Shape area, select Filled Rounded Rectangle (at the upper-right corner of the drop-down selection). A shape is added to the timeline.
5. Set the Fade in and Fade out durations to about half a second.
6. Select the Effects dropdown and choose Style.
7. Choose the 3D Edge style. It looks like a raised button.
8. Set any other formatting so the button looks the way you want in the preview window.
9. In the Text area, type your button text. For the sample project, enter Replay Copy & Paste.
10. Select the button in the preview window and make a copy of it. You can use Ctrl + C to copy and Ctrl + V to paste the button.
11. In the second copy of the button, select the text and retype it as Continue. It should be stacked on the timeline as shown in the following screenshot:
12. Select the Continue button in the preview window and drag it to the right-hand side, at the same height and distance from the edge. The final placement of the buttons is shown in the sample project.
13. Save the project.

Adding a hotspot to the Continue button

The buttons are currently inactive images on the timeline. Viewers could click them in the produced video, but nothing would happen. To make them active, enable the Hotspot properties for each button.
To add a hotspot to the Continue button, perform the following steps:

1. With the Continue button selected, select the Make hotspot checkbox in the Callouts panel.
2. Click on the Hotspot Properties... button to set the properties for the callout button.
3. Under Actions, make sure to select Click to continue.
4. Click on OK.

The Continue button now has an active hotspot assigned to it. When published, the video will pause when the button appears. When the viewer clicks on Continue, the video will resume playing. You can test the video and the operation of the interactive buttons as described later in this article.

Adding a hotspot to the Replay button

Now, let's move on to create an action for the Replay copy & paste button:

1. Select the Replay copy & paste button in the preview window.
2. Select the Make hotspot checkbox in the Callouts panel.
3. Click on the Hotspot properties... button.
4. Under Actions, select Go to frame at time.
5. Enter the time code for the spot on the timeline where you want to start the replay. In the sample video, this is around 0:01:43;00, just before text is copied in the script.
6. Click on OK.
7. Save the project.

The Replay copy & paste button now has an active hotspot assigned to it. Later, when published, the video will pause when the button appears. When viewers click on Replay copy & paste, the video will be repositioned at the time you entered and begin playing from there.

Using the quiz feature

A quiz added to a video sets it apart. The addition of knowledge checks and quizzes to assess your learners' understanding of the material puts the video into the true e-learning category. By definition, a knowledge check is a way for students to check their understanding without worrying about scoring. Typically, feedback is given to the student so that they better understand the material, the question, and their answer. The feedback can be terse, such as correct and incorrect, or it can be verbose, informing the student whether the answer is correct and perhaps giving additional information, a hint, or even the correct answers, depending on your strategy in creating the knowledge check.

A quiz can take the same form as a knowledge check, but a record of the student's answers is created and reported to an LMS or via an e-mail report. Feedback to the student is optional, again depending on your testing strategy. In Camtasia Studio, you can insert a quiz question or set of questions anywhere on the timeline you deem appropriate. This is done with the Quizzing task tab.

Try it – inserting a quiz

In this exercise, you will select a spot on the timeline to insert a quiz, enable the Quizzing feature, and write some appropriate questions following the sample project, Using CuePrompter.

Creating a quiz

Place your quiz after you have covered a block of information. The sample project, Using CuePrompter, is a very short task-based tutorial, showing some basic steps. Assume for now that you are teaching a course on CuePrompter and need to assess students' knowledge. I believe a good place for a quiz is after the commands to scroll forward, speed up, slow down, and scroll in reverse. Let's give it a try with multiple choice and true/false questions:

1. Position the play head at the appropriate part of the timeline. In the sample video, the end of the scrolling command description is at about 3 minutes 12 seconds.
2. Select Quizzing in the task tabs. If you do not see the Quizzing tab above the timeline, select the More tab to reveal it.
3. Click on the Add quiz button to begin adding questions.
A marker appears on the timeline where your quiz will appear during the video, as illustrated in the following screenshot:

4. In the Quiz panel, add a quiz name. In the sample project, the quiz is entitled CuePrompter Commands.
5. Scroll down to Question type. Make sure Multiple Choice is selected from the dropdown.
6. In the Question box, type the question text. In the sample project, the first question is With text in the prompter ready to go, the keyboard control to start scrolling forward is _________________.
7. In the Answers box, double-click on the checkbox text that says Default Answer Text. Retype the answer Control-F.
8. In the next checkbox text that says <Type an answer choice here>, double-click on it and add the second possible answer, Spacebar. Check the box next to it to indicate that it is the correct answer.
9. Add two more choices: Alt-Insert and Tab. Your Quiz panel should look like the following screenshot:
10. Click on Add question.
11. From the Question type dropdown, select True/False.
12. In the Question box, type You can stop CuePrompter with the End key.
13. In Answers, select False.
14. For the final question, click on Add question again.
15. From the Question type dropdown, select Multiple Choice.
16. In the Question box, type Which keyboard command tells CuePrompter to reverse?
17. Enter the four possible answers: Left arrow, Right arrow, Down arrow, and Up arrow.
18. Select Down arrow as the correct answer.
19. Save the project.

Now you have entered three questions and their answer choices, while indicating the choice that will be scored correct if selected. Next, preview the quiz to check its format and function.

Previewing the quiz

Camtasia Studio allows you to preview quizzes to check the formatting, wording, and scoring. Continue to follow along in the exercise project and perform the following steps:

1. Leave checkmarks in the Score quiz and Viewer can see answers after submitting boxes.
2. Click on the Preview button. A web page opens in your Internet browser showing the questions, as shown in the following screenshot:
3. Select an answer and click on Next. The second quiz question is displayed.
4. Select an answer and click on Next. The third quiz question is displayed.
5. Select an answer and click on Submit Answers. As this is the final question, there is no Next.
6. Since we left the Score quiz and Viewer can see answers after submitting options selected, the learner receives a prompt, as shown in the following screenshot:
7. Click on View Answers to review the answers you gave. Correct responses are shown with a green checkmark and incorrect ones are shown with a red X mark.
8. If you do not want your learners to see the answers, remove the checkmark from Viewer can see answers after submitting.
9. Exit the browser to discontinue previewing the quiz.
10. Save the project.

This completes the Try it exercise for inserting and previewing a quiz in your video e-learning project.

Summary

In this article, we learned about the different types of interactions, video player requirements, creating simple action hotspots, and inserting and previewing a quiz.

Resources for Article:

Further resources on this subject:

- Introduction to Moodle [article]
- Installing Drupal [article]
- Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49 [article]

Setting up PrimeFaces

Packt
21 Aug 2014
8 min read
This article is written by Sudheer Jonna and Ramkumar Pillai, the authors of PrimeFaces Blueprints. This article will give you an introduction to setting up PrimeFaces using Maven or Ant. The specific topics that will be covered in this article are as follows:

- An introduction to PrimeFaces, its features, and its role in customized application development
- PrimeFaces setup and configuration for development

(For more resources related to this topic, see here.)

An introduction to JavaServer Faces and PrimeFaces

JavaServer Faces (JSF) is a component-based MVC framework used for building rich User Interface (UI) Java web applications. JSF is a powerful framework with a six-phase lifecycle, and it automates common web application tasks such as decoding user input, processing input validations and conversions, and rendering or updating the output in the form of generated HTML. Page authors can easily build a customized UI by just dragging and dropping reusable components onto the page, giving modern UI applications a rich look and feel. JSF has built-in support for input conversions and validations, and Ajax support for the components.

Given the growing popularity of JSF technology, many open source and proprietary UI component frameworks were created to provide user interfaces with a fancier look and feel. These component suites were created by introducing their own new components and extending the standard JSF components with additional features. Among all these component suites, PrimeFaces is the best and most popular component suite considering its features, quick releases with more new components and bug fixes, ease of development, extensive documentation, and support from its community.

PrimeFaces is a leading, lightweight, open source user interface component library for JSF-based web applications. In the JSF world, it is miles ahead of the other existing component sets because of the many features it has at its disposal:

- Over 100 components
- Built-in Ajax-supported components
- Ease of development, as there are no configurations required
- A single JAR install without the need for any mandatory third-party libraries
- More than 30 predefined themes, plus custom themes built using the ThemeRoller support
- Multi-browser support

These strengths make it well worth considering when developing web applications. Page authors and application developers can easily develop web pages by simply dragging and dropping the components onto the page and then adding the required features in a step-by-step fashion: customizing the CSS style classes, extending the component widgets, and rendering according to custom requirements.

Setting up and configuring PrimeFaces

PrimeFaces is a lightweight single library with minimal external dependencies. The only external libraries required are those supporting component-specific features. Apart from these, projects only require a JSF runtime implementation such as Oracle Mojarra or Apache MyFaces. The setup and configuration for Maven and non-Maven users is explained in the following two sections.

Setting up and configuring using Maven

In this section, we will define the various Maven configuration steps required to run a PrimeFaces-based application.
Perform the following steps:

1. Configure the PrimeFaces library dependency or Maven coordinates in your project pom.xml file as shown here:

    <dependency>
        <groupId>org.primefaces</groupId>
        <artifactId>primefaces</artifactId>
        <version>5.0</version>
    </dependency>

2. Add the PrimeFaces repository to the repositories list of your project pom.xml file as follows:

    <repository>
        <id>prime-repo</id>
        <name>Prime Repo</name>
        <url>http://repository.primefaces.org</url>
    </repository>

Note that this step is not required for releases after PrimeFaces 4.0, as the team started publishing the library in the Maven central repository.

3. Configure either of the JSF runtime implementations, Oracle Mojarra or Apache MyFaces. Choose one of the following two blocks of code.

This is the runtime implementation for Oracle Mojarra:

    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-impl</artifactId>
        <version>2.2.6</version>
    </dependency>

This is the runtime implementation for Apache MyFaces:

    <dependency>
        <groupId>org.apache.myfaces.core</groupId>
        <artifactId>myfaces-impl</artifactId>
        <version>2.2</version>
    </dependency>

4. Depending on the component-specific features, you may need further dependencies, categorized below into mandatory and optional.

The following are the mandatory dependencies:

- JSF runtime (2.0, 2.1, or 2.2): Oracle's Mojarra or the Apache MyFaces implementation
- PrimeFaces (5.0): The PrimeFaces UI component library

The following are the optional dependencies:

- iText (2.7): To use the DataExporter component for the PDF format
- POI (3.7): To use the DataExporter component for the Excel format
- Rome (1.0): To use the Feed reader component
- commons-fileupload (1.3): To use the FileUpload component (when the web server / application server doesn't support Servlet 3.0)
- commons-io (2.2): To use the FileUpload component

Setting up and configuring for non-Maven (or Ant) users

In this section, we will define the configuration steps required for non-Maven (or Ant) users to run a PrimeFaces-based application. Perform the following steps:

1. Download the PrimeFaces library from the official download section of PrimeFaces at http://www.primefaces.org/downloads.html. Following this, add the PrimeFaces JAR library to the classpath.
2. Download either the Oracle Mojarra or the Apache MyFaces JSF runtime from its official site and add it to the classpath. You can access the JSF library at Oracle by going to https://javaserverfaces.java.net/2.2/download.html, or alternatively access it at Apache by going to http://myfaces.apache.org/download.html.
3. Download the component-specific third-party libraries from their official sites and add them to the classpath.

Application-level configuration

As you know, PrimeFaces is a JSF-based component suite. Therefore, the first thing you have to do is configure the JSF Faces Servlet in your project deployment descriptor file (web.xml).
The following is a mandatory configuration for any JSF-based application:

    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>/faces/*</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.jsf</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.faces</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>*.xhtml</url-pattern>
    </servlet-mapping>

It is not mandatory to use all of these JSF extensions or servlet mappings. Any one of the preceding servlet mappings is enough to configure Faces Servlet in your project. There are other configurations that can be made to your project using the following context parameters:

- THEME (default: Aristo): Used to apply a specific theme to your application. All theme names are valid values.
- SUBMIT (default: full): Enables the Ajax submit mode. The valid values are full and partial.
- DIR (default: ltr): Defines the component content orientation. The valid values are ltr and rtl.
- RESET_VALUES (default: false): When this is enabled, any Ajax-updated inputs are reset first. The valid values are true and false.
- SECRET (default: PrimeFaces): Defines the secret key used to encrypt and decrypt the values of the expressions exposed when rendering StreamedContents.
- CLIENT_SIDE_VALIDATION (default: false): Controls client-side validation of the form components.
- UPLOADER (default: auto): Defines the file uploader mode. The valid values are auto, native, and commons.

As an example, the following code snippet configures a theme with context-param:

    <context-param>
        <param-name>primefaces.THEME</param-name>
        <param-value>delta</param-value>
    </context-param>

Checking the JSF runtime compatibility

PrimeFaces 5.0 supports all the JSF runtime versions (2.0, 2.1, and 2.2) at the same time using feature detection, without having to compile against a dependency on any specific version. In other words, some of the available features depend on the runtime version used. The newly released JSF 2.2 version supports the increasingly popular HTML5. The runtime detection policy of PrimeFaces is quite useful for the newly added features in the JSF library. The JSF 2.2 passthrough attributes feature is a good example of the runtime detection policy; that is, a passthrough attribute only gets rendered if the runtime is JSF 2.2.

An introduction to the autofocus and pattern HTML5 attributes' integration with PrimeFaces can be seen in the following example:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:h="http://java.sun.com/jsf/html"
          xmlns:p="http://primefaces.org/ui"
          xmlns:pt="http://xmlns.jcp.org/jsf/passthrough">
        <h:head>
        </h:head>
        <h:body>
            <h:form>
                <p:inputText value="#{bean.value}" pt:autofocus="autofocus"
                    pt:pattern="[A-Za-z]"/>
            </h:form>
        </h:body>
    </html>

Summary

In this article, you were introduced to the PrimeFaces component suite, its features, and its role in developing custom applications. You also learned about the setup and configuration of the PrimeFaces library.

Resources for Article:

Further resources on this subject:

- Components of PrimeFaces Extensions [Article]
- Getting Started with PrimeFaces [Article]
- Integrating Images with JSF, CSS with JSF, and JS with JSF [Article]

Choosing the airframe and propellers for your Multicopter

Packt
21 Aug 2014
10 min read
In this article by Ty Audronis, the author of Building Multicopter Video Drones, the process and thought process required to choose a few of the components needed to build your multicopter will be discussed.

(For more resources related to this topic, see here.)

Let's dive into the process of choosing components for your multicopter. There are a ton of choices, permutations, and combinations available. In fact, there are so many choices out there that it's highly unlikely that two do-it-yourself (DIY) multicopters are configured alike. It's very important to note before we start this article that this is just one example. This is only an example of the thought process involved. This configuration may not be right for your particular needs, but the thought process applies to any multicopter you may build. With all these disclaimers in mind, let's get started!

What kind of drone should I build?

It sounds obvious, but believe it or not, a lot of people venture into a project like this with one thing in mind: "big". This is completely the wrong approach to building a multicopter. Big is expensive, big is less stable, and moreover, when something goes wrong, big causes more damage and is harder to repair. Ask yourself what your purpose is. Is it for photography? Videography? Fun and hobby interest? What will it carry?

How many rotors should it have?

There are many configurations, but three rotor counts are the most common: four, six, and eight (quadcopters, hexacopters, and octocopters). The knee-jerk response of most people is again "big". It's about balancing stability and battery life. Although eight rotors do offer more stability, they also decrease flight time because they increase the strain on the batteries. In fact, the relation of the number of rotors to flight time is exponential, not linear. Having a big platform is completely useless if the batteries only last two or three minutes.

Redundancy versus stability

Once you get into hexacopters and octocopters, there are two basic configurations of the rotors: redundant and independent. In an independent (or flat) configuration, the rotors are arranged in a circular pattern, equidistant from the center of the platform, with each rotor (as you go around) turning in the opposite direction from the one before it. These look a lot like a pie with many slices. In a redundant configuration, the number of spars (poles from the center of the platform) is cut in half, and each spar has a rotor on the top as well as underneath. Usually, all the rotors on the top spin in one direction, and all the rotors at the bottom spin in the opposite direction. The following image shows a redundant hexacopter (left) and an independent hexacopter (right):

The advantage of redundancy is apparent. If a rotor should break or fail, the motor underneath it can spin up to keep the craft in the air. However, with fewer points of lift, stress on the airframe is greater, and stability is not quite as good. If you use the right guidance system, a flat configuration can overcome a failed rotor as well. For this reason (and for battery efficiency), we're going with a flat-six (independent hexacopter) configuration over the redundant or octocopter configurations.

The calculations you'll need

There is an exorbitant amount of math involved in calculating just how you're going to make your multicopter fly. An entire book could be written on these calculations alone. However, the work has been done for you!
There is a calculator available online at eCalc (http://www.ecalc.ch/xcoptercalc.php?ecalc&lang=en) to calculate how well your multicopter will function and for how long, based on the components you choose. The following screenshot shows the eCalc interface:

Choosing your airframe

Although we've decided to go with a flat-six airframe, the exact airframe is yet to be decided. The materials, brand, and price can vary incredibly. Let's take a quick look at some specifications you should consider.

Carbon fiber versus aluminum

Carbon fiber looks cool, sounds even cooler, but what is it? It's exactly what it sounds like: a woven fabric of carbon strands encased in an epoxy resin. It's extremely easy to form, very strong, and very light. Carbon fiber is the material supercars, racing motorcycles, and, yes, aircraft are made from. However, it's very expensive, and it can be brittle if it's compromised. It can be welded using nothing more than a superglue-like substance known as C.A. glue (cyanoacrylate, or Superglue).

Aluminum is also light and strong. However, it's bendable and more flexible. It's less expensive, readily available, and can make an effective airframe. It is also used in cars, racing motorcycles, and aircraft. It cannot be welded easily and requires very special equipment to form and machine. Also, aluminum is easier to drill, while drilling carbon fiber can cause cracks and compromise the strength of the airframe.

What we care about in a DIY multicopter is strength, weight, and, yes, expense. There is nothing wrong with carbon fiber (in fact, in many ways it is superior to aluminum), but we're going with an aluminum frame as our starting point. We'll need a fairly large frame to keep the large rotors, which we'll probably need, from hitting each other while rotating.

What we really want to look at is all the stress points on the airframe. If you really think about it, the motor mounts and the points where each arm attaches to the hub of the airframe are the areas we need to examine carefully. A metal plate is a must for the motor mounts. If a carbon fiber motor mount is used, a metal backplate is a must. Many a multicopter has been lost because of screws popping right through the motor mounts. The following image shows a motor mount (left) where just such a thing happened. The fix (right) is to use a backplate on carbon fiber motor mounts. This distributes the stress across the whole plate rather than concentrating it on a point the size of a screw head. Washers are usually not enough.

Similarly, because we've decided to use an airframe with long arms, leverage must be taken into account at the points where the arms attach to the hub. It's very important to have a sturdy hub that cradles the spars in a way that distributes the stress as much as possible. If a spar is merely sandwiched between two plates with a couple of bolts holding it, that may not be enough to hold the spars firmly. The following image shows a properly cradled spar:

In the preceding image, you'll notice that the spars are cradled so that stress in any direction is distributed across a lot of surface area. Furthermore, you'll notice 45-degree angles in the cradles. As the cradle is tightened down, it cinches the aluminum spar and deforms it along these angles. This also prevents the spars from rolling. Between this cradling and the aluminum motor mounts (predrilled for many motor types), we're going to use the Turnigy H.A.L. (Heavy Aerial Lift) hexacopter frame.
It carries a 775 mm motor span (plenty of room for up to 14-inch rotors) and has a protective cover for our electronics. Best of all, this frame retails for under 70 USD at http://www.hobbyking.com/hobbyking/store/uh_viewitem.asp?idproduct=25698&aff=492101.

Now that we've chosen our airframe, we know it weighs 983 grams (based on the specifications mentioned at the previous link). Let's plug this information into our calculator (refer to the following screenshot). You can see that we've set our copter to 6 rotors and our weight to 983 grams, and specified that this weight is without Drive system (not including our motors, props, ESCs, or batteries). You can leave all of the other entries alone. These specify the environment you'd be flying in. Air density can affect the efficiency of your rotors, and temperature can affect your motors. The default settings reflect a typical temperature and elevation. Unless you're flying in the desert, at high elevations, or in the cold, you can leave these alone. We're after your typical performance.

Choosing your propellers

Let's skip down to the propellers. These will usually dictate which motors you choose; the motors dictate the ESCs, and the ESCs and motors combined determine your battery. So, let's take a look at the drive system in that order.

The propellers are another huge point of stress. If you consider it, every bit of weight is supported by the props in the air. So, here it's very important to have strong props that cut the air well, with as little flex as possible, and that are very light. Flex can produce bounce, which can actually produce harmonic vibration between the guidance system and the flexing of the props (sending your drone into uncontrolled tumbles). Does one of the materials that we've already discussed sound strong, light, and very stiff? If you're thinking carbon fiber, you're right on the money. We're going to have a lot of weight here, so we'll go with pretty large props, because they'll move a whole lot more air, and carbon fiber, because it's strong. The larger the props, the stronger they need to be, and consequently the more powerful the motor, ESC, and battery must be.

Before we start shopping around for parts, let's plug in the stats and see what we come up with. When we look at props, there are two stats we need to consider: diameter and pitch. The diameter is simple enough; it's just how big the props are. The pitch is another story. The pitch describes the angle of the blades: in effect, how far the propeller would pull itself forward through the air in one revolution. The tips of a propeller are flatter in relation to the rotation; in other words, the blades twist. A typical 10-inch blade would have something like a 4.7-inch pitch. Why? Believe it or not, these motors encounter a ton of resistance. The resistance comes from the wind, and a fully pitched blade may sound nice, but propulsion is really more a game of efficiency than raw power. It's all about balance.

There's no doubt that we'll have to adjust our power system later, so for now let's start big. We'll go with a 14-inch propeller (because it's the biggest that can possibly fit on that frame without the props touching), with a typical (for that size) 8-inch pitch. The following screenshot shows these entries in our calculator:
The PConst (or power constant) value indicates how much power is absorbed by the props. The value of 1.3 is typical. Each brand and size of prop may be slightly different, and unless the specific prop you choose has those statistics available … leave this alone. A value of 1.0 would be a perfectly efficient propeller, which is an unattainable value. The gear ratio is 1:1 because we're using a prop directly attached to a motor. If we were using a gearbox, we'd change this value accordingly. Don't hit calculate yet; we don't have enough fields filled out.

It should be said that these propellers will most likely be too large. We'll probably have to go down to a 12-inch or even 11-inch propeller (or change our pitch) for maximum efficiency. However … this is a good place to start.

Summary

In this article, we discussed the points to keep in mind when planning to build a multicopter, such as the type of multicopter, the number of rotors, and the various parameters to consider when choosing the airframe and propellers.

Resources for Article:

Further resources on this subject:
3D Websites [article]
Managing Adobe Connect Meeting Room [article]
Getting Started with Adobe Premiere Pro CS6 Hotshot [article]
Setting Up The Rig

Packt
21 Aug 2014
16 min read
In this article by Vinci Rufus, the author of the book AngularJS Web Application Development Blueprints, we will see the process of setting up the various tools required to start building AngularJS apps. I'm sure you would have heard the saying, "A tool man is known by the tools he keeps." OK fine, I just made that up, but it's actually true, especially when it comes to programming. Sure, you can build complete and fully functional AngularJS apps using just a simple text editor and a browser, but if you want to work like a ninja, then make sure you start using some of these tools as a part of your development workflow. Do note that these tools are not mandatory for building AngularJS apps. Their use is recommended mainly to help improve productivity.

In this article, we will see how to set up and use the following productivity tools:

Node.js
Grunt
Yeoman
Karma
Protractor

Since most of us are running a Mac, Windows, Ubuntu, or another flavor of the Linux operating system, we'll be covering the deployment steps common to all of them.

(For more resources related to this topic, see here.)

Setting up Node.js

Depending on your technology stack, I strongly recommend you have either Ruby or Node.js installed. In the case of AngularJS, most of the productivity tools or plugins are available via the Node Package Manager (npm), and, hence, we will be setting up Node.js along with npm. Node.js is an open source JavaScript-based platform that uses an event-based input/output model, making it lightweight and fast.

Let us head over to www.nodejs.org and install Node.js. Choose the right version as per your operating system. The current version of Node.js at the time of writing this article is v0.10.x, which comes with npm built in, making it a breeze to set up Node.js and npm.

Node.js doesn't come with a Graphical User Interface (GUI), so to use Node.js, you will need to open up your terminal and start firing some commands. Now would also be a good time to brush up on your DOS and Unix/Linux commands.

After installing Node.js, the first thing you'd want to check is whether Node.js has been installed correctly. So, let us open up the terminal and write the following command:

node --version

This should output the version number of Node.js that's installed on your system. Next, let's see which version of npm we have installed. The command for that is as follows:

npm --version

This will tell you the version number of your npm.

Creating a simple Node.js web server with ExpressJS

For basic, simple AngularJS apps, you don't really need a web server. You can simply open the HTML files from your filesystem and they will work just fine. However, as you start building complex applications where you are passing data in JSON, calling web services, or using a Content Delivery Network (CDN), you will find the need to use a web server.

The good thing about AngularJS apps is that they can work within any web server, so if you already have IIS, Apache, Nginx, or any other web server running on your development environment, you can simply run your AngularJS project from within the web root folder. In case you don't have a web server and are looking for a lightweight one, let us set one up using Node.js and ExpressJS. One could write the entire web server in pure Node.js; however, ExpressJS provides a nice layer of abstraction on top of Node.js so that you can just work with the ExpressJS APIs and don't have to worry about the low-level calls. So, let's first install the ExpressJS module for Node.js.
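To get a feel for how little code the ExpressJS API needs, here is a minimal hand-written server — just a sketch, and it assumes you have installed Express locally in the current folder with npm install express:

// server.js: a minimal ExpressJS server (illustrative sketch)
var express = require('express');
var app = express();

// Serve static files (HTML, JS, CSS) from the public folder
app.use(express.static('public'));

// A simple route to confirm the server is up
app.get('/ping', function (req, res) {
  res.send('pong');
});

app.listen(3000, function () {
  console.log('Server listening on http://localhost:3000');
});

Run it with node server.js and browse to http://localhost:3000. In practice, the generator we are about to use produces a more complete scaffold, which is what we'll follow in this article.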
To scaffold a complete ExpressJS app, open up your terminal and fire the following command:

npm install -g express-generator

This will install the ExpressJS generator globally. Omit the -g to install it locally in the current folder. When installing it globally on Linux or Mac, you will need to run it via sudo, as follows:

sudo npm install -g express-generator

This gives npm the necessary permissions to write to the protected local folder under the user. The next step is to create an ExpressJS app; let us call it my-server. Type the following command in the terminal and hit Enter:

express my-server

You'll see something like this:

create : my-server
create : my-server/package.json
create : my-server/app.js
create : my-server/public
create : my-server/public/javascripts
create : my-server/public/images
create : my-server/public/stylesheets
create : my-server/public/stylesheets/style.css
create : my-server/routes
create : my-server/routes/index.js
create : my-server/routes/user.js
create : my-server/views
create : my-server/views/layout.jade
create : my-server/views/index.jade

install dependencies:
$ cd my-server && npm install

run the app:
$ DEBUG=my-server ./bin/www

This will create a folder called my-server and put a bunch of files inside it. The package.json file that is created contains the skeleton of your app. Open it and ensure the name says my-server; also, note the dependencies listed. Now, to install ExpressJS along with the dependencies, first change into the my-server directory and run the following command in the terminal:

cd my-server
npm install

Now, in the terminal, type in the following command:

npm start

Open your browser and type http://localhost:3000 in the address bar. You'll get a nice ExpressJS welcome message.

Now, to test our Address Book app, we will copy our index.html, scripts.js, and styles.css into the public folder located within my-server. I'm not copying the angular.js file because we'll use the CDN version of the AngularJS library. Open up the index.html file and replace the following code:

<script src="angular.min.js" type="text/javascript"></script>

with the CDN version of AngularJS, as follows:

<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.17/angular.min.js"></script>

A question might arise: what if the CDN is unreachable? In such cases, we can add a fallback to a local version of the AngularJS library. We do this by adding the following script after the CDN link is called:

<script>window.angular || document.write('<script src="lib/angular/angular.min.js"><\/script>');</script>

Save the file and in the browser enter localhost:3000/index.html. Your Address Book app is now running from a server and taking advantage of Google's CDN to serve the AngularJS file.

Referencing the files using only // is also called a protocol-independent absolute path. This means that the files are requested using the same protocol that is being used to call the parent page. For example, if the page you are loading is served via https://, then the CDN link will also be called via HTTPS. This also means that when using // instead of http:// during development, you will need to run your app from within a server instead of from the filesystem.

Setting up Grunt

Grunt is a JavaScript-based task runner. It is primarily used for automating tasks such as running unit tests and concatenating, merging, and minifying JS and CSS files. You can also run shell commands. This makes it super easy to perform server cleanups and deploy code.
Essentially, Grunt is to JavaScript what Rake is to Ruby or Ant/Maven is to Java.

Installing Grunt-cli

Installing Grunt-cli is slightly different from installing other Node.js modules. We first need to install Grunt's Command Line Interface (CLI) by firing the following command in the terminal:

npm install -g grunt-cli

Mac or Linux users can also directly run the following command:

sudo npm install -g grunt-cli

Make sure you have administrative privileges. Use sudo if you are on a Mac or Linux system. If you are on Windows, right-click and open the command prompt with administrative rights.

An important thing to note is that installing Grunt-cli doesn't automatically install Grunt and its dependencies. Grunt-cli merely invokes the version of Grunt installed alongside the Grunt file. While this may seem a little complicated at first, the reason it works this way is so that we can run different versions of Grunt from the same machine. This comes in handy when your project depends on a specific version of Grunt.

Creating the package.json file

To install Grunt, let's first create a folder called my-project and create a file called package.json with the following content:

{
  "name": "My-Project",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-jshint": "~0.10.0",
    "grunt-contrib-concat": "~0.4.0",
    "grunt-contrib-uglify": "~0.5.0",
    "grunt-shell": "~0.7.0"
  }
}

Save the file. The package.json file is where you define the various parameters of your app; for example, the name of your app, the version number, and the list of dependencies needed for the app. Here we are calling our app My-Project, with version 0.1.0, and listing out the following dependencies that need to be installed as a part of this app:

grunt (v0.4.5): This is the main Grunt application
grunt-contrib-jshint (v0.10.0): This is used for code analysis
grunt-contrib-concat (v0.4.0): This is used to merge two or more files into one
grunt-contrib-uglify (v0.5.0): This is used to minify the JS file
grunt-shell (v0.7.0): This is the Grunt shell used for running shell commands

Visit http://gruntjs.com/plugins to get a list of all the plugins available for Grunt along with their exact names and version numbers. You may also choose to create a default package.json file by running the following command and answering the questions:

npm init

Open the package.json file and add the dependencies as mentioned earlier.

Now that we have the package.json file, load the terminal and navigate into the my-project folder. To install Grunt and the modules specified in the file, type in the following command:

npm install --save-dev

You'll see a series of lines getting printed in the console; let that continue for a while and wait until it returns to the command prompt. Ensure that the last line printed by the previous command ends with OK code 0.

Once Grunt is installed, a quick version check will ensure that it is installed correctly. The command is as follows:

grunt --version

There is a possibility that you got a bunch of errors and it ended with a not ok code 0 message. There could be multiple reasons for this, ranging from errors in your code to a network connection issue or something changing at Grunt's end due to a new version update. If grunt --version throws up an error, it means Grunt wasn't installed properly.
To reinstall Grunt, enter the following commands in the terminal:

rm -rf node_modules
npm cache clean
npm install

Windows users may manually delete the node_modules folder from Windows Explorer before running the cache clean command in the command prompt. Refer to http://www.gruntjs.com to troubleshoot the problem.

Creating your Grunt tasks

To run our Grunt tasks, we'll need a JavaScript file. So, let's copy our scripts.js file and place it into the my-project folder. The next step is to create a Grunt file that will list out the tasks that we need Grunt to perform. For now, we will ask it to do four simple tasks: first, check whether our JS code is clean using JSHint; then merge three JS files into one; then minify the merged JS file; and finally run some shell commands to clean up.

Until Version 0.3, the init command was a part of the Grunt tool and one could create a blank project using grunt-init. With Version 0.4, init is available as a separate tool called grunt-init and needs to be installed using the npm install -g grunt-init command. Also note that the structure of the grunt.js file from Version 0.4 onwards is fairly different from the earlier versions you may have used. For now, we will resort to creating the Grunt file manually. Refer to the following screenshot:

In the same location where you have your package.json, create a file called gruntfile.js, as shown earlier, and type in the following code:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');

  // Default task.
  grunt.registerTask('default', ['jshint']);
};

To start, we will add only one task, which is jshint, and specify scripts.js in the list of files that need to be linted. In the next line, we specify grunt-contrib-jshint as the npm task that needs to be loaded. In the last line, we define jshint as the task to be run when Grunt is running in default mode. Save the file and in the terminal run the following command:

grunt

You will probably see the following message in the terminal:

So JSHint is saying that we are missing a semicolon on lines 18 and 24. Oh! Did I mention that JSHint is like your very strict math teacher from high school? Let's open up scripts.js, put in those semicolons, and rerun Grunt. Now you should get a message in green saying 1 file lint free. Done, without errors.

Let's add some more tasks to Grunt. We'll now ask it to concatenate and minify a couple of JS files. Since we currently have just one file, let's create two dummy JS files called scripts1.js and scripts2.js.

In scripts1.js, we'll simply write an empty function, as follows:

// This is from script 1
function Script1Function(){
  //------//
}

Similarly, in scripts2.js, we'll write the following:

// This is from script 2
function Script2Function(){
  //------//
}

Save these files in the same folder where you have scripts.js.

Grunt tasks to merge and concatenate files

Now, let's open our Grunt file and add the code for both of the tasks—to merge the JS files and to minify them—as follows:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify']);
};

As you can see from the preceding code, after the jshint task, we added the concat task. Under the src attribute, we define the files, separated by commas, that need to be concatenated. In the dest attribute, we specify the name of the merged JS file. It is very important that the files are entered in the same sequence in which they need to be merged; if the sequence is incorrect, the merged JS file will cause errors in your app.

The uglify task is used to minify the JS file, and its structure is very similar to the concat task. We add the merged.js file to the src attribute, and in the dest attribute we ask Grunt to place the merged.min.js file into a folder called build. Grunt will auto-create the build folder.

After defining the tasks, we load the necessary plugins, namely grunt-contrib-concat and grunt-contrib-uglify, and finally we register the concat and uglify tasks to the default task. Save the file and run Grunt. If all goes well, you should see Grunt running these tasks and reporting the status of each of them. If you get the final message saying Done, without errors, it means things went well, and this was your lucky day!

If you now open your my-project folder in the file manager, you should see a new file called merged.js. Open it in the text editor and you'll notice that all three files have been merged into it. Also, open the build/merged.min.js file and verify that the file is minified.

Running shell commands via Grunt

Another really helpful plugin in Grunt is grunt-shell. This allows us to effectively run clean-up activities such as deleting .tmp files and moving files from one folder to another. Let's see how to add the shell tasks to our Grunt file. Add the following highlighted piece of code to your Grunt file:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    },
    shell: {
      multiple: {
        command: [
          'rm -rf merged.js',
          'mkdir deploy',
          'mv build/merged.min.js deploy/merged.min.js'
        ].join('&&')
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-shell');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify', 'shell']);
};

As you can see from the code we added, we first delete the merged.js file, then create a new folder called deploy, and move our merged.min.js file into it. Windows users will need to use the appropriate DOS commands for deleting and copying the files. Note that .join('&&') is used when you want Grunt to run multiple shell commands.

The next steps are to load the npm task and add shell to the default task list. To see Grunt perform all these tasks, run the grunt command in the terminal. Once it's done, open up the filesystem and verify whether Grunt has done what you asked it to do. Just like we used the preceding four plugins, there are numerous other plugins that you can use with Grunt to automate your tasks.
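For example, a particularly handy one is grunt-contrib-watch, which re-runs tasks whenever files change. The following is just a sketch — it assumes you have added grunt-contrib-watch to your devDependencies and installed it with npm install grunt-contrib-watch --save-dev — showing a watch target that could be added to the gruntfile.js we built earlier:

// Inside grunt.initConfig({ ... }):
watch: {
  js: {
    // Watch our three source files...
    files: ['scripts.js', 'scripts1.js', 'scripts2.js'],
    // ...and lint, merge, and minify them on every save
    tasks: ['jshint', 'concat', 'uglify']
  }
}

// Along with the other plugins:
grunt.loadNpmTasks('grunt-contrib-watch');

Running grunt watch in the terminal will then keep Grunt running in the background, rebuilding merged.min.js each time you save one of the source files.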
A point to note: while the default grunt command will run all the tasks mentioned in the grunt.registerTask statement, if you need to run a specific task instead of all of them, you can simply type the following in the command line:

grunt jshint

Alternatively, you can type the following command:

grunt concat

Alternatively, you can type the following command:

grunt uglify

At times, if you'd like to run just two of the three tasks, you can register them separately as another bundled task in the Grunt file. Open up the gruntfile.js file, and just after the line where you registered the default task, add the following code:

grunt.registerTask('concat-min', ['concat', 'uglify']);

This registers a new task called concat-min that runs only the concat and uglify tasks. In the terminal, run the following command:

grunt concat-min

Verify that Grunt only concatenated and minified the file and didn't run JSHint or your shell commands. You can run grunt --help to see a list of all the tasks available in your Grunt file.
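As a small aside, grunt.registerTask also accepts an optional description as its second argument, which is what grunt --help prints next to each task. A sketch of the same bundled task with a description (the wording here is ours, not from the original configuration):

grunt.registerTask('concat-min', 'Merges the JS files and minifies the result', ['concat', 'uglify']);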
Unit and Functional Tests

Packt
21 Aug 2014
13 min read
In this article by Belén Cruz Zapata and Antonio Hernández Niñirola, authors of the book Testing and Securing Android Studio Applications, you will learn how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

(For more resources related to this topic, see here.)

Testing activities

There are two possible modes of testing activities:

Functional testing: In functional testing, the activity being tested is created using the system infrastructure. The test code can communicate with the Android system, send events to the UI, or launch another activity.
Unit testing: In unit testing, the activity being tested is created with minimal connection to the system infrastructure. The activity is tested in isolation.

In this article, we will explore the Android testing API to learn about the classes and methods that will help you test the activities of your application.

The test case classes

The Android testing API is based on JUnit. Android JUnit extensions are included in the android.test package. The following figure presents the main classes that are involved when testing activities:

Let's learn more about these classes:

TestCase: This JUnit class belongs to the junit.framework package and represents a general test case. This class is extended by the Android API.
InstrumentationTestCase: This class and its subclasses belong to the android.test package. It represents a test case that has access to instrumentation.
ActivityTestCase: This class is used to test activities, but for more useful behavior, you should use one of its subclasses instead of the main class.
ActivityInstrumentationTestCase2: This class provides functional testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you have to create a test class named MainActivityTest that extends the ActivityInstrumentationTestCase2 class, shown as follows:

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity>

ActivityUnitTestCase: This class provides unit testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you can create a test class named MainActivityUnitTest that extends the ActivityUnitTestCase class, shown as follows:

public class MainActivityUnitTest extends ActivityUnitTestCase<MainActivity>

A new term has emerged from the previous classes: Instrumentation.

Instrumentation

The execution of an application is ruled by its life cycle, which is determined by the Android system. For example, the life cycle of an activity is controlled by the invocation of certain methods: onCreate(), onResume(), onDestroy(), and so on. These methods are called by the Android system and your code cannot invoke them, except while testing. The mechanism that allows your test code to invoke callback methods is known as Android instrumentation.

Android instrumentation is a set of methods to control a component independently of its normal life cycle. To invoke the callback methods from your test code, you have to use classes that are instrumented. For example, to start the activity under test, you can use the getActivity() method, which returns the activity instance. For each test method invocation, the activity will not be created until the first time this method is called. Instrumentation is necessary to test activities, considering that the life cycle of an activity is based on its callback methods.
These callback methods include the UI events as well. From an instrumented test case, you can use the getInstrumentation() method to get access to an Instrumentation object. This class provides methods related to the system's interaction with the application. The complete documentation about this class can be found at http://developer.android.com/reference/android/app/Instrumentation.html. Some of the most important methods are as follows:

The addMonitor method: This method adds a monitor to get information about a particular type of Intent and can be used to look for the creation of an activity. A monitor can be created indicating an IntentFilter or the name of the activity to monitor. Optionally, the monitor can block the activity start to return its canned result. You can use the following call definitions to add a monitor:

ActivityMonitor addMonitor(IntentFilter filter, ActivityResult result, boolean block)
ActivityMonitor addMonitor(String cls, ActivityResult result, boolean block)

The following is an example line of code that adds a monitor:

Instrumentation.ActivityMonitor monitor = getInstrumentation().addMonitor(SecondActivity.class.getName(), null, false);

The activity lifecycle methods: The methods to call the activity lifecycle methods are callActivityOnCreate, callActivityOnDestroy, callActivityOnPause, callActivityOnRestart, callActivityOnResume, callActivityOnStart, finish, and so on. For example, you can pause an activity using the following line of code:

getInstrumentation().callActivityOnPause(mActivity);

The getTargetContext method: This method returns the context for the application.
The startActivitySync method: This method starts a new activity and waits for it to begin running. The function returns when the new activity has gone through the full initialization after the call to its onCreate method.
The waitForIdleSync method: This method waits for the application to be idle synchronously.

The test case methods

JUnit's TestCase class provides the following protected methods that can be overridden by the subclasses:

setUp(): This method is used to initialize the fixture state of the test case. It is executed before every test method is run. If you override this method, the first line of code must call the superclass. A standard setUp method follows this definition:

@Override
protected void setUp() throws Exception {
  super.setUp();
  // Initialize the fixture state
}

tearDown(): This method is used to tear down the fixture state of the test case. You should use this method to release resources. It is executed after every test method is run. If you override this method, the last line of code must call the superclass, shown as follows:

@Override
protected void tearDown() throws Exception {
  // Tear down the fixture state
  super.tearDown();
}

The fixture state is usually implemented as a group of member variables, but it can also consist of database or network connections. If you open or initialize connections in the setUp method, they should be closed or released in the tearDown method. When testing activities in Android, you have to initialize the activity under test in the setUp method. This can be done with the getActivity() method. A minimal skeleton that ties these pieces together is sketched just after the following introduction to assertions.

The Assert class and methods

JUnit's TestCase class extends the Assert class, which provides a set of assert methods to check for certain conditions. When an assert method fails, an AssertionFailedError is thrown. The test runner will handle the multiple assertion failures to present the testing results.
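As promised, here is a minimal sketch of a complete functional test case combining the constructor, the setUp() method, and getActivity(); it assumes a MainActivity in the application under test, and the testActivityNotNull method is our own illustration:

import android.test.ActivityInstrumentationTestCase2;

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private MainActivity mActivity;

    public MainActivityTest() {
        // Tells the framework which activity class is under test
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Starts (or returns) the activity under test
        mActivity = getActivity();
    }

    public void testActivityNotNull() {
        // A first sanity check: the activity was created
        assertNotNull(mActivity);
    }
}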
Returning to the assertion methods: optionally, you can specify the error message that will be shown if the assert fails. You can read the Android reference of the TestCase class to examine all the available methods at http://developer.android.com/reference/junit/framework/Assert.html. The assertion methods provided by the Assert superclass are as follows:

assertEquals: This method checks whether the two values provided are equal. It receives the actual and the expected value to compare with each other. This method is overloaded to support values of different types, such as short, String, char, int, byte, boolean, float, double, long, or Object. For example, the following assertion fails since both values are not equal:

assertEquals(true, false);

assertTrue or assertFalse: These methods check whether the given Boolean condition is true or false.
assertNull or assertNotNull: These methods check whether an object is null or not.
assertSame or assertNotSame: These methods check whether two references refer to the same object or not.
fail: This method fails a test. It can be used to make sure that a part of the code is never reached, for example, if you want to test that a method throws an exception when it receives a wrong value, as shown in the following code snippet:

try {
  dontAcceptNullValuesMethod(null);
  fail("No exception was thrown");
} catch (NullPointerException e) {
  // OK
}

The Android testing API, which extends JUnit, provides additional and more powerful assertion classes: ViewAsserts and MoreAsserts.

The ViewAsserts class

The assertion methods offered by JUnit's Assert class are not enough if you want to test some Android-specific objects, such as the ones related to the UI. The ViewAsserts class implements more sophisticated methods for the Android views, that is, for View objects. The whole list of assertion methods can be explored in the Android reference for this class at http://developer.android.com/reference/android/test/ViewAsserts.html. Some of them are described as follows:

assertBottomAligned, assertLeftAligned, assertRightAligned, or assertTopAligned(View first, View second): These methods check that the two specified View objects are bottom, left, right, or top aligned, respectively
assertGroupContains or assertGroupNotContains(ViewGroup parent, View child): These methods check whether the specified ViewGroup object contains the specified child View
assertHasScreenCoordinates(View origin, View view, int x, int y): This method checks that the specified View object has a particular position on the origin screen
assertHorizontalCenterAligned or assertVerticalCenterAligned(View reference, View view): These methods check that the specified View object is horizontally or vertically aligned with respect to the reference view
assertOffScreenAbove or assertOffScreenBelow(View origin, View view): These methods check that the specified View object is above or below the visible screen
assertOnScreen(View origin, View view): This method checks that the specified View object is loaded on the screen even if it is not visible

The MoreAsserts class

The Android API extends some of the basic assertion methods from the Assert class to provide additional methods in the MoreAsserts class.
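Before looking at those methods, here is a short sketch of how a ViewAsserts assertion might be used inside the test case skeleton shown earlier; the bPlus button ID belongs to the example application described at the end of this article, so treat this as illustrative:

import android.test.ViewAsserts;
import android.view.View;

public void testPlusButtonIsOnScreen() {
    // The decor view is the root of the activity's view hierarchy
    View origin = mActivity.getWindow().getDecorView();
    View plusButton = mActivity.findViewById(R.id.bPlus);

    // Fails if the button is not loaded on the screen
    ViewAsserts.assertOnScreen(origin, plusButton);
}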
Some of the methods included in the MoreAsserts class are as follows:

assertContainsRegex(String expectedRegex, String actual): This method checks that the actual given string contains a match of the expected regular expression (regex)
assertContentsInAnyOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects, in any order
assertContentsInOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects in the same order
assertEmpty: This method checks whether a collection is empty
assertEquals: This method extends the assertEquals method from JUnit to cover collections: Set objects, int arrays, String arrays, Object arrays, and so on
assertMatchesRegex(String expectedRegex, String actual): This method checks whether the expected regex matches the given actual string exactly

Opposite methods such as assertNotContainsRegex, assertNotEmpty, assertNotEquals, and assertNotMatchesRegex are included as well. All these methods are overloaded to optionally include a custom error message. The Android reference for the MoreAsserts class can be inspected to learn more about these assert methods at http://developer.android.com/reference/android/test/MoreAsserts.html.

UI testing and TouchUtils

The test code is executed in a different thread than the application under test, although both threads run in the same process. When testing the UI of an application, UI objects can be referenced from the test code, but you cannot change their properties or send events from it. There are two strategies to invoke methods that should run in the UI thread:

Activity.runOnUiThread(): This method creates a Runnable object in the UI thread in which you can add the code in the run() method. For example, if you want to request the focus of a UI component:

public void testComponent() {
  mActivity.runOnUiThread(
    new Runnable() {
      public void run() {
        mComponent.requestFocus();
      }
    }
  );
  …
}

@UiThreadTest: This annotation affects the whole method because it is executed on the UI thread. Considering that the annotation refers to the entire method, statements that do not interact with the UI are not allowed in it. For example, consider the previous example using this annotation, shown as follows:

@UiThreadTest
public void testComponent() {
  mComponent.requestFocus();
  …
}

There is also a helper class that provides methods to perform touch interactions on the views of your application: TouchUtils. The touch events are sent to the UI thread safely from the test thread; therefore, the methods of the TouchUtils class should not be invoked in the UI thread. Some of the methods provided by this helper class are as follows:

The clickView method: This method simulates a click on the center of a view
The drag, dragQuarterScreenDown, dragViewBy, dragViewTo, and dragViewToTop methods: These methods simulate a click on a UI element and then drag it accordingly
The longClickView method: This method simulates a long press click on the center of a view
The scrollToTop or scrollToBottom methods: These methods scroll a ViewGroup to the top or bottom

The mock object classes

The Android testing API provides some classes to create mock system objects. Mock objects are fake objects that simulate the behavior of real objects but are totally controlled by the test. They allow isolation of tests from the rest of the system. Mock objects can, for example, simulate a part of the system that has not been implemented yet, or a part that is not practical to test.
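A sketch of the typical usage pattern — subclassing a mock and overriding only what the test needs — follows. MockContext is one of the classes enumerated next, and the getPackageName override is our own illustration:

import android.test.mock.MockContext;

public class FakeContext extends MockContext {
    @Override
    public String getPackageName() {
        // Every other method still throws an exception if called
        return "com.example.fake";
    }
}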
In Android, the following mock classes can be found: MockApplication, MockContext, MockContentProvider, MockCursor, MockDialogInterface, MockPackageManager, MockResources, and MockContentResolver. These classes are under the android.test.mock package. The methods of these objects are nonfunctional and throw an exception if they are called. You have to override the methods that you want to use.

Creating an activity test

In this section, we will create an example application so that we can learn how to implement the test cases to evaluate it. Some of the methods presented in the previous section will be put into practice. You can download the example code files from your account at http://www.packtpub.com.

Our example is a simple alarm application that consists of two activities: MainActivity and SecondActivity. The MainActivity implements a self-built digital clock using text views and buttons. The purpose of creating a self-built digital clock is to have more code and elements to use in our tests. The layout of MainActivity is a relative layout that includes two text views: one for the hour (the tvHour ID) and one for the minutes (the tvMinute ID). There are two buttons below the clock: one to subtract 10 minutes from the clock (the bMinus ID) and one to add 10 minutes to the clock (the bPlus ID). There is also an edit text field to specify the alarm name. Finally, there is a button to launch the second activity (the bValidate ID). Each button has a pertinent method that receives the click event when the button is pressed. The layout looks like the following screenshot:

The SecondActivity receives the hour from the MainActivity and shows its value in a text view, simulating that the alarm was saved. The objective of creating this second activity is to be able to test the launch of another activity from our test case.

Summary

In this article, you learned how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

Resources for Article:

Further resources on this subject:
Creating Dynamic UI with Android Fragments [article]
Saying Hello to Unity and Android [article]
Augmented Reality [article]