How-To Tutorials


Deep learning and regression analysis

Packt
09 Jan 2017
6 min read
In this article by Richard M. Reese and Jennifer L. Reese, authors of the book Java for Data Science, we will discuss how neural networks can be used to perform regression analysis. However, other techniques may offer a more effective solution. With regression analysis, we want to predict a result based on several input variables.

We can perform regression analysis using an output layer that consists of a single neuron that sums the weighted input plus bias of the previous hidden layer. Thus, the result is a single value representing the regression.

Preparing the data

We will use a car evaluation database to demonstrate how to predict the acceptability of a car based on a series of attributes. The file containing the data we will be using can be downloaded from http://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data. It consists of car data such as price, number of passengers, and safety information, as well as an assessment of the car's overall quality. It is this latter element that we will try to predict. The comma-delimited values for each attribute are shown next, along with the substitutions. The substitutions are needed because the model expects numeric data:

    Attribute           Original values          Substituted values
    Buying price        vhigh, high, med, low    3, 2, 1, 0
    Maintenance price   vhigh, high, med, low    3, 2, 1, 0
    Number of doors     2, 3, 4, 5-more          2, 3, 4, 5
    Seating             2, 4, more               2, 4, 5
    Cargo space         small, med, big          0, 1, 2
    Safety              low, med, high           0, 1, 2

There are 1,728 instances in the file. The cars are marked with four classes:

    Class         Number of instances   Percentage of instances   Original value   Substituted value
    Unacceptable  1210                  70.023%                   unacc            0
    Acceptable    384                   22.222%                   acc              1
    Good          69                    3.99%                     good             2
    Very good     65                    3.76%                     v-good           3

Setting up the class

We start with the definition of a CarRegressionExample class, as shown next:

    public class CarRegressionExample {

        public CarRegressionExample() {
            try {
                ...
            } catch (IOException | InterruptedException ex) {
                // Handle exceptions
            }
        }

        public static void main(String[] args) {
            new CarRegressionExample();
        }
    }

Reading and preparing the data

The first task is to read in the data. We will use the CSVRecordReader class to get the data:

    RecordReader recordReader = new CSVRecordReader(0, ",");
    recordReader.initialize(new FileSplit(new File("car.txt")));
    DataSetIterator iterator = new RecordReaderDataSetIterator(recordReader, 1728, 6, 4);

With this dataset, we will split the data into two sets. Sixty-five percent of the data is used for training and the rest for testing:

    DataSet dataset = iterator.next();
    dataset.shuffle();
    SplitTestAndTrain testAndTrain = dataset.splitTestAndTrain(0.65);
    DataSet trainingData = testAndTrain.getTrain();
    DataSet testData = testAndTrain.getTest();

The data now needs to be normalized:

    DataNormalization normalizer = new NormalizerStandardize();
    normalizer.fit(trainingData);
    normalizer.transform(trainingData);
    normalizer.transform(testData);

We are now ready to build the model.

Building the model

A MultiLayerConfiguration instance is created using a series of NeuralNetConfiguration.Builder methods. The following is the code used; we will discuss the individual methods following the code. Note that this configuration uses two layers.
The last layer uses the softmax activation function, which is used for regression analysis:

    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .iterations(1000)
        .activation("relu")
        .weightInit(WeightInit.XAVIER)
        .learningRate(0.4)
        .list()
        .layer(0, new DenseLayer.Builder()
            .nIn(6).nOut(3)
            .build())
        .layer(1, new OutputLayer
            .Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
            .activation("softmax")
            .nIn(3).nOut(4).build())
        .backprop(true).pretrain(false)
        .build();

Two layers are created. The first is the input layer. The DenseLayer.Builder class is used to create this layer. The DenseLayer class is a feed-forward, fully connected layer. The created layer uses the six car attributes as input. Its output consists of three neurons that are fed into the output layer; the layer definition is duplicated here for your convenience:

    .layer(0, new DenseLayer.Builder()
        .nIn(6).nOut(3)
        .build())

The second layer is the output layer, created with the OutputLayer.Builder class. It uses a loss function as the argument of its constructor. The softmax activation function is used since we are performing regression, as shown here:

    .layer(1, new OutputLayer
        .Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
        .activation("softmax")
        .nIn(3).nOut(4).build())

Next, a MultiLayerNetwork instance is created using the configuration. The model is initialized, its listeners are set, and then the fit method is invoked to perform the actual training. The ScoreIterationListener instance will display information as the model trains, which we will see shortly in the output of this example. Its constructor argument specifies the frequency with which information is displayed:

    MultiLayerNetwork model = new MultiLayerNetwork(conf);
    model.init();
    model.setListeners(new ScoreIterationListener(100));
    model.fit(trainingData);

We are now ready to evaluate the model.

Evaluating the model

In the next sequence of code, we evaluate the model against the test dataset. An Evaluation instance is created using an argument specifying that there are four classes. The test data is fed into the model using the output method. The eval method takes the output of the model and compares it against the test data classes to generate statistics. The getLabels method returns the expected values:

    Evaluation evaluation = new Evaluation(4);
    INDArray output = model.output(testData.getFeatureMatrix());
    evaluation.eval(testData.getLabels(), output);
    out.println(evaluation.stats());

The output of the training follows, which is produced by the ScoreIterationListener class. However, the values you get may differ due to how the data is selected and analyzed.
Notice that the score improves with the iterations but levels out after about 500 iterations:

    12:43:35.685 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 0 is 1.443480901811554
    12:43:36.094 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 100 is 0.3259061845624861
    12:43:36.390 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 200 is 0.2630572026049783
    12:43:36.676 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 300 is 0.24061281470878784
    12:43:36.977 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 400 is 0.22955121170274934
    12:43:37.292 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 500 is 0.22249920540161677
    12:43:37.575 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 600 is 0.2169898450109222
    12:43:37.872 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 700 is 0.21271599814600958
    12:43:38.161 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 800 is 0.2075677126088741
    12:43:38.451 [main] INFO o.d.o.l.ScoreIterationListener - Score at iteration 900 is 0.20047317735870715

This is followed by the results of the stats method, as shown next. The first part reports on how examples are classified and the second part displays various statistics:

    Examples labeled as 0 classified by model as 0: 397 times
    Examples labeled as 0 classified by model as 1: 10 times
    Examples labeled as 0 classified by model as 2: 1 times
    Examples labeled as 1 classified by model as 0: 8 times
    Examples labeled as 1 classified by model as 1: 113 times
    Examples labeled as 1 classified by model as 2: 1 times
    Examples labeled as 1 classified by model as 3: 1 times
    Examples labeled as 2 classified by model as 1: 7 times
    Examples labeled as 2 classified by model as 2: 21 times
    Examples labeled as 2 classified by model as 3: 14 times
    Examples labeled as 3 classified by model as 1: 2 times
    Examples labeled as 3 classified by model as 3: 30 times

    ==========================Scores========================================
     Accuracy:  0.9273
     Precision: 0.854
     Recall:    0.8323
     F1 Score:  0.843
    ========================================================================

The regression model does a reasonable job with this dataset.

Summary

In this article, we examined deep learning and regression analysis. We showed how to prepare the data and the class, build the model, and evaluate the model. We used sample data and displayed output statistics to demonstrate the relative effectiveness of our model.

Resources for Article:

Further resources on this subject:

KnockoutJS Templates [article]
The Heart of It All [article]
Bringing DevOps to Network Operations [article]
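As a side note on the data-preparation step described earlier (this helper is not part of the book's code): the raw car.data file contains categorical strings, so a small one-off pass is needed to produce the numeric, comma-delimited car.txt file that CSVRecordReader reads. The following is a minimal sketch of such a converter; the substitution maps mirror the tables above, and it assumes the raw file uses the spellings "5more" and "vgood" (shown as 5-more and v-good in the table), so check these against the downloaded file:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.Map;

    public class CarDataEncoder {
        // Substitution maps taken from the tables earlier in the article
        private static final Map<String, String> PRICE =
                Map.of("vhigh", "3", "high", "2", "med", "1", "low", "0");
        private static final Map<String, String> DOORS =
                Map.of("2", "2", "3", "3", "4", "4", "5more", "5");
        private static final Map<String, String> SEATS =
                Map.of("2", "2", "4", "4", "more", "5");
        private static final Map<String, String> CARGO =
                Map.of("small", "0", "med", "1", "big", "2");
        private static final Map<String, String> SAFETY =
                Map.of("low", "0", "med", "1", "high", "2");
        private static final Map<String, String> CLASS =
                Map.of("unacc", "0", "acc", "1", "good", "2", "vgood", "3");

        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("car.data"));
                 PrintWriter out = new PrintWriter(new FileWriter("car.txt"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.trim().isEmpty()) {
                        continue;
                    }
                    // columns: buying, maint, doors, persons, lug_boot, safety, class
                    String[] f = line.trim().split(",");
                    out.println(String.join(",",
                            PRICE.get(f[0]), PRICE.get(f[1]), DOORS.get(f[2]),
                            SEATS.get(f[3]), CARGO.get(f[4]), SAFETY.get(f[5]),
                            CLASS.get(f[6])));
                }
            }
        }
    }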

Exploring Structure from Motion Using OpenCV

Packt
09 Jan 2017
20 min read
In this article by Roy Shilkrot, coauthor of the book Mastering OpenCV 3, we will discuss the notion of Structure from Motion (SfM), or, better put, extracting geometric structures from images taken with a camera under motion, using OpenCV's API to help us.

First, let's constrain the otherwise very broad approach to SfM to using a single camera, usually called a monocular approach, and a discrete and sparse set of frames rather than a continuous video stream. These two constraints will greatly simplify the system we will sketch out in the coming pages and help us understand the fundamentals of any SfM method. In this article, we will cover the following:

Structure from Motion concepts
Estimating the camera motion from a pair of images

Throughout the article, we assume the use of a calibrated camera—one that was calibrated beforehand. Calibration is a ubiquitous operation in computer vision, fully supported in OpenCV using command-line tools. We, therefore, assume the existence of the camera's intrinsic parameters embodied in the K matrix and the distortion coefficients vector—the outputs from the calibration process.

To make things clear in terms of language, from this point on, we will refer to a camera as a single view of the scene rather than to the optics and hardware taking the image. A camera has a position in space and a direction of view. Between two cameras, there is a translation element (movement through space) and a rotation of the direction of view. We will also unify the terms for a point in the scene—world, real, or 3D—to mean the same thing: a point that exists in our real world. The same goes for points in the image or 2D, which are points in image coordinates of some real 3D point that was projected onto the camera sensor at that location and time.

Structure from Motion concepts

The first distinction we should make is the difference between stereo (or indeed any multiview) 3D reconstruction using calibrated rigs, and SfM. A rig of two or more cameras assumes we already know what the "motion" between the cameras is, while in SfM, we don't know what this motion is and we wish to find it. Calibrated rigs, from a simplistic point of view, allow a much more accurate reconstruction of 3D geometry because there is no error in estimating the distance and rotation between the cameras—it is already known. The first step in implementing an SfM system is finding the motion between the cameras. OpenCV may help us in a number of ways to obtain this motion, specifically using the findFundamentalMat and findEssentialMat functions.

Let's think for one moment of the goal behind choosing an SfM algorithm. In most cases, we wish to obtain the geometry of the scene, for example, where objects are in relation to the camera and what their form is. Having found the motion between the cameras picturing the same scene, from a reasonably similar point of view, we would now like to reconstruct the geometry. In computer vision jargon, this is known as triangulation, and there are plenty of ways to go about it. It may be done by way of ray intersection, where we construct two rays: one from each camera's center of projection through a point on each of the image planes. Ideally, these rays will intersect at the one 3D point in the real world that was imaged in each camera, as shown in the following diagram:

In reality, ray intersection is highly unreliable.
This is because the rays usually do not intersect, making us fall back to using the middle point of the shortest segment connecting the two rays. OpenCV contains a simple API for a more accurate form of triangulation, the triangulatePoints function, so this part we do not need to code on our own.

After you have learned how to recover 3D geometry from two views, we will see how you can incorporate more views of the same scene to get an even richer reconstruction. At that point, most SfM methods try to optimize the bundle of estimated positions of our cameras and 3D points by means of Bundle Adjustment. OpenCV contains means for Bundle Adjustment in its new Image Stitching Toolbox. However, the beauty of working with OpenCV and C++ is the abundance of external tools that can be easily integrated into the pipeline. We will, therefore, see how to integrate an external bundle adjuster, the Ceres non-linear optimization package. Now that we have sketched an outline of our approach to SfM using OpenCV, we will see how each element can be implemented.

Estimating the camera motion from a pair of images

Before we set out to actually find the motion between two cameras, let's examine the inputs and the tools we have at hand to perform this operation. First, we have two images of the same scene from (hopefully not extremely) different positions in space. This is a powerful asset, and we will make sure that we use it. As for tools, we should take a look at mathematical objects that impose constraints over our images, cameras, and the scene.

Two very useful mathematical objects are the fundamental matrix (denoted by F) and the essential matrix (denoted by E). They are mostly similar, except that the essential matrix assumes the usage of calibrated cameras; this is the case for us, so we will choose it. OpenCV allows us to find the fundamental matrix via the findFundamentalMat function and the essential matrix via the findEssentialMat function. Finding the essential matrix can be done as follows:

    Mat E = findEssentialMat(leftPoints, rightPoints, focal, pp);

This function makes use of matching points in the "left" image, leftPoints, and "right" image, rightPoints, which we will discuss shortly, as well as two additional pieces of information from the camera's calibration: the focal length, focal, and principal point, pp.

The essential matrix, E, is a 3 x 3 matrix, which imposes the following constraint on a point in one image and a point in the other image: x'^T K^-T E K^-1 x = 0, where x is a point in the first image, x' is the corresponding point in the second image, and K is the calibration matrix. This is extremely useful, as we are about to see. Another important fact we use is that the essential matrix is all we need in order to recover the two cameras' positions from our images, although only up to an arbitrary unit of scale. So, if we obtain the essential matrix, we know where each camera is positioned in space and where it is looking. We can easily calculate the matrix if we have enough of those constraint equations, simply because each equation can be used to solve for a small part of the matrix. In fact, OpenCV internally calculates it using just five point-pairs, but through the Random Sample Consensus (RANSAC) algorithm, many more pairs can be used to make for a more robust solution.

Point matching using rich feature descriptors

Now we will make use of our constraint equations to calculate the essential matrix.
To get our constraints, remember that for each point in image A, we must find a corresponding point in image B. We can achieve such a matching using OpenCV's extensive 2D feature-matching framework, which has greatly matured in the past few years.

Feature extraction and descriptor matching is an essential process in computer vision and is used in many methods to perform all sorts of operations, for example, detecting the position and orientation of an object in the image or searching a big database of images for similar images through a given query. In essence, feature extraction means selecting points in the image that would make for good features and computing a descriptor for them. A descriptor is a vector of numbers that describes the surrounding environment around a feature point in an image. Different methods have different lengths and data types for their descriptor vectors. Descriptor matching is the process of finding a corresponding feature from one set in another using its descriptor.

OpenCV provides very easy and powerful methods to support feature extraction and matching. Let's examine a very simple feature extraction and matching scheme:

    vector<KeyPoint> keypts1, keypts2;
    Mat desc1, desc2;

    // detect keypoints and extract ORB descriptors
    Ptr<Feature2D> orb = ORB::create(2000);
    orb->detectAndCompute(img1, noArray(), keypts1, desc1);
    orb->detectAndCompute(img2, noArray(), keypts2, desc2);

    // matching descriptors
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    vector<DMatch> matches;
    matcher->match(desc1, desc2, matches);

You may have already seen similar OpenCV code, but let's review it quickly. Our goal is to obtain three elements: feature points for the two images, descriptors for them, and a matching between the two sets of features. OpenCV provides a range of feature detectors, descriptor extractors, and matchers. In this simple example, we use the ORB class to get both the 2D locations of Oriented FAST and Rotated BRIEF (ORB) feature points (where BRIEF stands for Binary Robust Independent Elementary Features) and their respective descriptors. We use a brute-force binary matcher to get the matching, which is the most straightforward way to match two feature sets: by comparing each feature in the first set to each feature in the second set (hence the phrasing "brute-force").

In the following image, we will see a matching of feature points on two images from the Fountain-P11 sequence found at http://cvlab.epfl.ch/~strecha/multiview/denseMVS.html:

Practically, raw matching like we just performed is good only up to a certain level, and many matches are probably erroneous. For that reason, most SfM methods perform some form of filtering on the matches to ensure correctness and reduce errors. One form of filtering, which is built into OpenCV's brute-force matcher, is cross-check filtering. That is, a match is considered true if a feature of the first image matches a feature of the second image, and the reverse check also matches the feature of the second image with the feature of the first image. Another common filtering mechanism, used in the provided code, is to filter based on the fact that the two images are of the same scene and have a certain stereo-view relationship between them. In practice, the filter tries to robustly calculate the fundamental or essential matrix and retain those feature pairs that correspond to this calculation with small errors.

An alternative to using rich features, such as ORB, is to use optical flow.
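Before the short overview of optical flow that follows, here is a minimal sketch—not taken from the chapter's code—of what flow-based point matching could look like using OpenCV's standard API (goodFeaturesToTrack and calcOpticalFlowPyrLK). It assumes img1 and img2 are grayscale frames that are reasonably close to each other in the sequence; the aligned leftPts and rightPts vectors it produces can be fed to findEssentialMat just like descriptor matches:

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <vector>

    // Produce aligned point sets from two nearby grayscale frames using
    // sparse (pyramidal Lucas-Kanade) optical flow instead of descriptors.
    void matchByOpticalFlow(const cv::Mat& img1, const cv::Mat& img2,
                            std::vector<cv::Point2f>& leftPts,
                            std::vector<cv::Point2f>& rightPts)
    {
        //pick up to 2000 well-textured points to track in the first image
        std::vector<cv::Point2f> corners;
        cv::goodFeaturesToTrack(img1, corners, 2000, 0.01, 10);

        //track them into the second image
        std::vector<cv::Point2f> tracked;
        std::vector<uchar> status;
        std::vector<float> error;
        cv::calcOpticalFlowPyrLK(img1, img2, corners, tracked, status, error);

        //keep only the successfully tracked points; the two vectors stay
        //aligned index-by-index, which is what findEssentialMat expects
        for (size_t i = 0; i < status.size(); i++) {
            if (status[i]) {
                leftPts.push_back(corners[i]);
                rightPts.push_back(tracked[i]);
            }
        }
    }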
The following information box provides a short overview of optical flow. It is possible to use optical flow instead of descriptor matching to find the required point matching between two images, while the rest of the SfM pipeline remains the same. OpenCV recently extended its API for getting the flow field from two images, and now it is faster and more powerful.

Optical flow is the process of matching selected points from one image to another, assuming that both images are part of a sequence and relatively close to one another. Most optical flow methods compare a small region, known as the search window or patch, around each point from image A to the same area in image B. Following a very common rule in computer vision, called the brightness constancy constraint (among other names), the small patches of the image will not change drastically from one image to the other, and therefore the magnitude of their subtraction should be close to zero. In addition to matching patches, newer methods of optical flow use a number of additional techniques to get better results. One is using image pyramids, which are smaller and smaller resized versions of the image, allowing for coarse-to-fine work—a very well-used trick in computer vision. Another method is to define global constraints on the flow field, assuming that points close to each other move together in the same direction.

Finding camera matrices

Now that we have obtained matches between keypoints, we can calculate the essential matrix. However, we must first align our matching points into two arrays, where an index in one array corresponds to the same index in the other. This is required by the findEssentialMat function, as we've seen in the section on estimating camera motion. We would also need to convert the KeyPoint structure to a Point2f structure. We must pay special attention to the queryIdx and trainIdx member variables of DMatch, the OpenCV struct that holds a match between two keypoints, as they must align with the way we used the DescriptorMatcher::match() function. The following code section shows how to align a matching into two corresponding sets of 2D points, and how these can be used to find the essential matrix:

    vector<KeyPoint> leftKpts, rightKpts;
    // ... obtain keypoints using a feature extractor

    vector<DMatch> matches;
    // ... obtain matches using a descriptor matcher

    //align left and right point sets
    vector<Point2f> leftPts, rightPts;
    for (size_t i = 0; i < matches.size(); i++) {
        // queryIdx is the "left" image
        leftPts.push_back(leftKpts[matches[i].queryIdx].pt);
        // trainIdx is the "right" image
        rightPts.push_back(rightKpts[matches[i].trainIdx].pt);
    }

    //robustly find the Essential Matrix
    Mat status;
    Mat E = findEssentialMat(
        leftPts,    //points from left image
        rightPts,   //points from right image
        focal,      //camera focal length factor
        pp,         //camera principal point
        cv::RANSAC, //use RANSAC for a robust solution
        0.999,      //desired solution confidence level
        1.0,        //point-to-epipolar-line threshold
        status);    //binary vector for inliers

We may later use the status binary vector to prune those points that align with the recovered essential matrix. Refer to the following image for an illustration of point matching after pruning. The red arrows mark feature matches that were removed in the process of finding the matrix, and the green arrows are feature matches that were kept:

Now we are ready to find the camera matrices; however, the new OpenCV 3 API makes things very easy for us by introducing the recoverPose function.
First, we will briefly examine the structure of the camera matrix we will use:

    P = [R | t]

This is the model for our camera; it consists of two elements, rotation (denoted as R) and translation (denoted as t). The interesting thing about it is that it holds a very essential equation: x = PX, where x is a 2D point on the image and X is a 3D point in space. There is more to it, but this matrix gives us a very important relationship between the image points and the scene points. So, now that we have a motivation for finding the camera matrices, we will see how it can be done. The following code section shows how to decompose the essential matrix into the rotation and translation elements:

    Mat E;
    // ... find the essential matrix

    Mat R, t; //placeholders for rotation and translation

    //Find Pright camera matrix from the essential matrix
    //Cheirality check is performed internally.
    recoverPose(E, leftPts, rightPts, R, t, focal, pp, mask);

Very simple. Without going too deep into mathematical interpretation, this conversion of the essential matrix to rotation and translation is possible because the essential matrix was originally composed of these two elements. Strictly to satisfy our curiosity, we can look at the following equation for the essential matrix, which appears in the literature:

    E = [t]x R

We see that it is composed of (some form of) a translation element, t, and a rotational element, R.

Note that a cheirality check is internally performed in the recoverPose function. The cheirality check makes sure that all triangulated 3D points are in front of the reconstructed camera. Camera matrix recovery from the essential matrix has in fact four possible solutions, but the only correct solution is the one that will produce triangulated points in front of the camera, hence the need for a cheirality check.

Note that what we just did only gives us one camera matrix, and for triangulation, we require two camera matrices. This operation assumes that one camera matrix is fixed and canonical (no rotation and no translation):

    P0 = [I | 0]

The other camera that we recovered from the essential matrix has moved and rotated in relation to the fixed one. This also means that any of the 3D points that we recover from these two camera matrices will have the first camera at the world origin point (0, 0, 0).

One more thing we can think of adding to our method is error checking. Many times, the calculation of an essential matrix from point matching is erroneous, and this affects the resulting camera matrices. Continuing to triangulate with faulty camera matrices is pointless. We can install a check to see if the rotation element is a valid rotation matrix. Keeping in mind that rotation matrices must have a determinant of 1 (or -1), we can simply do the following:

    bool CheckCoherentRotation(const cv::Mat_<double>& R) {
        if (fabsf(determinant(R)) - 1.0 > 1e-07) {
            cerr << "rotation matrix is invalid" << endl;
            return false;
        }
        return true;
    }

We can now see how all these elements combine into a function that recovers the P matrices.
First, we will introduce some convenience data structures and type shorthands:

    typedef std::vector<cv::KeyPoint> Keypoints;
    typedef std::vector<cv::Point2f>  Points2f;
    typedef std::vector<cv::Point3f>  Points3f;
    typedef std::vector<cv::DMatch>   Matching;

    struct Features { //2D features
        Keypoints keyPoints;
        Points2f  points;
        cv::Mat   descriptors;
    };

    struct Intrinsics { //camera intrinsic parameters
        cv::Mat K;
        cv::Mat Kinv;
        cv::Mat distortion;
    };

Now, we can write the camera matrix finding function:

    void findCameraMatricesFromMatch(
            const Intrinsics& intrin,
            const Matching&   matches,
            const Features&   featuresLeft,
            const Features&   featuresRight,
            cv::Matx34f&      Pleft,
            cv::Matx34f&      Pright) {

        //Note: assuming fx = fy
        const double focal = intrin.K.at<float>(0, 0);
        const cv::Point2d pp(intrin.K.at<float>(0, 2),
                             intrin.K.at<float>(1, 2));

        //align left and right point sets using the matching
        Features left;
        Features right;
        GetAlignedPointsFromMatch(
            featuresLeft,
            featuresRight,
            matches,
            left,
            right);

        //find essential matrix
        Mat E, mask;
        E = findEssentialMat(
            left.points,
            right.points,
            focal,
            pp,
            RANSAC,
            0.999,
            1.0,
            mask);

        Mat_<double> R, t;

        //Find Pright camera matrix from the essential matrix
        recoverPose(E, left.points, right.points, R, t, focal, pp, mask);

        Pleft = Matx34f::eye();
        Pright = Matx34f(R(0,0), R(0,1), R(0,2), t(0),
                         R(1,0), R(1,1), R(1,2), t(1),
                         R(2,0), R(2,1), R(2,2), t(2));
    }

At this point, we have the two cameras that we need in order to reconstruct the scene: the canonical first camera, in the Pleft variable, and the second camera we calculated from the essential matrix, in the Pright variable.

Choosing the image pair to use first

Given that we have more than just two image views of the scene, we must choose which two views we will start the reconstruction from. In their paper, Snavely et al. suggest that we pick the two views that have the least number of homography inliers. A homography is a relationship between two images or sets of points that lie on a plane; the homography matrix defines the transformation from one plane to another. In the case of an image or a set of 2D points, the homography matrix is of size 3 x 3.

When Snavely et al. look for the lowest inlier ratio, they essentially suggest calculating the homography matrix between all pairs of images and picking the pair whose points mostly do not correspond with the homography matrix. This means the geometry of the scene in these two views is not planar, or at least not the same plane in both views, which helps when doing 3D reconstruction. For reconstruction, it is best to look at a complex scene with non-planar geometry, with things closer and farther away from the camera.
The following code snippet shows how to use OpenCV's findHomography function to count the number of inliers between two views whose features were already extracted and matched:

    int findHomographyInliers(
            const Features& left,
            const Features& right,
            const Matching& matches) {

        //Get aligned feature vectors
        Features alignedLeft;
        Features alignedRight;
        GetAlignedPointsFromMatch(left, right, matches, alignedLeft, alignedRight);

        //Calculate homography with at least 4 points
        Mat inlierMask;
        Mat homography;
        if (matches.size() >= 4) {
            homography = findHomography(alignedLeft.points,
                                        alignedRight.points,
                                        cv::RANSAC,
                                        RANSAC_THRESHOLD,
                                        inlierMask);
        }

        if (matches.size() < 4 or homography.empty()) {
            return 0;
        }

        return countNonZero(inlierMask);
    }

The next step is to perform this operation on all pairs of image views in our bundle and sort them based on the ratio of homography inliers to outliers:

    //sort pairwise matches to find the lowest Homography inliers
    map<float, ImagePair> pairInliersCt;
    const size_t numImages = mImages.size();

    //scan all possible image pairs (symmetric)
    for (size_t i = 0; i < numImages - 1; i++) {
        for (size_t j = i + 1; j < numImages; j++) {

            if (mFeatureMatchMatrix[i][j].size() < MIN_POINT_CT) {
                //Not enough points in matching
                pairInliersCt[1.0] = {i, j};
                continue;
            }

            //Find number of homography inliers
            const int numInliers = findHomographyInliers(
                mImageFeatures[i],
                mImageFeatures[j],
                mFeatureMatchMatrix[i][j]);

            const float inliersRatio =
                (float)numInliers / (float)(mFeatureMatchMatrix[i][j].size());

            pairInliersCt[inliersRatio] = {i, j};
        }
    }

Note that the std::map<float, ImagePair> will internally sort the pairs based on the map's key: the inliers ratio. We then simply need to traverse this map from the beginning to find the image pair with the least inlier ratio, and if that pair cannot be used, we can easily skip ahead to the next pair.

Summary

In this article, we saw how OpenCV v3 can help us approach Structure from Motion in a manner that is both simple to code and to understand. OpenCV v3's new API contains a number of useful functions and data structures that make our lives easier and also assist in a cleaner implementation. However, the state-of-the-art SfM methods are far more complex. There are many issues we chose to disregard in favor of simplicity, and plenty more error examinations that are usually in place. Our chosen methods for the different elements of SfM can also be revisited. Some methods even use N-view triangulation once they understand the relationship between the features in multiple images.

If we would like to extend and deepen our familiarity with SfM, we will certainly benefit from looking at other open source SfM libraries. One particularly interesting project is libMV, which implements a vast array of SfM elements that may be interchanged to get the best results. There is a great body of work from the University of Washington that provides tools for many flavors of SfM (Bundler and VisualSfM). This work inspired an online product from Microsoft, called PhotoSynth, and 123D Catch from Autodesk. There are many more implementations of SfM readily available online, and one must only search to find quite a lot of them.

Resources for Article:

Further resources on this subject:

Basics of Image Histograms in OpenCV [article]
OpenCV: Image Processing using Morphological Filters [article]
Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos [article]
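As a follow-up to the triangulation discussion earlier (this snippet is not from the chapter's code): once Pleft and Pright have been recovered, OpenCV's triangulatePoints function can produce the 3D points. The sketch below assumes that leftPts and rightPts contain matched points expressed in normalized camera coordinates (that is, with the calibration matrix K already applied), since the camera matrices used here do not include K:

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Triangulate matched 2D points into 3D, given the two camera matrices
    // recovered above (Pleft canonical, Pright = [R|t]).
    std::vector<cv::Point3f> triangulate(const cv::Matx34f& Pleft,
                                         const cv::Matx34f& Pright,
                                         const std::vector<cv::Point2f>& leftPts,
                                         const std::vector<cv::Point2f>& rightPts)
    {
        cv::Mat points4D; //homogeneous 4xN output
        cv::triangulatePoints(cv::Mat(Pleft), cv::Mat(Pright),
                              leftPts, rightPts, points4D);

        //convert from homogeneous to Euclidean coordinates
        std::vector<cv::Point3f> points3D;
        for (int i = 0; i < points4D.cols; i++) {
            const float w = points4D.at<float>(3, i);
            points3D.emplace_back(points4D.at<float>(0, i) / w,
                                  points4D.at<float>(1, i) / w,
                                  points4D.at<float>(2, i) / w);
        }
        return points3D;
    }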

Writing a Reddit Reader with RxPHP

Packt
09 Jan 2017
9 min read
In this article by Martin Sikora, author of the book PHP Reactive Programming, we will cover writing a CLI Reddit reader app using RxPHP, and we will see how Disposables are used in the default classes that come with RxPHP and how these are going to be useful for unsubscribing from Observables in our app.

Examining RxPHP's internals

As we know, Disposables are a means of releasing resources used by Observers, Observables, Subjects, and so on. In practice, a Disposable is returned, for example, when subscribing to an Observable. Consider the following code from the default Rx\Observable::subscribe() method:

    function subscribe(ObserverI $observer, $scheduler = null) {
        $this->observers[] = $observer;
        $this->started = true;

        return new CallbackDisposable(function () use ($observer) {
            $this->removeObserver($observer);
        });
    }

This method first adds the Observer to the array of all subscribed Observers. It then marks this Observable as started and, at the end, it returns a new instance of the CallbackDisposable class, which takes a Closure as an argument and invokes it when it's disposed. This is probably the most common use case for Disposables. This Disposable just removes the Observer from the array of subscribers and, therefore, it receives no more events emitted from this Observable.

A closer look at subscribing to Observables

It should be obvious that Observables need to work in such a way that they iterate all subscribed Observers. Unsubscribing via a Disposable then needs to remove one particular Observer from the array of all subscribed Observers. However, if we have a look at how most of the default Observables work, we find out that they always override the Observable::subscribe() method and usually completely omit the part where it should hold an array of subscribers. Instead, they just emit all available values to the subscribed Observer and finish with the onComplete() signal immediately after that. For example, we can have a look at the actual source code of the subscribe() method of the Rx\ReturnObservable class:

    function subscribe(ObserverI $observer, SchedulerI $scheduler = null) {
        $value = $this->value;
        $scheduler = $scheduler ?: new ImmediateScheduler();

        $disposable = new CompositeDisposable();

        $disposable->add($scheduler->schedule(function() use ($observer, $value) {
            $observer->onNext($value);
        }));

        $disposable->add($scheduler->schedule(function() use ($observer) {
            $observer->onCompleted();
        }));

        return $disposable;
    }

The ReturnObservable class takes a single value in its constructor and emits this value to every Observer as they subscribe. The following is a nice example of how the lifecycle of an Observable might look:

When an Observer subscribes, it checks whether a Scheduler was also passed as an argument. Usually, it's not, so it creates an instance of ImmediateScheduler. Then, an instance of CompositeDisposable is created, which is going to keep an array of all Disposables used by this method. When calling CompositeDisposable::dispose(), it iterates all the Disposables it contains and calls their respective dispose() methods. Right after that, we start populating our CompositeDisposable with the following:

    $disposable->add($scheduler->schedule(function() { ... }));

This is something we'll see very often. SchedulerInterface::schedule() returns a DisposableInterface, which is responsible for unsubscribing and releasing resources.
In this case, when we're using ImmediateScheduler, which has no other logic, it just evaluates the Closure immediately:

    function () use ($observer, $value) {
        $observer->onNext($value);
    }

Since ImmediateScheduler::schedule() doesn't need to release any resources (it didn't use any), it just returns an instance of Rx\Disposable\EmptyDisposable that does literally nothing. Then the Disposable is returned and could be used to unsubscribe from this Observable. However, as we saw in the preceding source code, this Observable doesn't let you unsubscribe, and if we think about it, it doesn't even make sense, because the ReturnObservable class's value is emitted immediately on subscription.

The same applies to other similar Observables, such as IteratorObservable, RangeObservable, or ArrayObservable. These just contain recursive calls with Schedulers, but the principle is the same.

A good question is, why on Earth is this so complicated? Everything the preceding code does could be stripped down to the following three lines (assuming we're not interested in using Schedulers):

    function subscribe(ObserverI $observer) {
        $observer->onNext($this->value);
        $observer->onCompleted();
    }

Well, for ReturnObservable this might be true, but in real applications, we very rarely use any of these primitive Observables. It's true that we usually don't even need to deal with Schedulers. However, the ability to unsubscribe from Observables or clean up any resources when unsubscribing is very important, and we'll use it in a few moments.

A closer look at Operator chains

Before we start writing our Reddit reader, we should talk briefly about an interesting situation that might occur, so it doesn't catch us unprepared later. We're also going to introduce a new type of Observable, called ConnectableObservable. Consider this simple Operator chain with two subscribers:

    // rxphp_filters_observables.php
    use Rx\Observable\RangeObservable;
    use Rx\Observable\ConnectableObservable;

    $connObs = new ConnectableObservable(new RangeObservable(0, 6));

    $filteredObs = $connObs
        ->map(function($val) {
            return $val ** 2;
        })
        ->filter(function($val) {
            return $val % 2;
        });

    $disposable1 = $filteredObs->subscribeCallback(function($val) {
        echo "S1: ${val}\n";
    });

    $disposable2 = $filteredObs->subscribeCallback(function($val) {
        echo "S2: ${val}\n";
    });

    $connObs->connect();

The ConnectableObservable class is a special type of Observable that behaves similarly to Subject (in fact, internally, it really uses an instance of the Subject class). Any other Observable emits all available values right after you subscribe to it. However, ConnectableObservable takes another Observable as an argument and lets you subscribe Observers to it without emitting anything. When you call ConnectableObservable::connect(), it connects the Observers with the source Observable, and all values go one by one to all subscribers. Internally, it contains an instance of the Subject class; when we called subscribe(), it just subscribed the Observer to its internal Subject. Then, when we called the connect() method, it subscribed the internal Subject to the source Observable.

In the $filteredObs variable, we keep a reference to the last Observable returned from the filter() call, which is an instance of AnonymousObservable where, on the next few lines, we subscribe both Observers. Now, let's see what this Operator chain prints:

    $ php rxphp_filters_observables.php
    S1: 1
    S2: 1
    S1: 9
    S2: 9
    S1: 25
    S2: 25

As we can see, each value went through both Observers in the order they were emitted.
Just out of curiosity, we can also have a look at what would happen if we didn't use ConnectableObservable, and used just the RangeObservable instead:

    $ php rxphp_filters_observables.php
    S1: 1
    S1: 9
    S1: 25
    S2: 1
    S2: 9
    S2: 25

This time, RangeObservable emitted all values to the first Observer and then, again, all values to the second Observer. Right now, we can tell that the Observable had to generate all the values twice, which is inefficient, and with a large dataset, this might cause a performance bottleneck.

Let's go back to the first example with ConnectableObservable, and modify the filter() call so it prints all the values that go through:

    $filteredObs = $connObs
        ->map(function($val) {
            return $val ** 2;
        })
        ->filter(function($val) {
            echo "Filter: $val\n";
            return $val % 2;
        });

Now we run the code again and see what happens:

    $ php rxphp_filters_observables.php
    Filter: 0
    Filter: 0
    Filter: 1
    S1: 1
    Filter: 1
    S2: 1
    Filter: 4
    Filter: 4
    Filter: 9
    S1: 9
    Filter: 9
    S2: 9
    Filter: 16
    Filter: 16
    Filter: 25
    S1: 25
    Filter: 25
    S2: 25

Well, this is unexpected! Each value is printed twice. This doesn't mean that the Observable had to generate all the values twice, however. It's not obvious at first sight what happened, but the problem is that we subscribed to the Observable at the end of the Operator chain. As stated previously, $filteredObs is an instance of AnonymousObservable that holds many nested Closures. By calling its subscribe() method, it runs a Closure that's created by its predecessor, and so on. This leads to the fact that every call to subscribe() has to invoke the entire chain. While this might not be an issue in many use cases, there are situations where we might want to do some special operation inside one of the filters. Also, note that calls to the subscribe() method might be out of our control, performed by another developer who wanted to use an Observable we created for them. It's good to know that such a situation might occur and could lead to unwanted behavior.

It's sometimes hard to see what's going on inside Observables. It's very easy to get lost, especially when we have to deal with multiple Closures. Schedulers are prime examples. Feel free to experiment with the examples shown here and use a debugger to examine step by step what code gets executed and in what order.

So, let's figure out how to fix this. We don't want to subscribe at the end of the chain multiple times, so we can create an instance of the Subject class, where we'll subscribe both Observers, and the Subject class itself will subscribe to the AnonymousObservable, as discussed a moment ago:

    // ...
    use Rx\Subject\Subject;

    $subject = new Subject();
    $connObs = new ConnectableObservable(new RangeObservable(0, 6));

    $filteredObservable = $connObs
        ->map(function($val) {
            return $val ** 2;
        })
        ->filter(function($val) {
            echo "Filter: $val\n";
            return $val % 2;
        })
        ->subscribe($subject);

    $disposable1 = $subject->subscribeCallback(function($val) {
        echo "S1: ${val}\n";
    });

    $disposable2 = $subject->subscribeCallback(function($val) {
        echo "S2: ${val}\n";
    });

    $connObs->connect();

Now we can run the script again and see that it does what we wanted it to do:

    $ php rxphp_filters_observables.php
    Filter: 0
    Filter: 1
    S1: 1
    S2: 1
    Filter: 4
    Filter: 9
    S1: 9
    S2: 9
    Filter: 16
    Filter: 25
    S1: 25
    S2: 25

This might look like an edge case, but soon we'll see that this issue, left unhandled, could lead to some very unpredictable behavior.
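One more detail worth noting, added here as an illustration rather than taken from the book's listing: because subscribeCallback() returns a Disposable, we can use it to unsubscribe one of the Observers before calling connect(). Assuming the same $subject, $disposable1, and $connObs variables as above, the sketch below drops the first Observer so that only S2 receives values, which is the kind of unsubscription we will rely on in the Reddit reader:

    // dispose() is the method exposed by RxPHP's DisposableInterface; calling it
    // here removes the first Observer from the Subject before any value is emitted.
    $disposable1->dispose();

    $connObs->connect();

    // Expected output: only the second Observer receives values.
    // Filter: 0
    // Filter: 1
    // S2: 1
    // Filter: 4
    // ...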
We'll bring out both these issues (proper usage of Disposables and Operator chains) when we start writing our Reddit reader.

Summary

In this article, we looked in more depth at how to use Disposables and Operators, how these work internally, and what it means for us. We also looked at a couple of new classes from RxPHP, such as ConnectableObservable and CompositeDisposable.

Resources for Article:

Further resources on this subject:

Working with JSON in PHP jQuery [article]
Working with Simple Associations using CakePHP [article]
Searching Data using phpMyAdmin and MySQL [article]

Data Storage in Force.com

Packt
09 Jan 2017
14 min read
In this article by Andrew Fawcett, author of the book Force.com Enterprise Architecture - Second Edition, we will discuss how it is important to consider your customers' storage needs and use cases around their data creation and consumption patterns early in the application design phase. This ensures that your object schema is the most optimum one with respect to large data volumes, data migration processes (inbound and outbound), and storage cost. In this article, we will extend the Custom Objects in the FormulaForce application as we explore how the platform stores and manages data. We will also explore the difference between your applications operational data and configuration data and the benefits of using Custom Metadata Types for configuration management and deployment. (For more resources related to this topic, see here.) You will obtain a good understanding of the types of storage provided and how the costs associated with each are calculated. It is also important to understand the options that are available when it comes to reusing or attempting to mirror the Standard Objects such as Account, Opportunity, or Product, which extend the discussion further into license cost considerations. You will also become aware of the options for standard and custom indexes over your application data. Finally, we will have some insight into new platform features for consuming external data storage from within the platform. In this article, we will cover the following topics: Mapping out end user storage requirements Understanding the different storage types Reusing existing Standard Objects Importing and exporting application data Options for replicating and archiving data External data sources Mapping out end user storage requirements During the initial requirements and design phase of your application, the best practice is to create user categorizations known as personas. Personas consider the users' typical skills, needs, and objectives. From this information, you should also start to extrapolate their data requirements, such as the data they are responsible for creating (either directly or indirectly, by running processes) and what data they need to consume (reporting). Once you have done this, try to provide an estimate of the number of records that they will create and/or consume per month. Share these personas and their data requirements with your executive sponsors, your market researchers, early adopters, and finally the whole development team so that they can keep them in mind and test against them as the application is developed. For example, in our FormulaForce application, it is likely that managers will create and consume data, whereas race strategists will mostly consume a lot of data. Administrators will also want to manage your applications configuration data. Finally, there will likely be a background process in the application, generating a lot of data, such as the process that records Race Data from the cars and drivers during the qualification stages and the race itself, such as sector (a designated portion of the track) times. You may want to capture your conclusions regarding personas and data requirements in a spreadsheet along with some formulas that help predict data storage requirements. This will help in the future as you discuss your application with Salesforce during the AppExchange Listing process and will be a useful tool during the sales cycle as prospective customers wish to know how to budget their storage costs with your application installed. 
Understanding the different storage types The storage used by your application records contributes to the most important part of the overall data storage allocation on the platform. There is also another type of storage used by the files uploaded or created on the platform. From the Storage Usage page under the Setup menu, you can see a summary of the records used, including those that reside in the Salesforce Standard Objects. Later in this article, we will create a Custom Metadata Type object to store configuration data. Storage consumed by this type of object is not reflected on the Storage Usage page and is managed and limited in a different way. The preceding page also shows which users are using the most amount of storage. In addition to the individual's User details page, you can also locate the Used Data Space and Used File Space fields; next to these are the links to view the users' data and file storage usage. The limit shown for each is based on a calculation between the minimum allocated data storage depending on the type of organization or the number of users multiplied by a certain number of MBs, which also depends on the organization type; whichever is greater becomes the limit. For full details of this, click on the Help for this Page link shown on the page. Data storage Unlike other database platforms, Salesforce typically uses a fixed 2 KB per record size as part of its storage usage calculations, regardless of the actual number of fields or the size of the data within them on each record. There are some exceptions to this rule, such as Campaigns that take up 8 KB and stored Email Messages use up the size of the contained e-mail, though all Custom Object records take up 2 KB. Note that this record size also applies even if the Custom Object uses large text area fields. File storage Salesforce has a growing number of ways to store file-based data, ranging from the historic Document tab, to the more sophisticated Content tab, to using the Files tab, and not to mention Attachments, which can be applied to your Custom Object records if enabled. Each has its own pros and cons for end users and file size limits that are well defined in the Salesforce documentation. From the perspective of application development, as with data storage, be aware of how much your application is generating on behalf of the user and give them a means to control and delete that information. In some cases, consider if the end user would be happy to have the option to recreate the file on demand (perhaps as a PDF) rather than always having the application to store it. Reusing the existing Standard Objects When designing your object model, a good knowledge of the existing Standard Objects and their features is the key to knowing when and when not to reference them. Keep in mind the following points when considering the use of Standard Objects: From a data storage perspective: Ignoring Standard Objects creates a potential data duplication and integration effort for your end users if they are already using similar Standard Objects as pre-existing Salesforce customers. Remember that adding additional custom fields to the Standard Objects via your package will not increase the data storage consumption for those objects. From a license cost perspective: Conversely, referencing some Standard Objects might cause additional license costs for your users, since not all are available to the users without additional licenses from Salesforce. 
Make sure that you understand the differences between Salesforce (CRM) and Salesforce Platform licenses with respect to the Standard Objects available. Currently, the Salesforce Platform license provides Accounts and Contacts; however, to use the Opportunity or Product objects, a Salesforce (CRM) license is needed by the user. Refer to the Salesforce documentation for the latest details on these. Use your user personas to define what Standard Objects your users use and reference them via lookups, Apex code, and Visualforce accordingly. You may wish to use extension packages and/or dynamic Apex and SOQL to make these kind of references optional. Since Developer Edition orgs have all these licenses and objects available (although in a limited quantity), make sure that you review your Package dependencies before clicking on the Upload button each time to check for unintentional references. Importing and exporting data Salesforce provides a number of its own tools for importing and exporting data as well as a number of third-party options based on the Salesforce APIs; these are listed on AppExchange. When importing records with other record relationships, it is not possible to predict and include the IDs of related records, such as the Season record ID when importing Race records; in this section, we will present a solution to this. Salesforce provides Data Import Wizard, which is available under the Setup menu. This tool only supports Custom Objects and Custom Settings. Custom Metadata Type records are essentially considered metadata by the platform, and as such, you can use packages, developer tools, and Change Sets to migrate these records between orgs. There is an open source CSV data loader for Custom Metadata Types at https://github.com/haripriyamurthy/CustomMetadataLoader. It is straightforward to import a CSV file with a list of race Season since this is a top-level object and has no other object dependencies. However, to import the Race information (which is a child object related to Season), the Season and Fasted Lap By record IDs are required, which will typically not be present in a Race import CSV file by default. Note that IDs are unique across the platform and cannot be shared between orgs. External ID fields help address this problem by allowing Salesforce to use the existing values of such fields as a secondary means to associate records being imported that need to reference parent or related records. All that is required is that the related record Name or, ideally, a unique external ID be included in the import data file. This CSV file includes three columns: Year, Name, and Fastest Lap By (of the driver who performed the fastest lap of that race, indicated by their Twitter handle). You may remember that a Driver record can also be identified by this since the field has been defined as an External ID field. Both the 2014 Season record and the Lewis Hamilton Driver record should already be present in your packaging org. Now, run Data Import Wizard and complete the settings as shown in the following screenshot: Next, complete the field mappings as shown in the following screenshot: Click on Start Import and then on OK to review the results once the data import has completed. You should find that four new Race records have been created under 2014 Season, with the Fasted Lap By field correctly associated with the Lewis Hamilton Driver record. 
Note that these tools will also stress your Apex Trigger code for volumes, as they typically have the bulk mode enabled and insert records in chunks of 200 records. Thus, it is recommended that you test your triggers to at least this level of record volumes. Options for replicating and archiving data Enterprise customers often have legacy and/or external systems that are still being used or that they wish to phase out in the future. As such, they may have requirements to replicate aspects of the data stored in the Salesforce platform to another. Likewise, in order to move unwanted data off the platform and manage their data storage costs, there is a need to archive data. The following lists some platform and API facilities that can help you and/or your customers build solutions to replicate or archive data. There are, of course, a number of AppExchange solutions listed that provide applications that use these APIs already: Replication API: This API exists in both the web service SOAP and Apex form. It allows you to develop a scheduled process to query the platform for any new, updated, or deleted records between a given time period for a specific object. The getUpdated and getDeleted API methods return only the IDs of the records, requiring you to use the conventional Salesforce APIs to query the remaining data for the replication. The frequency in which this API is called is important to avoid gaps. Refer to the Salesforce documentation for more details. Outbound Messaging: This feature offers a more real-time alternative to the replication API. An outbound message event can be configured using the standard workflow feature of the platform. This event, once configured against a given object, provides a Web Service Definition Language (WSDL) file that describes a web service endpoint to be called when records are created and updated. It is the responsibility of a web service developer to create the end point based on this definition. Note that there is no provision for deletion with this option. Bulk API: This API provides a means to move up to 5000 chunks of Salesforce data (up to 10 MB or 10,000 records per chunk) per rolling 24-hour period. Salesforce and third-party data loader tools, including the Salesforce Data Loader tool, offer this as an option. It can also be used to delete records without them going into the recycle bin. This API is ideal for building solutions to archive data. Heroku Connect is a seamless data synchronization solution between Salesforce and Heroku Postgres. For further information, refer to https://www.heroku.com/connect. External data sources One of the downsides of moving data off the platform in an archive use case or with not being able to replicate data onto the platform is that the end users have to move between applications and logins to view data; this causes an overhead as the process and data is not connected. The Salesforce Connect (previously known as Lightning Connect) is a chargeable add-on feature of the platform is the ability to surface external data within the Salesforce user interface via the so-called External Objects and External Data Sources configurations under Setup. They offer a similar functionality to Custom Objects, such as List views, Layouts, and Custom Buttons. Currently, Reports and Dashboards are not supported, though it is possible to build custom report solutions via Apex, Visualforce or Lightning Components. 
External Data Sources can be connected to existing OData-based endpoints and secured through OAuth or Basic Authentication. Alternatively, Apex provides a Connector API whereby developers can implement adapters to connect to other HTTP-based APIs. Depending on the capabilities of the associated External Data Source, users accessing External Objects using the data source can read and even update records through the standard Salesforce UIs, such as the Salesforce Mobile and desktop interfaces.

Summary

This article explored the declarative aspects of developing an application on the platform: how your application data is stored and how relational data integrity is enforced through the use of lookup field deletion constraints and unique fields.

Upload the latest version of the FormulaForce package and install it into your test org. The summary page during the installation of new and upgraded components should look something like the following screenshot. Note that the permission sets are upgraded during the install. Once you have installed the package in your testing org, visit the Custom Metadata Types page under Setup and click on Manage Records next to the object. You will see that the records are shown as managed and cannot be deleted. Click on one of the records to see that the field values themselves cannot be edited either. This is the effect of the Field Manageability checkbox when defining the fields. The Namespace Prefix shown here will differ from yours.

Try changing or adding the Track Lap Time records in your packaging org; for example, update a track time on an existing record. Upload the package again, then upgrade your test org. You will see that the records are automatically updated. Conversely, any records you created in your test org will be retained between upgrades.

In this article, we have now covered some major aspects of the platform with respect to packaging, platform alignment, and how your application data is stored, as well as the key aspects of your application's architecture.

Resources for Article: Further resources on this subject: Process Builder to the Rescue [article] Custom Coding with Apex [article] Building, Publishing, and Supporting Your Force.com Application [article]
A Professional Environment for React Native, Part 1

Pierre Monge
09 Jan 2017
5 min read
React Native, a new framework, allows you to build mobile apps using JavaScript. It uses the same design as React.js, letting you compose a rich mobile UI from declarative components. Although many developers are talking about this technology, React Native is not yet approved by most professionals, for several reasons:

React Native isn't fully stable yet. At the time of writing, we are at version 0.40.
It can be scary to use a web technology in a mobile application.
It's hard to find good React Native developers, because knowing the React.js stack is not enough to maintain a mobile React Native app from A to Z!

To confront all these prejudices, this series will act as a guide, detailing how we see things in my development team. We will cover the entire React Native environment, as well as discuss how to maintain a React Native application. This series may be of interest to companies who want to implement a React Native solution, and also to anyone who is looking for the perfect tools to maintain a mobile application in React Native. Let's start here in part 1 by exploring the React Native environment.

The environment

The React Native environment is pretty consistent. To manage all of the parts of such an application, you will need a native stack, a JavaScript stack, and specific components from React Native. Let's examine all the aspects of the React Native environment:

The Native part consists of two important pieces of software: Android Studio (Android) and Xcode (iOS). Both pieces of software come with their own emulators, so there is no need for a physical device! The negative point of Android Studio, however, is that you need to download the SDK, and you will have to find the right versions and download them all. In addition, these two programs take up a lot of room on your hard disk!

The JavaScript part naturally consists of Node.js, but we must add Watchman to it to check for changes in files in real time.

The React Native CLI will automate the linking of all this software. You only have to run react-native init helloworld to create a project and react-native run-ios --scheme 'Dev' to launch the project on an iOS simulator in debug mode. The supplied react-native commands will handle almost everything!

You have, no doubt, come to our first conclusion: React Native has a lot of prerequisites, and although it makes sense to have as much dependency as possible, you will have to master them all, which can take some time. And also a lot of space on your hard drive! Try this as your starting point if you want more information on getting started with React Native.

Atom, linter, and flow

A developer never starts coding without his text editor and his little tricks, just as a woodcutter never goes out into the forest without his ax! More than 80% of people around me use Atom as a text editor. And they are not wrong! React Native is 100% open source, and Atom is also open source. And it is full of plug-ins and themes of all kinds. I personally use a lot of plug-ins, such as color-picker, file-icons, indent-guide-improved, minimap, and so on, but there are some plug-ins that should be essential for every JavaScript coder, especially for your React Native application.

linter-eslint

To work alone or in a group, you must have a common syntax for all your files. To do this, we use linter-eslint with the fbjs configurations. This plugin provides the following:

Good indentation
Good syntax on how to define variables, objects, classes, and so on
Indications of non-existent or unused variables and functions
And many other great benefits

Flow

What is the biggest problem with using JavaScript? One issue with using JavaScript has always been that it is a language with no types. In fact, there are types, such as String, Number, Boolean, Function, and so on, but that is just not enough; there is no static typing. To deal with this, we use Flow, which allows you to perform type checks before runtime. This is, of course, useful for predicting bugs! There is even a plug-in version for Atom: linter-flow.

Conclusion

At this point, you should have everything you need to create your first React Native mobile applications. Here are some great examples of apps that are out there already. Check out part 2 in this series, where I cover the tools that can help you maintain your React Native apps.

About the author

Pierre Monge (liroo.pierre@gmail.com) is a 21-year-old student. He is a developer in C, JavaScript, and all things related to web development, and he has recently been creating mobile applications. He is currently working as an intern at a company named Azendoo, where he is developing a 100% React Native application.
Abstract terrain shader in Duality

Lőrinc Serfőző
06 Jan 2017
5 min read
This post guides you through the creation process of abstract-looking terrain shaders in the Duality 2D game engine. The basics of the engine are not presented here, but if you are familiar with game technology, it should not be too difficult to follow along. If something does not make sense at first, take a look at the official documentation on GitHub. Alternatively, there are two tutorials with more of an introductory flair. In addition, the concepts described here can be easily adapted to other game engines and frameworks as well. Required tools Duality can be downloaded from the official site. A C# compiler and a text editor are also needed. Visual Studio 2013 or higher is recommended, but other IDEs, like MonoDevelop also work. Creating the required resources Open up a new project in Dualitor! First, we have to create several new resources. The following list describes the required resources. Create and name them accordingly. VertexShader encapsulates a GLSL vertex shader. We need this, because the vertex coordinates should be converted to world space, in order to achieve the desired terrain effect. More on that later. FragmentShader encapsulates a GLSL fragment shader, the 'creative' part of our processing. ShaderProgram binds a VertexShader and a FragmentShader together. DrawTechnique provides attributes (such as blending mode, etc.) to the ShaderProgram to be able to send it to the GPU. Material establishes the connection between a DrawTechnique and one or several textures and other numerical data. It can be attached to the GameObject's renderers in the scene. The vertex shader Let's start with implementing the vertex shader. Unlike most game engines, Duality handles some of the vertex transformations on the CPU, in order to achieve a parallax scaling effect. Thus, the vertex array passed to the GPU is already scaled. However, we do not need that precalculation for our terrain shader, so this transformation has to be undone in the vertex shader. Double click the VertexShader resource to open it in an external text editor. It should contain the following: void main() { gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; gl_FrontColor = gl_Color; } To perform the inverse transformation, the camera data should be passed to the shader. This is done automatically by Duality via pre-configured uniform variables: CameraFocusDist, CameraParallax and CameraPosition. The result worldPosition is passed to the fragment shader via a varying variable. // vertex shader varying vec3 worldPosition; uniform float CameraFocusDist; uniform bool CameraParallax; uniform vec3 CameraPosition; vec3 reverseParallaxTransform () { // Duality uses software pre-transformation of vertices // gl_Vertex is already in parallax (scaled) view space when arriving here. vec4 vertex = gl_Vertex; // Reverse-engineer the scale that was previously applied to the vertex float scale = 1.0; if (CameraParallax) { scale = CameraFocusDist / vertex.z; } else { // default focus dist is 500 scale = CameraFocusDist / 500.0; } return vec3 (vertex.xyz + vec3 (CameraPosition.xy, 0)) / scale; } void main() { gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; gl_FrontColor = gl_Color; worldPosition = reverseParallaxTransform (); } The fragment shader Next, implement the fragment shader. Various effects can be achieved using textures and mathematical functions creatively. Here a simple method is presented: the well-known XOR texture generation. 
It is based on calculating the binary exclusive or product of the integer world coordinates (operator ^ in GLSL). To control its parameters, two uniform variables, scale and repeat, are introduced in addition to the varying one from the vertex shader. A texture named mainTex is also used to alpha-mask the product. Note that GLSL uses constructor-style conversions such as int(...) and float(...) rather than C-style casts:

// fragment shader
varying vec3 worldPosition;

uniform float scale;
uniform int repeat;
uniform sampler2D mainTex;

void main()
{
    vec4 texSample = texture2D(mainTex, gl_TexCoord[0].st);
    int x = int(worldPosition.x * scale) % repeat;
    int y = int(worldPosition.y * scale) % repeat;
    vec3 color = gl_Color.rgb * float(x ^ y) / float(repeat);
    gl_FragColor = vec4(color, 1.0) * texSample.a;
}

Assign the VertexShader and FragmentShader resources to the ShaderProgram resource, and that to the DrawTechnique resource. The latter, as mentioned, determines the blending mode. This time it has to be set to Mask in order to make the alpha masking work. The DrawTechnique should be assigned to the Material resource. The material is used to control the custom uniform parameters. The following values yield correct results:

MainColor: Anything but black or white, for testing purposes.
mainTex: A texture with an alpha mask. For example, this rounded block shape.
scale: 1.0.
repeat: 256.

Populating the scene

Create a SpriteRenderer in the scene, and assign the new material to it. Because we used world coordinates in the fragment shader, the texture stays fixed relative to the world, and the alpha mask functions as a “window” to it. The effect can be perceived by repositioning the sprite in the game world. Duplicate the sprite GameObject several times and move them around. When they intersect, the texture should be perfectly continuous. You may notice that the texture behaves incorrectly while moving the camera in the Scene Editor view. The reason behind this is that in that view mode, the camera is different from the one the shader calculates against. For inspecting the final look, use the Game View.

Summary

This technique can be used to quickly build continuous-looking terrains using a small number of alpha masks in your top-down or sidescroller game projects. Of course, the fragment shader could be extended with additional logic and textures. Experimenting with them often yields usable results. I hope you enjoyed this post. In case you have any questions, feel free to post them below, or on the Duality forums.

About the author

Lőrinc Serfőző is a software engineer at Graphisoft, the company behind the BIM solution ArchiCAD. He is studying mechatronics engineering at the Budapest University of Technology and Economics. It’s an interdisciplinary field between the more traditional mechanical engineering, electrical engineering, and informatics, and Lőrinc has quickly grown a passion toward software development. He is a supporter of open source software and contributes to the C# and OpenGL-based Duality game engine, creating free plugins and tools for users.
Creating Hello World in Xamarin.Forms

Packt
06 Jan 2017
16 min read
Since the beginning of Xamarin's life as a company, their motto has always been to present the native APIs on iOS and Android idiomatically to C#. This was a great strategy in the beginning, because applications built with Xamarin.iOS or Xamarin.Android were pretty much indistinguishable from native Objective-C or Java applications. Code sharing was generally limited to non-UI code, which left a potential gap to fill in the Xamarin ecosystem: a cross-platform UI abstraction. Xamarin.Forms is the solution to this problem, a cross-platform UI framework that renders native controls on each platform. Xamarin.Forms is a great framework for those who know C# (and XAML), but may not want to get into the full details of using the native iOS and Android APIs. In this chapter, we will do the following:

Create Hello World in Xamarin.Forms
Discuss the Xamarin.Forms architecture
Use XAML with Xamarin.Forms
Cover data binding and MVVM with Xamarin.Forms

Creating Hello World in Xamarin.Forms

To understand how a Xamarin.Forms application is put together, let's begin by creating a simple Hello World application. Open Xamarin Studio and perform the following steps:

Create a new Multiplatform | App | Forms App project from the new solution dialog.
Name your solution something appropriate, such as HelloForms.
Make sure Use Portable Class Library is selected.
Click Next, then click Create.

Notice the three new projects that were successfully created:

HelloForms
HelloForms.Android
HelloForms.iOS

In Xamarin.Forms applications, the bulk of your code will be shared, and each platform-specific project is just a small amount of code that starts up the Xamarin.Forms framework. Let's examine the minimum parts of a Xamarin.Forms application (a short sketch of these startup pieces follows at the end of this section):

App.xaml and App.xaml.cs in the HelloForms PCL library -- this class is the main starting point of the Xamarin.Forms application. A simple property, MainPage, is set to the first page in the application. In the default project template, HelloFormsPage is created with a single label that will be rendered as a UILabel on iOS and a TextView on Android.

MainActivity.cs in the HelloForms.Android Android project -- the main launcher activity of the Android application. The important part for Xamarin.Forms here is the call to Forms.Init(this, bundle), which initializes the Android-specific portion of the Xamarin.Forms framework. Next is a call to LoadApplication(new App()), which starts our Xamarin.Forms application.

AppDelegate.cs in the HelloForms.iOS iOS project -- very similar to Android, except iOS applications start up using a UIApplicationDelegate class. Forms.Init() will initialize the iOS-specific parts of Xamarin.Forms, and just as with Android's LoadApplication(new App()), will start the Xamarin.Forms application.

Go ahead and run the iOS project; you should see something similar to the following screenshot: If you run the Android project, you will get a UI very similar to the iOS one shown in the following screenshot, but using native Android controls: Even though it's not covered in this book, Xamarin.Forms also supports Windows Phone, WinRT, and UWP applications. However, a PC running Windows and Visual Studio is required to develop for Windows platforms. If you can get a Xamarin.Forms application working on iOS and Android, then getting a Windows Phone version working should be a piece of cake.
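For reference, here is a minimal sketch of what the template-generated startup code for these three files roughly looks like. This is an approximation for illustration only; the exact base class names, attributes, and namespaces generated by your version of the Xamarin.Forms template may differ slightly.

// App.xaml.cs in the HelloForms PCL -- sets the first page of the Xamarin.Forms application.
using Xamarin.Forms;

public partial class App : Application
{
    public App()
    {
        InitializeComponent();
        MainPage = new HelloFormsPage();
    }
}

// MainActivity.cs in HelloForms.Android -- initializes Xamarin.Forms and starts the shared App.
// An [Activity(..., MainLauncher = true)] attribute marks this as the launcher activity.
public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsApplicationActivity
{
    protected override void OnCreate(Android.OS.Bundle bundle)
    {
        base.OnCreate(bundle);
        global::Xamarin.Forms.Forms.Init(this, bundle);
        LoadApplication(new App());
    }
}

// AppDelegate.cs in HelloForms.iOS -- the same idea, hooked into the UIApplicationDelegate lifecycle.
public partial class AppDelegate : global::Xamarin.Forms.Platform.iOS.FormsApplicationDelegate
{
    public override bool FinishedLaunching(UIKit.UIApplication app, Foundation.NSDictionary options)
    {
        global::Xamarin.Forms.Forms.Init();
        LoadApplication(new App());
        return base.FinishedLaunching(app, options);
    }
}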
Understanding the architecture behind Xamarin.Forms Getting started with Xamarin.Forms is very easy, but it is always good to look behind the scenes to understand how everything is put together. In the earlier chapters of this book, we created a cross-platform application using native iOS and Android APIs directly. Certain applications are much more suited for this development approach, so understanding the difference between a Xamarin.Forms application and a classic Xamarin application is important when choosing what framework is best suited for your app. Xamarin.Forms is an abstraction over the native iOS and Android APIs that you can call directly from C#. So, Xamarin.Forms is using the same APIs you would in a classic Xamarin application, while providing a framework that allows you to define your UIs in a cross-platform way. An abstraction layer such as this is in many ways a very good thing, because it gives you the benefit of sharing the code driving your UI as well as any backend C# code that could also have been shared in a standard Xamarin app. The main disadvantage, however, is a slight hit in performance that might make it more difficult to create a perfect, buttery-smooth experience. Xamarin.Forms gives the option of writing renderers and effects that allow you to override your UI in a platform-specific way. This gives you the ability to drop down to native controls where needed. Have a look at the differences between a Xamarin.Forms application and a traditional Xamarin app in the following diagram: In both applications, the business logic and backend code of the application can be shared, but Xamarin.Forms gives an enormous benefit by allowing your UI code to be shared as well. Additionally, Xamarin.Forms applications have two projecttemplates to choose from, so let's cover each option: Xamarin.Forms Shared: Creates a shared project with all of your Xamarin.Forms code, an iOS project, and an Android project Xamarin.Forms Portable: Creates a Portable Class Library (PCL) containing all shared Xamarin.Forms code, an iOS project, and an Android project Both options will work well for any application, in general. Shared projects are basically a collection of code files that get added automatically by another project referencing it. Using a shared project allows you to use preprocessor statements to implement platform-specific code. PCL projects, on the other hand, create a portable .NET assembly that can be used on iOS, Android, and various other platforms. PCLs can't use preprocessor statements, so you generally set up platform-specific code with interface or abstract/base classes. In most cases, I think a PCL is a better option, since it inherently encourages better programming practices. See Chapter 3, Code Sharing between iOS and Android, for details on the advantages and disadvantages of these two code-sharing techniques. Using XAML in Xamarin.Forms In addition to defining Xamarin.Forms controls from C# code, Xamarin has provided the tooling for developing your UI in Extensible Application Markup Language (XAML). XAML is a declarative language that is basically a set of XML elements that map to a certain control in the Xamarin.Forms framework. Using XAML is comparable to using HTML to define the UI on a webpage, with the exception that XAML in Xamarin.Forms is creating C# objects that represent a native UI. To understand how XAML works in Xamarin.Forms, let's create a new page with different types of Xamarin.Forms controls on it. 
Return to your HelloForms project from earlier, and open the HelloFormsPage.xaml file. Add the following XAML code between the <ContentPage> tags: <StackLayout Orientation="Vertical" Padding="10,20,10,10"> <Label Text="My Label" XAlign="Center" /> <Button Text="My Button" /> <Entry Text="My Entry" /> <Image Source="https://www.xamarin.com/content/images/ pages/branding/assets/xamagon.png" /> <Switch IsToggled="true" /> <Stepper Value="10" /> </StackLayout> Go ahead and run the application on iOS; your application will look something like the following screenshot: On Android, the application looks identical to iOS, except it is using native Android controls instead of the iOS counterparts: In our XAML, we created a StackLayout control, which is a container for other controls. It can lay out controls either vertically or horizontally one by one, as defined by the Orientation value. We also applied a padding of 10 around the sides and bottom, and 20 from the top to adjust for the iOS status bar. You may be familiar with this syntax for defining rectangles if you are familiar with WPF or Silverlight. Xamarin.Forms uses the same syntax of left, top, right, and bottom values, delimited by commas. We also usedseveral of the built-in Xamarin.Forms controls to see how they work: Label: We used this earlier in the chapter. Used only for displaying text, this maps to a UILabel on iOS and a TextView on Android. Button: A general purpose button that can be tapped by a user. This control maps to a UIButton on iOS and a Button on Android. Entry: This control is a single-line text entry. It maps to a UITextField on iOS and an EditText on Android. Image: This is a simple control for displaying an image on the screen, which maps to a UIImage on iOS and an ImageView on Android. We used the Source property of this control, which loads an image from a web address. Using URLs on this property is nice, but it is best for performance to include the image in your project where possible. Switch: This is an on/off switch or toggle button. It maps to a UISwitch on iOS and a Switch on Android. Stepper: This is a general-purpose input for entering numbers using two plus and minus buttons. On iOS, this maps to a UIStepper, while on Android, Xamarin.Forms implements this functionality with two buttons. These are just some of the controls provided by Xamarin.Forms. There are also more complicated controls, such as the ListView and TableView, which you would expect for delivering mobile UIs. Even though we used XAML in this example, you could also implement this Xamarin.Forms page from C#. Here is an example of what that would look like: public class UIDemoPageFromCode : ContentPage { public UIDemoPageFromCode() { var layout = new StackLayout { Orientation = StackOrientation.Vertical, Padding = new Thickness(10, 20, 10, 10), }; layout.Children.Add(new Label { Text = "My Label", XAlign = TextAlignment.Center, }); layout.Children.Add(new Button { Text = "My Button", }); layout.Children.Add(new Image { Source = "https://www.xamarin.com/content/images/pages/ branding/assets/xamagon.png", }); layout.Children.Add(new Switch { IsToggled = true, }); layout.Children.Add(new Stepper { Value = 10, }); Content = layout; } } So, you can see where using XAML can be a bit more readable, and is generally a bit better at declaring UIs than C#. However, using C# to define your UIs is still a viable, straightforward approach. 
Using data-binding and MVVM At this point, you should begrasping the basics of Xamarin.Forms, but are wondering how theMVVM design pattern fits into the picture. The MVVM design pattern was originally conceived for use along with XAML and the powerful data binding features XAML provides, so it is only natural that it is a perfect design pattern to be used with Xamarin.Forms. Let's cover the basics of how data-binding and MVVM is set up with Xamarin.Forms: Your Model and ViewModel layers will remain mostly unchanged from the MVVM pattern we covered earlier in the book. Your ViewModels should implement the INotifyPropertyChanged interface, which facilitates data binding. To simplify things in Xamarin.Forms, you can use the BindableObject base class and call OnPropertyChanged when values change on your ViewModels. Any Page or control in Xamarin.Forms has a BindingContext, which is the object that it is data-bound to. In general, you can set a corresponding ViewModel to each view's BindingContext property. In XAML, you can set up a data-binding by using syntax of the form Text="{Binding Name}". This example would bind the Text property of the control to a Name property of the object residing in the BindingContext. In conjunction with data binding, events can be translated to commands using the ICommand interface. So, for example, the click event of a Button can be data-bound to a command exposed by a ViewModel. There is a built-in Command class in Xamarin.Forms to support this. Data binding can also be set up with C# code in Xamarin.Forms using the Binding class. However, it is generally much easier to set up bindings with XAML, since the syntax has been simplified with XAML markup extensions. Now that we have covered the basics, let's go through step-by-step and partially convert our XamSnap sample application from earlier in the book to use Xamarin.Forms. For the most part, we can reuse most of the Model and ViewModel layers, although we will have to make a few minor changes to support data-binding with XAML. Let's begin by creating a new Xamarin.Forms application backed by a PCL, named XamSnap: First, create three folders in the XamSnap project named Views, ViewModels, and Models. Add the appropriate ViewModels and Models classes from the XamSnap application from earlier chapters; these are found in the XamSnap project. Build the project, just to make sure everything is saved. You will get a few compiler errors, which we will resolve shortly. The first class we will need to edit is the BaseViewModel class; open it and make the following changes: public class BaseViewModel : BindableObject { protected readonly IWebService service = DependencyService.Get<IWebService>(); protected readonly ISettings settings = DependencyService.Get<ISettings>(); bool isBusy = false; public bool IsBusy { get { return isBusy; } set { isBusy = value; OnPropertyChanged(); } } } First of all, we removed the calls to the ServiceContainer class, because Xamarin.Forms provides its own IoC container called the DependencyService. It functions very similarly to the container we built in the previous chapters, except it only has one method, Get<T>, and registrations are set up via an assembly attribute that we will set up shortly. Additionally, we removed the IsBusyChanged event in favor of the INotifyPropertyChanged interface that supports data binding. Inheriting from BindableObject gave us the helper method, OnPropertyChanged, which we use to inform bindings in Xamarin.Forms that the value has changed. 
Notice we didn't pass a string containing the property name to OnPropertyChanged. This method is using a lesser-known feature of .NET 4.0 called CallerMemberName, which will automatically fill in the calling property's name at runtime. Next, let's set up the services we need with the DependencyService. Open App.xaml.cs in the root of the PCL project and add the following two lines above the namespace declaration: [assembly: Dependency(typeof(XamSnap.FakeWebService))] [assembly: Dependency(typeof(XamSnap.FakeSettings))] The DependencyService will automatically pick up these attributes and inspect the types we declared. Any interfaces these types implement will be returned for any future callers of DependencyService.Get<T>. I normally put all Dependency declarations in the App.cs file, just so they are easy to manage and in one place. Next, let's modify LoginViewModel by adding a new property: public Command LoginCommand { get; set; } We'll use this shortly for data-binding the command of a Button. One last change in the view model layer is to set up INotifyPropertyChanged for MessageViewModel: Conversation[] conversations; public Conversation[] Conversations { get { return conversations; } set { conversations = value; OnPropertyChanged(); } } Likewise, you could repeat this pattern for the remaining public properties throughout the view model layer, but this is all we will need for this example. Next, let's create a new Forms ContentPage Xaml file named LoginPage in the Views folder. In the code-behind file, LoginPage.xaml.cs, we'll just need to make a few changes: public partial class LoginPage : ContentPage { readonly LoginViewModel loginViewModel = new LoginViewModel(); public LoginPage() { Title = "XamSnap"; BindingContext = loginViewModel; loginViewModel.LoginCommand = new Command(async () => { try { await loginViewModel.Login(); await Navigation.PushAsync(new ConversationsPage()); } catch (Exception exc) { await DisplayAlert("Oops!", exc.Message, "Ok"); } }); InitializeComponent(); } } We did a few important things here, including setting the BindingContext to our LoginViewModel. We set up the LoginCommand, which basically invokes the Login method and displays a message if something goes wrong. It also navigates to a new page if successful. We also set the Title, which will show up in the top navigation bar of the application. Next, open LoginPage.xaml and we'll add the following XAML code inside ContentPage: <StackLayout Orientation="Vertical" Padding="10,10,10,10"> <Entry Placeholder="Username" Text="{Binding UserName}" /> <Entry Placeholder="Password" Text="{Binding Password}" IsPassword="true" /> <Button Text="Login" Command="{Binding LoginCommand}" /> <ActivityIndicator IsVisible="{Binding IsBusy}" IsRunning="true" /> </StackLayout> This will set up the basics of two text fields, a button, and a spinner, complete with all the bindings to make everything work. Since we set up BindingContext from the LoginPage code-behind file, all the properties are bound to LoginViewModel. 
Next, create ConversationsPage as a XAML page just like before, and edit the ConversationsPage.xaml.cs code-behind file: public partial class ConversationsPage : ContentPage { readonly MessageViewModel messageViewModel = new MessageViewModel(); public ConversationsPage() { Title = "Conversations"; BindingContext = messageViewModel; InitializeComponent(); } protected async override void OnAppearing() { try { await messageViewModel.GetConversations(); } catch (Exception exc) { await DisplayAlert("Oops!", exc.Message, "Ok"); } } } In this case, we repeated a lot of the same steps. The exception is that we used the OnAppearing method as a way to load the conversations to display on the screen. Now let's add the following XAML code to ConversationsPage.xaml: <ListView ItemsSource="{Binding Conversations}"> <ListView.ItemTemplate> <DataTemplate> <TextCell Text="{Binding UserName}" /> </DataTemplate> </ListView.ItemTemplate> </ListView> In this example, we used ListView to data-bind a list of items and display on the screen. We defined a DataTemplate class, which represents a set of cells for each item in the list that the ItemsSource is data-bound to. In our case, a TextCell displaying the Username is created for each item in the Conversations list. Last but not least, we must return to the App.xaml.cs file and modify the startup page: MainPage = new NavigationPage(new LoginPage()); We used a NavigationPage here so that Xamarin.Forms can push and pop between different pages. This uses a UINavigationController on iOS, so you can see how the native APIs are being used on each platform. At this point, if youcompile and run the application, you will get afunctional iOS and Android application that can log in and view a list of conversations: Summary Xamarin.Forms In this chapter, we covered the basics of Xamarin.Forms and how it can be very useful for building your own cross-platform applications. Xamarin.Forms shines for certain types of apps, but can be limiting if you need to write more complicated UIs or take advantage of native drawing APIs. We discovered how to use XAML for declaring our Xamarin.Forms UIs and understood how Xamarin.Forms controls are rendered on each platform. We also dived into the concepts of data-binding and how to use the MVVM design pattern with Xamarin.Forms. Last but not least, we began porting the XamSnap application from earlier in the book to Xamarin.Forms, and were able to reuse a lot of our existing code. In the next chapter, we will cover the process of submitting applications to the iOS App Store and Google Play. Getting your app into the store can be a time-consuming process, but guidance from the next chapter will give you a head start.
Data Types – Foundational Structures

Packt
05 Jan 2017
17 min read
This article by William Smith, author of the book Everyday Data Structures, reviews the most common and most important fundamental data types from the 10,000-foot view. Calling data types foundational structures may seem like a bit of a misnomer, but not when you consider that developers use data types to build their classes and collections. So, before we dive into examining proper data structures, it's a good idea to quickly review data types, as these are the foundation of what comes next. In this article, we will briefly explain the following topics:

Numeric data types
Casting, Narrowing, and Widening
32-bit and 64-bit architecture concerns
Boolean data types
Logic operations
Order of operations
Nesting operations
Short-circuiting
String data types
Mutability of strings

(For more resources related to this topic, see here.)

Numeric data types

A detailed description of all the numeric data types in each of these four languages, namely C#, Java, Objective-C, and Swift, could easily encompass a book of its own. The simplest way to evaluate these types is based on the underlying size of the data, using examples from each language as a framework for the discussion. When you are developing applications for multiple mobile platforms, you should be aware that the languages you use could share a data type identifier or keyword, but under the hood, those identifiers may not be equal in value. Likewise, the same data type in one language may have a different identifier in another. For example, examine the case of the 16 bit unsigned integer, sometimes referred to as an unsigned short. Well, it's called an unsigned short in Objective-C. In C#, we are talking about a ushort, while Swift calls it a UInt16. Java, on the other hand, uses a char for this data type. Each of these data types represents a 16 bit unsigned integer; they just use different names. This may seem like a small point, but if you are developing apps for multiple devices using each platform's native language, for the sake of consistency, you will need to be aware of these differences. Otherwise, you may risk introducing platform-specific bugs that are extremely difficult to detect and diagnose.

Integer types

The integer data types are defined as representing whole numbers and can be either signed (negative, zero, or positive values) or unsigned (zero or positive values). Each language uses its own identifiers and keywords for the integer types, so it is easiest to think in terms of memory length. For our purpose, we will only discuss the integer types representing 8, 16, 32, and 64 bit memory objects.

8 bit data types, or bytes as they are more commonly referred to, are the smallest data types that we will examine. If you have brushed up on your binary math, you will know that an 8 bit memory block can represent 2^8, or 256 values. Signed bytes can range in value from -128 to 127, or -(2^7) to (2^7) - 1. Unsigned bytes can range in value from 0 to 255, or 0 to (2^8) - 1.

A 16 bit data type is often referred to as a short, although that is not always the case. These types can represent 2^16, or 65,536 values. Signed shorts can range in value from -32,768 to 32,767, or -(2^15) to (2^15) - 1. Unsigned shorts can range in value from 0 to 65,535, or 0 to (2^16) - 1.

A 32 bit data type is most commonly identified as an int, although it is sometimes identified as a long. Integer types can represent 2^32, or 4,294,967,296 values. Signed ints can range in value from -2,147,483,648 to 2,147,483,647, or -(2^31) to (2^31) - 1.
Unsigned ints can range in value from 0 to 4,294,967,295, or 0 to (2^32) - 1.

Finally, a 64 bit data type is most commonly identified as a long, although Objective-C identifies it as a long long. Long types can represent 2^64, or 18,446,744,073,709,551,616 values. Signed longs can range in value from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, or -(2^63) to (2^63) - 1. Unsigned longs can range in value from 0 to 18,446,744,073,709,551,615, or 0 to (2^64) - 1.

Note that these values happen to be consistent across the four languages we will work with, but some languages will introduce slight variations. It is always a good idea to become familiar with the details of a language's numeric identifiers. This is especially true if you expect to be working with cases that involve the identifier's extreme values.

Single precision float

Single precision floating point numbers, or floats as they are more commonly referred to, are 32 bit floating point containers that allow for storing values with much greater precision than the integer types, typically 6 or 7 significant digits. Many languages use the float keyword or identifier for single precision float values, and that is the case for each of the four languages we are discussing. You should be aware that floating point values are subject to rounding errors because they cannot represent base-10 numbers exactly. The arithmetic of floating point types is a fairly complex topic, the details of which will not be pertinent to the majority of developers on any given day. However, it is still a good practice to familiarize yourself with the particulars of the underlying science as well as the implementation in each language.

Double precision float

Double precision floating point numbers, or doubles as they are more commonly referred to, are 64 bit floating point values that allow for storing values with much greater precision than the integer types, typically to 15 significant digits. Many languages use the double identifier for double precision float values, and that is also the case for each of the four languages: C#, Objective-C, Java, and Swift. In most circumstances, it will not matter whether you choose float over double, unless memory space is a concern, in which case you will want to choose float whenever possible. Many argue that float is more performant than double under most conditions, and generally speaking, this is the case. However, there are other conditions where double will be more performant than float. The reality is that the efficiency of each type is going to vary from case to case, based on a number of criteria that are too numerous to detail in the context of this discussion. Therefore, if your particular application requires truly peak efficiency, you should research the requirements and environmental factors carefully and decide what is best for your situation. Otherwise, just use whichever container will get the job done and move on.

Currency

Due to the inherent inaccuracy found in floating point arithmetic, grounded in the fact that it is based on binary arithmetic, floats and doubles cannot accurately represent the base-10 multiples we use for currency. Representing currency as a float or double may seem like a good idea at first, as the software will round off the tiny errors in your arithmetic. However, as you begin to perform more and more complex arithmetic operations on these inexact results, your precision errors will begin to add up and result in serious inaccuracies and bugs that can be very difficult to track down. This makes float and double data types insufficient for working with currency, where perfect accuracy for multiples of 10 is essential. The short sketch that follows shows this drift in practice.
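Here is a small, self-contained C# sketch of the drift described above; any of the four languages would behave the same way for the double arithmetic. The exact digits printed can vary by runtime, but the comparison results will not. The use of C#'s base-10 decimal type at the end is our own illustrative addition, one common way to sidestep the issue, and is not something prescribed by this article.

// Adding ten cents one thousand times should give exactly 100.00,
// but a binary double cannot store 0.10 exactly, so tiny errors accumulate.
using System;

class CurrencyDrift
{
    static void Main()
    {
        double total = 0.0;
        for (int i = 0; i < 1000; i++)
        {
            total += 0.10;
        }
        Console.WriteLine(total == 100.0);  // False
        Console.WriteLine(total);           // Something close to, but not exactly, 100

        // C#'s decimal type performs base-10 arithmetic, so the same loop is exact.
        decimal exact = 0.0m;
        for (int i = 0; i < 1000; i++)
        {
            exact += 0.10m;
        }
        Console.WriteLine(exact == 100.0m); // True
    }
}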
Typecasting

In the realm of computer science, type conversion or typecasting means to convert an instance of one object or data type into another. This can be done through either implicit conversion, sometimes called coercion, or explicit conversion, otherwise known as casting. To fully appreciate casting, we also need to understand the difference between static and dynamic languages.

Statically versus dynamically typed languages

A statically typed language will perform its type checking at compile time. This means that when you try to build your solution, the compiler will verify and enforce each of the constraints that apply to the types in your application. If they are not enforced, you will receive an error and the application will not build. C#, Java, and Swift are all statically typed languages.

Dynamically typed languages, on the other hand, do most or all of their type checking at run time. This means that the application could build just fine, but experience a problem while it is actually running if the developer wasn't careful in how he wrote the code. Objective-C is a dynamically typed language because it uses a mixture of statically typed objects and dynamically typed objects. The Objective-C classes NSNumber and NSDecimalNumber are both examples of dynamically typed objects. Consider the following code example in Objective-C:

double myDouble = @"chicken";
NSNumber *myNumber = @"salad";

The compiler will throw an error on the first line, stating Initializing 'double' with an expression of incompatible type 'NSString *'. That's because double is a plain C object, and it is statically typed. The compiler knows what to do with this statically typed object before we even get to the build, so your build will fail. However, the compiler will only throw a warning on the second line, stating Incompatible pointer types initializing 'NSNumber *' with an expression of type 'NSString *'. That's because NSNumber is an Objective-C class, and it is dynamically typed. The compiler is smart enough to catch your mistake, but it will allow the build to succeed (unless you have instructed the compiler to treat warnings as errors in your build settings).

Although the forthcoming crash at runtime is obvious in the previous example, there are cases where your app will function perfectly fine despite the warnings. However, no matter what type of language you are working with, it is always a good idea to consistently clean up your code warnings before moving on to new code. This helps keep your code clean and avoids any bugs that can be difficult to diagnose. On those rare occasions where it is not prudent to address the warning immediately, you should clearly document your code and explain the source of the warning so that other developers will understand your reasoning. As a last resort, you can take advantage of macros or pre-processor (pre-compiler) directives that can suppress warnings on a line-by-line basis.

Implicit and explicit casting

Implicit casting does not require any special syntax in your source code. This makes implicit casting somewhat convenient. However, since implicit casts do not define their types manually, the compiler cannot always determine which constraints apply to the conversion and therefore will not be able to check these constraints until runtime. This makes the implicit cast also somewhat dangerous.
Consider the following code example in C#:

double x = "42";

This is an implicit conversion because you have not told the compiler how to treat the string value. In this case, the conversion will fail when you try to build the application, and the compiler will throw an error for this line, stating Cannot implicitly convert type 'string' to 'double'. Now, consider the explicitly cast version of this example:

double x = double.Parse("42");
Console.WriteLine("40 + 2 = {0}", x);
/* Output
40 + 2 = 42 */

This conversion is explicit and therefore type safe, assuming that the string value is parsable.

Widening and narrowing

When casting between two types, an important consideration is whether the result of the change is within the range of the target data type. If your source data type supports more bytes than your target data type, the cast is considered to be a narrowing conversion. Narrowing conversions are either casts that cannot be proven to always succeed or casts that are known to possibly lose information. For example, casting from a float to an integer will result in loss of information (precision in this case), as the result will be rounded off to the nearest whole number. In most statically typed languages, narrowing casts cannot be performed implicitly. Here is an example, borrowing from the C# single precision example:

//C#
piFloat = piDouble;

In this example, the compiler will throw an error, stating Cannot implicitly convert type 'double' to 'float'. An explicit conversion exists (are you missing a cast?). The compiler sees this as a narrowing conversion and treats the loss of precision as an error. The error message itself is helpful and suggests an explicit cast as a potential solution for our problem:

//C#
piFloat = (float)piDouble;

We have now explicitly cast the double value piDouble to a float, and the compiler no longer concerns itself with loss of precision.

If your source data type supports fewer bytes than your target data type, the cast is considered to be a widening conversion. Widening conversions will preserve the source object's value, but may change its representation in some way. Most statically typed languages will permit implicit widening casts. Let's borrow again from our previous C# example:

//C#
piDouble = piFloat;

In this example, the compiler is completely satisfied with the implicit conversion and the app will build. Let's expand the example further:

//C#
piDouble = (double)piFloat;

This explicit cast improves readability, but does not change the nature of the statement in any way. The compiler also finds this format to be completely acceptable, even if it is somewhat more verbose. Beyond improved readability, explicit casting when widening adds nothing to your application. Therefore, whether you use explicit casting when widening is a matter of personal preference.

Boolean data type

Boolean data types are intended to symbolize binary values, usually denoted by 1 and 0, true and false, or even YES and NO. Boolean types are used to represent truth logic, which is based on Boolean algebra. This is just a way of saying that Boolean values are used in conditional statements, such as if or while, to evaluate logic or repeat an execution conditionally.

Equality operations include any operations that compare the value of any two entities. The equality operators are:

== implies equal to
!= implies not equal to

Relational operations include any operations that test a relation between two entities.
The relational operators are:

> implies greater than
>= implies greater than or equal to
< implies less than
<= implies less than or equal to

Logic operations include any operations in your program that evaluate and manipulate Boolean values. There are three primary logic operators, namely AND, OR, and NOT. Another, slightly less commonly used operator is the exclusive or, or XOR operator. All Boolean functions and statements can be built with these four basic operators.

The AND operator is the most exclusive comparator. Given two Boolean variables A and B, AND will return true if and only if both A and B are true. Boolean variables are often visualized using tools called truth tables. Consider the following truth table for the AND operator:

A | B | A ^ B
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

This table demonstrates the AND operator. When evaluating a conditional statement, 0 is considered to be false, while any other value is considered to be true. Only when the value of both A and B is true is the resulting comparison of A ^ B also true.

The OR operator is the inclusive operator. Given two Boolean variables A and B, OR will return true if either A or B is true, including the case when both A and B are true. Consider the following truth table for the OR operator:

A | B | A v B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

Next, the NOT A operator is true when A is false, and false when A is true. Consider the following truth table for the NOT operator:

A | !A
0 | 1
1 | 0

Finally, the XOR operator is true when either A or B is true, but not both. Another way to say it is that XOR is true when A and B are different. There are many occasions where it is useful to evaluate an expression in this manner, so most computer architectures include it. Consider the following truth table for XOR:

A | B | A xor B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

Operator precedence

Just as with arithmetic, comparison and Boolean operations have operator precedence. This means the architecture will give a higher precedence to one operator over another. Generally speaking, the Boolean order of operations for all languages is as follows:

Parentheses
Relational operators
Equality operators
Bitwise operators (not discussed)
NOT
AND
OR
XOR
Ternary operator
Assignment operators

It is extremely important to understand operator precedence when working with Boolean values, because mistaking how the architecture will evaluate complex logical operations will introduce bugs in your code that you will not understand how to sort out. When in doubt, remember that, as in arithmetic, parentheses take the highest precedence and anything defined within them will be evaluated first.

Short-circuiting

As you recall, AND only returns true when both of the operands are true, and OR returns true as soon as one operand is true. These characteristics sometimes make it possible to determine the outcome of an expression by evaluating only one of the operands. When your application stops evaluation immediately upon determining the overall outcome of an expression, it is called short-circuiting. There are three main reasons why you would want to use short-circuiting in your code. First, short-circuiting can improve your application's performance by limiting the number of operations your code must perform. Second, when later operands could potentially generate errors based on the value of a previous operand, short-circuiting can halt execution before the higher-risk operand is reached. Finally, short-circuiting can improve the readability of your code and reduce its complexity by eliminating the need for nested logical statements. A short C# sketch of this behavior follows.
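This minimal sketch illustrates both benefits; the null-array guard and the IsPositive helper are our own illustrative examples, not code taken from this article.

// && stops as soon as the left operand is false; || stops as soon as the left operand is true.
using System;

class ShortCircuitDemo
{
    static bool IsPositive(int value)
    {
        Console.WriteLine("IsPositive was called");
        return value > 0;
    }

    static void Main()
    {
        int[] numbers = null;

        // Without short-circuiting, numbers.Length would throw a NullReferenceException.
        // Because numbers != null is false, the right operand is never evaluated.
        if (numbers != null && numbers.Length > 0)
        {
            Console.WriteLine("We have numbers");
        }

        // 10 > 5 is true, so || never calls IsPositive and nothing extra is printed.
        bool result = (10 > 5) || IsPositive(-3);
        Console.WriteLine(result);  // True
    }
}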
Strings

String data types are simply objects whose value is text. Under the hood, strings contain a sequential collection of read-only char objects. This read-only nature of a string object makes strings immutable, which means the objects cannot be changed once they have been created in memory. It is important to understand that changing any immutable object, not just a string, means your program is actually creating a new object in memory and discarding the old one. This is a more intensive operation than simply changing the value at an address in memory and requires more processing. Merging two strings together is called concatenation, and this is an even more costly procedure, as you are disposing of two objects before creating a new one. If you find that you are editing your string values frequently, or frequently concatenating strings together, be aware that your program is not as efficient as it could be.

Strings are strictly immutable in C#, Java, and Objective-C. It is interesting to note that the Swift documentation refers to strings as mutable. However, the behavior is similar to Java, in that, when a string is modified, it gets copied on assignment to another object. Therefore, although the documentation says otherwise, strings are effectively immutable in Swift as well.

Summary

In this article, you learned about the basic data types available to a programmer in each of the four most common mobile development languages. Numeric and floating point data type characteristics and operations are as much dependent on the underlying architecture as on the specifications of the language. You also learned about casting objects from one type to another, and how the type of cast is defined as either a widening cast or a narrowing cast depending on the size of the source and target data types in the conversion. Next, we discussed Boolean types and how they are used in comparators to affect program flow and execution. In this context, we discussed operator precedence and nested operations. You also learned how to use short-circuiting to improve your code's performance. Finally, we examined the String data type and what it means to work with immutable objects.

Resources for Article: Further resources on this subject: Why Bother? – Basic [article] Introducing Algorithm Design Paradigms [article] Algorithm Analysis [article]
Adding Life to your Chatbot

Ben James
05 Jan 2017
5 min read
In the previous post we looked at getting your new bot off the ground with the SuperScript package. Today, we'll take this a step further and write your own personal assistant to find music videos, complete with its own voice using IVONA's text-to-speech platform. Giving your bot a voice To get started with IVONA, visit here and go to Speech Cloud > Sign Up. After a quick sign-up process, you'll be pointed to your dashboard, where you'll be able to get the API key needed to use their services. Go ahead and do so, and ensure you download the key file, as we'll need it later. We'll need to add a couple more packages to your SuperScript bot to integrate IVONA into it, so run the following: npm install --save ivona-node npm install --save play-sound ivona-node is a library for easily interfacing with the IVONA API without having to set things like custom headers yourself, while play-sound will let you play sound directly from your terminal, so you can hear what your bot says without having to locate the mp3 file and play it yourself! Now we need to write some code to get these two things working together. Open up src/server.js in your SuperScript directory, and at the top, add: import Ivona from 'ivona-node'; import Player from 'play-sound'; import fs from 'fs'; We'll need fs to be able to write the voice files to our system. Now, find your IVONA access and secret keys, and set up a new IVONA instance by adding the following: const ivona = new Ivona({ accessKey: 'YOUR_ACCESS_KEY', secretKey: 'YOUR_SECRET_KEY', }); We also need to create an instance of the player: const player = Player(); Great! We can double-check that we can access the IVONA servers by asking for a full list of voices that IVONA provides. ivona.listVoices() .on('complete', (voices) => { console.log(voices); }); These are available to sample on the IVONA home page, so if you haven't already, go and check it out. And find one you like! Now it's time for the magic to happen. Inside the bot.reply callback, we need to ask IVONA to turn our bot response into a speech before outputting it in our terminal. We can do that in just a few lines: bot.reply(..., (err, reply) => { // ... Other code to output text to the terminal // const stream = fs.createWriteStream('text.mp3'); ivona.createVoice(reply.string, { body: { voice: { name: 'Justin', language: 'en-US', gender: 'Male', }, }, }).pipe(stream); stream.on('finish', () => { player.play('text.mp3', (err) => { if (err) { console.error(err); } }); }); }); Run your bot again by running npm run start, and watch the magic unfurl as your bot speaks to you! Getting your bot to do your bidding Now that your bot has a human-like voice, it's time to get it to do something useful for you. After all, you are its master. We're going to write a simple script to find music videos for you. So let's open up chat/main.ss and add an additional trigger: + find a music video for (*) by (*) - Okay, here's your music video for <cap1> by <cap2>. ^findMusicVideo(<cap1>, <cap2>) Here, whenever we ask the bot for a music video, we just go off to our function findMusicVideo that finds a relevant video on YouTube. We'll write that SuperScript plugin now. First, we'll need to install the request library to make HTTP requests to YouTube. npm install --save request You'll also need to get a Google API key to search YouTube and get back some results in JSON form. To do this, you can go to here and follow the instructions to get a new key for the 'YouTube Data API'. 
Then, inside plugins/musicVideo.js, we can write: import request from 'request'; const YOUTUBE_API_BASE = 'https://www.googleapis.com/youtube/v3/search'; const GOOGLE_API_KEY = 'YOUR_KEY_HERE'; const findMusicVideo = function findMusicVideo(song, artist, callback) { request({ url: YOUTUBE_API_BASE, qs: { part: 'snippet', key: GOOGLE_API_KEY, q: `${song} ${artist}`, }, }, (error, response, body) => { if (!error && response.statusCode === 200) { try { const parsedJSON = JSON.parse(body); if (parsedJSON.items[0]) { return callback(`https://youtu.be/${parsedJSON.items[0].id.videoId}`); } } catch (err) { console.error(err); } return callback(''); } return callback(''); }); }; All we're doing here is making a request to the YouTube API for the relevant song and artist. We then take the first one that YouTube found, and stick it in a nice link to give back to the user. Now, parse and run your bot again, and you'll see that not only does your bot talk to you with a voice, but now you can ask it to find a YouTube video for you. About the author Ben is currently the technical director at To Play For, creating games, interactive stories and narratives using artificial intelligence. Follow him at @ToPlayFor.
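One practical note on the snippets above: both the IVONA keys and the Google API key are hard-coded in the source files. If you plan to push your bot to a shared repository, a safer approach is to read them from environment variables instead. The following sketch assumes you export IVONA_ACCESS_KEY, IVONA_SECRET_KEY, and GOOGLE_API_KEY in your shell before running npm run start; the variable names are placeholders of our own choosing, not part of the ivona-node or SuperScript APIs.

// Read the credentials from the environment instead of hard-coding them
const ivona = new Ivona({
  accessKey: process.env.IVONA_ACCESS_KEY,
  secretKey: process.env.IVONA_SECRET_KEY,
});

// Same idea for the YouTube search key in plugins/musicVideo.js
const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;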

Learning Basic PowerCLI Concepts

Packt
05 Jan 2017
7 min read
In this article, by Robert van den Nieuwendijk, author of the book Learning PowerCLI - Second Edition, you will learn some basic PowerShell and PowerCLI concepts. Knowing these concepts will make it easier for you to learn the advanced topics. We will cover the Get-Command, Get-Help, and Get-Member cmdlets in this article. (For more resources related to this topic, see here.) Using the Get-Command, Get-Help, and Get-Member cmdlets There are some PowerShell cmdlets that everyone should know. Knowing these cmdlets will help you discover other cmdlets, their functions, parameters, and returned objects. Using Get-Command The first cmdlet that you should know is Get-Command. This cmdlet returns all the commands that are installed on your computer. The Get-Command cmdlet has the following syntax: Get-Command [[-ArgumentList] <Object[]>] [-All] [-ListImported] [-Module <String[]>] [-Noun <String[]>] [-ParameterName <String[]>] [-ParameterType <PSTypeName[]>] [-Syntax] [-TotalCount <Int32>] [-Verb <String[]>] [<CommonParameters>] Get-Command [[-Name] <String[]>] [[-ArgumentList] <Object[]>] [-All] [-CommandType <CommandTypes>] [-ListImported] [-Module <String[]>] [-ParameterName <String[]>] [-ParameterType <PSTypeName[]>] [-Syntax] [-TotalCount <Int32>] [<CommonParameters>] The first parameter set is named CmdletSet, and the second parameter set is named AllCommandSet. If you type the following command, you will get a list of commands installed on your computer, including cmdlets, aliases, functions, workflows, filters, scripts, and applications: PowerCLI C:> Get-Command You can also specify the name of a specific cmdlet to get information about that cmdlet, as shown in the following command: PowerCLI C:> Get-Command –Name Get-VM This will return the following information about the Get-VM cmdlet: CommandType Name ModuleName ----------- ---- ---------- Cmdlet Get-VM VMware.VimAutomation.Core You see that the command returns the command type and the name of the module that contains the Get-VM cmdlet. CommandType, Name, and ModuleName are the properties that the Get-VM cmdlet returns by default. You will get more properties if you pipe the output to the Format-List cmdlet. The following screenshot will show you the output of the Get-Command –Name Get-VM | Format-List * command: You can use the Get-Command cmdlet to search for cmdlets. For example, if necessary, search for the cmdlets that are used for vSphere hosts. Type the following command: PowerCLI C:> Get-Command -Name *VMHost* If you are searching for the cmdlets to work with networks, use the following command: PowerCLI C:> Get-Command -Name *network* Using Get-VICommand PowerCLI has a Get-VICommand cmdlet that is similar to the Get-Command cmdlet. The Get-VICommand cmdlet is actually a function that creates a filter on the Get-Command output, and it returns only PowerCLI commands. Type the following command to list all the PowerCLI commands: PowerCLI C:> Get-VICommand The Get-VICommand cmdlet has only one parameter –Name. So, you can also type, for example, the following command to get information only about the Get-VM cmdlet: PowerCLI C:> Get-VICommand –Name Get-VM Using Get-Help To discover more information about cmdlets, you can use the Get-Help cmdlet. For example: PowerCLI C:> Get-Help Get-VM This will display the following information about the Get-VM cmdlet: The Get-Help cmdlet has some parameters that you can use to get more information. The –Examples parameter shows examples of the cmdlet. 
The –Detailed parameter adds parameter descriptions and examples to the basic help display. The –Full parameter displays all the information available about the cmdlet. And the –Online parameter retrieves online help information available about the cmdlet and displays it in a web browser. Since PowerShell V3, there is a new Get-Help parameter –ShowWindow. This displays the output of Get-Help in a new window. The Get-Help -ShowWindow command opens the following screenshot: Using Get-PowerCLIHelp The PowerCLI Get-PowerCLIHelp cmdlet opens a separate help window for PowerCLI cmdlets, PowerCLI objects, and articles. This is a very useful tool if you want to browse through the PowerCLI cmdlets or PowerCLI objects. The following screenshot shows the window opened by the Get-PowerCLIHelp cmdlet: Using Get-PowerCLICommunity If you have a question about PowerCLI and you cannot find the answer in this article, use the Get-PowerCLICommunity cmdlet to open the VMware vSphere PowerCLI section of the VMware VMTN Communities. You can log in to the VMware VMTN Communities using the same My VMware account that you used to download PowerCLI. First, search the community for an answer to your question. If you still cannot find the answer, go to the Discussions tab and ask your question by clicking on the Start a Discussion button, as shown later. You might receive an answer to your question in a few minutes. Using Get-Member In PowerCLI, you work with objects. Even a string is an object. An object contains properties and methods, which are called members in PowerShell. To see which members an object contains, you can use the Get-Member cmdlet. To see the members of a string, type the following command: PowerCLI C:> "Learning PowerCLI" | Get-Member Pipe an instance of a PowerCLI object to Get-Member to retrieve the members of that PowerCLI object. For example, to see the members of a virtual machine object, you can use the following command: PowerCLI C:> Get-VM | Get-Member TypeName: VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl Name MemberType Definition ---- ---------- ---------- ConvertToVersion Method T VersionedObjectInterop.Conver... Equals Method bool Equals(System.Object obj) GetConnectionParameters Method VMware.VimAutomation.ViCore.Int... GetHashCode Method int GetHashCode() GetType Method type GetType() IsConvertableTo Method bool VersionedObjectInterop.IsC... LockUpdates Method void ExtensionData.LockUpdates() ObtainExportLease Method VMware.Vim.ManagedObjectReferen... ToString Method string ToString() UnlockUpdates Method void ExtensionData.UnlockUpdates() CDDrives Property VMware.VimAutomation.ViCore.Typ... Client Property VMware.VimAutomation.ViCore.Int... CustomFields Property System.Collections.Generic.IDic... DatastoreIdList Property string[] DatastoreIdList {get;} Description Property string Description {get;} DrsAutomationLevel Property System.Nullable[VMware.VimAutom... ExtensionData Property System.Object ExtensionData {get;} FloppyDrives Property VMware.VimAutomation.ViCore.Typ... Folder Property VMware.VimAutomation.ViCore.Typ... FolderId Property string FolderId {get;} Guest Property VMware.VimAutomation.ViCore.Typ... GuestId Property string GuestId {get;} HAIsolationResponse Property System.Nullable[VMware.VimAutom... HardDisks Property VMware.VimAutomation.ViCore.Typ... HARestartPriority Property System.Nullable[VMware.VimAutom... Host Property VMware.VimAutomation.ViCore.Typ... 
HostId Property string HostId {get;} Id Property string Id {get;} MemoryGB Property decimal MemoryGB {get;} MemoryMB Property decimal MemoryMB {get;} Name Property string Name {get;} NetworkAdapters Property VMware.VimAutomation.ViCore.Typ... Notes Property string Notes {get;} NumCpu Property int NumCpu {get;} PersistentId Property string PersistentId {get;} PowerState Property VMware.VimAutomation.ViCore.Typ... ProvisionedSpaceGB Property decimal ProvisionedSpaceGB {get;} ResourcePool Property VMware.VimAutomation.ViCore.Typ... ResourcePoolId Property string ResourcePoolId {get;} Uid Property string Uid {get;} UsbDevices Property VMware.VimAutomation.ViCore.Typ... UsedSpaceGB Property decimal UsedSpaceGB {get;} VApp Property VMware.VimAutomation.ViCore.Typ... Version Property VMware.VimAutomation.ViCore.Typ... VMHost Property VMware.VimAutomation.ViCore.Typ... VMHostId Property string VMHostId {get;} VMResourceConfiguration Property VMware.VimAutomation.ViCore.Typ... VMSwapfilePolicy Property System.Nullable[VMware.VimAutom... The command returns the full type name of the VirtualMachineImpl object and all its methods and properties. Remember that the properties are objects themselves. You can also use Get-Member to get the members of the properties. For example, the following command line will give you the members of the VMGuestImpl object: PowerCLI C:> $VM = Get-VM –Name vCenter PowerCLI C:> $VM.Guest | Get-Member Summary In this article, you looked at the Get-Help, Get-Command, and Get-Member cmdlets. Resources for Article: Further resources on this subject: Enhancing the Scripting Experience [article] Introduction to vSphere Distributed switches [article] Virtualization [article]
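As a quick recap of the discovery workflow covered in this article, the following sketch chains the three cmdlets together. It assumes you have already connected to a vCenter Server with Connect-VIServer and that a VM named vCenter exists in your inventory; substitute any VM name of your own.

# Find the PowerCLI cmdlets whose noun starts with VM
Get-Command -Module VMware.VimAutomation.Core -Noun VM*

# Read the usage examples for the cmdlet we want
Get-Help Get-VM -Examples

# Inspect the returned object, then drill into one of its properties
$VM = Get-VM -Name vCenter
$VM | Get-Member -MemberType Property
$VM.Guest | Get-Member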

Test-Driven Development

Packt
05 Jan 2017
19 min read
In this article by Md. Ziaul Haq, the author of the book Angular 2 Test-Driven Development, introduces you to the fundamentals of test-driven development with AngularJS, including: An overview of test-driven development (TDD) The TDD life cycle: test first, make it run, and make it better Common testing techniques (For more resources related to this topic, see here.) Angular2 is at the forefront of client-side JavaScript testing. Every Angular2 tutorial includes an accompanying test, and event test modules are a part of the core AngularJS package. The Angular2 team is focused on making testing fundamental to web development. An overview of TDD Test-driven development (TDD) is an evolutionary approach to development, where you write a test before you write just enough production code to fulfill that test and its refactoring. The following section will explore the fundamentals of TDD and how they are applied by a tailor. Fundamentals of TDD Get the idea of what to write in your code before you start writing it. This may sound cliched, but this is essentially what TDD gives you. TDD begins by defining expectations, then makes you meet the expectations, and finally, forces you to refine the changes after the expectations are met. Some of the clear benefits that can be gained by practicing TDD are as follows: No change is small: Small changes can cause a hell lot of breaking issues in the entire project. Only practicing TDD can help out, as after any change, test suit will catch the breaking points and save the project and the life of developers. Specifically identify the tasks: A test suit provides a clear vision of the tasks specifically and provides the workflow step-by-step in order to be successful. Setting up the tests first allows you to focus on only the components that have been defined in the tests. Confidence in refactoring: Refactoring involves moving, fixing, and changing a project. Tests protect the core logic from refactoring by ensuring that the logic behaves independently of the code structure. Upfront investment, benefits in future: Initially, it looks like testing kills the extra time, but it actually pays off later, when the project becomes bigger, it gives confidence to extend the feature as just running the test will get the breaking issues, if any. QA resource might be limited: In most cases, there are some limitations on QA resources as it always takes extra time for everything to be manually checked by the QA team, but writing some test case and by running them successfully will save some QA time definitely. Documentation: Tests define the expectations that a particular object or function must meet. An expectation acts as a contract and can be used to see how a method should or can be used. This makes the code readable and easier to understand. Measuring the success with different eyes TDD is not just a software development practice. The fundamental principles are shared by other craftsmen as well. One of these craftsmen is a tailor, whose success depends on precise measurements and careful planning. 
Breaking down the steps Here are the high-level steps a tailor takes to make a suit: Test first: Determining the measurements for the suit Having the customer determine the style and material they want for their suit Measuring the customer's arms, shoulders, torso, waist, and legs Making the cuts: Measuring the fabric and cutting it Selecting the fabric based on the desired style Measuring the fabric based on the customer's waist and legs Cutting the fabric based on the measurements Refactoring: Comparing the resulting product to the expected style, reviewing, and making changes Comparing the cut and look to the customer's desired style Making adjustments to meet the desired style Repeating: Test first: Determining the measurements for the pants Making the cuts: Measuring the fabric and making the cuts Refactor: Making changes based on the reviews The preceding steps are an example of a TDD approach. The measurements must be taken before the tailor can start cutting up the raw material. Imagine, for a moment, that the tailor didn't use a test-driven approach and didn't use a measuring tape (testing tool). It would be ridiculous if the tailor started cutting before measuring. As a developer, do you "cut before measuring"? Would you trust a tailor without a measuring tape? How would you feel about a developer who doesn't test? Measure twice, cut once The tailor always starts with measurements. What would happen if the tailor made cuts before measuring? What would happen if the fabric was cut too short? How much extra time would go into the tailoring? Measure twice, cut once. Software developers can choose from an endless amount of approaches to use before starting developing. One common approach is to work off a specification. A documented approach may help in defining what needs to be built; however, without tangible criteria for how to meet a specification, the actual application that gets developed may be completely different from the specification. With a TDD approach (test first, make it run, and make it better), every stage of the process verifies that the result meets the specification. Think about how a tailor continues to use a measuring tape to verify the suit throughout the process. TDD embodies a test-first methodology. TDD gives developers the ability to start with a clear goal and write code that will directly meet a specification. Develop like a professional and follow the practices that will help you write quality software. Practical TDD with JavaScript Let's dive into practical TDD in the context of JavaScript. This walk through will take you through the process of adding the multiplication functionality to a calculator. Just keep the TDD life cycle, as follows, in mind: Test first Make it run Make it better Point out the development to-do list A development to-do list helps to organize and focus on tasks specifically. It also helps to provide a surface to list down the ideas during the development process, which could be a single feature later on. Let's add the first feature in the development to-do list—add multiplication functionality: 3 * 3 = 9. The preceding list describes what needs to be done. It also provides a clear example of how to verify multiplication—3 * 3 = 9. Setting up the test suit To set up the test, let's create the initial calculator in a file, called calculator.js, and is initialized as an object as follows: var calculator = {}; The test will be run through a web browser as a simple HTML page. 
So, for that, let's create an HTML page and import calculator.js to test it and save the page as testRunner.html. To run the test, open the testRunner.html file in your web browser. The testRunner.html file will look as follows: <!DOCTYPE html> <html> <head> <title>Test Runner</title> </head> <body> <script src="calculator.js"></script> </body> </html> The test suit is ready for the project and the development to-do list for feature is ready as well. The next step is to dive into the TDD life cycle based on the feature list one by one. Test first Though it's easy to write a multiplication function and it will work as its pretty simple feature, as a part of practicing TDD, it's time to follow the TDD life cycle. The first phase of the life cycle is to write a test based on the development to-do list. Here are the steps for the first test: Open calculator.js. Create a new function to test multiplying 3 * 3: function multipleTest1() { // Test var result = calculator.multiply(3, 3); // Assert Result is expected if (result === 9) { console.log('Test Passed'); } else { console.log('Test Failed'); } }; The test calls a multiply function, which still needs to be defined. It then asserts that the results are as expected, by displaying a pass or fail message. Keep in mind that in TDD, you are looking at the use of the method and explicitly writing how it should be used. This allows you to define the interface through a use case, as opposed to only looking at the limited scope of the function being developed. The next step in the TDD life cycle is focused on making the test run. Make it run In this step, we will run the test, just as the tailor did with the suit. The measurements were taken during the test step, and now the application can be molded to fit the measurements. The following are the steps to run the test: Open testRunner.html on a web browser. Open the JavaScript developer Console window in the browser. Test will throw an error, which will be visible in the browser's developer console, as shown in the following screenshot: The thrown error is about the undefined function, which is expected as the calculator application calls a function that hasn't been created yet—calculator.multiply. In TDD, the focus is on adding the easiest change to get a test to pass. There is no need to actually implement the multiplication logic. This may seem unintuitive. The point is that once a passing test exists, it should always pass. When a method contains fairly complex logic, it is easier to run a passing test against it to ensure that it meets the expectations. What is the easiest change that can be made to make the test pass? By returning the expected value of 9, the test should pass. Although this won't add the multiply function, it will confirm the application wiring. In addition, after you have passed the test, making future changes will be easy as you have to simply keep the test passing! Now, add the multiply function and have it return the required value of 9, as illustrated: var calculator = { multiply : function() { return 9; } }; Now, let's refresh the page to rerun the test and look at the JavaScript console. The result should be as shown in the following screenshot: Yes! No more errors, there's a message showing that test has been passed. Now that there is a passing test, the next step will be to remove the hardcoded value in the multiply function. 
Make it better The refactoring step needs to remove the hardcoded return value of the multiply function that we added as the easiest solution to pass the test and will add the required logic to get the expected result. The required logic is as follows: var calculator = { multiply : function(amount1, amount2) { return amount1 * amount2; } }; Now, let's refresh the browser to rerun the tests, it will pass the test as it did before. Excellent! Now the multiply function is complete. The full code of the calculator.js file for the calculator object with its test will look as follows: var calculator = { multiply : function(amount1, amount2) { return amount1 * amount2; } }; function multipleTest1() { // Test var result = calculator.multiply(3, 3); // Assert Result is expected if (result === 9) { console.log('Test Passed'); } else { console.log('Test Failed'); } }; multipleTest1(); Mechanism of testing To be a proper TDD following developer, it is important to understand some fundamental mechanisms of testing, techniques, and approaches to testing. In this section, we will walk you through a couple of examples of techniques and mechanisms of the tests that will be leveraged in this article. This will mostly include the following points: Testing doubles with Jasmine spies Refactoring the existing tests Building patterns In addition, here are the additional terms that will be used: Function under test: This is the function being tested. It is also referred to as system under test, object under test, and so on. The 3 A's (Arrange, Act, and Assert): This is a technique used to set up tests, first described by Bill Wake (http://xp123.com/articles/3a-arrange-act-assert/). Testing with a framework We have already seen a quick and simple way to perform tests on calculator application, where we have set the test for the multiply method. But in real life, it will be more complex and a way larger application, where the earlier technique will be too complex to manage and perform. In that case, it will be very handy and easier to use a testing framework. A testing framework provides methods and structures to test. This includes a standard structure to create and run tests, the ability to create assertions/expectations, the ability to use test doubles, and more. The following example code is not exactly how it runs with the Jasmine test/spec runner, it's just about the idea of how the doubles work, or how these doubles return the expected result. Testing doubles with Jasmine spies A test double is an object that acts and is used in place of another object. Jasmine has a test double function that is known as spies. Jasmine spy is used with the spyOn()method. Take a look at the following testableObject object that needs to be tested. Using a test double, you can determine the number of times testableFunction gets called. The following is an example of Test double: var testableObject = { testableFunction : function() { } }; jasmine.spyOn(testableObject, 'testableFunction'); testableObject.testableFunction(); testableObject.testableFunction(); testableObject.testableFunction(); console.log(testableObject.testableFunction.count); The preceding code creates a test double using a Jasmine spy (jasmine.spyOn). The test double is then used to determine the number of times testableFunction gets called. 
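As noted earlier, these snippets illustrate the idea rather than run verbatim in the Jasmine spec runner. For reference, here is a hedged sketch of how the same call-counting check could look inside an actual Jasmine spec file; the describe, it, spyOn, and expect functions are globals provided by Jasmine itself:

describe('testableObject', function () {
  it('counts how many times testableFunction is called', function () {
    var testableObject = {
      testableFunction: function () {}
    };

    // Replace the real function with a Jasmine spy (test double)
    spyOn(testableObject, 'testableFunction');

    testableObject.testableFunction();
    testableObject.testableFunction();
    testableObject.testableFunction();

    // Jasmine tracks the calls made on the spy
    expect(testableObject.testableFunction).toHaveBeenCalledTimes(3);
    expect(testableObject.testableFunction.calls.count()).toBe(3);
  });
});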
The following are some of the features that a Jasmine test double offers: The count of calls on a function The ability to specify a return value (stub a return value) The ability to pass a call to the underlying function (pass through) Stubbing return value The great thing about using a test double is that the underlying code of a method does not have to be called. With a test double, you can specify exactly what a method should return for a given test. Consider the following example of an object and a function, where the function returns a string: var testableObject = { testableFunction : function() { return 'stub me'; } }; The preceding object (testableObject) has a function (testableFunction) that needs to be stubbed. So, to stub the single return value, it will need to chain the and.returnValue method and pass the expected value as a parameter. Here is how to chain the spy to stub a single return value: jasmine.spyOn(testableObject, 'testableFunction') .and .returnValue('stubbed value'); Now, when testableObject.testableFunction is called, the stubbed value will be returned. Consider the following example of the preceding single stubbed value: var testableObject = { testableFunction : function() { return 'stub me'; } }; //before the return value is stubbed console.log(testableObject.testableFunction()); //displays 'stub me' jasmine.spyOn(testableObject,'testableFunction') .and .returnValue('stubbed value'); //After the return value is stubbed console.log(testableObject.testableFunction()); //displays 'stubbed value' Similarly, we can stub multiple return values. To do so, chain the and.returnValues method with the expected values as parameters, separated by commas. Here is how to chain the spy to stub multiple return values one by one: jasmine.spyOn(testableObject, 'testableFunction') .and .returnValues('first stubbed value', 'second stubbed value', 'third stubbed value'); So, for every call of testableObject.testableFunction, it will return the stubbed values in order until it reaches the end of the return value list. Consider the given example of the preceding multiple stubbed values: jasmine.spyOn(testableObject, 'testableFunction') .and .returnValues('first stubbed value', 'second stubbed value', 'third stubbed value'); //After the return values are stubbed console.log(testableObject.testableFunction()); //displays 'first stubbed value' console.log(testableObject.testableFunction()); //displays 'second stubbed value' console.log(testableObject.testableFunction()); //displays 'third stubbed value' Testing arguments A test double provides insights into how a method is used in an application. As an example, a test might want to assert what arguments a method was called with or the number of times a method was called. 
Here is an example function: var testableObject = { testableFunction : function(arg1, arg2) {} }; The following are the steps to test the arguments with which the preceding function is called: Create a spy so that the arguments called can be captured: jasmine.spyOn(testableObject, 'testableFunction'); Then, to access the arguments, do the following: //Get the arguments for the first call of the function var callArgs = testableObject.testableFunction.call.argsFor(0); console.log(callArgs); //displays ['param1', 'param2'] Here is how the arguments can be displayed using console.log: var testableObject = { testableFunction : function(arg1, arg2) {} }; //create the spy jasmine.spyOn(testableObject, 'testableFunction'); //Call the method with specific arguments testableObject.testableFunction('param1', 'param2'); //Get the arguments for the first call of the function var callArgs = testableObject.testableFunction.call.argsFor(0); console.log(callArgs); //displays ['param1', 'param2'] Refactoring Refactoring is the act of restructuring, rewriting, renaming, and removing code in order to improve the design, readability, maintainability, and overall aesthetics of a piece of code. The TDD life cycle step of "making it better" is primarily concerned with refactoring. This section will walk you through a refactoring example. Take a look at the following example of a function that needs to be refactored: var abc = function(z) { var x = false; if(z > 10) return true; return x; } This function works fine and does not contain any syntactical or logical issues. The problem is that the function is difficult to read and understand. Refactoring this function will improve the naming, structure, and definition. The exercise will remove the masquerading complexity and reveal the function's true meaning and intention. Here are the steps: Rename the function and variable names to be more meaningful, that is, rename x and z so that they make sense, as shown: var isTenOrGreater = function(value) { var falseValue = false; if(value > 10) return true; return falseValue; } Now, the function can easily be read and the naming makes sense. Remove unnecessary complexity. In this case, the if conditional statement can be removed completely, as follows: var isTenOrGreater = function(value) { return value > 10; }; Reflect on the result. At this point, the refactoring is complete, and the function's purpose should jump out at you. The next question that should be asked is "why does this method exist in the first place?". This example only provided a brief walk-through of the steps that can be taken to identify issues in code and how to improve them. Building with a builder These days, design pattern is almost a kind of common practice, and we follow design pattern to make life easier. For the same reason, the builder pattern will be followed here. The builder pattern uses a builder object to create another object. Imagine an object with 10 properties. How will test data be created for every property? Will the object have to be recreated in every test? A builder object defines an object to be reused across multiple tests. The following code snippet provides an example of the use of this pattern. This example will use the builder object in the validate method: var book = { id : null, author : null, dateTime : null }; The book object has three properties: id, author, and dateTime. From a testing perspective, you would want the ability to create a valid object, that is, one that has all the fields defined. 
You may also want to create an invalid object with missing properties, or you may want to set certain values in the object to test the validation logic, that is, dateTime is an actual date. Here are the steps to create a builder for the dateTime object: Create a builder function, as shown: var bookBuilder = function() {}; Create a valid object within the builder, as follows: var bookBuilder = function() { var _resultBook = { id: 1, author: 'Any Author', dateTime: new Date() }; } Create a function to return the built object, as given: var bookBuilder = function() { var _resultBook = { id: 1, author: "Any Author", dateTime: new Date() }; this.build = function() { return _resultBook; } } As illustrated, create another function to set the _resultBook author field: var bookBuilder = function() { var _resultBook = { id: 1, author: 'Any Author', dateTime: new Date() }; this.build = function() { return _resultBook; }; this.setAuthor = function(author){ _resultBook.author = author; }; }; Make the function fluent, as follows, so that calls can be chained: this.setAuthor = function(author) { _resultBook.author = author; return this; }; A setter function will also be created for dateTime, as shown: this.setDateTime = function(dateTime) { _resultBook.dateTime = dateTime; return this; }; Now, bookBuilder can be used to create a new book, as follows: var bookBuilder = new bookBuilder(); var builtBook = bookBuilder.setAuthor('Ziaul Haq') .setDateTime(new Date()) .build(); console.log(builtBook.author); // Ziaul Haq The preceding builder can now be used throughout your tests to create a single consistent object. Here is the complete builder for your reference: var bookBuilder = function() { var _resultBook = { id: 1, author: 'Any Author', dateTime: new Date() }; this.build = function() { return _resultBook; }; this.setAuthor = function(author) { _resultBook.author = author; return this; }; this.setDateTime = function(dateTime) { _resultBook.dateTime = dateTime; return this; }; }; Let's create the validate method to validate the created book object from builder. var validate = function(builtBookToValidate){ if(!builtBookToValidate.author) { return false; } if(!builtBookToValidate.dateTime) { return false; } return true; }; So, at first, let's create a valid book object with builder by passing all the required information, and if this is passed via the validate object, this should show a valid message: var validBuilder = new bookBuilder().setAuthor('Ziaul Haq') .setDateTime(new Date()) .build(); // Validate the object with validate() method if (validate(validBuilder)) { console.log('Valid Book created'); } In the same way, let's create an invalid book object via builder by passing some null value in the required information. And by passing the object to the validate method, it should show the message, why it's invalid. var invalidBuilder = new bookBuilder().setAuthor(null).build(); if (!validate(invalidBuilder)) { console.log('Invalid Book created as author is null'); } var invalidBuilder = new bookBuilder().setDateTime(null).build(); if (!validate(invalidBuilder)) { console.log('Invalid Book created as dateTime is null'); } Self-test questions Q1. A test double is another name for a duplicate test. True False Q2. TDD stands for test-driven development. True False Q3. The purpose of refactoring is to improve code quality. True False Q4. A test object builder consolidates the creation of objects for testing. True False Q5. The 3 A's are a sports team. 
True False Summary This article provided an introduction to TDD. It discussed the TDD life cycle (test first, make it run, and make it better) and showed how the same steps are used by a tailor. Finally, it looked over some of the testing techniques such as test doubles, refactoring, and building patterns. Although TDD is a huge topic, this article is solely focused on the TDD principles and practices to be used with AngularJS. Resources for Article: Further resources on this subject: Angular 2.0 [Article] Writing a Blog Application with Node.js and AngularJS [Article] Integrating a D3.js visualization into a simple AngularJS application [Article]
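The 3 A's (Arrange, Act, and Assert) were listed among the testing terms above but not demonstrated in a spec, so here is a hedged sketch of how the calculator's multiply test could be structured around them in a Jasmine spec; only the calculator object from this article is assumed:

describe('calculator.multiply', function () {
  it('multiplies two numbers', function () {
    // Arrange: set up the object under test and its inputs
    var calculator = {
      multiply: function (amount1, amount2) {
        return amount1 * amount2;
      }
    };

    // Act: call the function under test
    var result = calculator.multiply(3, 3);

    // Assert: verify that the result meets the expectation
    expect(result).toBe(9);
  });
});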

Text Recognition

Packt
04 Jan 2017
7 min read
In this article by Fábio M. Soares and Alan M.F. Souza, the authors of the book Neural Network Programming with Java - Second Edition, we will cover pattern recognition, neural networks in pattern recognition, and text recognition (OCR). We all know that humans can read and recognize images faster than any supercomputer; however we have seen so far that neural networks show amazing capabilities of learning through data in both supervised and unsupervised way. In this article we present an additional case of pattern recognition involving an example of Optical Character Recognition (OCR). Neural networks can be trained to strictly recognize digits written in an image file. The topics of this article are: Pattern recognition Defined classes Undefined classes Neural networks in pattern recognition MLP Text recognition (OCR) Preprocessing and Classes definition (For more resources related to this topic, see here.) Pattern recognition Patterns are a bunch of data and elements that look similar to each other, in such a way that they can occur systematically and repeat from time to time. This is a task that can be solved mainly by unsupervised learning by clustering; however, when there are labelled data or defined classes of data, this task can be solved by supervised methods. We as humans perform this task more often than we can imagine. When we see objects and recognize them as belonging to a certain class, we are indeed recognizing a pattern. Also when we analyze charts discrete events and time series, we might find an evidence of some sequence of events that repeat systematically under certain conditions. In summary, patterns can be learned by data observations. Examples of pattern recognition tasks include, not liming to: Shapes recognition Objects classification Behavior clustering Voice recognition OCR Chemical reactions taxonomy Defined classes In the existence of a list of classes that has been predefined for a specific domain, then each class is considered to be a pattern, therefore every data record or occurrence is assigned one of these predefined classes. The predefinition of classes can usually be performed by an expert or based on a previous knowledge of the application domain. Also it is desirable to apply defined classes when we want the data to be classified strictly into one of the predefined classes. One illustrated example for pattern recognition using defined classes is animal recognition by image, shown in the next figure. The pattern recognizer however should be trained to catch all the characteristics that formally define the classes. In the example eight figures of animals are shown, belonging to two classes: mammals and birds. Since this is a supervised mode of learning, the neural network should be provided with a sufficient number of images that allow it to properly classify new images: Of course, sometimes the classification may fail, mainly due to similar hidden patterns in the images that neural networks may catch and also due to small nuances present in the shapes. For example, the dolphin has flippers but it is still a mammal. Sometimes in order to obtain a better classification, it is necessary to apply preprocessing and ensure that the neural network will receive the appropriate data that would allow for classification. Undefined classes When data are unlabeled and there is no predefined set of classes, it is a scenario for unsupervised learning. 
Shape recognition is a good example, since shapes can be flexible and have an infinite number of edges, vertices, or bindings:

In the previous figure, we can see several sorts of shapes, and we want to arrange them so that similar ones are grouped into the same cluster. Based on the shape information present in the images, it is likely that the pattern recognizer will classify the rectangle, the square, and the rectangular triangle into the same group. But if the information were presented to the pattern recognizer not as an image, but as a graph with edge and vertex coordinates, the classification might change a little. In summary, the pattern recognition task may use both the supervised and the unsupervised mode of learning, depending on the objective of the recognition.

Neural networks in pattern recognition
For pattern recognition, the neural network architectures that can be applied are MLPs (supervised) and the Kohonen network (unsupervised). In the first case, the problem should be set up as a classification problem, that is, the data should be transformed into an X-Y dataset, where for every data record in X there should be a corresponding class in Y. The output of the neural network for classification problems should contain all of the possible classes, and this may require preprocessing of the output records. In the other case, unsupervised learning, there is no need to apply labels to the output; however, the input data should be properly structured as well. To remind the reader, the schemas of both neural networks are shown in the next figure:

Data pre-processing
We have to deal with all possible types of data, that is, numerical (continuous and discrete) and categorical (ordinal or unscaled). But here we also have the possibility of performing pattern recognition on multimedia content, such as images and videos. So how can multimedia be handled? The answer to this question lies in the way these contents are stored in files. Images, for example, are written as a representation of small colored points called pixels. Each color can be coded in RGB notation, where the intensities of red, green, and blue define every color the human eye is able to see. Therefore, an image of dimension 100x100 would have 10,000 pixels, each one having three values for red, green, and blue, yielding a total of 30,000 values. That is the challenge for image processing in neural networks. Some methods may reduce this huge number of dimensions. Afterwards, an image can be treated as a big matrix of numerical continuous values. For simplicity, in this article we are applying only grayscale images with small dimensions.

Text recognition (OCR)
Many documents are now being scanned and stored as images, making it necessary to convert these documents back into text so that a computer can apply editing and text processing. However, this involves a number of challenges: variety of text fonts, text size, image noise, and manuscripts. In spite of that, humans can easily interpret and read even text written in a bad-quality image. This can be explained by the fact that humans are already familiar with the text characters and the words in their language. Somehow the algorithm must become acquainted with these elements (characters, digits, signalization, and so on) in order to successfully recognize text in images.

Digits recognition
Although there are a variety of tools available in the market for OCR, it remains a big challenge for an algorithm to properly recognize text in images. 
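To make the pre-processing step described above concrete, here is a minimal sketch of turning a small grayscale digit image into a normalized input vector for a neural network. It uses only the standard javax.imageio and java.awt.image classes; the digit.png file name and the choice of dividing by 255 to scale every pixel into the 0-1 range are illustrative assumptions, not code taken from the book.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class DigitPreprocessor {

    // Flattens a grayscale image into one value per pixel, scaled to [0, 1]
    public static double[] toInputVector(File imageFile) throws IOException {
        BufferedImage image = ImageIO.read(imageFile);
        int width = image.getWidth();
        int height = image.getHeight();
        double[] input = new double[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // In a grayscale image the red, green, and blue channels are equal,
                // so the lowest byte of the packed RGB value is the gray level
                int gray = image.getRGB(x, y) & 0xFF;
                input[y * width + x] = gray / 255.0;
            }
        }
        return input;
    }

    public static void main(String[] args) throws IOException {
        double[] input = toInputVector(new File("digit.png"));
        System.out.println("Input vector length: " + input.length);
    }
}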
So we will restrict our application to a smaller domain, where we face simpler problems. Therefore, in this article we are going to implement a neural network that recognizes the digits 0 to 9 in images. For simplicity, the images will have small, standardized dimensions.

Summary
In this article we have covered pattern recognition, neural networks in pattern recognition, and text recognition (OCR). Resources for Article: Further resources on this subject: Training neural networks efficiently using Keras [article] Implementing Artificial Neural Networks with TensorFlow [article] Training and Visualizing a neural network with R [article]

Hyper-V Architecture and Components

Packt
04 Jan 2017
15 min read
In this article by Charbel Nemnom and Patrick Lownds, the author of the book Windows Server 2016 Hyper-V Cookbook, Second Edition, we will see Hyper-V architecture along with the most important components in Hyper-V and also differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware. Virtualization is not a new feature or technology that everyone decided to have in their environment overnight. Actually, it's quite old. There are a couple of computers in the mid-60s that were using virtualization already, such as the IBM M44/44X, where you could run multiple VMs using hardware and software abstraction. It is known as the first virtualization system and the creation of the term virtual machine. Although Hyper-V is in its fifth version, Microsoft virtualization technology is very mature. Everything started in 1988 with a company named Connectix. It had innovative products such as Connectix Virtual PC and Virtual Server, an x86 software emulation for Mac, Windows, and OS/2. In 2003, Microsoft acquired Connectix and a year later released Microsoft Virtual PC and Microsoft Virtual Server 2005. After lots of improvements in the architecture during the project Viridian, Microsoft released Hyper-V in 2008, the second version in 2009 (Windows Server 2008 R2), the third version in 2012 (Windows Server 2012), a year later in 2013 the fourth version was released (Windows Server 2012 R2), the current and fifth version in 2016 (Windows Server 2016). In the past years, Microsoft has proven that Hyper-V is a strong and competitive solution for server virtualization and provides scalability, flexible infrastructure, high availability, and resiliency. To better understand the different virtualization models, and how the VMs are created and managed by Hyper-V, it is very important to know its core, architecture, and components. By doing so, you will understand how it works, you can compare with other solutions, and troubleshoot problems easily. Microsoft has long told customers that Azure datacenters are powered by Microsoft Hyper-V, and the forthcoming Azure Stack will actually allow us to run Azure in our own datacenters on top of Windows Server 2016 Hyper-V as well. For more information about Azure Stack, please refer to the following link: https://azure.microsoft.com/en-us/overview/azure-stack/ Microsoft Hyper-V proves over the years that it's a very scalable platform to virtualize any and every workload without exception. This appendix includes well-explained topics with the most important Hyper-V architecture components compared with other versions. (For more resources related to this topic, see here.) Understanding Hypervisors The Virtual Machine Manager (VMM), also known as Hypervisor, is the software application responsible for running multiple VMs in a single system. It is also responsible for creation, preservation, division, system access, and VM management running on the Hypervisor layer. These are the types of Hypervisors: VMM Type 2 VMM Hybrid VMM Type 1 VMM Type 2 This type runs Hypervisor on top of an OS, as shown in the following diagram, we have the hardware at the bottom, the OS and then the Hypervisor running on top. Microsoft Virtual PC and VMware Workstation is an example of software that uses VMM Type 2. VMs pass hardware requests to the Hypervisor, to the host OS, and finally reaching the hardware. That leads to performance and management limitation imposed by the host OS. 
Type 2 is common for test environments—VMs with hardware restrictions—to run on software applications that are installed in the host OS. VMM Hybrid When using the VMM Hybrid type, the Hypervisor runs on the same level as the OS, as shown in the following diagram. As both Hypervisor and the OS are sharing the same access to the hardware with the same priority, it is not as fast and safe as it could be. This is the type used by the Hyper-V predecessor named Microsoft Virtual Server 2005: VMM Type 1 VMM Type 1 is a type that has the Hypervisor running in a tiny software layer between the hardware and the partitions, managing and orchestrating the hardware access. The host OS, known as Parent Partition, run on the same level as the Child Partition, known as VMs, as shown in the next diagram. Due to the privileged access that the Hypervisor has on the hardware, it provides more security, performance, and control over the partitions. This is the type used by Hyper-V since its first release: Hyper-V architecture Knowing how Hyper-V works and how its architecture is constructed will make it easier to understand its concepts and operations. The following sections will explore the most important components in Hyper-V. Windows before Hyper-V Before we dive into the Hyper-V architecture details, it will be easy to understand what happens after Hyper-V is installed, by looking at Windows without Hyper-V, as shown in the following diagram: In a normal Windows installation, the instructions access is divided by four privileged levels in the processor called Rings. The most privileged level is Ring 0, with direct access to the hardware and where the Windows Kernel sits. Ring 3 is responsible for hosting the user level, where most common applications run and with the least privileged access. Windows after Hyper-V When Hyper-V is installed, it needs a higher privilege than Ring 0. Also, it must have dedicated access to the hardware. This is possible due to the capabilities of the new processor created by Intel and AMD, called Intel-VT and AMD-V respectively, that allows the creation of a fifth ring called Ring -1. Hyper-V uses this ring to add its Hypervisor, having a higher privilege and running under Ring 0, controlling all the access to the physical components, as shown in the following diagram: The OS architecture suffers several changes after Hyper-V installation. Right after the first boot, the Operating System Boot Loader file (winload.exe) checks the processor that is being used and loads the Hypervisor image on Ring -1 (using the files Hvix64.exe for Intel processors and Hvax64.exe for AMD processors). Then, Windows Server is initiated running on top of the Hypervisor and every VM that runs beside it. After Hyper-V installation, Windows Server has the same privilege level as a VM and is responsible for managing VMs using several components. Differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware There are four different versions of Hyper-V—the role that is installed on Windows Server 2016 (Core or Full Server), the role that can be installed on a Nano Server, its free version called Hyper-V Server and the Hyper-V that comes in Windows 10 called Hyper-V Client. The following sections will explain the differences between all the versions and a comparison between Hyper-V and its competitor, VMware. Windows Server 2016 Hyper-V Hyper-V is one of the most fascinating and improved role on Windows Server 2016. 
Its fifth version goes beyond virtualization and helps us deliver the correct infrastructure to host your cloud environment. Hyper-V can be installed as a role in both Windows Server Standard and Datacenter editions. The only difference in Windows Server 2012 and 2012 R2 in the Standard edition, two free Windows Server OSes are licensed whereas there are unlimited licenses in the Datacenter edition. However, in Windows Server 2016 there are significant changes between the two editions. The following table will show the difference between Windows Server 2016 Standard and Datacenter editions: Resource Windows Server 2016 Datacenter edition Windows Server 2016 Standard edition Core functionality of Windows Server Yes Yes OSes/Hyper-V Containers Unlimited 2 Windows Server Containers Unlimited Unlimited Nano Server Yes Yes Storage features for software-defined datacenter including Storage Spaces Direct and Storage Replica Yes N/A Shielded VMs Yes N/A Networking stack for software-defined datacenter Yes N/A Licensing Model Core + CAL Core + CAL As you can see in preceding table, the Datacenter edition is designed for highly virtualized private and hybrid cloud environments and Standard edition is for low density or non-virtualized (physical) environments. In Windows Server 2016, Microsoft is also changing the licensing model from a per-processor to per-core licensing for Standard and Datacenter editions. The following points will guide you in order to license Windows Server 2016 Standard and Datacenter edition: All physical cores in the server must be licensed. In other words, servers are licensed based on the number of processor cores in the physical server. You need a minimum of 16 core licenses for each server. You need a minimum of 8 core licenses for each physical processor. The core licenses will be sold in packs of two. Eight 2-core packs will be the minimum required to license each physical server. The 2-core pack for each edition is one-eighth the price of a 2-processor license for corresponding Windows Server 2012 R2 editions. The Standard edition provides rights for up to two OSEs or Hyper-V containers when all physical cores in the server are licensed. For every two additional VMs, all the cores in the server have to be licensed again. The price of 16-core licenses of Windows Server 2016 Datacenter and Standard edition will be the same price as the 2-processor license of the corresponding editions of the Windows Server 2012 R2 version. Existing customers' servers under Software Assurance agreement will receive core grants as required, with documentation. The following table illustrates the new licensing model based on number of 2-core pack licenses: Legend: Gray cells represent licensing costs White cells represent additional licensing is required Windows Server 2016 Standard edition may need additional licensing. Nano Server Nano Server is a new headless, 64-bit only installation option that installs "just enough OS" resulting in a dramatically smaller footprint that results in more uptime and a smaller attack surface. Users can choose to add server roles as needed, including Hyper-V, Scale out File Server, DNS Server and IIS server roles. User can also choose to install features, including Container support, Defender, Clustering, Desired State Configuration (DSC), and Shielded VM support. 
Nano Server is available in Windows Server 2016 for: Physical Machines Virtual Machines Hyper-V Containers Windows Server Containers Supports the following inbox optional roles and features: Hyper-V, including container and shielded VM support Datacenter Bridging Defender DNS Server Desired State Configuration Clustering IIS Network Performance Diagnostics Service (NPDS) System Center Virtual Machine Manager and System Center Operations Manager Secure Startup Scale out File Server, including Storage Replica, MPIO, iSCSI initiator, Data Deduplication The Windows Server 2016 Hyper-V role can be installed on a Nano Server; this is a key Nano Server role, shrinking the OS footprint and minimizing reboots required when Hyper-V is used to run virtualization hosts. Nano server can be clustered, including Hyper-V failover clusters. Hyper-V works the same on Nano Server including all features does in Windows Server 2016, aside from a few caveats: All management must be performed remotely, using another Windows Server 2016 computer. Remote management consoles such as Hyper-V Manager, Failover Cluster Manager, PowerShell remoting, and management tools like System Center Virtual Machine Manager as well as the new Azure web-based Server Management Tool (SMT) can all be used to manage a Nano Server environment. RemoteFX is not available. Microsoft Hyper-V Server 2016 Hyper-V Server 2016, the free virtualization solution from Microsoft has all the features included on Windows Server 2016 Hyper-V. The only difference is that Microsoft Hyper-V Server does not include VM licenses and a graphical interface. The management can be done remotely using PowerShell, Hyper-V Manager from another Windows Server 2016 or Windows 10. All the other Hyper-V features and limits in Windows Server 2016, including Failover Cluster, Shared Nothing Live Migration, RemoteFX, Discrete Device Assignment and Hyper-V Replica are included in the Hyper-V free version. Hyper-V Client In Windows 8, Microsoft introduced the first Hyper-V Client version. Its third version now with Windows 10. Users can have the same experience from Windows Server 2016 Hyper-V on their desktops or tablet, making their test and development virtualized scenarios much easier. Hyper-V Client in Windows 10 goes beyond only virtualization and helps Windows developers to use containers by bringing Hyper-V Containers natively into Windows 10. This will further empower developers to build amazing cloud applications benefiting from native container capabilities right in Windows. Since Hyper-V Containers utilize their own instance of the Windows kernel, the container is truly a server container all the way down to the kernel. Plus, with the flexibility of Windows container runtimes (Windows Server Containers or Hyper-V Containers), containers built on Windows 10 can be run on Windows Server 2016 as either Windows Server Containers or Hyper-V Containers. Because Windows 10 only supports Hyper-V containers, the Hyper-V feature must also be enabled. Hyper-V Client is present only in the Windows 10 Pro or Enterprise version and requires the same CPU feature as in Windows Server 2016 called Second Level Address Translation (SLAT). Although Hyper-V client is very similar to the server version, there are some components that are only present on Windows Server 2016 Hyper-V. 
Here is a list of components you will find only on the server version: Hyper-V Replica Remote FX capability to virtualize GPUs Discrete Device Assignment (DDA) Live Migration and Shared Nothing Live Migration ReFS Accelerated VHDX Operations SR-IOV Networks Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET) Virtual Fibre Channel Network Virtualization Failover Clustering Shielded VMs VM Monitoring Even with these limitations, Hyper-V Client has very interesting features such as Storage Migration, VHDX, VMs running on SMB 3.1 File Shares, PowerShell integration, Hyper-V Manager, Hyper-V Extensible Switch, Quality of Services, Production Checkpoints, the same VM hardware limits as Windows Server 2016 Hyper-V, Dynamic Memory, Runtime Memory Resize, Nested Virtualization, DHCP Guard, Port Mirroring, NIC Device Naming and much more. Windows Server 2016 Hyper-V X VMware vSphere 6.0 VMware is the existing competitor of Hyper-V and the current version 6.0 offers the VMware vSphere as a free and a standalone Hypervisor, vSphere Standard, Enterprise, and Enterprise Plus. 
The following table compares the features of Windows Server 2012 R2 and Windows Server 2016 Hyper-V with VMware vSphere 6.0 and vSphere 6.0 Enterprise Plus:
Feature | Windows Server 2012 R2 | Windows Server 2016 | VMware vSphere 6.0 | VMware vSphere 6.0 Enterprise Plus
Logical Processors | 320 | 512 | 480 | 480
Physical Memory | 4TB | 24TB | 6TB | 6TB/12TB
Virtual CPU per Host | 2,048 | 2,048 | 4,096 | 4,096
Virtual CPU per VM | 64 | 240 | 8 | 128
Memory per VM | 1TB | 12TB | 4TB | 4TB
Active VMs per Host | 1,024 | 1,024 | 1,024 | 1,024
Guest NUMA | Yes | Yes | Yes | Yes
Maximum Nodes | 64 | 64 | N/A | 64
Maximum VMs per Cluster | 8,000 | 8,000 | N/A | 8,000
VM Live Migration | Yes | Yes | No | Yes
VM Live Migration with Compression | Yes | Yes | N/A | No
VM Live Migration using RDMA | Yes | Yes | N/A | No
1GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 4
10GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 8
Live Storage Migration | Yes | Yes | No | Yes
Shared Nothing Live Migration | Yes | Yes | No | Yes
Cluster Rolling Upgrades | Yes | Yes | N/A | Yes
VM Replica Hot/Add virtual Disk | Yes | Yes | Yes | Yes
Native 4-KB Disk Support | Yes | Yes | No | No
Maximum Virtual Disk Size | 64TB | 64TB | 2TB | 62TB
Maximum Pass Through Disk Size | 256TB or more | 256TB or more | 64TB | 64TB
Extensible Network Switch | Yes | Yes | No | Third-party vendors
Network Virtualization | Yes | Yes | No | Requires vCloud networking and security
IPsec Task Offload | Yes | Yes | No | No
SR-IOV | Yes | Yes | N/A | Yes
Virtual NICs per VM | 12 | 12 | 10 | 10
VM NIC Device Naming | No | Yes | N/A | No
Guest OS Application Monitoring | Yes | Yes | No | No
Guest Clustering with Live Migration | Yes | Yes | N/A | No
Guest Clustering with Dynamic Memory | Yes | Yes | N/A | No
Shielded VMs | No | Yes | N/A | No
Summary
In this article, we have covered the Hyper-V architecture along with the most important components in Hyper-V, and also the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware.
Resources for Article:
Further resources on this subject:
- Storage Practices and Migration to Hyper-V 2016 [article]
- Proxmox VE Fundamentals [article]
- Designing and Building a vRealize Automation 6.2 Infrastructure [article]
article-image-game-objective
Packt
04 Jan 2017
5 min read
Save for later

Game objective

Packt
04 Jan 2017
5 min read
In this article by Alan Thorn, author of the book Mastering Unity 5.x, we will see what the game objective is, and look at asset preparation. Every game (except for experimental and experiential games) needs an objective for the player; something they must strive to do, not just within specific levels, but across the game overall. This objective is important not just for the player (to make the game fun), but also for the developer, for deciding how challenge, diversity, and interest can be added to the mix. Before starting development, have a clearly stated and identified objective in mind. Challenges are introduced primarily as obstacles to the objective, and bonuses are 'things' that facilitate the objective; that make it possible and easier to achieve. For Dead Keys, the primary objective is to survive and reach the level end. Zombies threaten that objective by attacking and damaging the player, and bonuses exist along the way to make things more interesting.
I highly recommend using project management and team collaboration tools to chart, document, and time-track tasks within your project. And you can do this for free too. Some online tools for this include Trello (https://trello.com), Bitrix 24 (https://www.bitrix24.com), BaseCamp (https://basecamp.com), FreedCamp (https://freedcamp.com), UnFuddle (https://unfuddle.com), BitBucket (https://bitbucket.org), Microsoft Visual Studio Team Services (https://www.visualstudio.com/en-us/products/visual-studio-team-services-vs.aspx), and Concord Contract Management (http://www.concordnow.com).
Asset preparation
When you've reached a clear decision on the initial concept and design, you're ready to prototype! This means building a Unity project demonstrating the core mechanic and game rules in action, as a playable sample. After this, you typically refine the design more, and repeat prototyping until arriving at an artefact you want to pursue. From here, the art team must produce assets (meshes and textures) based on concept art, the game design, and photographic references. When producing meshes and textures for Unity, some important guidelines should be followed to achieve optimal graphical performance in-game. This is about structuring and building assets in a smart way, so they export cleanly and easily from their originating software, and can then be imported with minimal fuss, performing as best as they can at run-time. Let's see some of these guidelines for meshes and textures.
Meshes - work only with good topology
Good mesh topology consists of all polygons having only three or four sides in the model (not more). Additionally, edge loops should flow in an ordered, regular way along the contours of the model, defining its shape and form.
Clean Topology
Unity automatically converts, on import, any NGons (polygons with more than four sides) into triangles, if the mesh has any. But it's better to build meshes without NGons, as opposed to relying on Unity's automated methods. Not only does this cultivate good habits at the modelling phase, but it avoids any automatic and unpredictable retopology of the mesh, which affects how it's shaded and animated.
Meshes - minimize polygon count
Every polygon in a mesh entails a rendering performance hit insofar as a GPU needs time to process and render each polygon. Consequently, it's sensible to minimize the number of polygons in a mesh, even though modern graphics hardware is adept at working with many polygons.
It's good practice to minimize polygons where possible and to the degree that it doesn't detract from your central artistic vision and style.
High-Poly Meshes! (Try reducing polygons where possible)
There are many techniques available for reducing polygon counts. Most 3D applications (like 3DS Max, Maya, and Blender) offer automated tools that decimate polygons in a mesh while retaining its basic shape and outline. However, these methods frequently make a mess of topology, leaving you with faces and edge loops leading in all directions. Even so, this can still be useful for reducing polygons in static meshes (meshes that never animate), like statues, houses, or chairs. However, it's typically bad for animated meshes, where topology is especially important.
Reducing Mesh Polygons with Automated Methods can produce messy topology!
If you want to know the total vertex and face count of a mesh, you can use your 3D software's statistics. Blender, Maya, 3DS Max, and most 3D software let you see the vertex and face counts of selected meshes directly from the viewport. However, this information should only be considered a rough guide! This is because, after importing a mesh into Unity, the vertex count frequently turns out higher than expected. There are many reasons for this, explained in more depth online here: http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html
In short, use the Unity vertex count as the final word on the actual vertex count of your mesh. To view the vertex count for an imported mesh in Unity, click the right-arrow on the mesh thumbnail in the Project Panel. This shows the internal mesh asset. Select this asset, and then view the vertex count from the Preview pane in the Object Inspector.
Viewing the Vertex and Face Count for meshes in Unity
Summary
In this article, we've learned about what game objectives are and about asset preparation.

article-image-bug-tracking
Packt
04 Jan 2017
11 min read
Save for later

Bug Tracking

Packt
04 Jan 2017
11 min read
In this article by Eduardo Freitas, the author of the book Building Bots with Node.js, we will learn about Internet Relay Chat (IRC). IRC enables us to communicate in real time in the form of text. It runs over the TCP protocol in a client-server model. IRC supports group messaging, in the form of channels, and also supports private messages. (For more resources related to this topic, see here.)
IRC is organized into many networks with different audiences. IRC being client-server based, users need IRC clients to connect to IRC servers. IRC client software comes as packaged software as well as web-based clients, and some browsers also provide IRC clients as add-ons. Users can install a client on their systems and use it to connect to IRC servers or networks. While connecting to these IRC servers, users have to provide a unique nick, or nickname, and either choose an existing channel for communication or start a new channel while connecting. In this article, we are going to develop one such IRC bot for bug tracking purposes. This bug tracking bot will provide information about bugs as well as details about a particular bug. All this will be done seamlessly within IRC channels themselves. It's going to be a one-window operation for a team when it comes to knowing about their bugs or defects. Great!!
IRC Client and server
As mentioned in the introduction, to initiate IRC communication, we need an IRC client and a server or network to which our client will be connected. We will be using the freenode network for our client to connect to. Freenode is the largest free and open source software focused IRC network.
IRC Web-based Client
I will be using the web-based IRC client at https://webchat.freenode.net/. After opening the URL, you will see the following screen. As mentioned earlier, while connecting, we need to provide Nickname: and Channels:. I have provided Nickname: as Madan and Channels: as #BugsChannel. In IRC, channels are always identified with #, so I provided # for my bugs channel. This is the new channel that we will be starting for communication. All the developers or team members can similarly provide their nicknames and this channel name to join for communication. Now let's confirm Humanity: by selecting I'm not a robot and clicking the Connect button. Once connected, you will see the following screen. With this, our IRC client is connected to the freenode network. You can also see the username on the right-hand side as @Madan within this #BugsChannel. Whoever joins this channel using this channel name and network will be shown on the right-hand side. Next, we will ask our bot to join this channel on the same network and will see how it appears within the channel.
IRC bots
An IRC bot is a program which connects to IRC as one of the clients and appears as one of the users in IRC channels. These IRC bots are used to provide IRC services or to host chat-based custom implementations which help teams collaborate efficiently.
Creating our first IRC bot using IRC and NodeJS
Let's start by creating a folder on our local drive from the command prompt, in order to store our bot program.
mkdir ircbot
cd ircbot
Assuming we have Node.js and NPM installed, let's create and initialize our package.json, which will store our bot's dependencies and definitions.
npm init
Once you go through the npm init options (which are very easy to follow), you'll see something similar to this. In your project folder you'll see the result, which is your package.json file.
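For reference, a freshly initialized package.json usually looks something like the following sketch; the exact name, description, and other values depend on the answers you give to npm init, so treat these as placeholders rather than required values:
{
  "name": "ircbot",
  "version": "1.0.0",
  "description": "An IRC bot that tracks bugs for the team",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}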
Let's install the irc package from NPM. This can be located at https://www.npmjs.com/package/irc. In order to install it, run this npm command:
npm install --save irc
You should then see something similar to this. Having done this, the next thing to do is to update your package.json in order to include the "engines" attribute. Open the package.json file with a text editor and update it as follows:
"engines": {
  "node": ">=5.6.0"
}
Your package.json should then look like this. Let's create our app.js file, which will be the entry point to our bot, as mentioned while setting up our node package. Our app.js should look like this:
var irc = require('irc');
var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', {
    autoConnect: false
});
client.connect(5, function(serverReply) {
    console.log("Connected!\n", serverReply);
    client.join('#BugsChannel', function(input) {
        console.log("Joined #BugsChannel");
        client.say('#BugsChannel', "Hi, there. I am an IRC Bot which tracks bugs or defects for your team.\n I can help you using following commands.\n BUGREPORT \n BUG # <BUG. NO>");
    });
});
Now let's run our Node.js program and first see how our console looks. If everything works well, our console should show our bot as connected to the required network and as having joined a channel. The console can be seen as follows.
Now if you look at our channel #BugsChannel in our web client, you should see that our bot has joined and also sent a welcome message. Refer to the following screen. If you look at the preceding screen, our bot program has executed successfully. Our bot BugTrackerIRCBot has joined the channel #BugsChannel and also sent an introduction message to everyone on the channel. If you look at the right side of the screen under usernames, we can see BugTrackerIRCBot below @Madan.
Code understanding of our basic bot
After seeing how our bot looks in the IRC client, let's look at the basic code implementation from app.js. We used the irc library with the following line:
var irc = require('irc');
Using the irc library, we instantiated a client to connect to one of the IRC networks using the following code snippet:
var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', {
    autoConnect: false
});
Here we connected to the network irc.freenode.net and provided a nickname of BugTrackerIRCBot. This name has been given as I would like my bot to track and report bugs in future. Now we ask the client to connect and join a specific channel using the following code snippet:
client.connect(5, function(serverReply) {
    console.log("Connected!\n", serverReply);
    client.join('#BugsChannel', function(input) {
        console.log("Joined #BugsChannel");
        client.say('#BugsChannel', "Hi, there. I am an IRC Bot which tracks bugs or defects for your team.\n I can help you using following commands.\n BUGREPORT \n BUG # <BUG. NO>");
    });
});
In the preceding code snippet, once the client is connected, we get a reply from the server. This reply is shown on the console. Once successfully connected, we ask the bot to join a channel using the following code line:
client.join('#BugsChannel', function(input) {
Remember, #BugsChannel is what we joined from the web client at the start. Now using client.join(), I am asking my bot to join the same channel. Once the bot has joined, it says a welcome message in the same channel using the client.say() function. Hopefully this has given some basic understanding of our bot and its code implementation.
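Before moving on, here is a minimal sketch of how the bot could start reacting to the BUGREPORT and BUG # commands it advertises in its welcome message. This is an illustration rather than the article's final code: the listener wiring uses the irc package's standard message event and client.say(), but the replies are hard-coded placeholders that the enhanced bot would eventually replace with real data from the bug store introduced below.
// Hypothetical command-handling sketch: replies are placeholders until the
// bot is wired up to a real bug store.
client.addListener('message', function (from, to, message) {
    if (to !== '#BugsChannel') {
        return; // only react to messages posted in our bugs channel
    }
    if (message === 'BUGREPORT') {
        // Placeholder: later this will list bugs from the Bugs collection
        client.say(to, from + ": there are no bugs wired up yet.");
    } else if (message.indexOf('BUG #') === 0) {
        var bugNumber = message.replace('BUG #', '').trim();
        // Placeholder: later this will look up the bug and reply with its URL
        client.say(to, from + ": details for bug " + bugNumber + " are not available yet.");
    }
});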
Next, we will enhance our bot so that our teams can have an effective communication experience while chatting.
Enhancing our BugTrackerIRCBot
Having built a very basic IRC bot, let's enhance our BugTrackerIRCBot. As developers, we always would like to know how our programs or systems are functioning. To do this, our testing teams typically carry out testing of a system or a program and log their bugs or defects into bug tracking software or a system. Developers can later take a look at those bugs and address them as part of the development life cycle. During this journey, developers collaborate and communicate over messaging platforms like IRC. We would like to provide a unique experience during their development by leveraging IRC bots. So here is exactly what we are doing: we are creating a channel for communication that all the team members will join, and our bot will also be there. In this channel, bugs will be reported and communicated based on developers' requests. Also, if developers need some additional information about a bug, the chat bot can help them by providing a URL from the bug tracking system. Awesome!!
But before going into details, let me summarize, using the following steps, how we are going to do this:
- Enhance our basic bot program for a more conversational experience
- Set up a bug tracking system, or bug storage, where bugs will be stored and tracked for developers
Here we mentioned a bug storage system. In this article, I would like to explain DocumentDB, which is a NoSQL, JSON-based cloud storage system.
What is DocumentDB?
I have already explained NoSQLs. DocumentDB is one such NoSQL store, where data is stored in JSON documents, and it is offered by the Microsoft Azure platform. Details of DocumentDB can be found at https://azure.microsoft.com/en-in/services/documentdb/.
Setting up a DocumentDB for our BugTrackerIRCBot
Assuming you already have a Microsoft Azure subscription, follow these steps to configure DocumentDB for your bot.
Create account ID for DocumentDB
Let's create a new account called botdb using the following screenshot from the Azure portal. Select the NoSQL API as DocumentDB. Select the appropriate subscription and resources. I am using existing resources for this account, but you can also create a new dedicated resource for it. Once you enter all the required information, hit the Create button at the bottom to create the new account for DocumentDB. The newly created account botdb can be seen as follows.
Create collection and database
Select the botdb account from the account list shown previously. This will show various menu options like Properties, Settings, Collections, and so on. Under this account we need to create a collection to store bugs data. To create a new collection, click on the Add Collection option as shown in the following screenshot. When you click the Add Collection option, the following screen will be shown on the right side. Please enter the details as shown in the following screenshot. In the preceding screen, we are creating a new database along with our new collection Bugs. This new database will be named BugDB. Once this database is created, we can add other bug-related collections to the same database in the future. This can be done later using the Use existing option from the preceding screen. Once you enter all the relevant data, click OK to create the database as well as the collection. Refer to the following screenshot. The COLLECTION ID and DATABASE shown in the preceding screen will be used while enhancing our bot.
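To give a feel for how the bot will eventually read from this store, here is a minimal sketch using the documentdb NPM package (the Azure DocumentDB SDK for Node.js). This is an assumption-laden illustration rather than the article's final code: the endpoint URI and master key are placeholders you would copy from the Keys blade of the botdb account, the collection link is built from the BugDB database and Bugs collection created above, and the 'Open' status value is just an example of what a bug document might contain.
var documentdb = require('documentdb');

// Placeholder endpoint and key: copy the real values from the botdb account in the Azure portal
var dbClient = new documentdb.DocumentClient('https://botdb.documents.azure.com:443/', {
    masterKey: '<primary key from the Azure portal>'
});

// Link to the Bugs collection inside the BugDB database created earlier
var collectionLink = 'dbs/BugDB/colls/Bugs';

// Query bugs by status and print a short summary for each one
dbClient.queryDocuments(collectionLink,
    "SELECT * FROM Bugs b WHERE b.status = 'Open'"
).toArray(function (err, bugs) {
    if (err) {
        console.log('DocumentDB query failed:', err);
        return;
    }
    bugs.forEach(function (bug) {
        console.log(bug.id, bug.title, bug.url);
    });
});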
Create data for our BugTrackerIRCBot
Now we have BugDB with a Bugs collection, which will hold all the data for bugs. Let's add some data to our collection. To add data, let's use the Document Explorer menu option shown in the following screenshot. This will open up a screen showing the list of databases and collections created so far. Select our database as BugDB and the collection as Bugs from the available list. Refer to the following screenshot. To create a JSON document for our Bugs collection, click on the Create option. This will open up a New Document screen to enter JSON-based data. Please enter data as per the following screenshot. We will be storing id, status, title, description, priority, assignedto, and url attributes for our single bug document, which will get stored in the Bugs collection. To save the JSON document in our collection, click the Save button. Refer to the following screenshot. This way, we can create sample records in the Bugs collection, which will later be wired up in the NodeJS program. A sample list of bugs can be seen in the following screenshot.
Summary
Every development team needs bug tracking and reporting tools. There are typical needs of bug reporting and bug assignment, and in the case of critical projects these needs become very critical for project timelines. This article showed us how we can provide a seamless experience to developers while they are communicating with peers within a channel. To summarize so far, we understood how to use DocumentDB from Microsoft Azure. Using DocumentDB, we created a new collection along with a new database to store bugs data. We also added some sample JSON documents to the Bugs collection. In today's world of collaboration, development teams that use such integrations and automations will be efficient and effective while delivering their quality products.
Resources for Article:
Further resources on this subject:
- Talking to Bot using Browser [article]
- Asynchronous Control Flow Patterns with ES2015 and beyond [article]
- Basic Website using Node.js and MySQL database [article]