How-To Tutorials

Offloading work from the UI Thread on Android

Packt
13 Oct 2016
8 min read
In this article by Helder Vasconcelos, the author of Asynchronous Android Programming, Second Edition, we will present the most common asynchronous techniques used on Android to develop an application that, by using the multicore CPUs available on the most recent devices, is able to deliver up-to-date results quickly, respond to user interactions immediately, and produce smooth transitions between the different UI states without draining the device battery. (For more resources related to this topic, see here.)

Several reports have shown that efficient applications that provide a great user experience get better review ratings, achieve higher user engagement, and are able to generate higher revenues.

Why do we need asynchronous programming on Android?

The Android system, by default, executes the UI rendering pipeline, the lifecycle callbacks of the base components (Activity, Fragment, Service, BroadcastReceiver, and so on), and UI interaction processing on a single thread, sometimes known as the UI thread or main thread. The main thread handles its work sequentially, collecting it from a queue of tasks (the message queue) that are queued to be processed by a particular application component. When any unit of work, such as an I/O operation, takes a significant period of time to complete, it blocks and prevents the main thread from handling the next tasks waiting on the main thread queue to be processed.

Besides that, most Android devices refresh the screen 60 times per second, so every 16 milliseconds (1s/60 frames) a UI rendering task has to be processed by the main thread in order to draw and update the device screen. The next figure shows a typical main thread timeline that runs the UI rendering task every 16ms. When a long-lasting operation prevents the main thread from executing frame rendering in time, the current frame drawing is deferred or some frame drawings are missed, generating a UI glitch noticeable to the application user. Typical long-lasting operations include:

- Network data communication (HTTP REST requests, SOAP service access, file upload or backup)
- Reading or writing files on the filesystem (shared preferences files, file cache access)
- Internal database reading or writing
- Camera, image, video, or binary file processing

A user interface glitch produced by dropped frames is known on Android as jank. The Android SDK command systrace (https://developer.android.com/studio/profile/systrace.html) comes with the ability to measure the performance of UI rendering and then diagnose and identify problems that may arise from the various threads running in the application process.

In the next image we illustrate a typical main thread timeline when a blocking operation dominates the main thread for more than two frame-rendering windows. As you can see, two frames are dropped and one UI rendering frame was deferred because our blocking operation took approximately 35ms to finish. When a long-running operation on the main thread does not complete within 5 seconds, the Android system displays an "Application Not Responding" (ANR) dialog to the user, giving them the option to close the application.

Hence, in order to execute compute-intensive or blocking I/O operations without dropping a UI frame, generating UI glitches, or degrading the application responsiveness, we have to offload the task execution to a background thread, with lower priority, that runs concurrently and asynchronously in an independent line of execution, as shown in the following figure.
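As a minimal illustration of this idea (this sketch is not from the book), the following Java snippet offloads a blocking call to a background thread and posts the result back to the main thread through a Handler bound to the main Looper. The loadDataFromNetwork() helper and the resultView field are hypothetical placeholders:

// Run the blocking work off the UI thread and deliver the result back to it.
final Handler mainHandler = new Handler(Looper.getMainLooper());

new Thread(new Runnable() {
    @Override
    public void run() {
        // Blocking work happens here, away from the UI thread
        final String result = loadDataFromNetwork();
        mainHandler.post(new Runnable() {
            @Override
            public void run() {
                // Back on the UI thread: safe to touch views
                resultView.setText(result);
            }
        });
    }
}).start();

Higher-level constructs such as the ones described below remove most of this boilerplate.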
Although the use of asynchronous and multithreaded techniques always introduces complexity to an application, the Android SDK and some well-known open source libraries provide high-level asynchronous constructs that allow us to perform reliable asynchronous work and relieve the main thread from the heavy lifting. Each construct has advantages and disadvantages, and choosing the right one for your requirements can make your code more reliable, simpler, easier to maintain, and less error prone. Let's enumerate the most common techniques that are covered in detail in the Asynchronous Android Programming book.

AsyncTask

AsyncTask is a simple construct available on the Android platform since Android Cupcake (API level 3) and is the most widely used asynchronous construct. It was designed to run short background operations that update the UI once they finish. The AsyncTask construct performs the time-consuming operation, defined in the doInBackground function, on a global static thread pool of background threads. Once doInBackground terminates successfully, the AsyncTask construct switches the processing back to the main thread (onPostExecute), delivering the operation result for further processing. If it is not used properly, this technique can lead to memory leaks or inconsistent results.

public class DownloadImageTask extends AsyncTask<URL, Integer, Bitmap> {
    protected Bitmap doInBackground(URL... urls) {}
    protected void onProgressUpdate(Integer... progress) {}
    protected void onPostExecute(Bitmap image) {}
}

HandlerThread

The HandlerThread is a Thread that incorporates a message queue and an Android Looper that runs continuously, waiting for incoming operations to execute. To submit new work to the thread, we have to instantiate a Handler that is attached to the HandlerThread's Looper.

HandlerThread handlerThread = new HandlerThread("MyHandlerThread");
handlerThread.start();
Looper looper = handlerThread.getLooper();
Handler handler = new Handler(looper);
handler.post(new Runnable() {
    @Override
    public void run() {}
});

The Handler interface allows us to submit a Message or a Runnable subclass object that can aggregate data and a chunk of work to be executed.

Loader

The Loader construct allows us to run asynchronous operations that load content from a content provider or a data source, such as an internal database or an HTTP service. The API can load data asynchronously, detect data changes, cache data, and is aware of the Fragment and Activity lifecycle. The Loader API was introduced to the Android platform at API level 11, but it is available for backwards compatibility through the Android Support libraries.

public static class TextLoader extends AsyncTaskLoader<String> {
    @Override
    public String loadInBackground() {
        // Background work
    }
    @Override
    public void deliverResult(String text) {}
    @Override
    protected void onStartLoading() {}
    @Override
    protected void onStopLoading() {}
    @Override
    public void onCanceled(String text) {}
    @Override
    protected void onReset() {}
}

IntentService

The IntentService class is a specialized subclass of Service that implements a background work queue using a single HandlerThread. When work is submitted to an IntentService, it is queued for processing by a HandlerThread and processed in order of submission.
public class BackupService extends IntentService {
    @Override
    protected void onHandleIntent(Intent workIntent) {
        // Background Work
    }
}

JobScheduler

The JobScheduler API allows us to execute jobs in the background when several prerequisites are fulfilled, taking into account the energy and network context of the device. This technique allows us to defer and batch job executions until the device is charging or an unmetered network is available.

JobScheduler scheduler = (JobScheduler) getSystemService(Context.JOB_SCHEDULER_SERVICE);
JobInfo.Builder builder = new JobInfo.Builder(JOB_ID, serviceComponent);
builder.setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED);
builder.setRequiresCharging(true);
scheduler.schedule(builder.build());

RxJava

RxJava is an implementation of the Reactive Extensions (ReactiveX) on the JVM, developed by Netflix, that is used to compose asynchronous event processing that reacts to an observable source of events. The framework extends the Observer pattern by allowing us to create a stream of events that can be intercepted by operators (input/output) which modify the original stream of events and deliver the result, or an error, to a final Observer. The RxJava Schedulers allow us to control the thread on which our Observable begins operating and the thread on which events are delivered to the final Observer or Subscriber.

Subscription subscription = getPostsFromNetwork()
    .map(new Func1<Post, Post>() {
        @Override
        public Post call(Post post) {
            ...
            return result;
        }
    })
    .subscribeOn(Schedulers.io())
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe(new Observer<Post>() {
        @Override
        public void onCompleted() {}
        @Override
        public void onError(Throwable e) {}
        @Override
        public void onNext(Post post) {
            // Process the result
        }
    });

Summary

As you have seen, there are several high-level asynchronous constructs available to offload long-running computations or blocking tasks from the UI thread, and it is up to the developer to choose the right construct for each situation, because there is no ideal choice that can be applied everywhere. Asynchronous, multithreaded programming that produces reliable results is a difficult and error-prone task, so using a high-level technique tends to simplify the application source code and the multithreading logic required to scale the application over the available CPU cores. Remember that keeping your application responsive and smooth is essential to delight your users and increase your chances of creating a successful mobile application. The techniques and asynchronous constructs summarized in the previous paragraphs are covered in detail in the Asynchronous Android Programming book published by Packt Publishing.

Resources for Article:

Further resources on this subject:
- Getting started with Android Development [article]
- Practical How-To Recipes for Android [article]
- Speeding up Gradle builds for Android [article]

Reconstructing 3D Scenes

Packt
13 Oct 2016
25 min read
In this article by Robert Laganiere, the author of the book OpenCV 3 Computer Vision Application Programming Cookbook, Third Edition, we cover the following recipes:

- Calibrating a camera
- Recovering camera pose

(For more resources related to this topic, see here.)

Digital image formation

Let us now redraw a new version of the figure describing the pin-hole camera model. More specifically, we want to demonstrate the relation between a point in 3D at position (X,Y,Z) and its image (x,y) on a camera, specified in pixel coordinates. Note the changes that have been made to the original figure. First, a reference frame was positioned at the center of the projection; then, the Y-axis was aligned to point downwards to get a coordinate system that is compatible with the usual convention, which places the image origin at the upper-left corner of the image. Finally, we have identified a special point on the image plane, by considering the line coming from the focal point that is orthogonal to the image plane. The point (u0,v0) is the pixel position at which this line pierces the image plane and is called the principal point. It would be logical to assume that this principal point is at the center of the image plane, but in practice, this one might be off by a few pixels depending on the precision of the camera.

Since we are dealing with digital images, the number of pixels on the image plane (its resolution) is another important characteristic of a camera. We learned previously that a 3D point (X,Y,Z) will be projected onto the image plane at (fX/Z,fY/Z). Now, if we want to translate this coordinate into pixels, we need to divide the 2D image position by the pixel width (px) and then the height (py). Note that by dividing the focal length given in world units (generally given in millimeters) by px, we obtain the focal length expressed in (horizontal) pixels. We will then define this term as fx. Similarly, fy=f/py is defined as the focal length expressed in vertical pixel units. The complete projective equation is therefore as shown:

x = fx*X/Z + u0
y = fy*Y/Z + v0

We know that (u0,v0) is the principal point that is added to the result in order to move the origin to the upper-left corner of the image. Also, the physical size of a pixel can be obtained by dividing the size of the image sensor (generally in millimeters) by the number of pixels (horizontally or vertically). In modern sensors, pixels are generally square, that is, they have the same horizontal and vertical size. The preceding equations can be rewritten in matrix form. Here is the complete projective equation in its most general form:

s * [x, y, 1]^T = [fx 0 u0; 0 fy v0; 0 0 1] * [r1 r2 r3 t1; r4 r5 r6 t2; r7 r8 r9 t3] * [X, Y, Z, 1]^T

Calibrating a camera

Camera calibration is the process by which the different camera parameters (that is, the ones appearing in the projective equation) are obtained. One can obviously use the specifications provided by the camera manufacturer, but for some tasks, such as 3D reconstruction, these specifications are not accurate enough. However, accurate calibration information can be obtained by undertaking an appropriate camera calibration step. An active camera calibration procedure proceeds by showing known patterns to the camera and analyzing the obtained images. An optimization process then determines the optimal parameter values that explain the observations. This is a complex process that has been made easy by the availability of OpenCV calibration functions.

How to do it...

To calibrate a camera, the idea is to show it a set of scene points for which the 3D positions are known.
Then, you need to observe where these points project on the image. With the knowledge of a sufficient number of 3D points and associated 2D image points, the exact camera parameters can be inferred from the projective equation. Obviously, for accurate results, we need to observe as many points as possible. One way to achieve this would be to take a picture of a scene with known 3D points, but in practice, this is rarely feasible. A more convenient way is to take several images of a set of 3D points from different viewpoints. This approach is simpler, but it requires you to compute the position of each camera view in addition to the computation of the internal camera parameters, which is fortunately feasible.

OpenCV proposes that you use a chessboard pattern to generate the set of 3D scene points required for calibration. This pattern creates points at the corners of each square, and since this pattern is flat, we can freely assume that the board is located at Z=0, with the X and Y axes well aligned with the grid. In this case, the calibration process simply consists of showing the chessboard pattern to the camera from different viewpoints. The following is an example of a calibration pattern image made of 7x5 inner corners as captured during the calibration step.

The good thing is that OpenCV has a function that automatically detects the corners of this chessboard pattern. You simply provide an image and the size of the chessboard used (the number of horizontal and vertical inner corner points). The function will return the position of these chessboard corners on the image. If the function fails to find the pattern, then it simply returns false, as shown:

// output vectors of image points
std::vector<cv::Point2f> imageCorners;
// number of inner corners on the chessboard
cv::Size boardSize(7,5);
// Get the chessboard corners
bool found = cv::findChessboardCorners(image,         // image of chessboard pattern
                                       boardSize,     // size of pattern
                                       imageCorners); // list of detected corners

The output parameter, imageCorners, will simply contain the pixel coordinates of the detected inner corners of the shown pattern. Note that this function accepts additional parameters if you need to tune the algorithm, which are not discussed here. There is also a special function that draws the detected corners on the chessboard image, with lines connecting them in a sequence:

// Draw the corners
cv::drawChessboardCorners(image, boardSize, imageCorners,
                          found); // corners have been found

The following image is obtained. The lines that connect the points show the order in which the points are listed in the vector of detected image points.

To perform a calibration, we now need to specify the corresponding 3D points. You can specify these points in the units of your choice (for example, in centimeters or in inches); however, the simplest is to assume that each square represents one unit. In that case, the coordinates of the first point would be (0,0,0) (assuming that the board is located at a depth of Z=0), the coordinates of the second point would be (1,0,0), and so on, the last point being located at (6,4,0). There are a total of 35 points in this pattern, which is too few to obtain an accurate calibration. To get more points, you need to show more images of the same calibration pattern from various points of view. To do so, you can either move the pattern in front of the camera or move the camera around the board; from a mathematical point of view, this is completely equivalent.
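Putting the detection and drawing calls shown above together, a minimal, self-contained sketch could look as follows. This is not code from the book: the filename, the grayscale-to-color conversion, and the display code are illustrative assumptions.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Load one calibration image (the filename is a placeholder)
    cv::Mat image = cv::imread("chessboard01.jpg", cv::IMREAD_GRAYSCALE);

    cv::Size boardSize(7, 5);               // number of inner corners
    std::vector<cv::Point2f> imageCorners;  // detected corner positions

    bool found = cv::findChessboardCorners(image, boardSize, imageCorners);

    // Draw the detected corners on a color copy of the image
    cv::Mat display;
    cv::cvtColor(image, display, cv::COLOR_GRAY2BGR);
    cv::drawChessboardCorners(display, boardSize, imageCorners, found);

    cv::imshow("Detected corners", display);
    cv::waitKey(0);
    return 0;
}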
The OpenCV calibration function assumes that the reference frame is fixed on the calibration pattern and will calculate the rotation and translation of the camera with respect to the reference frame. Let's now encapsulate the calibration process in a CameraCalibrator class. The attributes of this class are as follows:

// input points:
// the points in world coordinates
// (each square is one unit)
std::vector<std::vector<cv::Point3f>> objectPoints;
// the image point positions in pixels
std::vector<std::vector<cv::Point2f>> imagePoints;
// output Matrices
cv::Mat cameraMatrix;
cv::Mat distCoeffs;
// flag to specify how calibration is done
int flag;

Note that the input vectors of the scene and image points are in fact made of std::vector of point instances; each vector element is a vector of the points from one view. Here, we decided to add the calibration points by specifying a vector of the chessboard image filenames as input; the method will take care of extracting the point coordinates from the images:

// Open chessboard images and extract corner points
int CameraCalibrator::addChessboardPoints(
    const std::vector<std::string>& filelist, // list of filenames
    cv::Size & boardSize) {                   // calibration board size

  // the points on the chessboard
  std::vector<cv::Point2f> imageCorners;
  std::vector<cv::Point3f> objectCorners;

  // 3D Scene Points:
  // Initialize the chessboard corners
  // in the chessboard reference frame
  // The corners are at 3D location (X,Y,Z) = (i,j,0)
  for (int i=0; i<boardSize.height; i++) {
    for (int j=0; j<boardSize.width; j++) {
      objectCorners.push_back(cv::Point3f(i, j, 0.0f));
    }
  }

  // 2D Image points:
  cv::Mat image; // to contain chessboard image
  int successes = 0;
  // for all viewpoints
  for (int i=0; i<filelist.size(); i++) {
    // Open the image
    image = cv::imread(filelist[i],0);
    // Get the chessboard corners
    bool found = cv::findChessboardCorners(image,         // image of chessboard pattern
                                           boardSize,     // size of pattern
                                           imageCorners); // list of detected corners
    // Get subpixel accuracy on the corners
    if (found) {
      cv::cornerSubPix(image, imageCorners,
        cv::Size(5, 5),   // half size of search window
        cv::Size(-1, -1),
        cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
          30,    // max number of iterations
          0.1)); // min accuracy
      // If we have a good board, add it to our data
      if (imageCorners.size() == boardSize.area()) {
        // Add image and scene points from one view
        addPoints(imageCorners, objectCorners);
        successes++;
      }
    }
  }
  return successes;
}

The first loop inputs the 3D coordinates of the chessboard, and the corresponding image points are the ones provided by the cv::findChessboardCorners function; this is done for all the available viewpoints. Moreover, in order to obtain a more accurate image point location, the cv::cornerSubPix function can be used, and as the name suggests, the image points will then be localized at subpixel accuracy. The termination criterion that is specified by the cv::TermCriteria object defines the maximum number of iterations and the minimum accuracy in subpixel coordinates. The first of these two conditions that is reached will stop the corner refinement process. When a set of chessboard corners has been successfully detected, these points are added to the vectors of the image and scene points using our addPoints method.
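The addPoints helper itself is not listed in this excerpt; based on the class attributes shown above, a plausible minimal implementation (an assumption, not necessarily the book's exact code) would simply be:

// Add scene points and corresponding image points for one view
void CameraCalibrator::addPoints(const std::vector<cv::Point2f>& imageCorners,
                                 const std::vector<cv::Point3f>& objectCorners) {
  // 2D image points from one view
  imagePoints.push_back(imageCorners);
  // corresponding 3D scene points
  objectPoints.push_back(objectCorners);
}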
Once a sufficient number of chessboard images have been processed (and consequently, a large number of 3D scene point / 2D image point correspondences are available), we can initiate the computation of the calibration parameters as shown:

// Calibrate the camera
// returns the re-projection error
double CameraCalibrator::calibrate(cv::Size &imageSize) {
  // Output rotations and translations
  std::vector<cv::Mat> rvecs, tvecs;
  // start calibration
  return calibrateCamera(objectPoints, // the 3D points
                         imagePoints,  // the image points
                         imageSize,    // image size
                         cameraMatrix, // output camera matrix
                         distCoeffs,   // output distortion matrix
                         rvecs, tvecs, // Rs, Ts
                         flag);        // set options
}

In practice, 10 to 20 chessboard images are sufficient, but these must be taken from different viewpoints at different depths. The two important outputs of this function are the camera matrix and the distortion parameters. These will be described in the next section.

How it works...

In order to explain the result of the calibration, we need to go back to the projective equation presented in the introduction of this article. This equation describes the transformation of a 3D point into a 2D point through the successive application of two matrices. The first matrix includes all of the camera parameters, which are called the intrinsic parameters of the camera. This 3x3 matrix is one of the output matrices returned by the cv::calibrateCamera function. There is also a function called cv::calibrationMatrixValues that explicitly returns the value of the intrinsic parameters given by a calibration matrix.

The second matrix is there to have the input points expressed in camera-centric coordinates. It is composed of a rotation component (a 3x3 matrix) and a translation component (a 3x1 vector). Remember that in our calibration example, the reference frame was placed on the chessboard. Therefore, there is a rigid transformation (made of a rotation component represented by the matrix entries r1 to r9 and a translation represented by t1, t2, and t3) that must be computed for each view. These are in the output parameter list of the cv::calibrateCamera function. The rotation and translation components are often called the extrinsic parameters of the calibration, and they are different for each view. The intrinsic parameters remain constant for a given camera/lens system.

The calibration results provided by cv::calibrateCamera are obtained through an optimization process. This process aims to find the intrinsic and extrinsic parameters that minimize the difference between the predicted image point positions, as computed from the projection of the 3D scene points, and the actual image point positions, as observed on the image. The sum of this difference for all the points specified during the calibration is called the re-projection error.

The intrinsic parameters of our test camera, obtained from a calibration based on 27 chessboard images, are fx=409 pixels, fy=408 pixels, u0=237, and v0=171. Our calibration images have a size of 536x356 pixels. From the calibration results, you can see that, as expected, the principal point is close to the center of the image, but yet off by a few pixels. The calibration images were taken using a Nikon D500 camera with an 18mm lens. Looking at the manufacturer's specifications, we find that the sensor size of this camera is 23.5mm x 15.7mm, which gives us a pixel size of 0.0438mm.
The estimated focal length is expressed in pixels, so multiplying the result by the pixel size gives us an estimated focal length of 17.8mm, which is consistent with the actual lens we used.

Let us now turn our attention to the distortion parameters. So far, we have mentioned that under the pin-hole camera model, we can neglect the effect of the lens. However, this is only possible if the lens that is used to capture an image does not introduce important optical distortions. Unfortunately, this is not the case with lower quality lenses or with lenses that have a very short focal length. Even the lens we used in this experiment introduced some distortion, that is, the edges of the rectangular board are curved in the image. Note that this distortion becomes more important as we move away from the center of the image. This is a typical distortion observed with a fish-eye lens and is called radial distortion.

It is possible to compensate for these deformations by introducing an appropriate distortion model. The idea is to represent the distortions induced by a lens by a set of mathematical equations. Once established, these equations can then be reverted in order to undo the distortions visible on the image. Fortunately, the exact parameters of the transformation that will correct the distortions can be obtained together with the other camera parameters during the calibration phase. Once this is done, any image from the newly calibrated camera can be undistorted. Therefore, we have added an additional method to our calibration class:

// remove distortion in an image (after calibration)
cv::Mat CameraCalibrator::remap(const cv::Mat &image) {
  cv::Mat undistorted;
  if (mustInitUndistort) { // called once per calibration
    cv::initUndistortRectifyMap(cameraMatrix, // computed camera matrix
                                distCoeffs,   // computed distortion matrix
                                cv::Mat(),    // optional rectification (none)
                                cv::Mat(),    // camera matrix to generate undistorted
                                image.size(), // size of undistorted
                                CV_32FC1,     // type of output map
                                map1, map2);  // the x and y mapping functions
    mustInitUndistort = false;
  }
  // Apply mapping functions
  cv::remap(image, undistorted, map1, map2,
            cv::INTER_LINEAR); // interpolation type
  return undistorted;
}

Running this code on one of our calibration images results in the following undistorted image. To correct the distortion, OpenCV uses a polynomial function that is applied to the image points in order to move them to their undistorted positions. By default, five coefficients are used; a model made of eight coefficients is also available. Once these coefficients are obtained, it is possible to compute two cv::Mat mapping functions (one for the x coordinate and one for the y coordinate) that will give the new undistorted position of an image point on a distorted image. This is computed by the cv::initUndistortRectifyMap function, and the cv::remap function remaps all the points of an input image to a new image. Note that because of the nonlinear transformation, some pixels of the input image now fall outside the boundary of the output image. You can expand the size of the output image to compensate for this loss of pixels, but you will then obtain output pixels that have no values in the input image (they will then be displayed as black pixels).

There's more...

More options are available when it comes to camera calibration.

Calibration with known intrinsic parameters

When a good estimate of the camera's intrinsic parameters is known, it could be advantageous to input them in the cv::calibrateCamera function.
They will then be used as initial values in the optimization process. To do so, you just need to add the cv::CALIB_USE_INTRINSIC_GUESS flag and input these values in the calibration matrix parameter. It is also possible to impose a fixed value for the principal point (cv::CALIB_FIX_PRINCIPAL_POINT), which can often be assumed to be the central pixel. You can also impose a fixed ratio for the focal lengths fx and fy (cv::CALIB_FIX_ASPECT_RATIO); in this case, you assume that the pixels have a square shape.

Using a grid of circles for calibration

Instead of the usual chessboard pattern, OpenCV also offers the possibility to calibrate a camera by using a grid of circles. In this case, the centers of the circles are used as calibration points. The corresponding function is very similar to the function we used to locate the chessboard corners, for example:

cv::Size boardSize(7,7);
std::vector<cv::Point2f> centers;
bool found = cv::findCirclesGrid(image, boardSize, centers);

See also

The article A flexible new technique for camera calibration by Z. Zhang, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000, is a classic paper on the problem of camera calibration.

Recovering camera pose

When a camera is calibrated, it becomes possible to relate the captured image to the outside world. If the 3D structure of an object is known, then one can predict how the object will be imaged on the sensor of the camera. The process of image formation is indeed completely described by the projective equation that was presented at the beginning of this article. When most of the terms of this equation are known, it becomes possible to infer the value of the other elements (2D or 3D) through the observation of some images. In this recipe, we will look at the camera pose recovery problem when a known 3D structure is observed.

How to do it...

Let's consider a simple object here, a bench in a park. We took an image of it using the camera/lens system calibrated in the previous recipe. We have manually identified 8 distinct image points on the bench that we will use for our camera pose estimation. Having access to this object makes it possible to make some physical measurements. The bench is composed of a seat of size 242.5cm x 53.5cm x 9cm and a back of size 242.5cm x 24cm x 9cm that is fixed 12cm over the seat. Using this information, we can then easily derive the 3D coordinates of the eight identified points in an object-centric reference frame (here we fixed the origin at the left extremity of the intersection between the two planes). We can then create a vector of cv::Point3f containing these coordinates:

// Input object points
std::vector<cv::Point3f> objectPoints;
objectPoints.push_back(cv::Point3f(0, 45, 0));
objectPoints.push_back(cv::Point3f(242.5, 45, 0));
objectPoints.push_back(cv::Point3f(242.5, 21, 0));
objectPoints.push_back(cv::Point3f(0, 21, 0));
objectPoints.push_back(cv::Point3f(0, 9, -9));
objectPoints.push_back(cv::Point3f(242.5, 9, -9));
objectPoints.push_back(cv::Point3f(242.5, 9, 44.5));
objectPoints.push_back(cv::Point3f(0, 9, 44.5));

The question now is where the camera was with respect to these points when the shown picture was taken. Since the coordinates of the image of these known points on the 2D image plane are also known, it becomes easy to answer this question using the cv::solvePnP function.
Here, the correspondence between the 3D and the 2D points has been established manually, but as a reader of this book, you should be able to come up with methods that would allow you to obtain this information automatically:

// Input image points
std::vector<cv::Point2f> imagePoints;
imagePoints.push_back(cv::Point2f(136, 113));
imagePoints.push_back(cv::Point2f(379, 114));
imagePoints.push_back(cv::Point2f(379, 150));
imagePoints.push_back(cv::Point2f(138, 135));
imagePoints.push_back(cv::Point2f(143, 146));
imagePoints.push_back(cv::Point2f(381, 166));
imagePoints.push_back(cv::Point2f(345, 194));
imagePoints.push_back(cv::Point2f(103, 161));

// Get the camera pose from 3D/2D points
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints,      // corresponding 3D/2D pts
             cameraMatrix, cameraDistCoeffs, // calibration
             rvec, tvec);                    // output pose

// Convert to 3D rotation matrix
cv::Mat rotation;
cv::Rodrigues(rvec, rotation);

This function computes the rigid transformation (rotation and translation) that brings the object coordinates into the camera-centric reference frame (that is, the one that has its origin at the focal point). It is also important to note that the rotation computed by this function is given in the form of a 3D vector. This is a compact representation in which the rotation to apply is described by a unit vector (an axis of rotation) around which the object is rotated by a certain angle. This axis-angle representation is also called the Rodrigues' rotation formula. In OpenCV, the angle of rotation corresponds to the norm of the output rotation vector, while the axis of rotation is given by the direction of this vector. This is why the cv::Rodrigues function is used to obtain the 3D matrix of rotation that appears in our projective equation.

The pose recovery procedure described here is simple, but how do we know we obtained the right camera/object pose information? We can visually assess the quality of the results using the cv::viz module, which gives us the ability to visualize 3D information. The use of this module is explained in the last section of this recipe. For now, let's display a simple 3D representation of our object and the camera that captured it. It might be difficult to judge the quality of the pose recovery just by looking at this image, but if you test the example of this recipe on your computer, you will have the possibility to move this representation in 3D using your mouse, which should give you a better sense of the solution obtained.

How it works...

In this recipe, we assumed that the 3D structure of the object was known, as well as the correspondence between sets of object points and image points. The camera's intrinsic parameters were also known through calibration. If you look at our projective equation presented at the end of the Digital image formation section of the introduction of this article, this means that we have points for which the coordinates (X,Y,Z) and (x,y) are known. We also know the elements of the first matrix (the intrinsic parameters). Only the second matrix is unknown; this is the one that contains the extrinsic parameters of the camera, that is, the camera/object pose information. Our objective is to recover these unknown parameters from the observation of 3D scene points. This problem is known as the Perspective-n-Point problem, or PnP problem. Rotation has three degrees of freedom (for example, the angle of rotation around the three axes) and translation also has three degrees of freedom. We therefore have a total of 6 unknowns.
For each object point/image point correspondence, the projective equation gives us three algebraic equations, but since the projective equation is defined up to a scale factor, we only get 2 independent equations. A minimum of three points is therefore required to solve this system of equations. Obviously, more points provide a more reliable estimate.

In practice, many different algorithms have been proposed to solve this problem, and OpenCV proposes a number of different implementations in its cv::solvePnP function. The default method consists of optimizing what is called the reprojection error. Minimizing this type of error is considered to be the best strategy to get accurate 3D information from camera images. In our problem, it corresponds to finding the optimal camera position that minimizes the 2D distance between the projected 3D points (as obtained by applying the projective equation) and the observed image points given as input.

Note that OpenCV also has a cv::solvePnPRansac function. As the name suggests, this function uses the RANSAC algorithm in order to solve the PnP problem. This means that some of the object point/image point correspondences may be wrong, and the function will return which ones have been identified as outliers. This is very useful when these correspondences have been obtained through an automatic process that can fail for some points.

There's more...

When working with 3D information, it is often difficult to validate the solutions obtained. To this end, OpenCV offers a simple yet powerful visualization module that facilitates the development and debugging of 3D vision algorithms. It allows inserting points, lines, cameras, and other objects in a virtual 3D environment that you can interactively visualize from various points of view.

cv::Viz, a 3D Visualizer module

cv::viz is an extra module of the OpenCV library that is built on top of the open source VTK library. This Visualization Toolkit (VTK) is a powerful framework used for 3D computer graphics. With cv::viz, you create a 3D virtual environment to which you can add a variety of objects. A visualization window is created that displays the environment from a given point of view. You saw in this recipe an example of what can be displayed in a cv::viz window. This window responds to mouse events that are used to navigate inside the environment (through rotations and translations). This section describes the basic use of the cv::viz module.

The first thing to do is to create the visualization window. Here we use a white background:

// Create a viz window
cv::viz::Viz3d visualizer("Viz window");
visualizer.setBackgroundColor(cv::viz::Color::white());

Next, you create your virtual objects and insert them into the scene. There is a variety of predefined objects. One of them is particularly useful for us; it is the one that creates a virtual pin-hole camera:

// Create a virtual camera
cv::viz::WCameraPosition cam(cMatrix,  // matrix of intrinsics
                             image,    // image displayed on the plane
                             30.0,     // scale factor
                             cv::viz::Color::black());
// Add the virtual camera to the environment
visualizer.showWidget("Camera", cam);

The cMatrix variable is a cv::Matx33d (that is, a cv::Matx<double,3,3>) instance containing the intrinsic camera parameters as obtained from calibration. By default, this camera is inserted at the origin of the coordinate system. To represent the bench, we used two rectangular cuboid objects.
// Create a virtual bench from cuboids
cv::viz::WCube plane1(cv::Point3f(0.0, 45.0, 0.0),
                      cv::Point3f(242.5, 21.0, -9.0),
                      true,  // show wire frame
                      cv::viz::Color::blue());
plane1.setRenderingProperty(cv::viz::LINE_WIDTH, 4.0);
cv::viz::WCube plane2(cv::Point3f(0.0, 9.0, -9.0),
                      cv::Point3f(242.5, 0.0, 44.5),
                      true,  // show wire frame
                      cv::viz::Color::blue());
plane2.setRenderingProperty(cv::viz::LINE_WIDTH, 4.0);
// Add the virtual objects to the environment
visualizer.showWidget("top", plane1);
visualizer.showWidget("bottom", plane2);

This virtual bench is also added at the origin; it then needs to be moved to its camera-centric position as found by our cv::solvePnP function. It is the responsibility of the setWidgetPose method to perform this operation. This method simply applies the rotation and translation components of the estimated motion:

cv::Mat rotation;
// convert vector-3 rotation
// to a 3x3 rotation matrix
cv::Rodrigues(rvec, rotation);
// Move the bench
cv::Affine3d pose(rotation, tvec);
visualizer.setWidgetPose("top", pose);
visualizer.setWidgetPose("bottom", pose);

The final step is to create a loop that keeps displaying the visualization window. The 1ms pause is there to listen to mouse events:

// visualization loop
while (cv::waitKey(100) == -1 && !visualizer.wasStopped()) {
  visualizer.spinOnce(1,     // pause 1ms
                      true); // redraw
}

This loop will stop when the visualization window is closed or when a key is pressed over an OpenCV image window. Try to apply some motion on an object inside this loop (using setWidgetPose); this is how animation can be created.

See also

Model-based object pose in 25 lines of code by D. DeMenthon and L. S. Davis, in European Conference on Computer Vision, 1992, pp. 335-343, is a famous method for recovering camera pose from scene points.

Summary

This article taught us how, under specific conditions, the 3D structure of the scene and the 3D pose of the cameras that captured it can be recovered. We have seen how a good understanding of projective geometry concepts allows us to devise methods enabling 3D reconstruction.

Resources for Article:

Further resources on this subject:
- OpenCV: Image Processing using Morphological Filters [article]
- Learn computer vision applications in Open CV [article]
- Cardboard is Virtual Reality for Everyone [article]

Server-side Swift: Building a Slack Bot, Part 2

Peter Zignego
13 Oct 2016
5 min read
In Part 1 of this series, I introduced you to SlackKit and Zewo, which allow us to build and deploy a Slack bot written in Swift to a Linux server. Here in Part 2, we will finish the app, showing all of the Swift code. We will also show how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Show Me the Swift Code!

Finally, some Swift code! To create our bot, we need to edit our main.swift file to contain our bot logic:

import String
import SlackKit

class Leaderboard: MessageEventsDelegate {
    // A dictionary to hold our leaderboard
    var leaderboard: [String: Int] = [String: Int]()
    let atSet = CharacterSet(characters: ["@"])
    // A SlackKit client instance
    let client: SlackClient

    // Initialize the leaderboard with a valid Slack API token
    init(token: String) {
        client = SlackClient(apiToken: token)
        client.messageEventsDelegate = self
    }

    // Enum to hold commands the bot knows
    enum Command: String {
        case Leaderboard = "leaderboard"
    }

    // Enum to hold logic that triggers certain bot behaviors
    enum Trigger: String {
        case PlusPlus = "++"
        case MinusMinus = "--"
    }

    // MARK: MessageEventsDelegate
    // Listen to the messages that are coming in over the Slack RTM connection
    func messageReceived(message: Message) {
        listen(message: message)
    }

    func messageSent(message: Message) {}
    func messageChanged(message: Message) {}
    func messageDeleted(message: Message?) {}

    // MARK: Leaderboard Internal Logic
    private func listen(message: Message) {
        // If a message contains our bot's user ID and a recognized command, handle that command
        if let id = client.authenticatedUser?.id, text = message.text {
            if text.lowercased().contains(query: Command.Leaderboard.rawValue) && text.contains(query: id) {
                handleCommand(command: .Leaderboard, channel: message.channel)
            }
        }
        // If a message contains a trigger value, handle that trigger
        if message.text?.contains(query: Trigger.PlusPlus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .PlusPlus)
        }
        if message.text?.contains(query: Trigger.MinusMinus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .MinusMinus)
        }
    }

    // Text parsing can be messy when you don't have Foundation...
    private func handleMessageWithTrigger(message: Message, trigger: Trigger) {
        if let text = message.text, start = text.index(of: "@"), end = text.index(of: trigger.rawValue) {
            let string = String(text.characters[start...end].dropLast().dropFirst())
            let users = client.users.values.filter { $0.id == self.userID(string: string) }
            // If the receiver of the trigger is a user, use their user ID
            if users.count > 0 {
                let idString = userID(string: string)
                initalizationForValue(dictionary: &leaderboard, value: idString)
                scoringForValue(dictionary: &leaderboard, value: idString, trigger: trigger)
            // Otherwise just store the receiver value as is
            } else {
                initalizationForValue(dictionary: &leaderboard, value: string)
                scoringForValue(dictionary: &leaderboard, value: string, trigger: trigger)
            }
        }
    }

    // Handle recognized commands
    private func handleCommand(command: Command, channel: String?) {
        switch command {
        case .Leaderboard:
            // Send message to the channel with the leaderboard attached
            if let id = channel {
                client.webAPI.sendMessage(channel: id,
                                          text: "Leaderboard",
                                          linkNames: true,
                                          attachments: [constructLeaderboardAttachment()],
                                          success: { (response) in },
                                          failure: { (error) in
                                              print("Leaderboard failed to post due to error: \(error)")
                                          })
            }
        }
    }

    private func initalizationForValue(dictionary: inout [String: Int], value: String) {
        if dictionary[value] == nil {
            dictionary[value] = 0
        }
    }

    private func scoringForValue(dictionary: inout [String: Int], value: String, trigger: Trigger) {
        switch trigger {
        case .PlusPlus:
            dictionary[value]? += 1
        case .MinusMinus:
            dictionary[value]? -= 1
        }
    }

    // MARK: Leaderboard Interface
    private func constructLeaderboardAttachment() -> Attachment? {
        let

Great! But we'll need to replace the dummy API token with the real deal before anything will work.

Getting an API Token

We need to create a bot integration in Slack. You'll need a Slack instance that you have administrator access to. If you don't already have one of those to play with, go sign up. Slack is free for small teams:

1. Create a new bot here.
2. Enter a name for your bot. I'm going to use "leaderbot".
3. Click on "Add Bot Integration".
4. Copy the API token that Slack generates and replace the placeholder token at the bottom of main.swift with it.

Testing 1,2,3...

Now that we have our API token, we're ready to do some local testing. Back in Xcode, select the leaderbot command-line application target and run your bot (⌘+R). When we go and look at Slack, our leaderbot's activity indicator should show that it's online. It's alive! To ensure that it's working, we should give our helpful little bot some karma points:

@leaderbot++

And ask it to see the leaderboard:

@leaderbot leaderboard

Head in the Clouds

Now that we've verified that our leaderboard bot works locally, it's time to deploy it. We are deploying on Heroku, so if you don't have an account, go and sign up for a free one. First, we need to add a Procfile for Heroku. Back in the terminal, run:

echo slackbot: .build/debug/leaderbot > Procfile

Next, let's check in our code:

git init
git add .
git commit -am 'leaderbot powering up'

Finally, we'll set up Heroku:

1. Install the Heroku toolbelt.
2. Log in to Heroku in your terminal: heroku login
3. Create our application on Heroku and set our buildpack: heroku create --buildpack https://github.com/pvzig/heroku-buildpack-swift.git leaderbot
4. Set up our Heroku remote: heroku git:remote -a leaderbot
5. Push to master: git push heroku master

Once you push to master, you'll see Heroku going through the process of building your application.

Launch!

When the build is complete, all that's left to do is to run our bot:

heroku run:detached slackbot

Like when we tested locally, our bot should become active and respond to our commands!

You're Done!

Congratulations, you've successfully built and deployed a Slack bot written in Swift onto a Linux server!

Built With:
- Jay: Pure-Swift JSON parser and formatter
- kylef's Heroku buildpack for Swift
- Open Swift: Open source cross project standards for Swift
- SlackKit: A Slack client library
- Zewo: Open source libraries for modern server software

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina, USA.
He writes at bytesized.co, tweets at @pvzig, and freelances at Launch Software.

Solving an NLP Problem with Keras, Part 1

Sasank Chilamkurthy
12 Oct 2016
5 min read
In a previous two-part post series on Keras, I introduced Convolutional Neural Networks (CNNs) and the Keras deep learning framework. We used them to solve a Computer Vision (CV) problem involving traffic sign recognition. Now, in this two-part post series, we will solve a Natural Language Processing (NLP) problem with Keras. Let's begin.

The Problem and the Dataset

The problem we are going to tackle is Natural Language Understanding. The aim is to extract the meaning of speech utterances. This is still an unsolved problem in general, so we reduce it to a solvable practical problem of understanding the speaker in a limited context. In particular, we want to identify the intent of a speaker asking for information about flights.

The dataset we are going to use is the Airline Travel Information System (ATIS) dataset. This dataset was collected by DARPA in the early 90s. ATIS consists of spoken queries on flight-related information. An example utterance is "I want to go from Boston to Atlanta on Monday". Understanding this is then reduced to identifying arguments like Destination and Departure Day. This task is called slot-filling. Here is an example sentence and its labels. You will observe that labels are encoded in an Inside Outside Beginning (IOB) representation. Let's look at the dataset:

| Words  | Show | flights | from | Boston | to | New   | York  | today  |
| Labels | O    | O       | O    | B-dept | O  | B-arr | I-arr | B-date |

The ATIS official split contains 4,978/893 sentences for a total of 56,590/9,198 words (average sentence length is 15) in the train/test set. The number of classes (different slots) is 128, including the O label (NULL). Unseen words in the test set are encoded by the <UNK> token, and each digit is replaced with the string DIGIT; that is, 20 is converted to DIGITDIGIT.

Our approach to the problem is to use:

- Word embeddings
- Recurrent neural networks

I'll talk about these briefly in the following sections.

Word Embeddings

Word embeddings map words to vectors in a high-dimensional space. These word embeddings can actually learn the semantic and syntactic information of words. For instance, they can understand that similar words are close to each other in this space and dissimilar words are far apart. This can be learned either using large amounts of text like Wikipedia, or specifically for a given problem. We will take the second approach for this problem. As an illustration, I have shown here the nearest neighbors in the word embedding space for some of the words. This embedding space was learned by the model that we'll define later in the post:

| sunday     | delta       | california   | boston      | august    | time       | car       |
| wednesday  | continental | colorado     | nashville   | september | schedule   | rental    |
| saturday   | united      | florida      | toronto     | july      | times      | limousine |
| friday     | american    | ohio         | chicago     | june      | schedules  | rentals   |
| monday     | eastern     | georgia      | phoenix     | december  | dinnertime | cars      |
| tuesday    | northwest   | pennsylvania | cleveland   | november  | ord        | taxi      |
| thursday   | us          | north        | atlanta     | april     | f28        | train     |
| wednesdays | nationair   | tennessee    | milwaukee   | october   | limo       | limo      |
| saturdays  | lufthansa   | minnesota    | columbus    | january   | departure  | ap        |
| sundays    | midwest     | michigan     | minneapolis | may       | sfo        | later     |

Recurrent Neural Networks

Convolutional layers can be a great way to pool local information, but they do not really capture the sequentiality of data. Recurrent Neural Networks (RNNs) help us tackle sequential information like natural language. If we are going to predict properties of the current word, we had better remember the words before it too.
An RNN has an internal state/memory that stores the summary of the sequence it has seen so far. This allows us to use RNNs to solve complicated word tagging problems such as Part Of Speech (POS) tagging or slot filling, as in our case. The following diagram illustrates the internals of an RNN (source: Nature). Let's briefly go through it:

- x_1, x_2, ..., x_(t-1), x_t, x_(t+1), ... is the input to the RNN.
- s_t is the hidden state of the RNN at step t. It is computed based on the state at step t-1 as s_t = f(U*x_t + W*s_(t-1)), where f is a nonlinearity such as tanh or ReLU.
- o_t is the output at step t, computed as o_t = f(V*s_t).
- U, V, and W are the learnable parameters of the RNN.

For our problem, we will pass a sequence of word embeddings as the input to the RNN.

Putting it all together

Now that we've set up the problem and have an understanding of the basic blocks, let's code it up. Since we are using the IOB representation for labels, it's not simple to calculate the scores of our model. We therefore use the conlleval perl script to compute the F1 scores. I've adapted the code from here for the data preprocessing and score calculation. The complete code is available at GitHub:

$ git clone https://github.com/chsasank/ATIS.keras.git
$ cd ATIS.keras

I recommend using a Jupyter notebook to run and experiment with the snippets from the tutorial.

$ jupyter notebook

Conclusion

In part 2, we will load the data using data.load.atisfull(). We will also define the Keras model, and then we will train the model. To measure the accuracy of the model, we'll use model.predict_on_batch() and metrics.accuracy.conlleval(). And finally, we will improve our model to achieve better results.

About the author

Sasank Chilamkurthy works at Fractal Analytics. His work involves deep learning on medical images obtained from radiology and pathology. He is mainly interested in computer vision.

Sending Notifications using Raspberry Pi Zero

Packt
12 Oct 2016
6 min read
In this article by Marco Schwartz, author of Building Smart Homes with Raspberry Pi Zero, we are going to start diving into a very interesting field that will change the way we interact with our environment: the Internet of Things (IoT). The IoT basically proposes to connect every device around us to the Internet, so we can interact with them from anywhere in the world. (For more resources related to this topic, see here.)

Within this context, a very important application is to receive notifications from your devices when they detect something in your home, for example motion in your home or the current temperature. This is exactly what we are going to do in this article: we are going to learn how to make your Raspberry Pi Zero board send you notifications via text message, email, and push notifications. Let's start!

Hardware and software requirements

As always, we are going to start with the list of required hardware and software components for the project. Apart from the Raspberry Pi Zero, you will need some additional components for each of the sections in this article. For the first project of this article, we are going to use a simple PIR motion sensor to detect motion from your Pi. Then, for the remaining projects of the article, we'll use the DHT11 sensor. Finally, you will need the usual breadboard and jumper wires. This is the list of components that you will need for this whole article, not including the Raspberry Pi Zero:

- PIR motion sensor (https://www.sparkfun.com/products/13285)
- DHT11 sensor + 4.7k Ohm resistor (https://www.adafruit.com/products/386)
- Breadboard (https://www.adafruit.com/products/64)
- Jumper wires (https://www.adafruit.com/products/1957)

On the software side, you will need to create an account on IFTTT, which we will use in all the projects of this article. For that, simply go to https://ifttt.com/. You should be redirected to the main page of IFTTT, where you'll be able to create an account.

Making a motion sensor that sends text messages

For the first project of this chapter, we are going to attach a motion sensor to the Raspberry Pi board and make the Raspberry Pi Zero send us a text message whenever motion is detected. For that, we are going to use IFTTT to make the link between our Raspberry Pi and our phone. Indeed, whenever IFTTT receives a trigger from the Raspberry Pi, it will automatically send us a text message.

Let's first connect the PIR motion sensor to the Raspberry Pi. For that, simply connect the VCC pin of the sensor to a 3.3V pin of the Raspberry Pi, GND to GND, and the OUT pin of the sensor to GPIO18 of the Raspberry Pi. This is the final result:

Let's now add our first channel to IFTTT, which will later allow us to interact with the Raspberry Pi and with web services. You can easily add new channels by clicking on the corresponding tab on the IFTTT website. First, add the Maker channel to your account. This will basically give you a key that you will need when writing the code for this project. After that, add the SMS channel to your IFTTT account. Now, you can actually create your first recipe.
1. Select the Maker channel as the trigger channel.
2. Then, select Receive a web request.
3. As the name of this request, enter motion_detected.
4. As the action channel, which is the channel that will be executed when a trigger is received, choose the SMS channel.
5. For the action, choose Send me an SMS.
6. You can now enter the message you want to see in the text messages.
7. Finally, confirm the creation of the recipe.

Now that our recipe is created and active, we can move on to actually configuring the Raspberry Pi so it sends alerts whenever motion is detected. As usual, we'll use Node.js to code this program (a minimal sketch of this kind of program is shown after the Summary below). After a minute, pass your hand in front of the sensor: your Raspberry Pi should immediately send a command to IFTTT, and after some seconds you should receive a message on your mobile phone.

Congratulations, you can now use your Raspberry Pi Zero to send important notifications to your mobile phone! Note that for actual use of this project in your home, you might want to limit the number of messages you are sending, as IFTTT has a limit on the number of messages you can send (check the IFTTT website for the current limit). For example, you could use this only for very important alerts, such as an intruder entering your home in your absence.

Sending temperature alerts through email

In the second project of the article, we are going to learn how to send automated email alerts based on data measured by the Raspberry Pi. Let's first assemble the project. Place the DHT11 sensor on the breadboard and then place the 4.7k Ohm resistor between pins 1 and 2 of the sensor. Then, connect pin 1 of the sensor to the 3.3V pin of the Raspberry Pi, pin 2 to GPIO18, and pin 4 to GND. This is the final result:

Let us now see how to configure the project. Go over to IFTTT and add the Email channel to your account. After that, create a new recipe by choosing the Maker channel as the trigger. For the event, enter temperature_alert and then choose Email as the action channel. You will then be able to customize the text and subject of the email that will be sent. As we want to send the emails whenever the temperature in your home gets too low, you can use a similar message. You can now finalize the creation of the recipe and close IFTTT.

Let's now see how to configure the Raspberry Pi Zero. As the code for this project is quite similar to the one we saw in the previous section, I will only highlight the main differences here. It starts by including the required components:

var request = require('request');
var sensorLib = require('node-dht-sensor');

Then, give the correct name to the event we'll use in the project:

var eventName = 'temperature_low';

We also define the pin on which the sensor is connected:

var sensorPin = 18;

As we want to send alerts based on the measured temperature, we need to define a threshold. As it was quite warm when I made this project, I have assigned a high threshold of 30 degrees Celsius, but you can, of course, modify it:

var threshold = 30;

Summary

In this article, we learned the basics of sending automated notifications from your Raspberry Pi. We learned, for example, how to send notifications via email, text messages, and push notifications. This is really important to build a smart home, as you want to be able to get alerts in real time about what's going on inside your home and also receive regular reports about the current status of your home.
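For reference, here is a minimal Node.js sketch of the kind of program used in the first project. This is not the book's code: the onoff GPIO library, the placeholder key, and the callback structure are assumptions; only the request module, the GPIO18 wiring, and the motion_detected event name come from the article.

// Minimal sketch (not the book's code): watch the PIR sensor on GPIO18 and
// fire the IFTTT Maker event 'motion_detected' when motion is detected.
var request = require('request');       // HTTP client, as used in the second project
var Gpio = require('onoff').Gpio;       // assumed GPIO library

var iftttKey = 'YOUR_IFTTT_MAKER_KEY';  // placeholder: the key from your Maker channel
var pir = new Gpio(18, 'in', 'rising'); // PIR OUT pin wired to GPIO18

pir.watch(function (err, value) {
  if (err || value !== 1) {
    return;
  }
  // Trigger the IFTTT recipe created above
  request.post(
    'https://maker.ifttt.com/trigger/motion_detected/with/key/' + iftttKey,
    function (error, response, body) {
      if (!error) {
        console.log('Alert sent to IFTTT');
      }
    }
  );
});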
Resources for Article: Further resources on this subject: The Raspberry Pi and Raspbian [article] Building Our First Poky Image for the Raspberry Pi [article] Raspberry Pi LED Blueprints [article]

Moving from Windows to Appliance

Packt
12 Oct 2016
7 min read
In this article, Daniel Langenhan, author of VMware vRealize Orchestrator Cookbook, Second Edition, shows you how to move an existing Windows Orchestrator installation to the appliance. With vRO 7, the Windows install of Orchestrator doesn't exist anymore. (For more resources related to this topic, see here.) Getting ready We need an Orchestrator installed on Windows. Download the same version of the Orchestrator appliance as you have installed in the Windows version. If needed, upgrade the Windows version to the latest possible one. How to do it... There are three ways: using the migration tool, repointing to an external database, or exporting/importing the packages. Migration tool There is a migration tool that comes with vRO7 that allows you to pack up your vRO5.5 or 6.x install and deploy it into a vRO7. The migration tool works on Windows and Linux. It collects the configuration, the plug-ins as well as their configuration, certificates, and licensing into a file. Follow these steps to use the migration tool: Deploy a new vRO7 Appliance. Log in to your Windows Orchestrator OS. Stop the VMware vCenter Orchestrator Service (Windows services). Open a web browser, log in to your new vRO7 Control Center, and then go to Export/Import Configuration. Select Migrate Configuration and click on the here link. The link points to: https://[vRO7]:8283/vco-controlcenter/api/server/migration-tool. Stop the vRO7 Orchestrator service. Unzip the migration-tool.zip and copy the subfolder called migration-cli into the Orchestrator directory, for example, C:\Program Files\VMware\Infrastructure\Orchestrator\migration-cli\bin. Open a command prompt. If you have a Java install, make sure your path points to it. Try java -version. If that works, continue; if not, do the following: Set the PATH environment variable to the Java install that comes with Orchestrator: set PATH=%PATH%;C:\Program Files\VMware\Infrastructure\Orchestrator\Uninstall_vCenter Orchestrator\uninstall-jre\bin CD to the directory ..\Orchestrator\migration-cli\bin. Execute the command vro-migrate.bat export. There may be errors showing about SLF4J; you can ignore those. In the main directory (..\Orchestrator) you should now find an orchestrator-config-export-VC55-[date].zip file. Go back to the web browser and upload the ZIP file into Migrate Configuration by clicking on Browse and selecting the file. Click on Import. You now see what can be imported. You can unselect items you don't wish to migrate. Click Finish Migration. Restart the Orchestrator service. Check the settings. External database If you have an external database, things are pretty easy. If you are using the initial internal database, please see the additional steps in the There's more section of this recipe. Back up the external database. Connect to the Windows Orchestrator Configurator. Write down all the plugins you have installed as well as their versions. Shut down the Windows version and deploy the Appliance; this way you can use the same IP and hostname if you want. Log in to the Appliance version's Configurator. Stop the Orchestrator service. Install all plugins you had in the Windows version. Attach the external database. Make sure that all trusted SSL certificates are still there, such as vCenter and SSO. Check whether the authentication is still working. Use the test login. Check your licensing. Force a plugin reinstall (Troubleshooting | Reinstall the plug-ins when the server starts). Start the Orchestrator service and try to log in. Make a complete sanity check. 
Package transfer This is the method that will only pull your packages across. This is the only easy method to use when you are transitioning between different databases, such as between MS SQL and PostgreSQL. Connect to your Windows version. Create a package of all the workflows, actions, and other items you need. Shut down Windows and deploy the Appliance. Configure the Appliance with the DB, authentication, and all the plugins you previously had. Import the package. How it works... Moving from the Windows version of Orchestrator to the Appliance version isn't such a big thing. The worst-case scenario is using the package transfer. The only really important thing is to use the same version of the Windows Orchestrator as the Appliance version. You can download a lot of old versions, including 5.5, from http://www.vmware.com/in.html. If you can't find the same version, upgrade your existing vCenter Orchestrator to one you can download. After you have transferred the data to the appliance, you need to make sure that everything works correctly, and then you can upgrade to vRO7. There's more... When you just run Orchestrator from your Windows vCenter installation and didn't configure an external database, Orchestrator uses the vCenter database and mixes the Orchestrator tables with the vCenter tables. In order to export only the Orchestrator ones, we will use the MS SQL Server Management Studio (a free download from www.microsoft.com, called Microsoft SQL Server 2008 R2 RTM). To transfer only the Orchestrator database tables from the vCenter MS-SQL to an external SQL server, do the following: Stop the VMware vCenter Orchestrator Service (Windows Services) on your Windows Orchestrator. Start the SQL Server Management Studio on your external SQL server. Connect to the vCenter DB. For SQL Express use: [vcenter]\VIM_SQLEXP with Windows Authentication. Right-click on your vCenter Database (SQL Express: VIM_VCDB) and select Tasks | Export Data. In the wizard, select your source, which should be the correct one already, and click Next. Choose SQL Server Native Client 10.0 and enter the name of your new SQL server. Click on New to create a new database on that SQL server (or use an empty one you created already). Click Next. Select Copy data from one or more tables or views and click Next. Now select every table which starts with VMO_ and then click Next. Select Run immediately and click Finish. Now you have the Orchestrator database extracted as an external database (the SQL sketch at the end of this article shows one way to double-check which VMO_ tables were copied). You still need to configure a user and rights. Then proceed with the section External database in this recipe. Orchestrator client and 4K display scaling This recipe shows a hack to make the Orchestrator client scale on 4K displays. Getting ready We need to download the program Resource Tuner (http://www.restuner.com/); the trial version will work, but consider buying it if it works for you. You need to know the path to your Java install; this should be something like: C:\Program Files (x86)\Java\jre1.x.xx\bin How to do it... Before you start, please be careful, as this impacts your whole Java environment. This worked for me very well with Java 1.8.0_91-b14. Download and install Resource Tuner. Run Resource Tuner as administrator. Open the file javaws.exe in your Java directory. Expand manifest and then click on the first entry (the name can change due to localization). Look for the line <dpiAware>true</dpiAware>. Exchange the true for a false. Save and exit. Repeat the same for all the other java*.exe in the same directory, as well as j2launcher.exe. 
Start the Client.jnlp (the file that downloads when you start the web application). How it works... In Windows 10, you can set the scaling of applications when you are using high definition monitors (4K displays). What you are doing is telling Java that it is not DPI aware, meaning it will use the Windows 10 default scaler instead of an internal scaler. There's more... For any other application, such as Snagit or Photoshop, I found that this solution works quite well: http://www.danantonielli.com/adobe-app-scaling-on-high-dpi-displays-fix/. Summary In this article, we discussed moving an existing Windows Orchestrator installation to the appliance, as well as a hack to make the Orchestrator client scale on 4K displays. Resources for Article: Further resources on this subject: Working with VMware Infrastructure [article] vRealize Automation and the Deconstruction of Components [article] FAQ on Virtualization and Microsoft App-V [article]
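As mentioned in the There's more section above, here is a small T-SQL sketch for checking which Orchestrator tables (the ones prefixed VMO_) exist and how many rows they hold. Run it against VIM_VCDB before the export and against the new database afterwards to compare; the VMO_ prefix and the database name are the defaults used in this recipe.

-- List the Orchestrator tables (prefix VMO_) and their approximate row counts
SELECT t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)    -- heap or clustered index only
WHERE t.name LIKE 'VMO[_]%'  -- [_] escapes the underscore wildcard
GROUP BY t.name
ORDER BY t.name;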

Basics of Image Histograms in OpenCV

Packt
12 Oct 2016
11 min read
In this article by Samyak Datta, author of the book Learning OpenCV 3 Application Development we are going to focus our attention on a different style of processing pixel values. The output of the techniques, which would comprise our study in the current article, will not be images, but other forms of representation for images, namely image histograms. We have seen that a two-dimensional grid of intensity values is one of the default forms of representing images in digital systems for processing as well as storage. However, such representations are not at all easy to scale. So, for an image with a reasonably low spatial resolution, say 512 x 512 pixels, working with a two-dimensional grid might not pose any serious issues. However, as the dimensions increase, the corresponding increase in the size of the grid may start to adversely affect the performance of the algorithms that work with the images. A primary advantage that an image histogram has to offer is that the size of a histogram is a constant that is independent of the dimensions of the image. As a consequence of this, we are guaranteed that irrespective of the spatial resolution of the images that we are dealing with, the algorithms that power our solutions will have to deal with a constant amount of data if they are working with image histograms. (For more resources related to this topic, see here.) Each descriptor captures some particular aspects or features of the image to construct its own form of representation. One of the common pitfalls of using histograms as a form of image representation as compared to its native form of using the entire two-dimensional grid of values is loss of information. A full-fledged image representation using pixel intensity values for all pixel locations naturally consists of all the information that you would need to reconstruct a digital image. However, the same cannot be said about histograms. When we study about image histograms in detail, we'll get to see exactly what information do we stand to lose. And this loss in information is prevalent across all forms of image descriptors. The basics of histograms At the outset, we will briefly explain the concept of a histogram. Most of you might already know this from your lessons on basic statistics. However, we will reiterate this for the sake of completeness. Histogram is a form of data representation technique that relies on an aggregation of data points. The data is aggregated into a set of predefined bins that are represented along the x axis, and the number of data points that fall within each of the bins make up the corresponding counts on the y axis. For example, let's assume that our data looks something like the following: D={2,7,1,5,6,9,14,11,8,10,13} If we define three bins, namely Bin_1 (1 - 5), Bin_2 (6 - 10), and Bin_3 (11 - 15), then the histogram corresponding to our data would look something like this: Bins Frequency Bin_1 (1 - 5) 3 Bin_2 (6 - 10) 5 Bin_3 (11 - 15) 3 What this histogram data tells us is that we have three values between 1 and 5, five between 6 and 10, and three again between 11 and 15. Note that it doesn't tell us what the values are, just that some n values exist in a given bin. A more familiar visual representation of the histogram in discussion is shown as follows: As you can see, the bins have been plotted along the x axis and their corresponding frequencies along the y axis. Now, in the context of images, how is a histogram computed? Well, it's not that difficult to deduce. 
Since the data that we have comprise pixel intensity values, an image histogram is computed by plotting a histogram using the intensity values of all its constituent pixels. What this essentially means is that the sequence of pixel intensity values in our image becomes the data. Well, this is in fact the simplest kind of histogram that you can compute using the information available to you from the image. Now, coming back to image histograms, there are some basic terminologies (pertaining to histograms in general) that you need to be aware of before you can dip your hands into code. We have explained them in detail here: Histogram size: The histogram size refers to the number of bins in the histogram. Range: The range of a histogram is the range of data that we are dealing with. The range of data as well as the histogram size are both important parameters that define a histogram. Dimensions: Simply put, dimensions refer to the number of the type of items whose values we aggregate in the histogram bins. For example, consider a grayscale image. We might want to construct a histogram using the pixel intensity values for such an image. This would be an example of a single-dimensional histogram because we are just interested in aggregating the pixel intensity values and nothing else. The data, in this case, is spread over a range of 0 to 255. On account of being one-dimensional, such histograms can be represented graphically as 2D plots—one-dimensional data (pixel intensity values) being plotted on the x axis (in the form of bins) along with the corresponding frequency counts along the y axis. We have already seen an example of this before. Now, imagine a color image with three channels: red, green, and blue. Let's say that we want to plot a histogram for the intensities in the red and green channels combined. This means that our data now becomes a pair of values (r, g). A histogram that is plotted for such data will have a dimensionality of 2. The plot for such a histogram will be a 3D plot with the data bins covering the x and y axes and the frequency counts plotted along the z axis. Now that we have discussed the theoretical aspects of image histograms in detail, let's start thinking along the lines of code. We will start with the simplest (and in fact the most ubiquitous) design of image histograms. The range of our data will be from 0 to 255 (both inclusive), which means that all our data points will be integers that fall within the specified range. Also, the number of data points will equal the number of pixels that make up our input image. The simplicity in design comes from the fact that we fix the size of the histogram (the number of bins) as 256. Now, take a moment to think about what this means. There are 256 different possible values that our data points can take and we have a separate bin corresponding to each one of those values. So such an image histogram will essentially depict the 256 possible intensity values along with the counts of the number of pixels in the image that are colored with each of the different intensities. Before taking a peek at what OpenCV has to offer, let's try to implement such a histogram on our own! We define a function named computeHistogram() that takes the grayscale image as an input argument and returns the image histogram. From our earlier discussions, it is evident that the histogram must contain 256 entries (for the 256 bins): one for each integer between 0 and 255. 
The value stored in the histogram corresponding to each of the 256 entries will be the count of the image pixels that have a particular intensity value. So, conceptually, we can use an array for our implementation such that the value stored in the histogram [ i ] (for 0≤i≤255) will be the count of the number of pixels in the image having the intensity of i. However, instead of using a C++ array, we will comply with the rules and standards followed by OpenCV and represent the histogram as a Mat object. We have already seen that a Mat object is nothing but a multidimensional array store. The implementation is outlined in the following code snippet: Mat computeHistogram(Mat input_image) { Mat histogram = Mat::zeros(256, 1, CV_32S); for (int i = 0; i < input_image.rows; ++i) { for (int j = 0; j < input_image.cols; ++j) { int binIdx = (int) input_image.at<uchar>(i, j); histogram.at<int>(binIdx, 0) += 1; } } return histogram; } As you can see, we have chosen to represent the histogram as a 256-element-column-vector Mat object. We iterate over all the pixels in the input image and keep on incrementing the corresponding counts in the histogram (which had been initialized to 0). As per our description of the image histogram properties, it is easy to see that the intensity value of any pixel is the same as the bin index that is used to index into the appropriate histogram bin to increment the count. Having such an implementation ready, let's test it out with the help of an actual image. The following code demonstrates a main() function that reads an input image, calls the computeHistogram() function that we have defined just now, and displays the contents of the histogram that is returned as a result: int main() { Mat input_image = imread("/home/samyak/Pictures/lena.jpg", IMREAD_GRAYSCALE); Mat histogram = computeHistogram(input_image); cout << "Histogram...n"; for (int i = 0; i < histogram.rows; ++i) cout << i << " : " << histogram.at<int>(i, 0) << "n"; return 0; } We have used the fact that the histogram that is returned from the function will be a single column Mat object. This makes the code that displays the contents of the histogram much cleaner. Histograms in OpenCV We have just seen the implementation of a very basic and minimalistic histogram using the first principles in OpenCV. The image histogram was basic in the sense that all the bins were uniform in size and comprised only a single pixel intensity. This made our lives simple when we designed our code for the implementation; there wasn't any need to explicitly check the membership of a data point (the intensity value of a pixel) with all the bins of our histograms. However, we know that a histogram can have bins whose sizes span more than one. Can you think of the changes that we might need to make in the code that we had written just now to accommodate for bin sizes larger than 1? If this change seems doable to you, try to figure out how to incorporate the possibility of non-uniform bin sizes or multidimensional histograms. By now, things might have started to get a little overwhelming to you. No need to worry. As always, OpenCV has you covered! The developers at OpenCV have provided you with a calcHist() function whose sole purpose is to calculate the histograms for a given set of arrays. 
By arrays, we refer to the images represented as Mat objects, and we use the term set because the function has the capability to compute multidimensional histograms from the given data: Mat computeHistogram(Mat input_image) { Mat histogram; int channels[] = { 0 }; int histSize[] = { 256 }; float range[] = { 0, 256 }; const float* ranges[] = { range }; calcHist(&input_image, 1, channels, Mat(), histogram, 1, histSize, ranges, true, false); return histogram; } Before we move on to an explanation of the different parameters involved in the calcHist() function call, I want to bring your attention to the abundant use of arrays in the preceding code snippet. Even arguments as simple as histogram sizes are passed to the function in the form of arrays rather than integer values, which at first glance seem quite unnecessary and counter-intuitive. The usage of arrays is due to the fact that the implementation of calcHist() is equipped to handle multidimensional histograms as well, and when we are dealing with such multidimensional histogram data, we require multiple parameters to be passed, one for each dimension. This would become clearer once we demonstrate an example of calculating multidimensional histograms using the calcHist() function. For the time being, we just wanted to clear the immediate confusion that might have popped up in your minds upon seeing the array parameters. Here is a detailed list of the arguments in the calcHist() function call: Source images Number of source images Channel indices Mask Dimensions (dims) Histogram size Ranges Uniform flag Accumulate flag The last couple of arguments (the uniform and accumulate flags) have default values of true and false, respectively. Hence, the function call that you have seen just now can very well be written as follows: calcHist(&input_image, 1, channels, Mat(), histogram, 1, histSize, ranges); Summary Thus in this article we have successfully studied fundamentals of using histograms in OpenCV for image processing. Resources for Article: Further resources on this subject: Remote Sensing and Histogram [article] OpenCV: Image Processing using Morphological Filters [article] Learn computer vision applications in Open CV [article]
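The article refers to multidimensional histograms but stops short of showing one. As a rough sketch (not from the book), this is how the same calcHist() call could be extended to a two-dimensional histogram over two color channels. Keep in mind that OpenCV loads color images in BGR order, so the green and red planes are channels 1 and 2, and the output is now a 32 x 32 matrix of counts rather than a single column vector.

Mat computeGreenRedHistogram(const Mat& input_image)
{
    Mat histogram;
    int channels[] = { 1, 2 };            // green and red planes of a BGR image
    int histSize[] = { 32, 32 };          // 32 bins per dimension
    float range[] = { 0, 256 };
    const float* ranges[] = { range, range };

    // dims = 2: aggregate (green, red) intensity pairs into a 32 x 32 grid
    calcHist(&input_image, 1, channels, Mat(), histogram,
             2, histSize, ranges, true, false);
    return histogram;
}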

Server-side Swift: Building a Slack Bot, Part 1

Peter Zignego
12 Oct 2016
5 min read
As a remote iOS developer, I love Slack. It’s my meeting room and my water cooler over the course of a work day. If you’re not familiar with Slack, it is a group communication tool popular in Silicon Valley and beyond. What makes Slack valuable beyond replacing email as the go-to communication method for buisnesses is that it is more than chat; it is a platform. Thanks to Slack’s open attitude toward developers with its API, hundreds of developers have been building what have become known as Slack bots. There are many different libraries available to help you start writing your Slack bot, covering a wide range of programming languages. I wrote a library in Apple’s new programming language (Swift) for this very purpose, called SlackKit. SlackKit wasn’t very practical initially—it only ran on iOS and OS X. On the modern web, you need to support Linux to deploy on Amazon Web Servies, Heroku, or hosted server companies such as Linode and Digital Ocean. But last June, Apple open sourced Swift, including official support for Linux (Ubuntu 14 and 15 specifically). This made it possible to deploy Swift code on Linux servers, and developers hit the ground running to build out the infrastructure needed to make Swift a viable language for server applications. Even with this huge developer effort, it is still early days for server-side Swift. Apple’s Linux Foundation port is a huge undertaking, as is the work to get libdispatch, a concurrency framework that provides much of the underpinning for Foundation. In addition to rough official tooling, writing code for server-side Swift can be a bit like hitting a moving target, with biweekly snapshot releases and multiple, ABI-incompatible versions to target. Zewo to Sixty on Linux Fortunately, there are some good options for deploying Swift code on servers right now, even with Apple’s libraries in flux. I’m going to focus in on one in particular: Zewo. Zewo is modular by design, allowing us to use the Swift Package Manager to pull in only what we need instead of a monolithic framework. It’s open source and is a great community of developers that spans the globe. If you’re interested in the world of server-side Swift, you should get involved! Oh, and of course they have a Slack. Using Zewo and a few other open source libraries, I was able to build a version of SlackKit that runs on Linux. A Swift Tutorial In this two-part post series I have detailed a step-by-step guide to writing a Slack bot in Swift and deploying it to Heroku. I’m going to be using OS X but this is also achievable on Linux using the editor of your choice. Prerequisites Install Homebrew: /usr/bin/ruby -e “$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" Install swiftenv: brew install kylef/formulae/swiftenv Configure your shell: echo ‘if which swiftenv > /dev/null; then eval “$(swiftenv init -)”; fi’ >> ~/.bash_profile Download and install the latest Zewo-compatible snapshot: swiftenv install DEVELOPMENT-SNAPSHOT-2016-05-09-a swiftenv local DEVELOPMENT-SNAPSHOT-2016-05-09-a Install and Link OpenSSL: brew install openssl brew link openssl --force Let’s Keep Score The sample application we’ll be building is a leaderboard for Slack, like PlusPlus++ by Betaworks. It works like this: add a point for every @thing++, subtract a point for every @thing--, and show a leaderboard when asked @botname leaderboard. First, we need to create the directory for our application and initialize the basic project structure. 
mkdir leaderbot && cd leaderbot swift build --init Next, we need to edit Package.swift to add our dependency, SlackKit: import PackageDescription let package = Package( name: "Leaderbot", targets: [], dependencies: [ .Package(url: "https://github.com/pvzig/SlackKit.git", majorVersion: 0, minor: 0), ] ) SlackKit is dependent on several Zewo libraries, but thanks to the Swift Package Manager, we don't have to worry about importing them explicitly. Then we need to build our dependencies: swift build And our development environment (we need to pass in some linker flags so that swift build knows where to find the version of OpenSSL we installed via Homebrew and the C modules that some of our Zewo libraries depend on): swift build -Xlinker -L$(pwd)/.build/debug/ -Xswiftc -I/usr/local/include -Xlinker -L/usr/local/lib -X In Part 2, I will show all of the Swift code, how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it. Disclaimer The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across. About the author Peter Zignego is an iOS developer in Durham, North Carolina. He writes at bytesized.co, tweets @pvzig, and freelances at Launch Software. 
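The actual bot code arrives in Part 2, but as a rough idea of the scoring rules described earlier (@thing++ adds a point, @thing-- subtracts one, and the leaderboard sorts by score), here is a SlackKit-independent sketch using only the Swift standard library. It is written against current Swift syntax; the 2016 snapshot used above would need minor adjustments such as the older .characters API.

// A minimal sketch of the leaderboard rules, independent of SlackKit.
struct ScoreKeeper {
    private(set) var scores: [String: Int] = [:]

    // Handles a single token such as "@karl++" or "@karl--".
    mutating func process(word: String) {
        guard word.hasPrefix("@") else { return }
        if word.hasSuffix("++") {
            let name = String(word.dropFirst().dropLast(2))
            scores[name, default: 0] += 1
        } else if word.hasSuffix("--") {
            let name = String(word.dropFirst().dropLast(2))
            scores[name, default: 0] -= 1
        }
    }

    // Names sorted by score, highest first.
    func leaderboard() -> [(String, Int)] {
        return scores.sorted { $0.value > $1.value }
                     .map { ($0.key, $0.value) }
    }
}

var keeper = ScoreKeeper()
keeper.process(word: "@karl++")
keeper.process(word: "@anna++")
keeper.process(word: "@karl++")
print(keeper.leaderboard()) // [("karl", 2), ("anna", 1)]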

Asynchronous Programming in F#

Packt
12 Oct 2016
15 min read
In this article by Alfonso Garcia Caro Nunez and Suhaib Fahad, author of the book Mastering F#, sheds light on how writing applications that are non-blocking or reacting to events is increasingly becoming important in this cloud world we live in. A modern application needs to carry out a rich user interaction, communicate with web services, react to notifications, and so on; the execution of reactive applications is controlled by events. Asynchronous programming is characterized by many simultaneously pending reactions to internal or external events. These reactions may or may not be processed in parallel. (For more resources related to this topic, see here.) In .NET, both C# and F# provide asynchronous programming experience through keywords and syntaxes. In this article, we will go through the asynchronous programming model in F#, with a bit of cross-referencing or comparison drawn with the C# world. In this article, you will learn about asynchronous workflows in F# Asynchronous workflows in F# Asynchronous workflows are computation expressions that are setup to run asynchronously. It means that the system runs without blocking the current computation thread when a sleep, I/O, or other asynchronous process is performed. You may be wondering why do we need asynchronous programming and why can't we just use the threading concepts that we did for so long. The problem with threads is that the operation occupies the thread for the entire time that something happens or when a computation is done. On the other hand, asynchronous programming will enable a thread only when it is required, otherwise it will be normal code. There is also lot of marshalling and unmarshalling (junk) code that we will write around to overcome the issues that we face when directly dealing with threads. Thus, asynchronous model allows the code to execute efficiently whether we are downloading a page 50 or 100 times using a single thread or if we are doing some I/O operation over the network and there are a lot of incoming requests from the other endpoint. There is a list of functions that the Async module in F# exposes to create or use these asynchronous workflows to program. The asynchronous pattern allows writing code that looks like it is written for a single-threaded program, but in the internals, it uses async blocks to execute. There are various triggering functions that provide a wide variety of ways to create the asynchronous workflow, which is either a background thread, a .NET framework task object, or running the computation in the current thread itself. In this article, we will use the example of downloading the content of a webpage and modifying the data, which is as follows: let downloadPage (url: string) = async { let req = HttpWebRequest.Create(url) use! resp = req.AsyncGetResponse() use respStream = resp.GetResponseStream() use sr = new StreamReader(respStream) return sr.ReadToEnd() } downloadPage("https://www.google.com") |> Async.RunSynchronously The preceding function does the following: The async expression, { … }, generates an object of type Async<string> These values are not actual results; rather, they are specifications of tasks that need to run and return a string Async.RunSynchronously takes this object and runs synchronously We just wrote a simple function with asynchronous workflows with relative ease and reason about the code, which is much better than using code with Begin/End routines. 
One of the most important point here is that the code is never blocked during the execution of the asynchronous workflow. This means that we can, in principle, have thousands of outstanding web requests—the limit being the number supported by the machine, not the number of threads that host them. Using let! In asynchronous workflows, we will use let! binding to enable execution to continue on other computations or threads, while the computation is being performed. After the execution is complete, the rest of the asynchronous workflow is executed, thus simulating a sequential execution in an asynchronous way. In addition to let!, we can also use use! to perform asynchronous bindings; basically, with use!, the object gets disposed when it loses the current scope. In our previous example, we used use! to get the HttpWebResponse object. We can also do as follows: let! resp = req.AsyncGetResponse() // process response We are using let! to start an operation and bind the result to a value, do!, which is used when the return of the async expression is a unit. do! Async.Sleep(1000) Understanding asynchronous workflows As explained earlier, asynchronous workflows are nothing but computation expressions with asynchronous patterns. It basically implements the Bind/Return pattern to implement the inner workings. This means that the let! expression is translated into a call to async. The bind and async.Return function are defined in the Async module in the F# library. This is a compiler functionality to translate the let! expression into computation workflows and, you, as a developer, will never be required to understand this in detail. The purpose of explaining this piece is to understand the internal workings of an asynchronous workflow, which is nothing but a computation expression. The following listing shows the translated version of the downloadPage function we defined earlier: async.Delay(fun() -> let req = HttpWebRequest.Create(url) async.Bind(req.AsyncGetResponse(), fun resp -> async.Using(resp, fun resp -> let respStream = resp.GetResponseStream() async.Using(new StreamReader(respStream), fun sr " -> reader.ReadToEnd() ) ) ) ) The following things are happening in the workflow: The Delay function has a deferred lambda that executes later. The body of the lambda creates an HttpWebRequest and is forwarded in a variable req to the next segment in the workflow. The AsyncGetResponse function is called and a workflow is generated, where it knows how to execute the response and invoke when the operation is completed. This happens internally with the BeginGetResponse and EndGetResponse functions already present in the HttpWebRequest class; the AsyncGetResponse is just a wrapper extension present in the F# Async module. The Using function then creates a closure to dispose the object with the IDisposable interface once the workflow is complete. Async module The Async module has a list of functions that allows writing or consuming asynchronous code. We will go through each function in detail with an example to understand it better. Async.AsBeginEnd It is very useful to expose the F# workflow functionality out of F#, say if we want to use and consume the API's in C#. The Async.AsBeginEnd method gives the possibility of exposing the asynchronous workflows as a triple of methods—Begin/End/Cancel—following the .NET Asynchronous Programming Model (APM). 
Based on our downloadPage function, we can define the Begin, End, Cancel functions as follows: type Downloader() = let beginMethod, endMethod, cancelMethod = " Async.AsBeginEnd downloadPage member this.BeginDownload(url, callback, state : obj) = " beginMethod(url, callback, state) member this.EndDownload(ar) = endMethod ar member this.CancelDownload(ar) = cancelMethod(ar) Async.AwaitEvent The Async.AwaitEvent method creates an asynchronous computation that waits for a single invocation of a .NET framework event by adding a handler to the event. type MyEvent(v : string) = inherit EventArgs() member this.Value = v; let testAwaitEvent (evt : IEvent<MyEvent>) = async { printfn "Before waiting" let! r = Async.AwaitEvent evt printfn "After waiting: %O" r.Value do! Async.Sleep(1000) return () } let runAwaitEventTest () = let evt = new Event<Handler<MyEvent>, _>() Async.Start <| testAwaitEvent evt.Publish System.Threading.Thread.Sleep(3000) printfn "Before raising" evt.Trigger(null, new MyEvent("value")) printfn "After raising" > runAwaitEventTest();; > Before waiting > Before raising > After raising > After waiting : value The testAwaitEvent function listens to the event using Async.AwaitEvent and prints the value. As the Async.Start will take some time to start up the thread, we will simply call Thread.Sleep to wait on the main thread. This is for example purpose only. We can think of scenarios where a button-click event is awaited and used inside an async block. Async.AwaitIAsyncResult Creates a computation result and waits for the IAsyncResult to complete. IAsyncResult is the asynchronous programming model interface that allows us to write asynchronous programs. It returns true if IAsyncResult issues a signal within the given timeout. The timeout parameter is optional, and its default value is -1 of Timeout.Infinite. let testAwaitIAsyncResult (url: string) = async { let req = HttpWebRequest.Create(url) let aResp = req.BeginGetResponse(null, null) let! asyncResp = Async.AwaitIAsyncResult(aResp, 1000) if asyncResp then let resp = req.EndGetResponse(aResp) use respStream = resp.GetResponseStream() use sr = new StreamReader(respStream) return sr.ReadToEnd() else return "" } > Async.RunSynchronously (testAwaitIAsyncResult "https://www.google.com") We will modify the downloadPage example with AwaitIAsyncResult, which allows a bit more flexibility where we want to add timeouts as well. In the preceding example, the AwaitIAsyncResult handle will wait for 1000 milliseconds, and then it will execute the next steps. Async.AwaitWaitHandle Creates a computation that waits on a WaitHandle—wait handles are a mechanism to control the execution of threads. The following is an example with ManualResetEvent: let testAwaitWaitHandle waitHandle = async { printfn "Before waiting" let! r = Async.AwaitWaitHandle waitHandle printfn "After waiting" } let runTestAwaitWaitHandle () = let event = new System.Threading.ManualResetEvent(false) Async.Start <| testAwaitWaitHandle event System.Threading.Thread.Sleep(3000) printfn "Before raising" event.Set() |> ignore printfn "After raising" The preceding example uses ManualResetEvent to show how to use AwaitHandle, which is very similar to the event example that we saw in the previous topic. Async.AwaitTask Returns an asynchronous computation that waits for the given task to complete and returns its result. This helps in consuming C# APIs that exposes task based asynchronous operations. let downloadPageAsTask (url: string) = async { let req = HttpWebRequest.Create(url) use! 
resp = req.AsyncGetResponse() use respStream = resp.GetResponseStream() use sr = new StreamReader(respStream) return sr.ReadToEnd() } |> Async.StartAsTask let testAwaitTask (t: Task<string>) = async { let! r = Async.AwaitTask t return r } > downloadPageAsTask "https://www.google.com" |> testAwaitTask |> Async.RunSynchronously;; The preceding function is also downloading the web page as HTML content, but it starts the operation as a .NET task object. Async.FromBeginEnd The FromBeginEnd method acts as an adapter for the asynchronous workflow interface by wrapping the provided Begin/End method. Thus, it allows using large number of existing components that support an asynchronous mode of work. The IAsyncResult interface exposes the functions as Begin/End pattern for asynchronous programming. We will look at the same download page example using FromBeginEnd: let downloadPageBeginEnd (url: string) = async { let req = HttpWebRequest.Create(url) use! resp = Async.FromBeginEnd(req.BeginGetResponse, req.EndGetResponse) use respStream = resp.GetResponseStream() use sr = new StreamReader(respStream) return sr.ReadToEnd() } The function accepts two parameters and automatically identifies the return type; we will use BeginGetResponse and EndGetResponse as our functions to call. Internally, Async.FromBeginEnd delegates the asynchronous operation and gets back the handle once the EndGetResponse is called. Async.FromContinuations Creates an asynchronous computation that captures the current success, exception, and cancellation continuations. To understand these three operations, let's create a sleep function similar to Async.Sleep using timer: let sleep t = Async.FromContinuations(fun (cont, erFun, _) -> let rec timer = new Timer(TimerCallback(callback)) and callback state = timer.Dispose() cont(()) timer.Change(t, Timeout.Infinite) |> ignore ) let testSleep = async { printfn "Before" do! sleep 5000 printfn "After 5000 msecs" } Async.RunSynchronously testSleep The sleep function takes an integer and returns a unit; it uses Async.FromContinuations to allow the flow of the program to continue when a timer event is raised. It does so by calling the cont(()) function, which is a continuation to allow the next step in the asynchronous flow to execute. If there is any error, we can call erFun to throw the exception and it will be handled from the place we are calling this function. Using the FromContinuation function helps us wrap and expose functionality as async, which can be used inside asynchronous workflows. It also helps to control the execution of the programming with cancelling or throwing errors using simple APIs. Async.Start Starts the asynchronous computation in the thread pool. It accepts an Async<unit> function to start the asynchronous computation. The downloadPage function can be started as follows: let asyncDownloadPage(url) = async { let! result = downloadPage(url) printfn "%s" result"} asyncDownloadPage "http://www.google.com" |> Async.Start   We wrap the function to another async function that returns an Async<unit> function so that it can be called by Async.Start. Async.StartChild Starts a child computation within an asynchronous workflow. This allows multiple asynchronous computations to be executed simultaneously, as follows: let subTask v = async { print "Task %d started" v Thread.Sleep (v * 1000) print "Task %d finished" v return v } let mainTask = async { print "Main task started" let! childTask1 = Async.StartChild (subTask 1) let! 
childTask2 = Async.StartChild (subTask 5) print "Subtasks started" let! child1Result = childTask1 print "Subtask1 result: %d" child1Result let! child2Result = childTask2 print "Subtask2 result: %d" child2Result print "Subtasks completed" return () } Async.RunSynchronously mainTask Async.StartAsTask Executes a computation in the thread pool and returns a task that will be completed in the corresponding state once the computation terminates. We can use the same example of starting the downloadPage function as a task. let downloadPageAsTask (url: string) = async { let req = HttpWebRequest.Create(url) use! resp = req.AsyncGetResponse() use respStream = resp.GetResponseStream() use sr = new StreamReader(respStream) return sr.ReadToEnd() } |> Async.StartAsTask let task = downloadPageAsTask("http://www.google.com") prinfn "Do some work" task.Wait() printfn "done"   Async.StartChildAsTask Creates an asynchronous computation from within an asynchronous computation, which starts the given computation as a task. let testAwaitTask = async { print "Starting" let! child = Async.StartChildAsTask <| async { // " Async.StartChildAsTask shall be described later print "Child started" Thread.Sleep(5000) print "Child finished" return 100 } print "Waiting for the child task" let! result = Async.AwaitTask child print "Child result %d" result } Async.StartImmediate Runs an asynchronous computation, starting immediately on the current operating system thread. This is very similar to the Async.Start function we saw earlier: let asyncDownloadPage(url) = async { let! result = downloadPage(url) printfn "%s" result"} asyncDownloadPage "http://www.google.com" |> Async.StartImmediate Async.SwitchToNewThread Creates an asynchronous computation that creates a new thread and runs its continuation in it: let asyncDownloadPage(url) = async { do! Async.SwitchToNewThread() let! result = downloadPage(url) printfn "%s" result"} asyncDownloadPage "http://www.google.com" |> Async.Start   Async.SwitchToThreadPool Creates an asynchronous computation that queues a work item that runs its continuation, as follows: let asyncDownloadPage(url) = async { do! Async.SwitchToNewThread() let! result = downloadPage(url) do! Async.SwitchToThreadPool() printfn "%s" result"} asyncDownloadPage "http://www.google.com" |> Async.Start   Async.SwitchToContext Creates an asynchronous computation that runs its continuation in the Post method of the synchronization context. Let's assume that we set the text from the downloadPage function to a UI textbox, then we will do it as follows: let syncContext = System.Threading.SynchronizationContext() let asyncDownloadPage(url) = async { do! Async.SwitchToContext(syncContext) let! result = downloadPage(url) textbox.Text <- result"} asyncDownloadPage "http://www.google.com" |> Async.Start   Note that in the console applications, the context will be null. Async.Parallel The Parallel function allows you to execute individual asynchronous computations queued in the thread pool and uses the fork/join pattern. let parallel_download() = let sites = ["http://www.bing.com"; "http://www.google.com"; "http://www.yahoo.com"; "http://www.search.com"] let htmlOfSites = Async.Parallel [for site in sites -> downloadPage site ] |> Async.RunSynchronously printfn "%A" htmlOfSites   We will use the same example of downloading HTML content in a parallel way. 
The preceding example shows the essence of parallel I/O computation The async function, { … }, in the downloadPage function shows the asynchronous computation These are then composed in parallel using the fork/join combinator In this sample, the composition executed waits synchronously for overall result Async.OnCancel A cancellation interruption in the asynchronous computation when a cancellation occurs. It returns an asynchronous computation trigger before being disposed. // This is a simulated cancellable computation. It checks the " token source // to see whether the cancel signal was received. let computation " (tokenSource:System.Threading.CancellationTokenSource) = async { use! cancelHandler = Async.OnCancel(fun () -> printfn " "Canceling operation.") // Async.Sleep checks for cancellation at the end of " the sleep interval, // so loop over many short sleep intervals instead of " sleeping // for a long time. while true do do! Async.Sleep(100) } let tokenSource1 = new " System.Threading.CancellationTokenSource() let tokenSource2 = new " System.Threading.CancellationTokenSource() Async.Start(computation tokenSource1, tokenSource1.Token) Async.Start(computation tokenSource2, tokenSource2.Token) printfn "Started computations." System.Threading.Thread.Sleep(1000) printfn "Sending cancellation signal." tokenSource1.Cancel() tokenSource2.Cancel() The preceding example implements the Async.OnCancel method to catch or interrupt the process when CancellationTokenSource is cancelled. Summary In this article, we went through detail, explanations about different semantics in asynchronous programming with F#, used with asynchronous workflows. We saw a number of functions from the Async module. Resources for Article: Further resources on this subject: Creating an F# Project [article] Asynchronous Control Flow Patterns with ES2015 and beyond [article] Responsive Applications with Asynchronous Programming [article]
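One detail the parallel example glosses over is failure: if any single download in Async.Parallel raises an exception, the whole composition fails. A hedged sketch of one way to cope with this, using Async.Catch from the same Async module, is shown below; each download is wrapped so that a failing site simply reports its error instead of aborting the rest.

// Wrap each download in Async.Catch so one failing site does not
// abort the whole parallel composition.
let parallelDownloadSafe () =
    let sites = [ "http://www.bing.com"; "http://www.google.com";
                  "http://does-not-exist.invalid" ]
    sites
    |> List.map (fun site -> Async.Catch (downloadPage site))
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Array.iter (function
        | Choice1Of2 html -> printfn "Downloaded %d characters" html.Length
        | Choice2Of2 ex -> printfn "Failed: %s" ex.Message)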

Create a User Profile System and use the Null Coalesce Operator

Packt
12 Oct 2016
15 min read
In this article, Jose Palala and Martin Helmich, authors of PHP 7 Programming Blueprints, will show you how to build a simple profiles page with listed users which you can click on, and create a simple CRUD-like system which will enable us to register new users to the system, and delete users for banning purposes. (For more resources related to this topic, see here.) You will learn to use the PHP 7 null coalesce operator so that you can show data if there is any, or just display a simple message if there isn't any. Let's create a simple UserProfile class. The ability to create classes has been available since PHP 5. A class in PHP starts with the word class, and the name of the class: class UserProfile { private $table = 'user_profiles'; } We've made the table variable private, and in it we define which table the class will be related to. Let's add two functions, also known as methods, inside the class to simply fetch the data from the database: function fetch_one($id) { $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database'); $query = "SELECT * FROM " . $this->table . " WHERE `id` = '" . $id . "'"; $results = mysqli_query($link, $query); return $results; } function fetch_all() { $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database'); $query = "SELECT * FROM " . $this->table; $results = mysqli_query($link, $query); return $results; } The null coalesce operator We can use PHP 7's null coalesce operator to allow us to check whether our results contain anything, or return a defined text which we can check on the views—this will be responsible for displaying any data. Let's put this in a file which will contain all the define statements, and call it definitions.php: //definitions.php define('NO_RESULTS_MESSAGE', 'No results found'); require('definitions.php'); function fetch_all() { …same lines ... $message = $results ?? NO_RESULTS_MESSAGE; return $message; } On the client side, we'll need to come up with a template to show the list of user profiles. Let's create a basic HTML block to show that each profile can be a div element with several list item elements to output each field. In the following function, we need to make sure that all values have been filled in with at least the name and the age. Then we simply return the entire string when the function is called: function profile_template( $name, $age, $country ) { $name = $name ?? null; $age = $age ?? null; if($name == null || $age === null) { return 'Name or Age need to be set'; } else { return '<div> <li>Name: ' . $name . ' </li> <li>Age: ' . $age . '</li> <li>Country: ' . $country . ' </li> </div>'; } } Separation of concerns In a proper MVC architecture, we need to separate the view from the models that get our data, and the controllers will be responsible for handling business logic. In our simple app, we will skip the controller layer since we just want to display the user profiles in one public facing page. The preceding function is also known as the template render part in an MVC architecture. While there are frameworks available for PHP that use the MVC architecture out of the box, for now we can stick to what we have and make it work. PHP frameworks can benefit a lot from the null coalesce operator. In some code bases that I've worked with, we used to use the ternary operator a lot, but still had to add more checks to ensure a value was not falsy. Furthermore, the ternary operator can get confusing, and takes some getting used to. The other alternative is to use the isSet function. 
However, due to the nature of the isSet function, some falsy values will be interpreted by PHP as being a set. Creating views Now that we have our  model complete, a template render function, we just need to create the view with which we can look at each profile. Our view will be put inside a foreach block, and we'll use the template we wrote to render the right values: //listprofiles.php <html> <!doctype html> <head> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css"> </head> <body> <?php foreach($results as $item) { echo profile_template($item->name, $item->age, $item->country; } ?> </body> </html> Let's put the code above into index.php. While we may install the Apache server, configure it to run PHP, install new virtual hosts and the other necessary featuress, and put our PHP code into an Apache folder, this will take time, so, for the purposes of testing this out, we can just run PHP's server for development. To  run the built-in PHP server (read more at http://php.net/manual/en/features.commandline.webserver.php) we will use the folder we are running, inside a terminal: php -S localhost:8000 If we open up our browser, we should see nothing yet—No results found. This means we need to populate our database. If you have an error with your database connection, be sure to replace the correct database credentials we supplied into each of the mysql_connect calls that we made. To supply data into our database, we can create a simple SQL script like this: INSERT INTO user_profiles ('Chin Wu', 30, 'Mongolia'); INSERT INTO user_profiles ('Erik Schmidt', 22, 'Germany'); INSERT INTO user_profiles ('Rashma Naru', 33, 'India'); Let's save it in a file such as insert_profiles.sql. In the same directory as the SQL file, log on to the MySQL client by using the following command: mysql -u root -p Then type use <name of database>: mysql> use <database>; Import the script by running the source command: mysql> source insert_profiles.sql Now our user profiles page should show the following: Create a profile input form Now let's create the HTML form for users to enter their profile data. Our profiles app would be no use if we didn't have a simple way for a user to enter their user profile details. We'll create the profile input form  like this: //create_profile.php <html> <body> <form action="post_profile.php" method="POST"> <label>Name</label><input name="name"> <label>Age</label><input name="age"> <label>Country</label><input name="country"> </form> </body> </html> In this profile post, we'll need to create a PHP script to take care of anything the user posts. It will create an SQL statement from the input values and output whether or not they were inserted. We can use the null coalesce operator again to verify that the user has inputted all values and left nothing undefined or null: $name = $_POST['name'] ?? ""; $age = $_POST['country'] ?? ""; $country = $_POST['country'] ?? ""; This prevents us from accumulating errors while inserting data into our database. First, let's create a variable to hold each of the inputs in one array: $input_values = [ 'name' => $name, 'age' => $age, 'country' => $country ]; The preceding code is a new PHP 5.4+ way to write arrays. In PHP 5.4+, it is no longer necessary to put an actual array(); the author personally likes the new syntax better. 
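To make the comparison with the older idioms concrete, here is a short illustrative sketch (the variable names are only examples): the pre-PHP 7 ternary-plus-isset() pattern next to the null coalesce operator, which performs the isset() check for us and can be chained.

// Pre-PHP 7: ternary plus isset(), verbose and easy to get wrong
$name = isset($_POST['name']) ? $_POST['name'] : 'Anonymous';

// PHP 7: null coalesce does the isset() check for us
$name = $_POST['name'] ?? 'Anonymous';

// It can also be chained: the first value that is set and not null wins
$displayName = $_GET['name'] ?? $_POST['name'] ?? 'Anonymous';

With that comparison out of the way, let's return to the $input_values array we just built.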
We should create a new method in our UserProfile class to accept these values: Class UserProfile { public function insert_profile($values) { $link = mysqli_connect('127.0.0.1', 'username','password', 'databasename'); $q = " INSERT INTO " . $this->table . " VALUES ( '". $values['name']."', '".$values['age'] . "' ,'". $values['country']. "')"; return mysqli_query($q); } } Instead of creating a parameter in our function to hold each argument as we did with our profile template render function, we can simply use an array to hold our values. This way, if a new field needs to be inserted into our database, we can just add another field to the SQL insert statement. While we are at it, let's create the edit profile section. For now, we'll assume that whoever is using this edit profile is the administrator of the site. We'll need to create a page where, provided the $_GET['id'] or has been set, that the user that we will be fetching from the database and displaying on the form. <?php require('class/userprofile.php');//contains the class UserProfile into $id = $_GET['id'] ?? 'No ID'; //if id was a string, i.e. "No ID", this would go into the if block if(is_numeric($id)) { $profile = new UserProfile(); //get data from our database $results = $user->fetch_id($id); if($results && $results->num_rows > 0 ) { while($obj = $results->fetch_object()) { $name = $obj->name; $age = $obj->age; $country = $obj->country; } //display form with a hidden field containing the value of the ID ?> <form action="post_update_profile.php" method="post"> <label>Name</label><input name="name" value="<?=$name?>"> <label>Age</label><input name="age" value="<?=$age?>"> <label>Country</label><input name="country" value="<?=country?>"> </form> <?php } else { exit('No such user'); } } else { echo $id; //this should be No ID'; exit; } Notice that we're using what is known as the shortcut echo statement in the form. It makes our code simpler and easier to read. Since we're using PHP 7, this feature should come out of the box. Once someone submits the form, it goes into our $_POST variable and we'll create a new Update function in our UserProfile class. Admin system Let's finish off by creating a simple grid for an admin dashboard portal that will be used with our user profiles database. Our requirement for this is simple: We can just set up a table-based layout that displays each user profile in rows. From the grid, we will add the links to be able to edit the profile, or  delete it, if we want to. The code to display a table in our HTML view would look like this: <table> <tr> <td>John Doe</td> <td>21</td> <td>USA</td> <td><a href="edit_profile.php?id=1">Edit</a></td> <td><a href="profileview.php?id=1">View</a> <td><a href="delete_profile.php?id=1">Delete</a> </tr> </table> This script to this is the following: //listprofiles.php $sql = "SELECT * FROM userprofiles LIMIT $start, $limit "; $rs_result = mysqli_query ($sql); //run the query while($row = mysqli_fetch_assoc($rs_result) { ?> <tr> <td><?=$row['name'];?></td> <td><?=$row['age'];?></td> <td><?=$row['country'];?></td> <td><a href="edit_profile.php?id=<?=$id?>">Edit</a></td> <td><a href="profileview.php?id=<?=$id?>">View</a> <td><a href="delete_profile.php?id=<?=$id?>">Delete</a> </tr> <?php } There's one thing that we haven't yet created: A delete_profile.php page. The view and edit pages - have been discussed already. 
Here's how the delete_profile.php page would look: <?php //delete_profile.php $connection = mysqli_connect('localhost','<username>','<password>', '<databasename>'); $id = $_GET['id'] ?? 'No ID'; if(is_numeric($id)) { mysqli_query( $connection, "DELETE FROM userprofiles WHERE id = '" .$id . "'"); } else { echo $id; } i(!is_numeric($id)) { exit('Error: non numeric $id'); } else { echo "Profile #" . $id . " has been deleted"; ?> Of course, since we might have a lot of user profiles in our database, we have to create a simple pagination. In any pagination system, you just need to figure out the total number of rows, and how many rows you want displayed per page. We can create a function that will be able to return a URL that contains the page number and how many to view per page. From our queries database, we first create a new function for us to select only up to the total number of items in our database: class UserProfile{ // …. Etc … function count_rows($table) { $dbconn = new mysqli('localhost', 'root', 'somepass', 'databasename'); $query = $dbconn->query("select COUNT(*) as num from '". $table . "'"); $total_pages = mysqli_fetch_array($query); return $total_pages['num']; //fetching by array, so element 'num' = count } For our pagination, we can create a simple paginate function which accepts the base_url of the page where we have pagination, the rows per page — also known as the number of records we want each page to have — and the total number of records found: require('definitions.php'); require('db.php'); //our database class Function paginate ($baseurl, $rows_per_page, $total_rows) { $pagination_links = array(); //instantiate an array to hold our html page links //we can use null coalesce to check if the inputs are null ( $total_rows || $rows_per_page) ?? exit('Error: no rows per page and total rows); //we exit with an error message if this function is called incorrectly $pages = $total_rows % $rows_per_page; $i= 0; $pagination_links[$i] = "<a href="http://". $base_url . "?pagenum=". $pagenum."&rpp=".$rows_per_page. ">" . $pagenum . "</a>"; } return $pagination_links; } This function will help display the above page links in a table: function display_pagination($links) { $display = ' <div class="pagination">'; <table><tr>'; foreach ($links as $link) { echo "<td>" . $link . "</td>"; } $display .= '</tr></table></div>'; return $display; } Notice that we're following the principle that there should rarely be any echo statements inside a function. This is because we want to make sure that other users of these functions are not confused when they debug some mysterious output on their page. By requiring the programmer to echo out whatever the functions return, it becomes easier to debug our program. Also, we're following the separation of concerns—our code doesn't output the display, it just formats the display. So any future programmer can just update the function's internal code and return something else. It also makes our function reusable; imagine that in the future someone uses our function—this way, they won't have to double check that there's some misplaced echo statement within our functions. A note on alternative short tags As you know, another way to echo is to use  the <?= tag. You can use it like so: <?="helloworld"?>.These are known as short tags. In PHP 7, alternative PHP tags have been removed. The RFC states that <%, <%=, %> and <script language=php>  have been deprecated. 
The RFC at https://wiki.php.net/rfc/remove_alternative_php_tags says that it does not remove short opening tags (<?) or short opening tags with echo (<?=). Since we have laid out the groundwork for creating the paginate links, we now just have to invoke our functions. The following script is all that is needed to create a paginated page using the preceding functions:

$mysqli = mysqli_connect('localhost','<username>','<password>', '<dbname>'); $limit = $_GET['rpp'] ?? 10; //how many items to show per page, default 10 $pagenum = $_GET['pagenum'] ?? 0; //what page we are on if($pagenum) $start = ($pagenum - 1) * $limit; //first item to display on this page else $start = 0; //if no page var is given, set start to 0

/* Display records here */ $sql = "SELECT * FROM userprofiles LIMIT $start, $limit"; $rs_result = mysqli_query($mysqli, $sql); //run the query while($row = mysqli_fetch_assoc($rs_result)) { ?> <tr> <td><?php echo $row['name']; ?></td> <td><?php echo $row['age']; ?></td> <td><?php echo $row['country']; ?></td> </tr> <?php }

/* Let's show our page links */ /* get the total number of records */ $db = new UserProfile(); $record_count = $db->count_rows('userprofiles'); $pagination_links = paginate('listprofiles.php', $limit, $record_count); echo display_pagination($pagination_links);

The HTML output of our page links in listprofiles.php will look something like this:

<div class="pagination"><table> <tr> <td> <a href="listprofiles.php?pagenum=1&rpp=10">1</a> </td> <td><a href="listprofiles.php?pagenum=2&rpp=10">2</a> </td> <td><a href="listprofiles.php?pagenum=3&rpp=10">3</a> </td> </tr> </table></div>

Summary

As you can see, we have a lot of use cases for the null coalesce operator. We learned how to make a simple user profile system, and how to use PHP 7's null coalesce feature when fetching data from the database, which returns null if there are no records. We also learned that the null coalesce operator is similar to a ternary operator, except that it checks specifically for null and falls back to the right-hand value when the left-hand value is missing.

Resources for Article: Further resources on this subject: Running Simpletest and PHPUnit [article] Mapping Requirements for a Modular Web Shop App [article] HTML5: Generic Containers [article]

Reactive Python - Asynchronous programming to the rescue, Part 2

Xavier Bruhiere
10 Oct 2016
5 min read
This two-part series explores asynchronous programming with Python using Asyncio. In Part 1 of this series, we started by building a project that shows how you can use Reactive Python in asynchronous programming. Let’s pick it back up here by exploring peer-to-peer communication and then just touching on service discovery before examining the streaming machine-to-machine concept. Peer-to-peer communication So far we’ve established a websocket connection to process clock events asynchronously. Now that one pin swings between 1's and 0's, let's wire a buzzer and pretend it buzzes on high states (1) and remains silent on low ones (0). We can rephrase that in Python, like so: # filename: sketches.py import factory class Buzzer(factory.FactoryLoop): """Buzz on light changes.""" def setup(self, sound): # customize buzz sound self.sound = sound @factory.reactive async def loop(self, channel, signal): """Buzzing.""" behavior = self.sound if signal == '1' else '...' self.out('signal {} received -> {}'.format(signal, behavior)) return behavior So how do we make them to communicate? Since they share a common parent class, we implement a stream method to send arbitrary data and acknowledge reception with, also, arbitrary data. To sum up, we want IOPin to use this API: class IOPin(factory.FactoryLoop): # [ ... ] @protocol.reactive async def loop(self, channel, msg): # [ ... ] await self.stream('buzzer', bits_stream) return 'acknowledged' Service discovery The first challenge to solve is service discovery. We need to target specific nodes within a fleet of reactive workers. This topic, however, goes past the scope of this post series. The shortcut below will do the job (that is, hardcode the nodes we will start), while keeping us focused on reactive messaging. # -*- coding: utf-8 -*- # vim_fenc=utf-8 # # filename: mesh.py """Provide nodes network knowledge.""" import websockets class Node(object): def __init__(self, name, socket, port): print('[ mesh ] registering new node: {}'.format(name)) self.name = name self._socket = socket self._port = port def uri(self, path): return 'ws://{socket}:{port}/{path}'.format(socket=self._socket, port=self._port, path=path) def connection(self, path=''): # instanciate the same connection as `clock` method return websockets.connect(self.uri(path)) # TODO service discovery def grid(): """Discover and build nodes network.""" # of course a proper service discovery should be used here # see consul or zookkeeper for example # note: clock is not a server so it doesn't need a port return [ Node('clock', 'localhost', None), Node('blink', 'localhost', 8765), Node('buzzer', 'localhost', 8765 + 1) ] Streaming machine-to-machine chat Let's provide FactoryLoop with the knowledge of the grid and implement an asynchronous communication channel. # filename: factory.py (continued) import mesh class FactoryLoop(object): def __init__(self, *args, **kwargs): # now every instance will know about the other ones self.grid = mesh.grid() # ... 
def node(self, name): """Search for the given node in the grid.""" return next(filter(lambda x: x.name == name, self.grid)) async def stream(self, target, data, channel): self.out('starting to stream message to {}'.format(target)) # use the node websocket connection defined in mesh.py # the method is exactly the same as the clock async with self.node(target).connection(channel) as ws: for partial in data: self.out('> sending payload: {}'.format(partial)) # websockets requires bytes or strings await ws.send(str(partial)) self.out('< {}'.format(await ws.recv()))

We added a few debugging lines to better understand how the data flows through the network. Every implementation of the FactoryLoop can both react to events and communicate with other nodes it is aware of.

Wrapping up

Time to update arduino.py and run our cluster of three reactive workers in three terminals:

@click.command() # [ ... ] def main(sketch, **flags): # [ ... ] elif sketch == 'buzzer': sketches.Buzzer(sound='buzz buzz buzz').run(flags['socket'], flags['port'])

Launch three terminals or use a tool such as foreman to spawn multiple processes. Either way, keep in mind that you will need to track the scripts' output.

$ # start IOPin and Buzzer on the same ports we hardcoded in mesh.py $ ./arduino.py buzzer --port 8766 $ ./arduino.py iopin --port 8765 $ # now that they listen, trigger actions with the clock (targeting IOPin port) $ ./arduino.py clock --port 8765 [ ... ] $ # Profit !

We just saw one worker reacting to a clock and another reacting to randomly generated events. The websocket protocol allowed us to exchange streaming data and receive arbitrary responses, unlocking sophisticated fleet orchestration. While we limited this example to two nodes, a powerful service discovery mechanism could bring to life a distributed network of microservices. By completing this post series, you should now have a better understanding of how to use Python with Asyncio for asynchronous programming.

About the author

Xavier Bruhiere is a lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.

Modern Natural Language Processing – Part 3

Brian McMahan
07 Oct 2016
8 min read
In the previous two posts, I walked through how to preprocess raw data to a cleaner version and then turn that into a form which can be used in a machine learning experiment. I also discussed how you can set up a modular infrastructure so changing components isn't a hassle and your workflow is streamlined. In this final post in the series, I will outline a language model and discuss the modeling choices. I will outline the algorithms needed to both decode from the language model and to sample from it. Note that if you want to do a sequence labeling task instead of a language modeling task, the outputs must become your sequence labels, but the inputs are your sentences. The Language Model A language model has one goal: do not be surprised by the next token given all previous tokens. This translates into trying to maximize the probability of the next word given the previously seen words. It is useful to think of the 'shape' of our data's matrices at each step in our model. Specifically, though, our model will go through the following steps: Take as input the data being served from our server 2-dimensional matrices: (batch, sequence_length) Embed it using the matrix we constructed 3-dimensional tensors: (batch, sequence_length, embedding_size) Use any RNN variant to sequentially go through our data This will assume each example is on the batch dimension and time on the sequence_length dimension It will take vectors of size embedding_size and perform operations on them. Using the RNN output, we will apply a Dense layer, which will perform a classification back to our vocabulary space. This is our target. 0. Imports from keras.layers import Input, Embedding, LSTM, Dropout, Dense, TimeDistributed from keras.engine import Model from keras.optimizers import Adam from keras.callbacks import ModelCheckpoint class TrainingModel(object): def__init__(self, igor): self.igor = igor def make(self): ### code below goes here 1. Input Defining an entry point into Keras is very similar to the other layers. The only difference is that you have to give it information about the shape of the input data. I tend to give it more information than it needs—the batch size in addition to the sequence length—because omitting the batch size is useful when you watch variable batch sizes, but it serves no purpose otherwise. It also quells any paranoid worries that the model will break because it got the shape wrong at some point. words_in = Input(batch_shape=(igor.batch_size, igor.sequence_length), dtype='int32') 2. Embed This is where we can use the embeddings we had previously calculated. Note the mask_zero flag. This is set to True so that the Layer will calculate the mask—where each position in the input tensor is equal to 0. The Layer, in accordance to Keras' underlying functionality, is then pushed through the network to be used in final calculations. emb_W = self.igor.embeddings.astype(K.floatx()) words_embedded = Embedding(igor.vocab_size, igor.embedding_size, mask_zero=True, weights=[emb_W])(words_in) 3. Recurrence word_seq = LSTM(igor.rnn_size, return_sequences=True)(words_embedded) 4. Classification predictions = TimeDistributed(Dense(igor.vocab_size, activation='softmax'))(word_seq) 5. Compile Model Now, we can compile the model. Keras makes this simple: specify the inputs, outputs, loss, optimizer and metrics. I have omitted the custom metrics for now. I will bring them back up in evaluations below. 
optimizer = Adam(igor.learning_rate) model = Model(input=words_in, output=predictions) model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=custom_metrics)

All together

### model.py from keras.layers import Input, Embedding, LSTM, Dropout, Dense, TimeDistributed from keras.engine import Model from keras.optimizers import Adam from keras.callbacks import ModelCheckpoint class TrainingModel(object): def __init__(self, igor): self.igor = igor def make(self): words_in = Input(batch_shape=(igor.batch_size, igor.sequence_length), dtype='int32') words_embedded = Embedding(igor.vocab_size, igor.embedding_size, mask_zero=True)(words_in) word_seq = LSTM(igor.rnn_size, return_sequences=True)(words_embedded) predictions = TimeDistributed(Dense(igor.vocab_size, activation='softmax'))(word_seq) optimizer = Adam(igor.learning_rate) self.model = Model(input=words_in, output=predictions) self.model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=custom_metrics)

Training

The driver is a useful part of the pipeline. Not only does it give a convenient entry point to the training, but it also allows you to easily switch between training, debugging, and testing.

### driver.py if __name__ == "__main__": import sys igor = Igor.from_file(sys.argv[2]) if sys.argv[1] == "train": igor.prep() next(igor.train_gen(forever=True)) model = TrainingModel(igor) model.make() try: model.train() except KeyboardInterrupt as e: # safe exiting stuff here. # perhaps, model save. print("death by keyboard")

Train Function

### model.py class TrainingModel(object): # ... def train(self): igor = self.igor train_data = igor.train_gen(forever=True) dev_data = igor.dev_gen(forever=True) callbacks = [ModelCheckpoint(filepath=igor.checkpoint_filepath, verbose=1, save_best_only=True)] self.model.fit_generator(generator=train_data, samples_per_epoch=igor.num_train_samples, nb_epoch=igor.num_epochs, callbacks=callbacks, verbose=1, validation_data=dev_data, nb_val_samples=igor.num_dev_samples)

Failure to learn

There are many ways that learning can fail. Stanford's CS231N course has a few things on this. Additionally, there are many Quora and Stack Overflow posts on debugging the learning process.

Evaluating

Language model evaluations aim to quantify how well the model captures the signal and anticipates the noise. For this, there are two standard metrics. The first is an aggregate of the probabilities of the model: Log Likelihood or Negative Log Likelihood. I will use Negative Log Likelihood (NLL) because it is more interpretable. The other is Perplexity. It is closely related to NLL and originates from information theory as a way to quantify the information gain of the model's learned distribution relative to the empirical distribution of the test dataset. It is usually interpreted as the uniform uncertainty left in the data. At the time of writing this blog, masks in Keras do not get used in the accuracy calculations, but this will soon be implemented. Until then, there is a Keras fork that has these implemented. It can be found here. The custom_metrics from above would then simply be ['accuracy', 'perplexity'].

Decoding

Decoding is the process by which you infer a sequence of labels from a sequence of inputs. The idea and algorithms for it come from signal processing research, in which a noisy channel emits a signal and the task is to recover it.
Typically, there is an encoder at one end that provides information so that a decoder at the other end can decode it. This means sequentially deciding which discrete token each part of the signal represents. In NLP, decoding is essentially the same task. In a sequence, the history of tokens can influence the likelihood of future tokens, so naive decoding by selecting the most likely token at each time step may not be the optimal sequence. The alternative solution, enumerating all possible sequences, is prohibitively expensive because of the combinatoric explosion of paths. Luckily, there are dynamic programming algorithms, such as the Viterbi algorithm, which solve such issues. The idea behind Viterbi is simple: Obtain the maximum likelihood classification at the last time step. At each time t, there is a set of k hypotheses that can be True. By looking at the previous time steps k hypotheses scores and the cost of transition from each to the current k possible, you can compute the best path so far for each of the k hypotheses. Thus, at every time step, you can do a linear update and ensure the optimal set of paths. At every time step, the backpointers were cached and used at the final time step to decode the path through the k states. Viterbi has its own limitations, such as also becoming expensive when the discrete hypothesis space is large. There are additional approximations, such as Beam Search, which uses a subset of the viterbi paths (selected by score at every time step). Several tasks are accomplished with this decoding. Sampling to produce a sentence (or caption an image) is typically done with a Beam search. Additionally, labeling each word in a sentence (such as part of speech tagging or entity tagging) is done with a sequential decoding procedure. Conclusion Now that you have completed this three part series, you can start to run your own NLP experiments! About the author Brian McMahan is in his final year of graduate school at Rutgers University, completing a PhD in computer science and an MS in cognitive psychology. He holds a BS in cognitive science from Minnesota State University, Mankato. At Rutgers, Brian investigates how natural language and computer vision can be brought closer together with the aim of developing interactive machines that can coordinate in the real world. His research uses machine learning models to derive flexible semantic representations of open-ended perceptual language.

How to start Chainer

Masayuki Takagi
07 Oct 2016
11 min read
Chainer is a powerful, flexible, and intuitive framework for deep learning. In this post, I will help you to get started with Chainer. Why is Chainer important to learn? With its "Define by run" approach, it provides very flexible and easy ways to construct—and debug if you encounter some troubles—network models. It also works efficiently with well-tuned numpy. Additionally, it provides transparent GPU access facility with a fairly sophisticated cupy library, and it is actively developed. How to install Chainer Here I’ll show you how to install Chainer. Chainer has CUDA and cuDNN support. If you want to use Chainer with them, follow along in the "Install Chainer with CUDA and cuDNN support" section. Install Chainer via pip You can install Chainer easily via pip, which is the officially recommended way. $ pip install chainer Install Chainer from source code You can also install it from its source code. $ tar zxf chainer-x.x.x.tar.gz $ cd chainer-x.x.x $ python setup.py install Install Chainer with CUDA and cuDNN support Chainer has GPU support with Nvidia CUDA. If you want to use Chainer with CUDA support, install the related software in the following order. Install CUDA Toolkit (optional) Install cuDNN Install Chainer Chainer has cuDNN suport as well as CUDA. cuDNN is a library for accelerating deep neural networks and Nvidia provides it. If you want to enable cuDNN support, install cuDNN in an appropriate path before installing Chainer. The recommended install path is in the CUDA Toolkit directory. $ cp /path/to/cudnn.h $CUDA_PATH/include $ cp /path/to/libcudnn.so* $CUDA_PATH/lib64 After that, if the CUDA Toolkit (and cuDNN if you want) is installed in its default path, the Chainer installer finds them automatically. $ pip install chainer If CUDA Toolkit is in a directory other than the default one, setting the CUDA_PATH environment variable helps. $ CUDA_PATH=/opt/nvidia/cuda pip install chainer In case you have already installed Chainer before setting up CUDA (and cuDNN), reinstall Chainer after setting them up. $ pip uninstall chainer $ pip install chainer --no-cache-dir Trouble shooting If you have some trouble installing Chainer, using the -vvvv option with the pip command may help you. It shows all logs during installation. $ pip install chainer -vvvv Train multi-layer perceptron with MNIST dataset Now I cover how to train the multi-layer perceptron (MLP) model with the MNIST handwritten digit dataset with Chainer is shown. It is very simple because Chainer provides an example program to do it. Additionally, we run the program on GPU to compare its training efficiency between CPU and GPU. Run MNIST example program Chainer proves an example program to train the multi-layer perceptron (MLP) model with the MNIST dataset. $ cd chainer/examples/mnist $ ls *.py data.py net.py train_mnist.py data.py, which is used from train_mnist.py, is a script for downloading the MNIST dataset from the Internet. net.py is also used from train_mnist.py as a script where a MLP network model is defined. train_mnist.py is a script we run to train the MLP model. It does the following things: Download and convert the MNIST dataset. Prepare an MLP model. Train the MLP model with the MNIST dataset. Test the trained MLP model. Here we run the train_mnist.py script on the CPU. It may take a couple of hours depending on your machine spec. $ python train_mnist.py GPU: -1 # unit: 1000 # Minibatch-size: 100 # epoch: 20 Network type: simple load MNIST dataset Downloading train-images-idx3-ubyte.gz... 
Done Downloading train-labels-idx1-ubyte.gz... Done Downloading t10k-images-idx3-ubyte.gz... Done Downloading t10k-labels-idx1-ubyte.gz... Done Converting training data... Done Converting test data... Done Save output... Done Convert completed epoch 1 graph generated train mean loss=0.192189417146, accuracy=0.941533335938, throughput=121.97765842 images/sec test mean loss=0.108210202637, accuracy=0.966000005007 epoch 2 train mean loss=0.0734026790201, accuracy=0.977350010276, throughput=122.715585263 images/sec test mean loss=0.0777539889357, accuracy=0.974500003457 ... epoch 20 train mean loss=0.00832913763473, accuracy=0.997666668793, throughput=121.496046895 images/sec test mean loss=0.131264564424, accuracy=0.978300007582 save the model save the optimizer

After training for 20 epochs, the MLP model is well trained, achieving a test loss of around 0.131 and a test accuracy of around 0.978. The trained model and state files are stored as mlp.model and mlp.state respectively, so we can resume training or testing with them.

Accelerate training MLP with GPU

The train_mnist.py script has a --gpu option to train the MLP model on GPU. To use it, specify which GPU to use by giving a GPU index, which is an integer beginning from 0; -1 means the CPU is used.

$ python train_mnist.py --gpu 0 GPU: 0 # unit: 1000 # Minibatch-size: 100 # epoch: 20 Network type: simple load MNIST dataset epoch 1 graph generated train mean loss=0.189480076165, accuracy=0.942750002556, throughput=12713.8872884 images/sec test mean loss=0.0917134844698, accuracy=0.97090000689 epoch 2 train mean loss=0.0744868403107, accuracy=0.976266676188, throughput=14545.6445472 images/sec test mean loss=0.0737037020434, accuracy=0.976600006223 ... epoch 20 train mean loss=0.00728972356146, accuracy=0.9978333353, throughput=14483.9658281 images/sec test mean loss=0.0995463995047, accuracy=0.982400006056 save the model save the optimizer

After training for 20 epochs, the MLP model is trained as well as it was on the CPU, reaching a test loss of around 0.0995 and a test accuracy of around 0.982. Comparing average throughput, the GPU run processes more than 10,000 images/sec, while the CPU run shown in the previous section managed only around 100 images/sec. The CPU in my machine is admittedly weak compared with its GPU, and in fairness the CPU version runs in a single thread, but this result should be enough to illustrate the efficiency of training on the GPU.

Little code required to run Chainer on GPU

Very little extra code is required to run Chainer on the GPU. In the train_mnist.py script, only three changes are needed: Select the GPU device to use. Transfer the network model to the GPU. Allocate the multi-dimensional matrices on the GPU device installed on the host. The following lines of train_mnist.py are the relevant ones.

... if args.gpu >= 0: cuda.get_device(args.gpu).use() model.to_gpu() xp = np if args.gpu < 0 else cuda.cupy ...

and

... for i in six.moves.range(0, N, batchsize): x = chainer.Variable(xp.asarray(x_train[perm[i:i + batchsize]])) t = chainer.Variable(xp.asarray(y_train[perm[i:i + batchsize]])) ...

Run Caffe reference models on Chainer

Chainer has a powerful feature to interpret pre-trained Caffe reference models. This enables us to use those models easily on Chainer without enormous training effort. In this part, we import the pre-trained GoogLeNet model and use it to predict what an input image shows.

Model Zoo directory

Chainer has a Model Zoo directory in the following path.
$ cd chainer/example/modelzoo Download Caffe reference model First, download a Caffe reference model from BVLC Model Zoo. Chainer provides a simple script for that. This time use the pre-trained GoogLeNet model. $ python download_model.py googlenet Downloading model file... Done $ ls *.caffemodel bvlc_googlenet.caffemodel Download ILSVRC12 mean file We also download the ILSVRC12 mean image file, which Chainer uses. This mean image is used to subtract from images we want to predict. $ python download_mean_file.py Downloading ILSVRC12 mean file for NumPy... Done $ ls *.npy ilsvrc_2012_mean.npy Get ILSVRC12 label descriptions Because the output of the Caffe reference model is a set of label indices as integers, we cannot find out which index means which category of images. So we get label descriptions from BVLC. The row numbers correspond to the label indices of the model. $ wget -c http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz $ tar -xf caffe_ilsvrc12.tar.gz $ cat synset_words.txt | awk '{$1=""; print}' > labels.txt $ cat labels.txt tench, Tinca tinca goldfish, Carassius auratus great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias tiger shark, Galeocerdo cuvieri hammerhead, hammerhead shark electric ray, crampfish, numbfish, torpedo stingray cock hen ostrich, Struthio camelus ... Mac OS X does not have a wget command, so you can use the curl command instead. $ curl -O http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz Python script to run Caffe reference model While there is an evaluate_caffe_net.py script in the modelzoo directory, it is for evaluating the accuracy of interpreted Caffe reference models against the ILSVRC12 dataset. Now input an image and predict what it means. So use another predict_caffe_net.py script that is not included in the modelzoo directory. Here is that code listing. #!/usr/bin/env python from __future__ import print_function import argparse import sys import numpy as np from PIL import Image import chainer import chainer.functions as F from chainer.links import caffe in_size = 224 # Parse command line arguments. parser = argparse.ArgumentParser() parser.add_argument('image', help='Path to input image file.') parser.add_argument('model', help='Path to pretrained GoogLeNet Caffe model.') args = parser.parse_args() # Constant mean over spatial pixels. mean_image = np.ndarray((3, in_size, in_size), dtype=np.float32) mean_image[0] = 104 mean_image[1] = 117 mean_image[2] = 123 # Prepare input image. def resize(image, base_size): width, height = image.size if width > height: new_width = base_size * width / height new_height = base_size else: new_width = base_size new_height = base_size * height / width return image.copy().resize((new_width, new_height)) def clip_center(image, size): width, height = image.size width_offset = (width - size) / 2 height_offset = (height - size) / 2 box = (width_offset, height_offset, width_offset + size, height_offset + size) image1 = image.crop(box) image1.load() return image1 image = Image.open(args.image) image = resize(image, 256) image = clip_center(image, in_size) image = np.asarray(image).transpose(2, 0, 1).astype(np.float32) image -= mean_image # Make input data from the image. x_data = np.ndarray((1, 3, in_size, in_size), dtype=np.float32) x_data[0] = image # Load Caffe model file. print('Loading Caffe model file ...', file=sys.stderr) func = caffe.CaffeFunction(args.model) print('Loaded', file=sys.stderr) # Predict input image. 
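# Added notes on the predict() helper that follows (these comments are not part of
# the original listing): 'loss3/classifier' is GoogLeNet's final fully connected
# layer, so requesting that output from CaffeFunction yields the raw class scores
# for the 1000 ILSVRC categories. The 'loss1/ave_pool' and 'loss2/ave_pool' layers
# feed GoogLeNet's auxiliary classifiers, which only matter during training, so
# disabling them skips unnecessary computation. train=False keeps layers such as
# dropout in test mode, and the final softmax turns the scores into probabilities.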
def predict(x): y, = func(inputs={'data': x}, outputs=['loss3/classifier'], disable=['loss1/ave_pool', 'loss2/ave_pool'], train=False) return F.softmax(y) x = chainer.Variable(x_data, volatile=True) y = predict(x) # Print prediction scores. categories = np.loadtxt("labels.txt", str, delimiter="t") top_k = 20 result = zip(y.data[0].tolist(), categories) result.sort(cmp=lambda x, y: cmp(x[0], y[0]), reverse=True) for rank, (score, name) in enumerate(result[:top_k], start=1): print('#%d | %4.1f%% | %s' % (rank, score * 100, name)) You may need to install Pillow to run this script. Pillow is a Python imaging library and we use it to manipulate input images. $ pip install pillow Run Caffe reference model Now you can run the pre-trained GoogLeNet model on Chainer for prediction. This time, we use the following JPEG image of a dog, specially classified as a long coat Chihuahua in dog breeds. $ ls *.jpg image.jpg Sample image To predict, just run the predict_caffe_net.py script. $ python predict_caffe_net.py image.jpg googlenet bvlc_googlenet.caffemodel Loading Caffe model file bvlc_googlenet.caffemodel... Loaded #1 | 48.2% | Chihuahua #2 | 24.5% | Japanese spaniel #3 | 24.2% | papillon #4 | 1.1% | Pekinese, Pekingese, Peke #5 | 0.5% | Boston bull, Boston terrier #6 | 0.4% | toy terrier #7 | 0.2% | Border collie #8 | 0.2% | Pomeranian #9 | 0.1% | Shih-Tzu #10 | 0.1% | bow tie, bow-tie, bowtie #11 | 0.0% | feather boa, boa #12 | 0.0% | tennis ball #13 | 0.0% | Brabancon griffon #14 | 0.0% | Blenheim spaniel #15 | 0.0% | collie #16 | 0.0% | quill, quill pen #17 | 0.0% | sunglasses, dark glasses, shades #18 | 0.0% | muzzle #19 | 0.0% | Yorkshire terrier #20 | 0.0% | sunglass Here GoogLeNet predicts that most likely the input image means Chihuahua, which is the correct answer. While its prediction percentage is 48.2%, the following candidates are Japanese spaniel (aka. Japanese Chin) and papillon, which are hard to distinguish from Chihuahua even for human eyes at a glance. Conclusion In this post, I showed you some basic entry points to start using Chainer. The first is how to install Chainer. Then we saw how to train a multi-layer perceptron with the MNIST dataset on Chainer, illustrating the efficiency of training on GPU, which you can get easily on Chainer. Finally, I indicated how to run the Caffe reference model on Chainer, which enables you to use pre-trained Caffe models out of the box. As next steps, you may want to learn: Playing with other Chainer examples. Defining your own network models. Implementing your own functions. For details, Chainer's official documentation will help you. About the author Masayuki Takagi is an entrepreneur and software engineer from Japan. His professional experience domains are advertising and deep learning, serving big Japanese corporations. His personal interests are fluid simulation, GPU computing, FPGA, and compiler and programming language design. Common Lisp is his most beloved programming language and he is the author of the cl-cuda library. Masayuki is a graduate of the university of Tokyo and lives in Tokyo with his buddy Plum, a long coat Chihuahua, and aco.

Basics of Classes and Objects

Packt
06 Oct 2016
11 min read
In this article by Steven Lott, the author of the book Modern Python Cookbook, we will see how to use a class to encapsulate data plus processing. (For more resources related to this topic, see here.) Introduction The point of computing is to process data. Even when building something like an interactive game, the game state and the player's actions are the data, the processing computes the next game state and the display update. The data plus processing is ubiquitous. Some games can have a relatively complex internal state. When we think of console games with multiple players and complex graphics, there are complex, real-time state changes. On the other hand, when we think of a very simple casino game like Craps, the game state is very simple. There may be no point established, or one of the numbers 4, 5, 6, 8, 9, 10 may be the established point. The transitions are relatively simple, and are often denoted by moving markers and chips around on the casino table. The data includes the current state, player actions, and rolls of the dice. The processing is the rules of the game. A game like Blackjack has a somewhat more complex internal state change as each card is accepted. In games where the hands can be split, the state of play can become quite complex. The data includes the current game state, the player's commands, and the cards drawn from the deck. Processing is defined by the rules of the game as modified by any house rules. In the case of Craps, the player may place bets. Interestingly, the player's input, has no effect on the game state. The internal state of the game object is determined entirely by the next throw of the dice. This leads to a class design that's relatively easy to visualize. Using a class to encapsulate data plus processing The essential idea of computing is to process data. This is exemplified when we write functions that process data. Often, we'd like to have a number of closely related functions that work with a common data structure. This concept is the heart of object-oriented programming. A class definition will contain a number of methods that will control the internal state of an object. The unifying concept behind a class definition is often captured as a summary of the responsibilities allocated to the class. How can we do this effectively? What's a good way to design a class? Getting Ready Let's look at a simple, stateful object—a pair of dice. The context for this would be an application which simulates the casino game of Craps. The goal is to use simulation of results to help invent a better playing strategy. This will save us from losing real money while we try to beat the house edge. There's an important distinction between the class definition and an instance of the class, called an object. We call this idea – as a whole – Object-Oriented Programming. Our focus is on writing class definitions. Our overall application will create instances of the classes. The behavior that emerges from the collaboration of the instances is the overall goal of the design process. Most of the design effort is on class definitions. Because of this, the name object-oriented programming can be misleading. The idea of emergent behavior is an essential ingredient in object-oriented programming. We don't specify every behavior of a program. Instead, we decompose the program into objects, define the object's state and behavior via the object's classes. The programming decomposes into class definitions based on their responsibilities and collaborations. 
An object should be viewed as a thing—a noun. The behavior of the class should be viewed as verbs. This gives us a hint as to how we can proceed with design classes that work effectively. Object-oriented design is often easiest to understand when it relates to tangible real-world things. It's often easier to write a software to simulate a playing card than to create a software that implements an Abstract Data Type (ADT). For this example, we'll simulate the rolling of die. For some games – like the casino game of Craps – two dice are used. We'll define a class which models the pair of dice. To be sure that the example is tangible, we'll model the pair of dice in the context of simulating a casino game. How to do it... Write down simple sentences that describe what an instance of the class does. We can call these as the problem statements. It's essential to focus on short sentences, and emphasize the nouns and verbs. The game of Craps has two standard dice. Each die has six faces with point values from 1 to 6. Dice are rolled by a player. The total of the dice changes the state of the craps game. However, those rules are separate from the dice. If the two dice match, the number was rolled the hard way. If the two dice do not match, the number was easy. Some bets depend on this hard vs easy distinction. Identify all of the nouns in the sentences. Nouns may identify different classes of objects. These are collaborators. Examples include player and game. Nouns may also identify attributes of objects in questions. Examples include face and point value. Identify all the verbs in the sentences. Verbs are generally methods of the class in question. Examples include rolled and match. Sometimes, they are methods of other classes. Examples include change the state, which applies to the Craps game. Identify any adjectives. Adjectives are words or phrases which clarify a noun. In many cases, some adjectives will clearly be properties of an object. In other cases, the adjectives will describe relationships among objects. In our example, a phrase like the total of the dice is an example of a prepositional phrase taking the role of an adjective. The the total of phrase modifies the noun the dice. The total is a property of the pair of dice. Start writing the class with the class statement. class Dice: Initialize the object's attributes in the __init__ method. def __init__(self): self.faces = None We'll model the internal state of the dice with the self.faces attribute. The self variable is required to be sure that we're referencing an attribute of a given instance of a class. The object is identified by the value of the instance variable, self We could put some other properties here as well. The alternative is to implement the properties as separate methods. These details of the design decision is the subject for using properties for lazy attributes. Define the object's methods based on the various verbs. In our case, we have several methods that must be defined. Here's how we can implement dice are rolled by a player. def roll(self): self.faces = (random.randint(1,6), random.randint(1,6)) We've updated the internal state of the dice by setting the self.faces attribute. Again, the self variable is essential for identifying the object to be updated. Note that this method mutates the internal state of the object. We've elected to not return a value. This makes our approach somewhat like the approach of Python's built-in collection classes. Any method which mutates the object does not return a value. 
This method helps implement the total of the dice changes the state of the Craps game. The game is a separate object, but this method provides a total that fits the sentence. def total(self): return sum(self.faces) These two methods help answer the hard way and easy way questions. def hardway(self): return self.faces[0] == self.faces[1] def easyway(self): return self.faces[0] != self.faces[1] It's rare in a casino game to have a rule that has a simple logical inverse. It's more common to have a rare third alternative that has a remarkably bad payoff rule. In this case, we could have defined easy way as return not self.hardway(). Here's an example of using the class. First, we'll seed the random number generator with a fixed value, so that we can get a fixed sequence of results. This is a way to create a unit test for this class. >>> import random >>> random.seed(1)   We'll create a Dice object, d1. We can then set its state with the roll() method. We'll then look at the total() method to see what was rolled. We'll examine the state by looking at the faces attribute. >>> from ch06_r01 import Dice >>> d1 = Dice() >>> d1.roll() >>> d1.total() 7 >>> d1.faces (2, 5)   We'll create a second Dice object, d2. We can then set its state with the roll() method. We'll look at the result of the total() method, as well as the hardway() method. We'll examine the state by looking at the faces attribute. >>> d2 = Dice() >>> d2.roll() >>> d2.total() 4 >>> d2.hardway() False >>> d2.faces (1, 3)   Since the two objects are independent instances of the Dice class, a change to d2 has no effect on d1. >>> d1.total() 7   How it works... The core idea here is to use ordinary rules of grammar – nouns, verbs, and adjectives – as a way to identify basic features of a class. Noun represents things. A good descriptive sentence should focus on tangible, real-world things more than ideas or abstractions. In our example, dice are real things. We try to avoid using abstract terms like randomizers or event generators. It's easier to describe the tangible features of real things, and then locate an abstract implementation that offers some of the tangible features. The idea of rolling the dice is an example of physical action that we can model with a method definition. Clearly, this action changes the state of the object. In rare cases – one time in 36 – the next state will happen to match the previous state. Adjectives often hold the potential for confusion. There are several cases such as: Some adjectives like first, last, least, most, next, previous, and so on will have a simple interpretation. These can have a lazy implementation as a method or an eager implementation as an attribute value. Some adjectives are more complex phrase like "the total of the dice". This is an adjective phrase built from a noun (total) and a preposition (of). This, too, can be seen as a method or an attribute. Some adjectives involve nouns that appear elsewhere in our software. We might have had a phrase like "the state of the Craps game" is a phrase where "state of" modifies another object, the "Craps game". This is clearly only tangentially related to the dice themselves. This may reflect a relationship between "dice" and "game". We might add a sentence to the problem statement like "The dice are part of the game". This can help clarify the presence of a relationship between game and dice. Prepositional phrases like "are part of" can always be reversed to create the a statement from the other object's point of view—"The game contains dice". 
This can help clarify the relationships among objects. In Python, the attributes of an object are – by default – dynamic. We don't specify a fixed list of attributes. We can initialize some (or all) of the attributes in the __init__() method of a class definition. Since attributes aren't static, we have considerable flexibility in our design.

There's more...

Capturing the essential internal state, and the methods that cause state changes, is the first step in good class design. We can summarize some helpful design principles using the acronym SOLID. Single Responsibility Principle: A class should have one clearly-defined responsibility. Open/Closed Principle: A class should be open to extension – generally via inheritance – but closed to modification. We should design our classes so that we don't need to tweak the code to add or change features. Liskov Substitution Principle: We need to design inheritance so that a subclass can be used in place of the superclass. Interface Segregation Principle: When writing a problem statement, we want to be sure that collaborating classes have as few dependencies as possible. In many cases, this principle will lead us to decompose large problems into many small class definitions. Dependency Inversion Principle: It's less than ideal for a class to depend directly on other classes. It's better if a class depends on an abstraction, and a concrete implementation class is substituted for the abstract class. The goal is to create classes that have the proper behavior and also adhere to the design principles.

Resources for Article: Further resources on this subject: Python Data Structures [article] Web scraping with Python (Part 2) [article] How is Python code organized [article]

Frontend development with Bootstrap 4

Packt
06 Oct 2016
19 min read
In this article by Bass Jobsen, the author of the book Bootstrap 4 Site Blueprints, we will see why Bootstrap's popularity as a frontend web development framework is easy to understand. It provides a palette of user-friendly, cross-browser-tested solutions for the most standard UI conventions. Its ready-made, community-tested combination of HTML markup, CSS styles, and JavaScript plugins greatly speeds up the task of developing a frontend web interface, and it yields a pleasing result out of the gate. With the fundamental elements in place, we can customize the design on top of a solid foundation. (For more resources related to this topic, see here.) However, not all that is popular, efficient, and effective is good. Too often, a handy tool can generate and reinforce bad habits; not so with Bootstrap, at least not necessarily so. Those who have followed it from the beginning know that its first release and early updates occasionally favored pragmatic efficiency over best practices. The fact is that some best practices, including semantic markup, mobile-first design, and performance-optimized assets, require extra time and effort to implement.

Quantity and quality

If handled well, I feel that Bootstrap is a boon for the web development community in terms of quality and efficiency. Since developers are attracted to the web development framework, they become part of a coding community that draws them increasingly toward current best practices. From the start, Bootstrap has encouraged the implementation of tried, tested, and future-friendly CSS solutions, from Nicholas Galagher's CSS normalize to CSS3's displacement of image-heavy design elements. It has also supported (if not always modeled) HTML5 semantic markup.

Improving with age

With the release of v2.0, Bootstrap took responsive design into the mainstream, ensuring that its interface elements could travel well across devices, from desktops to tablets to handhelds. With the v3.0 release, Bootstrap stepped up its game again by providing the following features: The responsive grid was now mobile-first friendly. Icons now utilize web fonts and, thus, are mobile- and retina-friendly. With the drop of support for IE7, markup and CSS conventions became leaner and more efficient. Since version 3.2, autoprefixer is required to build Bootstrap. This article is about the v4.0 release. This release contains many improvements and also some new components, while some other components and plugins are dropped. In the following overview, you will find the most important improvements and changes in Bootstrap 4: Less (Leaner CSS) has been replaced with Sass. CSS code has been refactored to avoid tag and child selectors. There is an improved grid system with a new grid tier to better target mobile devices. The navbar has been replaced. It has opt-in flexbox support. It has a new HTML reset module called Reboot. Reboot extends Nicholas Galagher's CSS normalize and handles the box-sizing: border-box declarations. jQuery plugins are written in ES6 now and come with UMD support. There is improved auto-placement of tooltips and popovers, thanks to the help of a library called Tether. It has dropped support for Internet Explorer 8, which enables us to swap pixels with rem and em units. It has added the Card component, which replaces the wells, thumbnails, and panels of earlier versions. It has dropped the icons in the font format from the Glyphicon Halflings set.
The Affix plugin is dropped, and it can be replaced with the position: sticky polyfill (https://github.com/filamentgroup/fixed-sticky). The power of Sass When working with Bootstrap, there is the power of Sass to consider. Sass is a preprocessor for CSS. It extends the CSS syntax with variables, mixins, and functions and helps you in DRY (Don't Repeat Yourself) coding your CSS code. Sass has originally been written in Ruby. Nowadays, a fast port of Sass written in C++, called libSass, is available. Bootstrap uses the modern SCSS syntax for Sass instead of the older indented syntax of Sass. Using Bootstrap CLI You will be introduced to Bootstrap CLI. Instead of using Bootstrap's bundled build process, you can also start a new project by running the Bootstrap CLI. Bootstrap CLI is the command-line interface for Bootstrap 4. It includes some built-in example projects, but you can also use it to employ and deliver your own projects. You'll need the following software installed to get started with Bootstrap CLI: Node.js 0.12+: Use the installer provided on the NodeJS website, which can be found at http://nodejs.org/ With Node installed, run [sudo] npm install -g grunt bower Git: Use the installer for your OS Windows users can also try Git Gulp is another task runner for the Node.js system. Note that if you prefer Gulp over Grunt, you should install gulp instead of grunt with the following command: [sudo] npm install -g gulp bower The Bootstrap CLI is installed through npm by running the following command in your console: npm install -g bootstrap-cli This will add the bootstrap command to your system. Preparing a new Bootstrap project After installing the Bootstrap CLI, you can create a new Bootstrap project by running the following command in your console: bootstrap new --template empty-bootstrap-project-gulp Enter the name of your project for the question "What's the project called? (no spaces)". A new folder with the project name will be created. After the setup process, the directory and file structure of your new project folder should look as shown in the following figure: The project folder also contains a Gulpfile.js file. Now, you can run the bootstrap watch command in your console and start editing the html/pages/index.html file. The HTML templates are compiled with Panini. Panini is a flat file compiler that helps you to create HTML pages with consistent layouts and reusable partials with ease. You can read more about Panini at http://foundation.zurb.com/sites/docs/panini.html. Responsive features and breakpoints Bootstrap has got four breakpoints at 544, 768, 992, and 1200 pixels by default. At these breakpoints, your design may adapt to and target specific devices and viewport sizes. Bootstrap's mobile-first and responsive grid(s) also use these breakpoints. You can read more about the grids later on. You can use these breakpoints to specify and name the viewport ranges. The extra small (xs) range is for portrait phones with a viewport smaller than 544 pixels, the small (sm) range is for landscape phones with viewports smaller than 768pixels, the medium (md) range is for tablets with viewports smaller than 992pixels, the large (lg) range is for desktop with viewports wider than 992pixels, and finally the extra-large (xl) range is for desktops with a viewport wider than 1200 pixels. The breakpoints are in pixel values, as the viewport pixel size does not depend on the font size and modern browsers have already fixed some zooming bugs. 
Some people claim that em values should be preferred. To learn more about this, check out the following link: http://zellwk.com/blog/media-query-units/. Those who still prefer em values over pixels value can simply change the $grid-breakpointsvariable declaration in the scss/includes/_variables.scssfile. To use em values for media queries, the SCSS code should as follows: $grid-breakpoints: ( // Extra small screen / phone xs: 0, // Small screen / phone sm: 34em, // 544px // Medium screen / tablet md: 48em, // 768px // Large screen / desktop lg: 62em, // 992px // Extra large screen / wide desktop xl: 75em //1200px ); Note that you also have to change the $container-max-widths variable declaration. You should change or modify Bootstrap's variables in the local scss/includes/_variables.scss file, as explained at http://bassjobsen.weblogs.fm/preserve_settings_and_customizations_when_updating_bootstrap/. This will ensure that your changes are not overwritten when you update Bootstrap. The new Reboot module and Normalize.css When talking about cascade in CSS, there will, no doubt, be a mention of the browser default settings getting a higher precedence than the author's preferred styling. In other words, anything that is not defined by the author will be assigned a default styling set by the browser. The default styling may differ for each browser, and this behavior plays a major role in many cross-browser issues. To prevent these sorts of problems, you can perform a CSS reset. CSS or HTML resets set a default author style for commonly used HTML elements to make sure that browser default styles do not mess up your pages or render your HTML elements to be different on other browsers. Bootstrap uses Normalize.css written by Nicholas Galagher. Normalize.css is a modern, HTML5-ready alternative to CSS resets and can be downloaded from http://necolas.github.io/normalize.css/. It lets browsers render all elements more consistently and makes them adhere to modern standards. Together with some other styles, Normalize.css forms the new Reboot module of Bootstrap. Box-sizing The Reboot module also sets the global box-sizing value from content-box to border-box. The box-sizing property is the one that sets the CSS-box model used for calculating the dimensions of an element. In fact, box-sizing is not new in CSS, but nonetheless, switching your code to box-sizing: border-box will make your work a lot easier. When using the border-box settings, calculation of the width of an element includes border width and padding. So, changing the border width or padding of an element won't break your layouts. Predefined CSS classes Bootstrap ships with predefined CSS classes for everything. You can build a mobile-first responsive grid for your project by only using div elements and the right grid classes. CSS classes for styling other elements and components are also available. Consider the styling of a button in the following HTML code: <button class="btn btn-warning">Warning!</button> Now, your button will be as shown in the following screenshot: You will notice that Bootstrap uses two classes to style a single button. The first is the .btn class that gives the button the general button layout styles. The second class is the .btn-warning class that sets the custom colors of the buttons. Creating a local Sass structure Before we can start with compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory. 
In the scss directory, create your main project file called app.scss. Then, create a new subdirectory in the new scss directory named includes. Now, you will have to copy bootstrap.scss and _variables.scss from the Bootstrap source code in the bower_components directory to the new scss/includes directory as follows: cp bower_components/bootstrap/scss/bootstrap.scss scss/includes/_bootstrap.scss cp bower_components/bootstrap/scss/_variables.scss scss/includes/ You will notice that the bootstrap.scss file has been renamed to _bootstrap.scss, starting with an underscore, and has become a partial file now. Import the files you have copied in the previous step into the app.scss file, as follows: @import "includes/variables"; @import "includes/bootstrap"; Then, open the scss/includes/_bootstrap.scss file and change the import part for the Bootstrap partial files so that the original code in the bower_components directory will be imported here. Note that we will set the include path for the Sass compiler to the bower_components directory later on. The @import statements should look as shown in the following SCSS code: // Core variables and mixins @import "bootstrap/scss/variables"; @import "bootstrap/scss/mixins"; // Reset and dependencies @import "bootstrap/scss/normalize"; You're importing all of Bootstrap's SCSS code in your project now. When preparing your code for production, you can consider commenting out the partials that you do not require for your project. Modification of scss/includes/_variables.scss is not required, but you can consider removing the !default declarations because the real default values are set in the original _variables.scss file, which is imported after the local one. Note that the local scss/includes/_variables.scss file does not have to contain a copy of all of the Bootstrap's variables. Having them all just makes it easier to modify them for customization; it also ensures that your default values do not change when you are updating Bootstrap. Setting up your project and requirements For this project, you'll use the Bootstrap CLI again, as it helps you create a setup for your project comfortably. Bootstrap CLI requires you to have Node.js and Gulp already installed on your system. Now, create a new project by running the following command in your console: bootstrap new Enter the name of your project and choose the An empty new Bootstrap project. Powered by Panini, Sass and Gulp. template. Now your project is ready to start with your design work. However, before you start, let's first go through the introduction to Sass and the strategies for customization. The power of Sass in your project Sass is a preprocessor for CSS code and is an extension of CSS3, which adds nested rules, variables, mixins, functions, selector inheritance, and more. Creating a local Sass structure Before we can start with compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory. In the scss directory, create your main project file and name it app.scss. 
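As a rough sketch, this second app.scss can start out exactly like the first one did, importing the local variables partial followed by Bootstrap itself; the commented body rule is only a placeholder for your own project styles:

// scss/app.scss (sketch)
@import "includes/variables";
@import "includes/bootstrap";

// project-specific styles go after the Bootstrap imports, for example:
// body { background-color: #f5f5f5; }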
Using the CLI and running the code from GitHub

Install the Bootstrap CLI using the following commands in your console:

[sudo] npm install -g gulp bower
npm install bootstrap-cli --global

Then, use the following command to set up a Bootstrap 4 Weblog project:

bootstrap new --repo https://github.com/bassjobsen/bootstrap-weblog.git

The following figure shows the end result of your efforts:

Turning our design into a WordPress theme

WordPress is a very popular CMS (Content Management System); it now powers 25 percent of all sites across the web. WordPress is free, open source, and based on PHP. To learn more about WordPress, you can also visit Packt Publishing's WordPress Tech Page at https://www.packtpub.com/tech/wordpress.

Now let's turn our design into a WordPress theme. There are many Bootstrap-based themes that we could choose from. We've taken care to integrate Bootstrap's powerful Sass styles and JavaScript plugins with the best practices found for HTML5. It will be to our advantage to use a theme that does the same. We'll use the JBST4 theme for this exercise. JBST4 is a blank WordPress theme built with Bootstrap 4.

Installing the JBST 4 theme

Let's get started by downloading the JBST theme. Navigate to your wordpress/wp-content/themes/ directory and run the following command in your console:

git clone https://github.com/bassjobsen/jbst-4-sass.git jbst-weblog-theme

Then navigate to the new jbst-weblog-theme directory and run the following commands to confirm that everything is working:

npm install
gulp

Download from GitHub

You can download the newest and updated version of this theme from GitHub too. You will find it at https://github.com/bassjobsen/jbst-weblog-theme.

JavaScript events of the Carousel plugin

Bootstrap provides custom events for most of the plugins' unique actions. The Carousel plugin fires the slide.bs.carousel (at the beginning of the slide transition) and slid.bs.carousel (at the end of the slide transition) events. You can use these events to add custom JavaScript code. You can, for instance, change the background color of the body on the events by adding the following JavaScript into the js/main.js file:

$('.carousel').on('slide.bs.carousel', function () {
  $('body').css('background-color', '#' + (Math.random() * 0xFFFFFF << 0).toString(16));
});

You will notice that the gulp watch task is not set for the js/main.js file, so you have to run the gulp or bootstrap watch command manually after you are done with the changes.

For more advanced changes of the plugin's behavior, you can overwrite its methods by using, for instance, the following JavaScript code:

!function($) {
  var number = 0;
  var tmp = $.fn.carousel.Constructor.prototype.cycle;
  $.fn.carousel.Constructor.prototype.cycle = function (relatedTarget) {
    // custom JavaScript code here
    number = (number % 4) + 1;
    $('body').css('transform', 'rotate(' + number * 90 + 'deg)');
    tmp.call(this); // call the original function
  };
}(jQuery);

The preceding JavaScript sets the transform CSS property without vendor prefixes. The autoprefixer only prefixes your static CSS code; for full browser compatibility, you should add the vendor prefixes in the JavaScript code yourself. Bootstrap exclusively uses CSS3 for its animations, but Internet Explorer 9 doesn't support the necessary CSS properties.

Adding drop-down menus to our navbar

Bootstrap's JavaScript Dropdown plugin enables you to create drop-down menus with ease, and you can add these drop-down menus to your navbar too.
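Just like the Carousel, the Dropdown plugin fires its own custom events: show.bs.dropdown and shown.bs.dropdown when a menu opens, and hide.bs.dropdown and hidden.bs.dropdown when it closes. The following is a minimal sketch you could add to js/main.js if you want to experiment; the delegated selector and the console logging are illustrative choices, not part of the project's code:

// Log each drop-down menu as soon as it starts to open.
$(document).on('show.bs.dropdown', '.nav-item.dropdown', function () {
  console.log('Opening menu: ' + $(this).find('.nav-link').text().trim());
});

Because the handler is delegated from document, it also works for the navbar items we will generate with Panini below.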
Open the html/includes/header.html file in your text editor. You will notice that the Gulp build process uses the Panini HTML compiler to compile our HTML templates into HTML pages. Panini is powered by the Handlebars template language. You can use helpers, iterations, and custom data in your templates. In this example, you'll use the power of Panini to build the navbar items with drop-down menus.

First, create an html/data/productgroups.yml file that contains the titles of the navbar items:

- Shoes
- Clothing
- Accessories
- Women
- Men
- Kids
- All Departments

The preceding code is written in the YAML format. YAML is a human-readable data serialization language that takes concepts from programming languages and ideas from XML; you can read more about it at http://yaml.org/.

Using the data described in the preceding code, you can use the following HTML and template code to build the navbar items:

<ul class="nav navbar-nav navbar-toggleable-sm collapse" id="collapsiblecontent">
  {{#each productgroups}}
  <li class="nav-item dropdown {{#ifCond this 'Shoes'}}active{{/ifCond}}">
    <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false">
      {{ this }}
    </a>
    <div class="dropdown-menu">
      <a class="dropdown-item" href="#">Action</a>
      <a class="dropdown-item" href="#">Another action</a>
      <a class="dropdown-item" href="#">Something else here</a>
      <div class="dropdown-divider"></div>
      <a class="dropdown-item" href="#">Separated link</a>
    </div>
  </li>
  {{/each}}
</ul>

The preceding code uses an each loop to build the seven navbar items; each item gets the same drop-down menu. The Shoes menu item gets the active class. Handlebars, and therefore Panini, does not support conditional comparisons by default; the if statement can only handle a single value, but you can add a custom helper to enable conditional comparisons. The custom helper, which enables us to use the ifCond statement, can be found in the html/helpers/ifCond.js file. Read my blog post, How to set up Panini for different environments, at http://bassjobsen.weblogs.fm/set-panini-different-environments/, to learn more about Panini and custom helpers.

The HTML code for the drop-down menu is in accordance with the code for drop-down menus as described for the Dropdown plugin at http://getbootstrap.com/components/dropdowns/. The navbar collapses for smaller screen sizes. By default, the drop-down menus look the same on all grids:

Now, you will use your Bootstrap skills to build an Angular 2 app. Angular 2 is the successor of AngularJS. You can read more about Angular 2 at https://angular.io/. It is a toolset for building the framework that is most suited to your application development; it lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop. Angular is maintained by Google and a community of individuals and corporations.

I have also published the source for an Angular 2 with Bootstrap 4 starting point on GitHub. You will find it at the following URL: https://github.com/bassjobsen/angular2-bootstrap4-website-builder. You can install it by simply running the following command in your console:

git clone https://github.com/bassjobsen/angular2-bootstrap4-website-builder.git yourproject

Next, navigate to the new folder, then run the following commands and verify that it works:

npm install
npm start

Other tools to deploy Bootstrap 4

A Brunch skeleton using Bootstrap 4 is available at https://github.com/bassjobsen/brunch-bootstrap4.
Brunch is a frontend web app build tool that builds, lints, compiles, concatenates, and shrinks your HTML5 apps. Read more about Brunch at the official website, which can be found at http://brunch.io/. You can try Brunch by running the following commands in your console:

npm install -g brunch
brunch new -s https://github.com/bassjobsen/brunch-bootstrap4

Notice that the first command requires administrator rights to run. After installing the tool, you can run the following command to build your project:

brunch build

The preceding command will create a new public/index.html file, after which you can open it in your browser. You'll find that it looks like this:

Yeoman

Yeoman is another build tool. It's a command-line utility that allows the creation of projects utilizing scaffolding templates, called generators. A Yeoman generator that scaffolds out a frontend Bootstrap 4 web app can be found at the following URL: https://github.com/bassjobsen/generator-bootstrap4

You can run the Yeoman Bootstrap 4 generator by running the following commands in your console:

npm install -g yo
npm install -g generator-bootstrap4
yo bootstrap4
grunt serve

Again, note that the first two commands require administrator rights. The grunt serve command runs a local web server at http://localhost:9000. Point your browser to that address and check whether it looks as follows:

Summary

Beyond this, there is a plethora of resources available for pushing further with Bootstrap. The Bootstrap community is an active and exciting one. This is truly an exciting point in the history of frontend web development. Bootstrap has made a mark in history, and for a good reason. Check out my GitHub pages at http://github.com/bassjobsen for new projects and updated sources, or ask me a question on Stack Overflow (http://stackoverflow.com/users/1596547/bass-jobsen).

Resources for Article:

Further resources on this subject:

Gearing Up for Bootstrap 4 [article]
Creating a Responsive Magento Theme with Bootstrap 3 [article]
Responsive Visualizations Using D3.js and Bootstrap [article]