Building Components Using Angular

Packt
06 Apr 2017
11 min read
In this article by Shravan Kumar Kasagoni, the author of the book Angular UI Development, we will learn how to use the new features of the Angular framework to build web components. After going through this article you will understand the following:

- What web components are
- How to set up the project for Angular application development
- Data binding in Angular

(For more resources related to this topic, see here.)

Web components

In today's web world, if we need to use any of the UI components provided by libraries such as jQuery UI or the YUI library, we have to write a lot of imperative JavaScript code; we can't simply use them in a declarative fashion like HTML markup. There are fundamental problems with this approach:

- There is no way to define custom HTML elements and use them in a declarative fashion.
- The JavaScript and CSS code inside UI components can accidentally modify other parts of our web pages, and our code can also accidentally modify the UI components, which is unintended.
- There is no standard way to encapsulate code inside these UI components.

Web Components provide a solution to all these problems. Web Components are a set of specifications for building reusable UI components. The Web Components specification is comprised of four parts:

- Templates: Allow us to declare fragments of HTML that can be cloned and inserted in the document by script
- Shadow DOM: Solves the DOM tree encapsulation problem
- Custom elements: Allow us to define custom HTML tags for UI components
- HTML imports: Allow us to add UI components to a web page using an import statement

More information on web components can be found at https://www.w3.org/TR/components-intro/.

Components are the fundamental building blocks of any Angular application. Components in Angular are built on top of the web components specification. The web components specification is still under development and might change in the future, and not all browsers support it. But Angular provides a very high level of abstraction, so we don't need to deal with the multiple technologies behind web components. Even if the specification changes, Angular can take care of it internally; it provides a much simpler API to write web components.

Getting started with Angular

We know Angular is completely rewritten from scratch, so everything is new in Angular. In this article we will discuss a few important features such as data binding, the new templating syntax, and built-in directives. We are going to take a practical approach to learn these new features. In the next section we are going to look at a partially implemented Angular application, and we will incrementally use new Angular features to complete it. Follow the instructions specified in the next section to set up the sample application.

Project Setup

Here is a sample application with the required Angular configuration and some sample code.

Application Structure

Create the directory structure and files as mentioned below, and copy the code into the files from the next section.

Source Code

package.json

We are going to use npm as our package manager to download the libraries and packages required for our application development. Copy the following code to the package.json file.
{ "name": "display-data", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "tsc": "tsc", "tsc:w": "tsc -w", "lite": "lite-server", "start": "concurrent "npm run tsc:w" "npm run lite" " }, "author": "Shravan", "license": "ISC", "dependencies": { "angular2": "^2.0.0-beta.1", "es6-promise": "^3.0.2", "es6-shim": "^0.33.13", "reflect-metadata": "^0.1.2", "rxjs": "^5.0.0-beta.0", "systemjs": "^0.19.14", "zone.js": "^0.5.10" }, "devDependencies": { "concurrently": "^1.0.0", "lite-server": "^1.3.2", "typescript": "^1.7.5" } } The package.json file holds metadata for npm, in the preceding code snippet there are two important sections: dependencies: It holds all the packages required for an application to run devDependencies: It holds all the packages required only for development Once we add the preceding package.json file to our project we should run the following command at the root of our application. $ npm install The preceding command will create node_modules directory in the root of project and downloads all the packages mentioned in dependencies, devDependencies sections into node_modules directory. There is one more important section, that is scripts. We will discuss about scripts section, when we are ready to run our application. tsconfig.json Copy the below code to tsconfig.json file. { "compilerOptions": { "target": "es5", "module": "system", "moduleResolution": "node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "removeComments": false, "noImplicitAny": false }, "exclude": [ "node_modules" ] } We are going to use TypeScript for developing our Angular applications. The tsconfig.json file is the configuration file for TypeScript compiler. Options specified in this file are used while transpiling our code into JavaScript. This is totally optional, if we don't use it TypeScript compiler use are all default flags during compilation. But this is the best way to pass the flags to TypeScript compiler. Following is the expiation for each flag specified in tsconfig.json: target: Specifies ECMAScript target version: 'ES3' (default), 'ES5', or 'ES6' module: Specifies module code generation: 'commonjs', 'amd', 'system', 'umd' or 'es6' moduleResolution: Specifies module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6) sourceMap: If true generates corresponding '.map' file for .js file emitDecoratorMetadata: If true enables the output JavaScript to create the metadata for the decorators experimentalDecorators: If true enables experimental support for ES7 decorators removeComments: If true, removes comments from output JavaScript files noImplicitAny: If true raise error if we use 'any' type on expressions and declarations exclude: If specified, the compiler will not compile the TypeScript files in the containing directory and subdirectories index.html Copy the following code to index.html file. 
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Top 10 Fastest Cars in the World</title>
  <link rel="stylesheet" href="app/site.css">
  <script src="node_modules/angular2/bundles/angular2-polyfills.js"></script>
  <script src="node_modules/systemjs/dist/system.src.js"></script>
  <script src="node_modules/rxjs/bundles/Rx.js"></script>
  <script src="node_modules/angular2/bundles/angular2.dev.js"></script>
  <script>
    System.config({
      transpiler: 'typescript',
      typescriptOptions: {emitDecoratorMetadata: true},
      map: {typescript: 'node_modules/typescript/lib/typescript.js'},
      packages: {
        'app': { defaultExtension: 'ts' }
      }
    });
    System.import('app/boot').then(null, console.error.bind(console));
  </script>
</head>
<body>
  <cars-list>Loading...</cars-list>
</body>
</html>

This is the startup page of our application; it contains the required Angular scripts and the SystemJS configuration for module loading. The body tag contains the <cars-list> tag, which renders the root component of our application. One statement is worth pointing out: System.import('app/boot') imports the boot module from the app package. Physically, it loads the boot.js file under the app folder.

car.ts

Copy the following code to the car.ts file:

export interface Car {
  make: string;
  model: string;
  speed: number;
}

We are defining a car model using a TypeScript interface; we are going to use this car model object in our components.

app.component.ts

Copy the following code to the app.component.ts file:

import {Component} from 'angular2/core';

@Component({
  selector: 'cars-list',
  template: ''
})
export class AppComponent {
  public heading = "Top 10 Fastest Cars in the World";
}

Important points about the AppComponent class:

- The AppComponent class is our application's root component; it has one public property named 'heading'.
- The AppComponent class is decorated with the @Component() function, with selector and template properties in its configuration object.
- The @Component() function is imported using ES2015 module import syntax from the 'angular2/core' module in the Angular library.
- We are also exporting the AppComponent class as a module using the export keyword.
- Other modules in the application can import the AppComponent class by its module name (app.component, the file name without the extension) using ES2015 module import syntax.

boot.ts

Copy the following code to the boot.ts file:

import {bootstrap} from 'angular2/platform/browser';
import {AppComponent} from './app.component';

bootstrap(AppComponent);

In this file we are importing the bootstrap() function from the 'angular2/platform/browser' module and the AppComponent class from the 'app.component' module. Next we invoke the bootstrap() function with the AppComponent class as a parameter; this instantiates an Angular application with AppComponent as the root component.

site.css

Copy the following code to the site.css file:

* {
  font-family: 'Segoe UI Light', 'Helvetica Neue', 'Segoe UI', 'Segoe';
  color: rgb(51, 51, 51);
}

This file contains some basic styles for our application.

Working with data in Angular

In any typical web application, we need to display data on an HTML page and read data from input controls on an HTML page. In Angular everything is a component; the HTML page is represented as a template, and it is always associated with a component class. Application data lives in the component class's properties. To push values to the template or pull values from it, we need to bind the properties of the component class to the controls on the template. This mechanism is known as data binding.
Data binding in Angular allows us to use a simple syntax to push or pull data. When we bind the properties of the component class to the controls on the template, a change to the data in the properties makes Angular automatically update the template to display the latest data, and vice versa. We can also control the direction of data flow (from component to template, or from template to component).

Displaying Data using Interpolation

If we go back to the AppComponent class in our sample application, we have a heading property. We need to display this heading property on the template. Here is the revised AppComponent class:

app/app.component.ts

import {Component} from 'angular2/core';

@Component({
  selector: 'cars-list',
  template: '<h1>{{heading}}</h1>'
})
export class AppComponent {
  public heading = "Top 10 Fastest Cars in the World";
}

In the @Component() function we updated the template property with the expression {{heading}} surrounded by an h1 tag. The double curly braces are the interpolation syntax in Angular: for any property of the class that we need to display on the template, we use the property name surrounded by double curly braces, and Angular automatically renders the value of the property on the browser screen.

Let's run our application. Go to the command line, navigate to the root of the application structure, and run the following command:

$ npm start

The preceding start command is part of the scripts section in the package.json file. It invokes two other commands, npm run tsc:w and npm run lite.

npm run tsc:w performs the following actions:

- It invokes the TypeScript compiler in watch mode
- The TypeScript compiler compiles all our TypeScript files to JavaScript using the configuration mentioned in tsconfig.json
- The TypeScript compiler does not exit after the compilation is over; it waits for changes in the TypeScript files
- Whenever we modify any TypeScript file, the compiler compiles it to JavaScript on the fly

npm run lite starts a lightweight Node.js web server and launches our application in the browser.

Now we can continue to make changes in our application. Changes are detected and the browser refreshes automatically with the updates.

Let's further extend this simple application by binding the heading property to a textbox. Here is the revised template:

template: `
  <h1>{{heading}}</h1>
  <input type="text" value="{{heading}}"/>
`

Notice that the template is now a multi-line string and it is surrounded by ` (backquote/backtick) symbols instead of single or double quotes. Backticks are the new multi-line string (template literal) syntax in ECMAScript 2015. We don't need to start our application again; as mentioned earlier, the browser refreshes automatically with the updated output until we stop the 'npm start' command at the command line.

Now the textbox also displays the value of the heading property. Let's change the value in the textbox by typing something, then hit the Tab key. We don't see any changes happening in the browser. But, as mentioned earlier, with data binding, whenever we change the value of any control on the template that is bound to a property of the component class, it should update the property value, and any other controls bound to the same property should then display the updated value. The h1 tag in the browser should display whatever we type into the textbox, but it doesn't. This is because interpolation on its own is one-way binding: it only pushes data from the component class to the template and does not listen for changes made in the template.
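The excerpt stops here, but as a quick illustration of pushing data back from the template to the component, Angular's event binding can be used. The following is a minimal sketch rather than code from the book; it only relies on the standard (input) event binding and the $event object of Angular's template syntax:

```ts
import {Component} from 'angular2/core';

@Component({
  selector: 'cars-list',
  // [value] pushes data from the component to the template;
  // (input) pushes the typed value back into the component property.
  template: `
    <h1>{{heading}}</h1>
    <input type="text"
           [value]="heading"
           (input)="heading = $event.target.value"/>
  `
})
export class AppComponent {
  public heading = "Top 10 Fastest Cars in the World";
}
```

With this change, typing in the textbox updates the heading property, and the interpolated h1 text updates immediately, demonstrating data flowing in both directions.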
Summary

We started this article with an introduction to web components. Next we discussed the sample application that serves as the foundation for this article. Then we discussed how to write components using new Angular features such as data binding and the new templating syntax, using plenty of examples. By the end of this article, you should have a good understanding of the new Angular concepts and should be able to write basic components.

Resources for Article:

Further resources on this subject:

- Get Familiar with Angular [article]
- Gearing Up for Bootstrap 4 [article]
- Angular's component architecture [article]


The Vertex Functions

Packt
01 Feb 2016
18 min read
In this article by Alan Zucconi, author of the book Unity 5.x Shaders and Effects Cookbook, we will see that the term shader originates from the fact that Cg has mainly been used to simulate realistic lighting conditions (shadows) on three-dimensional models. Despite this, shaders are now much more than that. They not only define the way objects are going to look, but can also redefine their shapes entirely. If you want to learn how to manipulate the geometry of a three-dimensional object purely via shaders, this article is for you. In this article, you will learn the following:

- Extruding your models
- Implementing a snow shader
- Implementing a volumetric explosion

(For more resources related to this topic, see here.)

In this article, we will explain that 3D models are not just a collection of triangles. Each vertex can contain data that is essential for correctly rendering the model itself. This article will explore how to access this information in order to use it in a shader. We will also explore how the geometry of an object can be deformed simply using Cg code.

Extruding your models

One of the biggest problems in games is repetition. Creating new content is a time-consuming task, and when you have to face a thousand enemies, the chances are that they will all look the same. A relatively cheap technique for adding variation to your models is using a shader that alters their basic geometry. This recipe will show a technique called normal extrusion, which can be used to create a chubbier or skinnier version of a model, as shown in the following image with the soldier from the Unity demo (Demo Gameplay):

Getting ready

For this recipe, we need access to the shader used by the model that you want to alter. Once you have it, we duplicate it so that we can edit it safely. It can be done as follows:

- Find the shader that your model is using and, once selected, duplicate it by pressing Ctrl+D.
- Duplicate the original material of the model and assign the cloned shader to it.
- Assign the new material to your model and start editing it.

For this effect to work, your model should have normals.

How to do it…

To create this effect, start by modifying the duplicated shader as follows:

1. Let's start by adding a property to our shader, which will be used to modulate its extrusion. The range presented here goes from -1 to +1; however, you might have to adjust it according to your own needs:

_Amount ("Extrusion Amount", Range(-1,+1)) = 0

2. Couple the property with its respective variable:

float _Amount;

3. Change the pragma directive so that it now uses a vertex modifier. You can do this by adding vertex:function_name at the end of it. In our case, we have called the function vert:

#pragma surface surf Lambert vertex:vert

4. Add the following vertex modifier:

void vert (inout appdata_full v) {
  v.vertex.xyz += v.normal * _Amount;
}

The shader is now ready; you can use the Extrusion Amount slider in the material's Inspector to make your model skinnier or chubbier.

How it works…

Surface shaders work in two steps: the surface function and the vertex modifier. The vertex modifier takes the data structure of a vertex (which is usually called appdata_full) and applies a transformation to it. This gives us the freedom to do virtually anything with the geometry of our model. We signal the graphics processing unit (GPU) that such a function exists by adding vertex:vert to the pragma directive of the surface shader.
One of the simplest yet most effective techniques that can be used to alter the geometry of a model is called normal extrusion. It works by projecting a vertex along its normal direction. This is done by the following line of code:

v.vertex.xyz += v.normal * _Amount;

The position of a vertex is displaced by _Amount units along the vertex normal. If _Amount gets too high, the results can be quite unpleasant; however, you can add a lot of variation to your models with smaller values.

There's more…

If you have multiple enemies and you want each one to have its own weight, you have to create a different material for each one of them. This is necessary, as materials are normally shared between models and changing one will change all of them. There are several ways in which you can do this; the quickest one is to create a script that automatically does it for you. The following script, once attached to an object with a Renderer, will duplicate its first material and set the _Amount property automatically:

using UnityEngine;

public class NormalExtruder : MonoBehaviour {

  [Range(-0.0001f, 0.0001f)]
  public float amount = 0;

  // Use this for initialization
  void Start () {
    Material material = GetComponent<Renderer>().sharedMaterial;
    Material newMaterial = new Material(material);
    newMaterial.SetFloat("_Amount", amount);
    GetComponent<Renderer>().material = newMaterial;
  }
}

Adding extrusion maps

This technique can actually be improved even further. We can add an extra texture (or use the alpha channel of the main one) to indicate the amount of extrusion. This allows better control over which parts are raised or lowered. The following code shows how it is possible to achieve such an effect:

sampler2D _ExtrusionTex;

void vert(inout appdata_full v) {
  float4 tex = tex2Dlod (_ExtrusionTex, float4(v.texcoord.xy,0,0));
  float extrusion = tex.r * 2 - 1;
  v.vertex.xyz += v.normal * _Amount * extrusion;
}

The red channel of _ExtrusionTex is used as a multiplying coefficient for the normal extrusion. A value of 0.5 leaves the model unaffected; darker or lighter shades are used to extrude vertices inward or outward, respectively. Note that to sample a texture in a vertex modifier, tex2Dlod should be used instead of tex2D.

In shaders, colour channels go from 0 to 1, although sometimes you need to represent negative values as well (such as inward extrusion). When this is the case, treat 0.5 as zero, with smaller values as negative and higher values as positive. This is exactly what happens with normals, which are usually encoded in RGB textures. The UnpackNormal() function is used to map a value in the (0,1) range onto the (-1,+1) range; mathematically speaking, this is equivalent to tex.r * 2 - 1.

Extrusion maps are perfect for zombifying characters by shrinking the skin in order to highlight the shape of the bones underneath. The following image shows how a "healthy" soldier can be transformed into a corpse using a shader and an extrusion map. Compared to the previous example, you can notice how the clothing is unaffected. The shader used in the image also darkens the extruded regions in order to give an even more emaciated look to the soldier.
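The individual steps of this recipe can be assembled into one complete surface shader. The following is a minimal sketch rather than the book's exact listing: the shader name, the _MainTex property, and the SubShader/FallBack boilerplate are additions made only to keep the sketch self-contained.

```shaderlab
// Minimal normal-extrusion surface shader assembled from the steps above.
Shader "Custom/NormalExtrusion" {
  Properties {
    _MainTex ("Base (RGB)", 2D) = "white" {}
    _Amount ("Extrusion Amount", Range(-1,+1)) = 0
  }
  SubShader {
    Tags { "RenderType"="Opaque" }
    CGPROGRAM
    // vertex:vert tells the surface shader to run our vertex modifier
    #pragma surface surf Lambert vertex:vert

    sampler2D _MainTex;
    float _Amount;

    struct Input {
      float2 uv_MainTex;
    };

    // Push every vertex along its normal by _Amount
    void vert (inout appdata_full v) {
      v.vertex.xyz += v.normal * _Amount;
    }

    void surf (Input IN, inout SurfaceOutput o) {
      o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    }
    ENDCG
  }
  FallBack "Diffuse"
}
```

Dropping this onto a duplicated material and moving the Extrusion Amount slider should reproduce the chubbier/skinnier effect described above.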
Implementing a snow shader

The simulation of snow has always been a challenge in games. The vast majority of games simply bake snow directly into the models' textures so that their tops look white. However, what if one of these objects starts rotating? Snow is not just a lick of paint on a surface; it is a proper accumulation of material, and it should be treated as such. This recipe will show how to give a snowy look to your models using just a shader. The effect is achieved in two steps. First, a white colour is used for all the triangles facing the sky. Second, their vertices are extruded to simulate the effect of snow accumulation. You can see the result in the following image.

Keep in mind that this recipe does not aim to create a photorealistic snow effect. It provides a good starting point; however, it is up to an artist to create the right textures and find the right parameters to make it fit your game.

Getting ready

This effect is purely based on shaders. We will need to do the following:

- Create a new shader for the snow effect.
- Create a new material for the shader.
- Assign the newly created material to the object that you want to be snowy.

How to do it…

To create a snowy effect, open your shader and make the following changes:

1. Replace the properties of the shader with the following ones:

_MainColor("Main Color", Color) = (1.0,1.0,1.0,1.0)
_MainTex("Base (RGB)", 2D) = "white" {}
_Bump("Bump", 2D) = "bump" {}
_Snow("Level of snow", Range(1, -1)) = 1
_SnowColor("Color of snow", Color) = (1.0,1.0,1.0,1.0)
_SnowDirection("Direction of snow", Vector) = (0,1,0)
_SnowDepth("Depth of snow", Range(0,1)) = 0

2. Complete them with their relative variables, as follows:

sampler2D _MainTex;
sampler2D _Bump;
float _Snow;
float4 _SnowColor;
float4 _MainColor;
float4 _SnowDirection;
float _SnowDepth;

3. Replace the Input structure with the following:

struct Input {
  float2 uv_MainTex;
  float2 uv_Bump;
  float3 worldNormal;
  INTERNAL_DATA
};

4. Replace the surface function with the following one. It will color the snowy parts of the model white:

void surf(Input IN, inout SurfaceOutputStandard o) {
  half4 c = tex2D(_MainTex, IN.uv_MainTex);
  o.Normal = UnpackNormal(tex2D(_Bump, IN.uv_Bump));
  if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
    o.Albedo = _SnowColor.rgb;
  else
    o.Albedo = c.rgb * _MainColor;
  o.Alpha = 1;
}

5. Configure the pragma directive so that it uses a vertex modifier, as follows:

#pragma surface surf Standard vertex:vert

6. Add the following vertex modifier, which extrudes the vertices covered in snow:

void vert(inout appdata_full v) {
  float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);
  if (dot(v.normal, sn.xyz) >= _Snow)
    v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;
}

You can now use the material's Inspector to select how much of your model is going to be covered and how thick the snow should be.

How it works…

This shader works in two steps.

Coloring the surface

The first step alters the color of the triangles that are facing the sky. It affects all the triangles whose normal direction is similar to _SnowDirection. Comparing unit vectors can be done using the dot product: when two vectors are orthogonal, their dot product is zero; it is one (or minus one) when they are parallel to each other. The _Snow property is used to decide how aligned they should be in order to be considered facing the sky.

If you look closely at the surface function, you can see that we are not directly dotting the normal and the snow direction. This is because they are usually defined in different spaces: the snow direction is expressed in world coordinates, while the object normals are usually relative to the model itself. If we rotate the model, its normals do not change, which is not what we want.
To fix this, we need to convert the normals from their object coordinates to world coordinates. This is done with the WorldNormalVector() function, as follows:

if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
  o.Albedo = _SnowColor.rgb;
else
  o.Albedo = c.rgb * _MainColor;

This shader simply colors the model white; a more advanced one should initialize the SurfaceOutputStandard structure with textures and parameters from a realistic snow material.

Altering the geometry

The second effect of this shader alters the geometry to simulate the accumulation of snow. Firstly, we identify the triangles that have been coloured white by testing the same condition used in the surface function. This time, unfortunately, we cannot rely on WorldNormalVector(), as the SurfaceOutputStandard structure is not yet initialized in the vertex modifier. We use this other method instead, which converts _SnowDirection to object coordinates:

float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);

Then, we can extrude the geometry to simulate the accumulation of snow:

if (dot(v.normal, sn.xyz) >= _Snow)
  v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;

Once again, this is a very basic effect. One could use a texture map to control the accumulation of snow more precisely or to give it a peculiar, uneven look.

See also

If you need high-quality snow effects and props for your game, you can also check the following resources in the Unity Asset Store:

- Winter Suite ($30): A much more sophisticated version of the snow shader presented in this recipe can be found at https://www.assetstore.unity3d.com/en/#!/content/13927
- Winter Pack ($60): A very realistic set of props and materials for snowy environments can be found at https://www.assetstore.unity3d.com/en/#!/content/13316

Implementing a volumetric explosion

The art of game development is a clever trade-off between realism and efficiency. This is particularly true for explosions; they are at the heart of many games, yet the physics behind them is often beyond the computational power of modern machines. Explosions are essentially nothing more than hot balls of gas; hence, the only way to correctly simulate them is by integrating a fluid simulation into your game. As you can imagine, this is infeasible for runtime applications, and many games simply simulate them with particles. When an object explodes, it is common to simply instantiate many fire, smoke, and debris particles that together can produce a believable result. This approach, unfortunately, is not very realistic and is easy to spot. There is an intermediate technique that can be used to achieve a much more realistic effect: volumetric explosions. The idea behind this concept is that explosions are not treated like a bunch of particles anymore; they are evolving three-dimensional objects, not just flat two-dimensional textures.

Getting ready

Start this recipe with the following steps:

- Create a new shader for this effect.
- Create a new material to host the shader.
- Attach the material to a sphere. You can create one directly from the editor by navigating to GameObject | 3D Object | Sphere.

This recipe works well with the standard Unity Sphere; however, if you need big explosions, you might need to use a higher-poly sphere. In fact, a vertex function can only modify the vertices of a mesh; all the other points will be interpolated using the positions of the nearby vertices. Fewer vertices mean lower resolution for your explosions.
For this recipe, you will also need a ramp texture that has, in a gradient, all the colors that your explosion will have. You can create such a texture using GIMP or Photoshop. Once you have the picture, import it into Unity. Then, from its Inspector, make sure the Filter Mode is set to Bilinear and the Wrap Mode to Clamp. These two settings make sure that the ramp texture is sampled smoothly. Lastly, you will need a noise texture. You can find many freely available noise textures on the Internet; the most commonly used ones are generated using Perlin noise.

How to do it…

This effect works in two steps: a vertex function to change the geometry and a surface function to give it the right color. The steps are as follows:

1. Add the following properties for the shader:

_RampTex("Color Ramp", 2D) = "white" {}
_RampOffset("Ramp offset", Range(-0.5,0.5))= 0
_NoiseTex("Noise tex", 2D) = "gray" {}
_Period("Period", Range(0,1)) = 0.5
_Amount("_Amount", Range(0, 1.0)) = 0.1
_ClipRange("ClipRange", Range(0,1)) = 1

2. Add their relative variables so that the Cg code of the shader can actually access them, as follows:

sampler2D _RampTex;
float _RampOffset;
sampler2D _NoiseTex;
float _Period;
float _Amount;
float _ClipRange;

3. Change the Input structure so that it receives the UV data of the noise texture, as shown in the following:

struct Input {
  float2 uv_NoiseTex;
};

4. Add the following vertex function:

void vert(inout appdata_full v) {
  float3 disp = tex2Dlod(_NoiseTex, float4(v.texcoord.xy,0,0));
  float time = sin(_Time[3] *_Period + disp.r*10);
  v.vertex.xyz += v.normal * disp.r * _Amount * time;
}

5. Add the following surface function:

void surf(Input IN, inout SurfaceOutput o) {
  float3 noise = tex2D(_NoiseTex, IN.uv_NoiseTex);
  float n = saturate(noise.r + _RampOffset);
  clip(_ClipRange - n);
  half4 c = tex2D(_RampTex, float2(n,0.5));
  o.Albedo = c.rgb;
  o.Emission = c.rgb*c.a;
}

6. Specify the vertex function in the pragma directive, adding the nolightmap parameter to prevent Unity from adding realistic lighting to our explosion:

#pragma surface surf Lambert vertex:vert nolightmap

7. The last step is to select the material and attach the two textures to their relative slots in its Inspector.

This is an animated material, meaning that it evolves over time. You can watch the material changing in the editor by clicking on Animated Materials in the Scene window.

How it works

If you are reading this recipe, you are already familiar with how surface shaders and vertex modifiers work. The main idea behind this effect is to alter the geometry of the sphere in a seemingly chaotic way, exactly as happens in a real explosion. The following image shows how such an explosion looks in the editor; you can see that the original mesh has been heavily deformed.

The vertex function is a variant of the technique called normal extrusion. The difference here is that the amount of the extrusion is determined by both the time and the noise texture. When you need a random number in Unity, you can rely on the Random.Range() function, but there is no standard way to get random numbers within a shader; the easiest way is to sample a noise texture.
There is no single standard way to do this, so take the following only as an example:

float time = sin(_Time[3] *_Period + disp.r*10);

The built-in _Time[3] variable is used to get the current time inside the shader, and the red channel of the noise texture (disp.r) is used to make sure that each vertex moves independently. The sin() function makes the vertices go up and down, simulating the chaotic behavior of an explosion. Then, the normal extrusion takes place as follows:

v.vertex.xyz += v.normal * disp.r * _Amount * time;

You should play with these numbers and variables until you find a pattern of movement that you are happy with.

The last part of the effect is achieved by the surface function. Here, the noise texture is used to sample a random color from the ramp texture. However, there are two more aspects that are worth noticing. The first one is the introduction of _RampOffset. It forces the explosion to sample colors from the left or right side of the ramp texture. With positive values, the surface of the explosion tends to show more grey tones, which is exactly what happens when it is dissolving. You can use _RampOffset to determine how much fire or smoke there should be in your explosion. The second aspect introduced in the surface function is the use of clip(). The clip() function clips (removes) pixels from the rendering pipeline: when invoked with a negative value, the current pixel is not drawn. This effect is controlled by _ClipRange, which determines which pixels of the volumetric explosion are going to be transparent. By controlling both _RampOffset and _ClipRange, you have full control over how the explosion behaves and dissolves.

There's more…

The shader presented in this recipe makes a sphere look like an explosion. If you really want to use it, you should couple it with some scripts in order to get the most out of it. The best thing to do is to create an explosion object and turn it into a prefab so that you can reuse it every time you need it. You can do this by dragging the sphere back into the Project window. Once that is done, you can create as many explosions as you want using the Instantiate() function. However, it is worth noting that all objects with the same material share the same look. If you have multiple explosions at the same time, they should not use the same material; when you are instantiating a new explosion, you should also duplicate its material. You can do this easily with the following piece of code:

GameObject explosion = Instantiate(explosionPrefab) as GameObject;
Renderer renderer = explosion.GetComponent<Renderer>();
Material material = new Material(renderer.sharedMaterial);
renderer.material = material;

Lastly, if you are going to use this shader in a realistic way, you should attach a script to it that changes its size, _RampOffset, or _ClipRange according to the type of explosion you want to recreate.
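To see how the snippets of this recipe fit together, here is a minimal consolidated sketch of the explosion surface shader. It is not the book's exact listing: the shader name, the SubShader/FallBack boilerplate, and the plain float variable types are assumptions made to keep the sketch self-contained.

```shaderlab
// Illustrative assembly of the volumetric explosion shader described above.
Shader "Custom/VolumetricExplosion" {
  Properties {
    _RampTex("Color Ramp", 2D) = "white" {}
    _RampOffset("Ramp offset", Range(-0.5,0.5)) = 0
    _NoiseTex("Noise tex", 2D) = "gray" {}
    _Period("Period", Range(0,1)) = 0.5
    _Amount("_Amount", Range(0, 1.0)) = 0.1
    _ClipRange("ClipRange", Range(0,1)) = 1
  }
  SubShader {
    Tags { "RenderType"="Opaque" }
    CGPROGRAM
    // nolightmap keeps Unity from applying lightmapping to the explosion
    #pragma surface surf Lambert vertex:vert nolightmap

    sampler2D _RampTex;
    float _RampOffset;
    sampler2D _NoiseTex;
    float _Period;
    float _Amount;
    float _ClipRange;

    struct Input {
      float2 uv_NoiseTex;
    };

    // Displace each vertex along its normal, driven by time and noise
    void vert(inout appdata_full v) {
      float3 disp = tex2Dlod(_NoiseTex, float4(v.texcoord.xy, 0, 0));
      float time = sin(_Time[3] * _Period + disp.r * 10);
      v.vertex.xyz += v.normal * disp.r * _Amount * time;
    }

    // Sample the ramp through the noise value; clip the most dissolved pixels
    void surf(Input IN, inout SurfaceOutput o) {
      float3 noise = tex2D(_NoiseTex, IN.uv_NoiseTex);
      float n = saturate(noise.r + _RampOffset);
      clip(_ClipRange - n);
      half4 c = tex2D(_RampTex, float2(n, 0.5));
      o.Albedo = c.rgb;
      o.Emission = c.rgb * c.a;
    }
    ENDCG
  }
  FallBack "Diffuse"
}
```

Attaching this to a high-poly sphere with the ramp and noise textures assigned should reproduce the animated effect described in this recipe.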
See also

A lot more can be done to make explosions realistic. The approach presented in this recipe only creates an empty shell; the explosion inside it is actually empty. An easy trick to improve it is to create particles inside it. However, you can only go so far with this. The short movie The Butterfly Effect (http://unity3d.com/pages/butterfly), created by Unity Technologies in collaboration with Passion Pictures and Nvidia, is the perfect example. It is based on the same concept of altering the geometry of a sphere; however, it renders it with a technique called volume ray casting. In a nutshell, volume ray casting renders the geometry as if it were complete rather than an empty shell. You can see the following image as an example.

If you are looking for high-quality explosions, refer to Pyro Technix (https://www.assetstore.unity3d.com/en/#!/content/16925) on the Asset Store. It includes volumetric explosions and couples them with realistic shockwaves.

Summary

In this article, we saw recipes to extrude models, implement a snow shader, and implement a volumetric explosion.

Resources for Article:

Further resources on this subject:

- Lights and Effects [article]
- Looking Back, Looking Forward [article]
- Animation features in Unity 5 [article]


How to develop a Simple To-Do List App [Tutorial]

Sugandha Lahoti
26 Sep 2018
11 min read
In this article, we will build a simple to-do list app that allows a user to add and display tasks. In the process, we will learn the following:

- How to build a user interface in Android Studio
- Working with ListViews
- How to work with Dialogs

This article is taken from the book Learning Kotlin by building Android Applications by Eunice Adutwumwaa Obugyei and Natarajan Raman. The book teaches programming in Kotlin, including data types, flow control, lambdas, and object-oriented and functional programming, while building Android apps.

Creating the project

Let's start by creating a new project in Android Studio, with the name TodoList. Select Add No Activity on the Add an Activity to Mobile screen. When the project creation is complete, create a Kotlin Activity by selecting File | New | Kotlin Activity. This will start a New Android Activity wizard. On the Add an Activity to Mobile screen, select Basic Activity. Now, check Launcher Activity on the Customize the Activity screen, and click the Finish button.

Building your UI

In Android, the code for your user interface is written in XML. You can build your UI by doing either of the following:

- Using the Android Studio Layout Editor
- Writing the XML code by hand

Let's go ahead and start designing our TodoList app.

Using the Android Studio layout editor

Android Studio provides a layout editor, which gives you the ability to build your layouts by dragging widgets into the visual editor; this auto-generates the XML code for your UI. Open the content_main.xml file and make sure the Design tab at the bottom of the screen is selected. To add a component to your layout, you just drag the item from the Palette on the left side of the screen. To find a component, either scroll through the items on the Palette, or click on the Palette search icon and search for the item you need. If the Palette is not showing on your screen, select View | Tool Windows | Palette to display it.

Go ahead and add a ListView to your view. When a view is selected, its attributes are displayed in the XML Attributes editor on the right side of the screen. The Attributes editor allows you to view and edit the attributes of the selected component. Go ahead and make the following changes:

- Set the ID as list_view
- Change both the layout_width and layout_height attributes to match_parent

If the Attributes editor is not showing, select View | Tool Windows | Attributes to display it. Now, select Text at the bottom of the editor window to view the generated XML code. You'll notice that the XML code now has a ListView placed within the ConstraintLayout, along the lines of the sketch shown next.
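The original listing for this step is not reproduced in this excerpt. The following is a rough sketch of what the generated content_main.xml might look like after these changes; the exact namespace declarations and constraint attributes Android Studio generates may differ.

```xml
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- The ListView added from the Palette, with the attributes set above -->
    <ListView
        android:id="@+id/list_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</android.support.constraint.ConstraintLayout>
```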
A layout always has a root element; in the preceding code, ConstraintLayout is the root element. Instead of using the layout editor, you could have written the code yourself. The choice between using the layout editor or writing the XML code by hand is up to you; use the option that you're most comfortable with. We'll continue to make additions to the UI as we go along.

Now, build and run your code. As you can see, the app currently doesn't have much to it. Let's go ahead and add a little more flesh to it. Since we'll use the FloatingActionButton as the button the user taps to add a new item to their to-do list, we need to change its icon to one that makes its purpose quite clear.

Open the activity_main.xml file. One of the attributes of the android.support.design.widget.FloatingActionButton is app:srcCompat, which is used to specify the icon for the FloatingActionButton. Change its value from @android:drawable/ic_dialog_email to @android:drawable/ic_input_add. Build and run again; the FloatingActionButton at the bottom now looks like an Add icon.

Adding functionality to the user interface

At the moment, when the user clicks on the Add button, a ticker shows at the bottom of the screen. This is because of the piece of code in the onCreate() method that defines and sets an OnClickListener on the FloatingActionButton:

fab.setOnClickListener { view ->
    Snackbar.make(view, "Replace with your own action", Snackbar.LENGTH_LONG)
            .setAction("Action", null).show()
}

This is not ideal for our to-do list app. Let's go ahead and create a new method in the MainActivity class that will handle the click event:

fun showNewTaskUI() {
}

The method currently does nothing; we'll add code to show the appropriate UI soon. Now, replace the code within the setOnClickListener() call with a call to the new method:

fab.setOnClickListener { showNewTaskUI() }

Adding a new task

For adding a new task, we'll show the user an AlertDialog with an editable field. Let's start by building the UI for the dialog. Right-click the res/layout directory and select New | Layout resource file. On the New Resource File window, change the Root element to LinearLayout and set the File name as dialog_new_task. Click OK to create the layout. Open the dialog_new_task layout and add an EditText view to the LinearLayout; the XML code in the layout should now look something like the sketch shown next.
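The article's own listing for dialog_new_task.xml is not included in this excerpt. The following sketch shows one plausible version: the android:id value of task is inferred from the findViewById(R.id.task) call later in the article, and the inputType value shown is only illustrative.

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical">

    <!-- The editable field for the new task; the id must match R.id.task -->
    <EditText
        android:id="@+id/task"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:inputType="text" />

</LinearLayout>
```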
The inputType attribute is used to specify what kind of data the field can take. By specifying this attribute, the user is shown an appropriate keyboard; for example, if inputType is set to number, the numeric keyboard is displayed.

Now, let's go ahead and add a few string resources we'll need for the next section. Open the res/values/strings.xml file and add the following lines of code to the resources tag:

<string name="add_new_task_dialog_title">Add New Task</string>
<string name="save">Save</string>

- The add_new_task_dialog_title string will be used as the title of our dialog
- The save string will be used as the text of a button on the dialog

The best way to use an AlertDialog is by encapsulating it in a DialogFragment. The DialogFragment takes away the burden of handling the dialog's life cycle events, and it also makes it easy for you to reuse the dialog in other activities. Create a new Kotlin class with the name NewTaskDialogFragment, and replace the class definition with the following lines of code:

class NewTaskDialogFragment : DialogFragment() {  // 1

    // 2
    interface NewTaskDialogListener {
        fun onDialogPositiveClick(dialog: DialogFragment, task: String)
        fun onDialogNegativeClick(dialog: DialogFragment)
    }

    var newTaskDialogListener: NewTaskDialogListener? = null  // 3

    // 4
    companion object {
        fun newInstance(title: Int): NewTaskDialogFragment {
            val newTaskDialogFragment = NewTaskDialogFragment()
            val args = Bundle()
            args.putInt("dialog_title", title)
            newTaskDialogFragment.arguments = args
            return newTaskDialogFragment
        }
    }

    override fun onCreateDialog(savedInstanceState: Bundle?): Dialog {  // 5
        val title = arguments.getInt("dialog_title")
        val builder = AlertDialog.Builder(activity)
        builder.setTitle(title)

        val dialogView = activity.layoutInflater.inflate(R.layout.dialog_new_task, null)
        val task = dialogView.findViewById<EditText>(R.id.task)

        builder.setView(dialogView)
                .setPositiveButton(R.string.save, { dialog, id ->
                    newTaskDialogListener?.onDialogPositiveClick(this, task.text.toString())
                })
                .setNegativeButton(android.R.string.cancel, { dialog, id ->
                    newTaskDialogListener?.onDialogNegativeClick(this)
                })
        return builder.create()
    }

    override fun onAttach(activity: Activity) {  // 6
        super.onAttach(activity)
        try {
            newTaskDialogListener = activity as NewTaskDialogListener
        } catch (e: ClassCastException) {
            throw ClassCastException(activity.toString() + " must implement NewTaskDialogListener")
        }
    }
}

Let's take a closer look at what this class does:

1. The class extends the DialogFragment class.
2. It declares an interface with the name NewTaskDialogListener, which declares two methods:
   - onDialogPositiveClick(dialog: DialogFragment, task: String)
   - onDialogNegativeClick(dialog: DialogFragment)
3. It declares a variable of type NewTaskDialogListener.
4. It defines a method, newInstance(), in a companion object. By doing this, the method can be accessed without having to create an instance of the NewTaskDialogFragment class. The newInstance() method does the following:
   - It takes an Int parameter named title
   - It creates an instance of NewTaskDialogFragment and passes the title as part of its arguments
   - It returns the new instance of NewTaskDialogFragment
5. It overrides the onCreateDialog() method. This method does the following:
   - It attempts to retrieve the title argument passed in
   - It instantiates an AlertDialog builder and assigns the retrieved title as the dialog's title
   - It uses the LayoutInflater of the DialogFragment instance's parent activity to inflate the layout we created
   - It then sets the inflated view as the dialog's view
   - It sets two buttons on the dialog: Save and Cancel
   - When the Save button is clicked, the text in the EditText is retrieved and passed to the newTaskDialogListener variable via the onDialogPositiveClick() method
6. In the onAttach() method, we attempt to assign the Activity object passed in to the newTaskDialogListener variable created earlier. For this to work, the Activity object should implement the NewTaskDialogListener interface.

Now, open the MainActivity class. Change the class declaration to include implementation of the NewTaskDialogListener.
Your class declaration should now look like this:

class MainActivity : AppCompatActivity(), NewTaskDialogFragment.NewTaskDialogListener {

Add implementations of the methods declared in the NewTaskDialogListener by adding the following methods to the MainActivity class:

override fun onDialogPositiveClick(dialog: DialogFragment, task: String) {
}

override fun onDialogNegativeClick(dialog: DialogFragment) {
}

In the showNewTaskUI() method, add the following lines of code:

val newFragment = NewTaskDialogFragment.newInstance(R.string.add_new_task_dialog_title)
newFragment.show(fragmentManager, "newtask")

In the preceding lines of code, the newInstance() method of NewTaskDialogFragment is called to generate an instance of the NewTaskDialogFragment class. The show() method of the DialogFragment is then called to display the dialog. Build and run. Now, when you click the Add button, you should see a dialog on your screen.

As you may have noticed, nothing happens when you click the SAVE button. In the onDialogPositiveClick() method, add the line of code shown here:

Snackbar.make(fab, "Task Added Successfully", Snackbar.LENGTH_LONG).setAction("Action", null).show()

As we may remember, this line of code displays a ticker at the bottom of the screen. Now, when you click the SAVE button on the New Task dialog, a ticker shows at the bottom of the screen.

We're currently not storing the task the user enters, so let's create a collection variable to store any task the user adds. In the MainActivity class, add a new variable of type ArrayList<String>, and instantiate it with an empty ArrayList:

private var todoListItems = ArrayList<String>()

In the onDialogPositiveClick() method, place the following lines of code at the beginning of the method definition:

todoListItems.add(task)
listAdapter?.notifyDataSetChanged()

This adds the task passed in to the todoListItems data, and calls notifyDataSetChanged() on the listAdapter to update the ListView. Saving the data is great, but our ListView is still empty. Let's go ahead and rectify that.

Displaying data in the ListView

To make changes to a UI element defined in an XML layout, you need to use the findViewById() method to retrieve the instance of the element in the corresponding Activity of your layout. This is usually done in the onCreate() method of the Activity. Open MainActivity.kt, and declare a new ListView instance variable at the top of the class:

private var listView: ListView? = null

Next, instantiate the ListView variable with its corresponding element in the layout. Do this by adding the following line of code at the end of the onCreate() method:

listView = findViewById(R.id.list_view)

To display data in a ListView, you need to create an Adapter and give it the data to display and information on how to display that data. Depending on how you want the data displayed in your ListView, you can either use one of the existing Android Adapters or create your own. For now, we'll use one of the simplest Android Adapters, ArrayAdapter. The ArrayAdapter takes an array or list of items and a layout ID, and displays your data based on the layout passed to it. In the MainActivity class, add a new variable of type ArrayAdapter:

private var listAdapter: ArrayAdapter<String>? = null
Add the method shown here to the class:

private fun populateListView() {
    listAdapter = ArrayAdapter(this, android.R.layout.simple_list_item_1, todoListItems)
    listView?.adapter = listAdapter
}

In the preceding lines of code, we create a simple ArrayAdapter and assign it to the listView as its Adapter. Now, add a call to the previous method in the onCreate() method:

populateListView()

Build and run. Now, when you click the Add button, you'll see your entry show up in the ListView.

In this article, we built a simple TodoList app that allows a user to add new tasks, and edit or delete an already added task. In the process, we learned to use ListViews and Dialogs. Next, to learn about the different datastore options available and how to use them to make our app more usable, read our book, Learning Kotlin by building Android Applications.

Related reading:

- 6 common challenges faced by Android App developers
- Google plans to let the AMP Project have an open governance model, soon!
- Entry level phones to taste the Go edition of the Android 9.0 Pie version


How AI is transforming the Smart Cities IoT? [Tutorial]

Natasha Mathur
23 Mar 2019
11 min read
According to Techopedia, a smart city is a city that utilizes information and communication technologies to enhance the quality and performance of urban services (such as energy and transportation), so that there is a reduction in resource consumption, wastage, and overall costs. In this article, we will look at the components of a smart city and its AI-powered IoT use cases, how AI helps with the adoption of IoT in smart cities, and an example of an AI-powered IoT solution. Deakin and Al Waer list four factors that contribute to the definition of a smart city:

- Using a wide range of electronic and digital technologies in the city infrastructure
- Employing Information and Communication Technology (ICT) to transform living and working environments
- Embedding ICT in government systems
- Implementing practices and policies that bring people and ICT together to promote innovation and enhance the knowledge that they offer

Hence, a smart city would be a city that not only possesses ICT but also employs technology in a way that positively impacts its inhabitants.

This article is an excerpt taken from the book Hands-On Artificial Intelligence for IoT, written by Amita Kapoor. The book explores building smarter systems by combining artificial intelligence and the Internet of Things, two of the most talked-about topics today.

Artificial Intelligence (AI), together with IoT, has the potential to address the key challenges posed by excessive urban population; they can help with traffic management, healthcare, the energy crisis, and many other issues. IoT data and AI technology can improve the lives of the citizens and businesses that inhabit a smart city. Let's see how.

Smart city and its AI-powered IoT use cases

A smart city has lots of use cases for AI-powered, IoT-enabled technology, from maintaining a healthier environment to enhancing public transport and safety. In the following diagram, you can see some of the use cases for a smart city (Smart city components). Let's have a look at some of the most popular use cases that have already been implemented in smart cities across the world.

Smart traffic management

AI and IoT can implement smart traffic solutions to ensure that the inhabitants of a smart city get from one point in the city to another as safely and efficiently as possible. Los Angeles, one of the most congested cities in the world, has implemented a smart traffic solution to control the flow of traffic. It has installed road-surface sensors and closed-circuit television cameras that send real-time updates about the traffic flow to a central traffic management system. The data feed from the sensors and cameras is analyzed, and the system notifies users of congestion and traffic signal malfunctions. In July 2018, the city further installed Advanced Transportation Controller (ATC) cabinets at each intersection. Enabled with vehicle-to-infrastructure (V2I) communications and 5G connectivity, this allows them to communicate with cars that have a traffic light information feature, such as the Audi A4 or Q7. You can learn more about the Los Angeles smart transportation system from the city's website. The launch of automated vehicles embedded with sensors, which can provide both the location and the speed of the vehicle, will let cars communicate directly with smart traffic lights and help prevent congestion. Additionally, using historical data, future traffic can be predicted and used to prevent possible congestion.
Smart parking

Anyone living in a city has felt the struggle of finding a parking spot, especially during the holiday season. Smart parking can ease that struggle. With road-surface sensors embedded in the ground at parking spots, smart parking solutions can determine whether parking spots are free or occupied and create a real-time parking map.

The city of Adelaide installed a smart parking system in February 2018, and it is also launching a mobile app, Park Adelaide, which will provide users with accurate and real-time parking information. The app gives users the ability to locate, pay for, and even extend a parking session remotely. The smart parking system of the city of Adelaide also aims to improve traffic flow, reduce traffic congestion, and decrease carbon emissions. The details of the smart parking system are available on the city of Adelaide website.

The San Francisco Municipal Transportation Agency (SFMTA) implemented SFpark, a smart parking system. It uses wireless sensors to detect real-time parking-space occupancy in metered spaces. Launched in 2013, SFpark has reduced weekday greenhouse gas emissions by 25%, traffic volume has gone down, and drivers' search time has been reduced by 50%. In London, the city of Westminster also established a smart parking system in 2014, in association with Machina Research. Earlier, drivers had to wait an average of 12 minutes, resulting in congestion and pollution; since the installation of the smart parking system, there is no need to wait, and drivers can find an available parking spot using their mobile phones.

These are only some of the use cases; others include smart waste management, smart policing, smart lighting, and smart governance.

What can AI do for IoT adoption in smart cities?

Building a smart city is not a one-day business, nor is it the work of one person or organization. It requires the collaboration of many strategic partners, leaders, and even citizens. Let's explore what the AI community can do, and which areas provide us with career or entrepreneurship opportunities. Any IoT platform will necessarily require the following:

- A network of smart things (sensors, cameras, actuators, and so on) for gathering data
- Field (cloud) gateways that can gather the data from low-power IoT devices, store it, and forward it securely to the cloud
- A streaming data processor for aggregating numerous data streams and distributing them to a data lake and control applications
- A data lake for storing all the raw data, even data that seems of no value yet
- A data warehouse that can clean and structure the collected data
- Tools for analyzing and visualizing the data collected by the sensors
- AI algorithms and techniques for automating city services based on long-term data analysis and finding ways to improve the performance of control applications
- Control applications for sending commands to the IoT actuators
- User applications for connecting smart things and citizens

Besides this, there will be issues regarding security and privacy, and the service provider will have to ensure that these smart services do not pose any threat to citizens' wellbeing. The services themselves should be easy to use and employ so that citizens can adopt them. As you can see, this offers a range of job opportunities, specifically for AI engineers. The IoT-generated data needs to be processed, and to truly benefit from it we will need to go beyond monitoring and basic analysis.
The AI tools will be required to identify patterns and hidden correlations in the sensor data. Analysis of historical sensor data using ML/AI tools can help in identifying trends and creating predictive models based on them. These models can then be used by control applications that send commands to IoT devices' actuators. The process of building a smart city will be an iterative one, with more processing and analysis added at each iteration. Let's now have a look at an example of an AI-powered IoT solution.

Detecting crime using San Francisco crime data

The city of San Francisco also has an open data portal providing data from different departments online. In this section, we take the dataset providing about 12 years (from January 2003 to May 2015) of crime reports from across all of San Francisco's neighborhoods and train a model to predict the category of crime that occurred. There are 39 discrete crime categories, thus it's a multi-class classification problem. We will make use of Apache Spark's PySpark API and its easy-to-use text processing features for this dataset. The first step is to import the necessary modules and create a Spark session:

from pyspark.ml.classification import LogisticRegression as LR
from pyspark.ml.feature import RegexTokenizer as RT
from pyspark.ml.feature import StopWordsRemover as SWR
from pyspark.ml.feature import CountVectorizer
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler
from pyspark.ml.evaluation import MulticlassClassificationEvaluator  # used later for evaluation
from pyspark.ml import Pipeline
from pyspark.sql.functions import col
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("Crime Category Prediction") \
    .config("spark.executor.memory", "70g") \
    .config("spark.driver.memory", "50g") \
    .config("spark.memory.offHeap.enabled", True) \
    .config("spark.memory.offHeap.size", "16g") \
    .getOrCreate()

We load the dataset available in a CSV file:

data = spark.read.format("csv"). \
    options(header="true", inferschema="true"). \
    load("sf_crime_dataset.csv")

data.columns

The data contains nine columns: [Dates, Category, Descript, DayOfWeek, PdDistrict, Resolution, Address, X, Y]. We will need only the Category and Descript fields for the training and testing datasets:

drop_data = ['Dates', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']
data = data.select([column for column in data.columns if column not in drop_data])
data.show(5)

The dataset we have contains textual data, so we will need to perform text processing. The three important text processing steps are: tokenizing the data, removing the stop words, and vectorizing the words into features. We will use RegexTokenizer, which uses a regex to tokenize each sentence into a list of words; since punctuation and special characters do not add anything to the meaning, we retain only the words containing alphanumeric content. There are some words, like the, which are very commonly present in the text but do not add any meaning to the context. We can remove these words (also called stop words) using the inbuilt StopWordsRemover class. We use the stop words ["http","https","amp","rt","t","c","the"]. Finally, using CountVectorizer, we convert the words to a numeric vector (features). It's these numeric features that will be used as input to train the model. The output for our data is the Category column, but it's also textual, with 39 distinct categories, and so we need to convert it to a numeric label index; PySpark's StringIndexer can be easily used for this.
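If you have not used StringIndexer before, the following toy snippet (not part of the book's code; it reuses the spark session created above and a made-up three-row DataFrame) shows what it does: each distinct string gets a numeric index, with the most frequent value mapped to 0.0.

from pyspark.ml.feature import StringIndexer

toy = spark.createDataFrame(
    [("LARCENY/THEFT",), ("ASSAULT",), ("LARCENY/THEFT",)], ["Category"])
indexer = StringIndexer(inputCol="Category", outputCol="label")
indexer.fit(toy).transform(toy).show()
# LARCENY/THEFT appears twice, so it is mapped to 0.0; ASSAULT gets 1.0

With the individual transformations understood, we can chain them together.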
We add all these transformations into our data Pipeline:

# regular expression tokenizer
re_Tokenizer = RT(inputCol="Descript", outputCol="words", pattern="\\W")

# stop words
stop_words = ["http","https","amp","rt","t","c","the"]
stop_words_remover = SWR(inputCol="words", outputCol="filtered").setStopWords(stop_words)

# bag of words count
count_vectors = CountVectorizer(inputCol="filtered", outputCol="features", vocabSize=10000, minDF=5)

# Encode the label (Category) as a numeric index
label_string_Idx = StringIndexer(inputCol="Category", outputCol="label")

# Create the pipeline
pipeline = Pipeline(stages=[re_Tokenizer, stop_words_remover, count_vectors, label_string_Idx])

# Fit the pipeline to the data
pipeline_fit = pipeline.fit(data)
dataset = pipeline_fit.transform(data)
dataset.show(5)

Now that the data is ready, we split it into training and test datasets:

# Split the data randomly into training and test data sets
(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed=100)
print("Training Dataset Size: " + str(trainingData.count()))
print("Test Dataset Size: " + str(testData.count()))

Let's fit a simple logistic regression model to it. On the test dataset, it provides a 97% accuracy. Yahoo!:

# Build the model
logistic_regressor = LR(maxIter=20, regParam=0.3, elasticNetParam=0)

# Train the model with the training data
model = logistic_regressor.fit(trainingData)

# Make predictions on the test data
predictions = model.transform(testData)

# Evaluate the model on the test data set
evaluator = MulticlassClassificationEvaluator(predictionCol="prediction")
evaluator.evaluate(predictions)

AI is changing the way cities operate, deliver, and maintain public amenities, from lighting and transportation to connectivity and health services. However, adoption can be obstructed by the selection of technology that doesn't work together efficiently or integrate with other city services. For cities to truly benefit from the potential that smart cities offer, a change in mindset is required. The authorities should plan longer term and across multiple departments. The city of Barcelona is a prime example, where the implementation of IoT systems created an estimated 47,000 jobs, saved €42.5 million on water, and generated an extra €36.5 million a year through smart parking. We can easily see that cities can benefit tremendously from the technological advances that utilize AI-powered IoT solutions. AI-powered IoT solutions can help connect cities and manage multiple infrastructures and public services. In this article, we looked at use cases of smart cities, from smart lighting and road traffic to connected public transport and waste management. We also learned to use tools that can help categorize the data from 12 years of San Francisco crime reports. If you want to explore more topics in the book, be sure to check out 'Hands-On Artificial Intelligence for IoT'.

IBM Watson announces pre-trained AI tools to accelerate IoT operations
Implementing cost-effective IoT analytics for predictive maintenance [Tutorial]
AI and the Raspberry Pi: Machine Learning and IoT, What's the Impact?

Key skills every database programmer should have

Sugandha Lahoti
05 Sep 2019
7 min read
According to Robert Half Technology's 2019 IT salary report, 'Database programmer' is one of the 13 most in-demand tech jobs for 2019. For an entry-level programmer, the average salary is $98,250, which goes up to $167,750 for a seasoned expert. A typical database programmer is responsible for designing, developing, testing, deploying, and maintaining databases. In this article, we will list the top critical tech skills essential to database programmers.

#1 Ability to perform Data Modelling

The first step is to learn to model the data. In data modeling, you create a conceptual model of how data items relate to each other. In order to efficiently plan a database design, you should know the organization you are designing the database for. This is because data models describe real-world entities such as 'customer', 'service', and 'products', and the relations between these entities. Data models provide an abstraction for the relations in the database. They aid programmers in modeling business requirements and in translating business requirements into relations. They are also used for exchanging information between the developers and business owners. During the design phase, the database developer should pay great attention to the underlying design principles, run a benchmark stack to ensure performance, and validate user requirements. They should also avoid pitfalls such as data redundancy, null saturation, and tight coupling.

#2 Know a database programming language, preferably SQL

Database programmers need to design, write, and modify programs to improve their databases. SQL is one of the top languages used to manipulate the data in a database and to query the database. It's also used to define and change the structure of the data, in other words, to implement the data model. Therefore it is essential that you learn SQL. In general, SQL has three parts:

Data Definition Language (DDL): used to create and manage the structure of the data
Data Manipulation Language (DML): used to manage the data itself
Data Control Language (DCL): controls access to the data

Considering that data is constantly inserted into the database, changed, or retrieved, DML is used more often in day-to-day operations than DDL, so you should have a strong grasp of DML. If you plan to grow into a database architect role in the near future, then having a good grasp of DDL will go a long way. Another reason why you should learn SQL is that almost every modern relational database supports SQL. Although different databases might support different features and implement their own dialect of SQL, the basics of the language remain the same. If you know SQL, you can quickly adapt to MySQL, for example. At present, there are a number of categories of database models, predominantly relational, object-relational, and NoSQL databases. All of these are meant for different purposes. Relational databases often adhere to SQL. Object-relational databases (ORDs) are also similar to relational databases. NoSQL, which stands for "not only SQL," is an alternative to traditional relational databases that is useful for working with large sets of distributed data. NoSQL databases provide benefits such as availability, schema-free design, and horizontal scaling, but also have limitations such as performance, data retrieval constraints, and learning time. For beginners, it is advisable to start by experimenting with relational databases and learning SQL, gradually transitioning to NoSQL DBMSs.
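To see the DDL/DML split in one place, here is a minimal, self-contained sketch using Python's built-in sqlite3 module (chosen only because it needs no setup; the table and rows are made up). SQLite has no DCL, so a GRANT statement is shown only as a comment for server databases such as PostgreSQL or MySQL.

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

# DDL: define the structure of the data
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# DML: manipulate the data itself
cur.execute("INSERT INTO customer (name, city) VALUES (?, ?)", ("Asha", "Pune"))
cur.execute("UPDATE customer SET city = ? WHERE name = ?", ("Mumbai", "Asha"))
print(cur.execute("SELECT id, name, city FROM customer").fetchall())

# DCL (not available in SQLite): on a server DBMS you would write something like
# GRANT SELECT ON customer TO reporting_user;

conn.close()

The same three-way split applies whatever relational database you end up using; only the dialect details change.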
#3 Know how to Extract, Transform, Load various data types and sources

A database programmer should have a good working knowledge of ETL (Extract, Transform, Load) programming. ETL developers basically extract data from different databases, transform it, and then load the data into the data warehouse system. A data warehouse provides a common data repository that is essential for business needs. A database programmer should know how to tune existing packages, tables, and queries for faster ETL processing. They should conduct unit tests before applying any change to the existing ETL process. Since ETL takes data from different data sources (SQL Server, CSV, and flat files), a database developer should have knowledge of how to deal with different data sources.

#4 Design and test Database plans

Database programmers perform regular tests to identify ways to solve database usage concerns and malfunctions. As databases are usually found at the lowest level of the software architecture, testing is done in an extremely cautious fashion. This is because changes in the database schema affect many other software components. A database developer should make sure that when changing the database structure, they do not break existing applications and that they are using the new structures properly. You should be proficient in unit testing your database. Unit tests are typically used to check whether small units of code are functioning properly. For databases, unit testing can be difficult, so the easiest way to do it is by writing the tests as SQL scripts. You should also know about System Integration Testing (SIT), which is done on the complete system after the hardware and software modules of that system have been integrated. SIT validates the behavior of the system and ensures that the modules in the system are functioning suitably.

#5 Secure your Database

Data protection and security are essential for the continuity of business. Databases often store sensitive data, such as user information, email addresses, geographical addresses, and payment information. A robust security system to protect your database against any data breach is therefore necessary. While a database architect is responsible for designing and implementing secure design options, a database admin must ensure that the right security and privacy policies are in place and are being observed. However, this does not absolve database programmers from adopting secure coding practices. Database programmers need to ensure that data integrity is maintained over time and is secure from unauthorized changes or theft. They especially need to be careful about table permissions, i.e., who can read and write to what tables. You should be aware of who is allowed to perform the four basic operations of INSERT, UPDATE, DELETE, and SELECT against which tables. Database programmers should also adopt authentication best practices depending on the infrastructure setup, the application's nature, the user's characteristics, and data sensitivity. If the database server is accessed from the outside world, it is beneficial to encrypt sessions using SSL certificates to avoid packet sniffing. Also, you should secure database servers that trust all localhost connections, as anyone who accesses the localhost can access the database server.

#6 Optimize your database performance

A database programmer should also be aware of how to optimize their database performance to achieve the best results. At the basic level, they should know how to rewrite SQL queries and maintain indexes; a small illustration of the effect of an index follows.
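The sketch below (again plain Python with the built-in sqlite3 module, and a made-up table) shows the kind of before/after check a programmer might run: inspect the query plan, add an index, and confirm that the plan switches from a full table scan to an index search. The exact plan text varies between engines and versions, so treat the output as indicative only.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 500, i * 1.5) for i in range(10000)])

query = "SELECT total FROM orders WHERE customer_id = 42"
print(cur.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # full scan of orders

cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(cur.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # now searches via the index

conn.close()

Hardware, network, and configuration tuning, covered next, sits on top of this query-level work.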
Other aspects of optimizing database performance include hardware configuration, network settings, and database configuration. Generally speaking, tuning database performance requires knowledge of the system's nature. Once the database server is configured, you should calculate the number of transactions per second (TPS) for the database server setup. Once the system is up and running, you should set up a monitoring system or log analysis that periodically finds slow queries, the most time-consuming queries, and so on.

#7 Develop your soft skills

Apart from the above technical skills, a database programmer needs to be comfortable communicating with developers, testers, and project managers while working on any software project. A keen eye for detail and critical thinking can often spot malfunctions and errors that may otherwise be overlooked. A database programmer should be able to quickly fix issues within the database and streamline the code. They should also possess the quick thinking needed to prioritize tasks and meet deadlines effectively. Often, database programmers are required to work on documentation and technical user guides, so strong writing and technical skills are a must.

Get started

If you want to get started with becoming a database programmer, Packt has a range of products. Here are some of the best:

PostgreSQL 11 Administration Cookbook
Learning PostgreSQL 11 - Third Edition
PostgreSQL 11 in 7 days [ Video ]
Using MySQL Databases With Python [ Video ]
Basic Relational Database Design [ Video ]

How to learn data science: from data mining to machine learning
How to ace a data science interview
5 barriers to learning and technology training for small software development teams

How to translate OpenQASM programs in IBM QX into quantum scores [Tutorial]

Natasha Mathur
21 Apr 2019
8 min read
Open Quantum Assembly Language (OpenQASM, pronounced open kazm) is a custom programming language designed specifically to minimally describe quantum circuits. In this tutorial, we will learn how to translate OpenQASM programs into quantum scores with IBM QX. We will also look at representing quantum scores as OpenQASM 2.0 programs. You will need a modern web browser and the ability to sign into IBM QX. This tutorial is an excerpt taken from the book 'Mastering Quantum Computing with IBM QX' written by Dr. Christine Corbett Moran. The book explores quantum computing by implementing quantum programs on IBM QX, helping you be at the forefront of the next revolution in computation. The Quantum Composer is a tool to specify quantum programs graphically, and many SDKs and APIs exist to write computer code that represents a quantum program in a modern language (Python being a common choice). Like the Quantum Composer, OpenQASM is a higher-level language for specifying quantum programs than computer code, but unlike the Quantum Composer, it is neither graphical nor user-interface specific, so it can be much easier to specify longer programs that can be directly copied into the many quantum simulators or into IBM QX for use. The Quantum Composer can take programs in OpenQASM as input and translate them into the graphical view. Likewise, for every program specified in the Quantum Composer it is easy to access the equivalent in OpenQASM within the IBM QX user interface. OpenQASM is similar in syntax to C:

Comments are one per line and begin with //
White space isn't important
Case is important
Every line in the program must end in a semicolon ;

Additionally, the following points apply:

Every program must begin with OPENQASM 2.0; (IBM QX at the time of writing uses version 2.0, but this can be updated to whichever version of OpenQASM you are using).
When working with IBM QX, the include "qelib1.inc"; header must be given. Any other file can be included with the same syntax; OpenQASM simply copies the content of the file at the location of the include. The path to the file is a relative path from the current program.

Reading and writing OpenQASM 2.0 programs for the IBM QX will involve the following operations:

Include header: include "qelib1.inc";
Declaring a quantum register (qregname is any name you choose for the quantum register): qreg qregname[k];
Referencing a quantum register: qregname[i];
Declaring a classical register (cregname is any name you choose for the classical register): creg cregname[k];
Referencing a classical register: cregname[i];
One-qubit gate list, available with inclusion of qelib1.inc on IBM QX: h, t, tdg, s, sdg, x, y, z, id
One-qubit gate action syntax: gate q[i];
Two-qubit CNOT gate list, available with inclusion of qelib1.inc on IBM QX: cx
Two-qubit CNOT gate action (control and target are both qubits in a previously declared quantum register): cx control, target;
Measurement operations available: measure, bloch
Measurement operation action syntax: measure q[i] -> c[j]; bloch q[i] -> c[j];
Barrier operation (args are a comma-separated list of qubits): barrier args;
Primitive gates (OpenQASM standard but not used on IBM QX): CX, U

We will now learn to read OpenQASM programs and translate them into quantum scores, as well as to translate quantum scores to OpenQASM programs.
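One practical aside that is not part of this excerpt: if you happen to have the Qiskit Python SDK installed, you can sanity-check any hand translation by loading the OpenQASM text and drawing the circuit, which should match the score you build in the Quantum Composer. A minimal sketch follows (the QASM string is the single-qubit example discussed shortly):

from qiskit import QuantumCircuit

qasm_text = """
OPENQASM 2.0;
include "qelib1.inc";
qreg q[1];
x q[0];
"""

circuit = QuantumCircuit.from_qasm_str(qasm_text)
print(circuit.draw())  # prints a text drawing of the circuit

This is optional tooling; everything that follows can be done entirely in the IBM QX web interface.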
Note that i and j are integer counters, starting at 0, which specify which qubit/bit in the quantum or classical register the program would like to reference; k is an integer counter greater than 0 which specifies the size of a classical or quantum register at declaration.

Translating OpenQASM programs into quantum scores

In this tutorial, we will translate OpenQASM programs into quantum scores by hand to practice reading OpenQASM code.

OpenQASM to negate one qubit

Consider the following program:

OPENQASM 2.0;
include "qelib1.inc";
qreg q[1];
x q[0];

The following lines are the standard headers for working with IBM QX:

OPENQASM 2.0;
include "qelib1.inc";

Then the following line declares a quantum register of size one named q:

qreg q[1];

Quantum registers are automatically initialized to contain |"0">. Finally, the next line applies the X gate to the first (and only) qubit in the q quantum register:

x q[0];

Putting this all together, we can create the following equivalent quantum score:

OpenQASM to apply gates to two qubits, and measure the first qubit

Next, consider the OpenQASM program:

OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
creg c[1];
x q[0];
y q[0];
z q[0];
s q[1];
measure q[0] -> c[0];

The first two lines are the standard header declaring an OpenQASM program and the standard include header to interface with the IBM QX. The next two lines declare a quantum register of two qubits initialized to |"00"> and a classical register of one bit initialized to 0:

qreg q[2];
creg c[1];

The next three lines apply gates, in order, to the first qubit in the q quantum register:

x q[0];
y q[0];
z q[0];

The next line applies a gate to the second qubit in the q quantum register:

s q[1];

And the final line measures the first qubit in the q quantum register and places the result in the first (and only) bit in the c classical register:

measure q[0] -> c[0];

Putting this all together, we can create the following equivalent quantum score:

Representing quantum scores in OpenQASM 2.0 programs

Here is an example of writing an OpenQASM 2.0 program from a quantum score. I have broken it down into columns from top to bottom of the score for clarity and annotated these in the diagram by indicating column numbers in orange. Here's the circuit illustrating the reversibility of quantum computations:

Let's dissect the OpenQASM that generates this circuit. The first lines are, as usual, the headers, indicating the code is OpenQASM and that we will be using the standard IBM QX header:

OPENQASM 2.0;
include "qelib1.inc";

The next lines declare a quantum register named q of 5 qubits initialized to |"00000"> and a classical register named c of 5 bits initialized to 00000:

// declare the quantum and classical registers we will use
qreg q[5];
creg c[5];

The next lines will go column by column in the circuit diagram, creating the code for each column in order. We will start with the first column. The first column contains only a CNOT gate, with its control qubit being the third qubit in the q quantum register, q[2], and the target qubit being the second qubit in the q quantum register, q[1]. Looking up the OpenQASM syntax for the CNOT gate in the table in the previous section, we see that it is cx control, target; which means that the first column will be coded as:

//column 1
cx q[2],q[1];

Next, we will move to the second column, which has a number of gates specified.
The code for the second column is:

//column 2
x q[1];
h q[2];
s q[3];
y q[4];

Each successive column should now be straightforward to encode in OpenQASM by looking at the quantum score. The full program is as follows:

OPENQASM 2.0;
include "qelib1.inc";

// declare the quantum and classical registers we will use
qreg q[5];
creg c[5];

//column 1
cx q[2],q[1];

//column 2
x q[1];
h q[2];
s q[3];
y q[4];

//column 3
t q[2];
z q[3];

//column 4
tdg q[2];
z q[3];

//column 5
x q[1];
h q[2];
sdg q[3];
y q[4];

// column 6
cx q[2],q[1];

// column 7
measure q[0] -> c[0];

// column 8
measure q[1] -> c[1];

// column 9
measure q[2] -> c[2];

// column 10
measure q[3] -> c[3];

// column 11
measure q[4] -> c[4];

The previous code exactly reproduces the quantum score as depicted, but we could make several equivalent quantum scores (and thus several equivalent variations of the OpenQASM program), as we saw in previous sections. Here are a couple of things to keep in mind. The gates within each column could be in any order; for example, column 3 could be:

t q[2];
z q[3];

Or it could be:

z q[3];
t q[2];

In addition, any gate operating on a qubit in any column where there is no gate in the previous column on that qubit can be moved to the previous column without affecting the computation. In this article, we learned how to translate OpenQASM programs in IBM QX into quantum scores. We also looked at representing quantum scores as OpenQASM 2.0 programs. If you want to learn other concepts and principles of quantum computing with IBM QX, be sure to check out the book 'Mastering Quantum Computing with IBM QX'.

Quantum computing, edge analytics, and meta-learning: key trends in data science and big data in 2019
Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes quantum entanglement possible
Quantum Computing is poised to take a quantum leap with industries and governments on its side

Building VR objects in React VR 2.0: Getting started with polygons in Blender

Sunith Shetty
05 Jun 2018
11 min read
A polygon is an n-sided object composed of vertices (points), edges, and faces. A face can face in or out or be double-sided. For most real-time VR, we use single–sided polygons; we noticed this when we first placed a plane in the world, depending on the orientation, you may not see it. In today’s tutorial, we will understand why Polygons are the best way to present real-time graphics. To really show how this all works, I'm going to show the internal format of an OBJ file. Normally, you won't hand edit these — we are beyond the days of VR constructed with a few thousand polygons (my first VR world had a train that represented downloads, and it had six polygons, each point lovingly crafted by hand), so hand editing things isn't necessary, but you may need to edit the OBJ files to include the proper paths or make changes your modeler may not do natively–so let's dive in! This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you'll gain a deeper understanding of Virtual Reality and a full-fledged  VR app to add to your profile. Polygons are constructed by creating points in 3D space, and connecting them with faces. You can consider that vertices are connected by lines (most modelers work this way), but in the native WebGL that React VR is based on, it's really just faces. The points don't really exist by themselves, but more or less "anchor" the corners of the polygon. For example, here is a simple triangle, modeled in Blender: In this case, I have constructed a triangle with three vertices and one face (with just a flat color, in this case green). The edges, shown in yellow or lighter shade, are there for the convenience of the modeler and won't be explicitly rendered. Here is what the triangle looks like inside our gallery: If you look closely in the Blender photograph, you'll notice that the object is not centered in the world. When it exports, it will export with the translations that you have applied in Blender. This is why the triangle is slightly off center on the pedestal. The good news is that we are in outer space, floating in orbit, and therefore do not have to worry about gravity. (React VR does not have a physics engine, although it is straightforward to add one.) The second thing you may notice is that the yellow lines (lighter gray lines in print) around the triangle in Blender do not persist in the VR world. This is because the file is exported as one face, which connects three vertices. The plural of vertex is vertices, not vertexes. If someone asks you about vertexes, you can laugh at them almost as much as when someone pronouncing Bézier curve as "bez ee er." Ok, to be fair, I did that once, now I always say Beh zee a. Okay, all levity aside, now let's make it look more interesting than a flat green triangle. This is done through something usually called as texture mapping. Honestly, the phrase "textures" and "materials" often get swapped around interchangeably, although lately they have sort of settled down to materials meaning anything about an object's physical appearance except its shape; a material could be how shiny it is, how transparent it is, and so on. A texture is usually just the colors of the object — tile is red, skin may have freckles — and is therefore usually called a texture map which is represented with a JPG, TGA, or other image format. There is no real cross software file format for materials or shaders (which are usually computer code that represents the material). 
When it comes time to render, there are some shader languages that are standard, although these are not always used in CAD programs. You will need to learn what your CAD program uses, and become proficient in how it handles materials (and texture maps). This is far beyond the scope of this book. The OBJ file format (which is what React VR usually uses) allows the use of several different texture maps to properly construct the material. It also can indicate the material itself via parameters coded in the file. First, let's take a look at what the triangle consists of. We imported OBJ files via the Model keyword: <Model source={{ obj: asset('OneTri.obj'), mtl: asset('OneTri.mtl'), }} style={{ transform: [ { translate: [ -0, -1, -5. ] }, { scale: .1 }, ] }} /> First, let's open the MTL (material) file (as the .obj file uses the .mtl file). The OBJ file format was developed by Wavefront: # Blender MTL File: 'OneTri.blend' # Material Count: 1 newmtl BaseMat Ns 96.078431 Ka 1.000000 1.000000 1.000000 Kd 0.040445 0.300599 0.066583 Ks 0.500000 0.500000 0.500000 Ke 0.000000 0.000000 0.000000 Ni 1.000000 d 1.000000 illum 2 A lot of this is housekeeping, but the important things are the following parameters: Ka : Ambient color, in RGB format Kd : Diffuse color, in RGB format Ks : Specular color, in RGB format Ns : Specular exponent, from 0 to 1,000 d : Transparency (d meant dissolved). Note that WebGL cannot normally show refractive materials, or display real volumetric materials and raytracing, so d is simply the percentage of how much light is blocked. 1 (the default) is fully opaque. Note that d in the .obj specification works for illum mode 2. Tr : Alternate representation of transparency; 0 is fully opaque. illum <#> (a number from 0 to 10). Not all illumination models are supported by WebGL. The current list is: Color on and Ambient off. Color on and Ambient on. Highlight on (and colors) <= this is the normal setting. There are other illumination modes, but are currently not used by WebGL. This of course, could change. Ni is optical density. This is important for CAD systems, but the chances of it being supported in VR without a lot of tricks are pretty low.  Computers and video cards get faster and faster all the time though, so maybe optical density and real time raytracing will be supported in VR eventually, thanks to Moore's law (statistically, computing power roughly doubles every two years or so). Very important: Make sure you include the "lit" keyword with all of your model declarations, otherwise the loader will assume you have only an emissive (glowing) object and will ignore most of the parameters in the material file! YOU HAVE BEEN WARNED. It'll look very weird and you'll be completely confused. Don't ask me why I know! The OBJ file itself has a description of the geometry. These are not usually something you can hand edit, but it's useful to see the overall structure. For the simple object, shown before, it's quite manageable: # Blender v2.79 (sub 0) OBJ File: 'OneTri.blend' # www.blender.org mtllib OneTri.mtl o Triangle v -7.615456 0.218278 -1.874056 v -4.384528 15.177612 -6.276536 v 4.801097 2.745610 3.762014 vn -0.445200 0.339900 0.828400 usemtl BaseMat s off f 3//1 2//1 1//1 First, you see a comment (marked with #) that tells you what software made it, and the name of the original file. This can vary. The mtllib is a call out to a particular material file, that we already looked at. 
The o lines (and g line is if there a group) define the name of the object and group; although React VR doesn't  really  use these (currently), in most modeling packages this will be listed in the hierarchy of objects. The v and vn keywords are where it gets interesting, although these are still not something visible. The v keyword creates a vertex in x, y, z space. The vertices built will later be connected into polygons. The vn establishes the normal for those objects, and vt will create the texture coordinates of the same points. More on texture coordinates in a bit. The usemtl BaseMat establishes what material, specified in your .mtl file, that will be used for the following faces. The s off means smoothing is turned off. Smoothing and vertex normals can make objects look smooth, even if they are made with very few polygons. For example, take a look at these two teapots; the first is without smoothing. Looks pretty computer graphics like, right? Now, have a look at the same teapot with the "s 1" parameter specified throughout, and normals included in the file.  This is pretty normal (pun intended), what I mean is most CAD software will compute normals for you. You can make normals; smooth, sharp, and add edges where needed. This adds detail without excess polygons and is fast to render. The smooth teapot looks much more real, right? Well, we haven't seen anything yet! Let's discuss texture. I didn't used to like Sushi because of the texture. We're not talking about that kind of texture. Texture mapping is a lot like taking a piece of Christmas wrapping paper and putting it around an odd shaped object. Just like when you get that weird looking present at Christmas and don't know quite what to do, sometimes doing the wrapping doesn't have a clear right way to do it. Boxes are easy, but most interesting objects aren't always a box. I found this picture online with the caption "I hope it's an X-Box." The "wrapping" is done via U, V coordinates in the CAD system. Let's take a look at a triangle, with proper UV coordinates. We then go get our wrapping paper, that is to say, we take an image file we are going to use as the texture, like this: We then wrap that in our CAD program by specifying this as a texture map. We'll then export the triangle, and put it in our world. You would probably have expected to see "left and bottom" on the texture map. Taking a closer look in our modeling package (Blender still) we see that the default UV mapping (using Blender's standard tools) tries to use as much of the texture map as possible, but from an artistic standpoint, may not be what we want. This is not to show that Blender is "yer doin' it wrong" but to make the point that you've got to check the texture mapping before you export. Also, if you are attempting to import objects without U,V coordinates, double-check them! If you are hand editing an .mtl file, and your textures are not showing up, double–check your .obj file and make sure you have vt lines; if you don't, the texture will not show up. This means the U,V coordinates for the texture mapping were not set. Texture mapping is non-trivial; there is quite an art about it and even entire books written about texturing and lighting. After having said that, you can get pretty far with Blender and any OBJ file if you've downloaded something from the internet and want to make it look a little better. We'll show you how to fix it. The end goal is to get a UV map that is more usable and efficient. 
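If you want to automate that double-check, a few lines of Python are enough. The script below is a quick sketch of my own (not from the book): it counts the v, vt, vn, and f lines in an OBJ file and warns when there are no vt entries, which is exactly the missing-UV situation described above. The filename is a placeholder.

from collections import Counter

def summarize_obj(path):
    counts = Counter()
    with open(path) as obj_file:
        for line in obj_file:
            keyword = line.split(maxsplit=1)[0] if line.strip() else ""
            if keyword in ("v", "vt", "vn", "f"):
                counts[keyword] += 1
    print(f"{path}: {counts['v']} vertices, {counts['vt']} texture coords, "
          f"{counts['vn']} normals, {counts['f']} faces")
    if counts["vt"] == 0:
        print("Warning: no vt lines found - UV coordinates were not exported, "
              "so texture maps will not show up.")

summarize_obj("OneTri.obj")  # replace with your own exported file

Running it on the OneTri.obj listing shown earlier would report three vertices, one normal, one face, and zero texture coordinates until the triangle is unwrapped and re-exported with UVs.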
Not all OBJ file exporters export proper texture maps, and frequently .obj files you may find online may or may not have UVs set. You can use Blender to fix the unwrapping of your model. We have several good Blender books to provide you a head start in it. You can also use your favorite CAD modeling program, such as Max, Maya, Lightwave, Houdini, and so on. This is important, so I'll mention it again in an info box. If you already use a different polygon modeler or CAD page, you don't have to learn Blender; your program will undoubtedly work fine.  You can skim this section. If you don't want to learn Blender anyway, you can download all of the files that we construct from the Github link. You'll need some of the image files if you do work through the examples. Files for this article are at: http://bit.ly/VR_Chap7. To summarize, we learned the basics of polygon modeling with Blender, also got to know the importance of polygon budgets, how to export those models, and details about the OBJ/MTL file formats. To know more about how to make virtual worlds look real, do check out this book Getting Started with React VR. Top 7 modern Virtual Reality hardware systems Types of Augmented Reality targets Unity plugins for augmented reality application development

Creating Graphs and Charts

Packt
12 Apr 2016
17 min read
In this article by Bhushan Purushottam Joshi author of the book Canvas Cookbook, highlights data representation in the form of graphs and charts with the following topics: Drawing the axes Drawing a simple equation Drawing a sinusoidal wave Drawing a line graph Drawing a bar graph Drawing a pie chart (For more resources related to this topic, see here.) Drawing the axes In school days, we all might have used a graph paper and drawn a vertical line called y axis and a horizontal line called as x axis. Here, in the first recipe of ours, we do only the drawing of axes. Also, we mark the points at equal intervals. The output looks like this: How to do it… The HTML code is as follows: <html> <head> <title>Axes</title> <script src="graphaxes.js"></script> </head> <body onload=init()> <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> <br> <form id="myform"> Select your starting value <select name="startvalue" onclick="init()"> <option value=-10>-10</option> <option value=-9>-9</option> <option value=-8>-8</option> <option value=-7>-7</option> <option value=-6>-6</option> <option value=-5>-5</option> <option value=-4>-4</option> <option value=-3>-3</option> <option value=-2>-2</option> </select> </form> </body> </html> The JavaScript code is as follows: varxMin=-10;varyMin=-10;varxMax=10;varyMax=10; //draw the x-axis varcan;varctx;varxaxisx;varxaxisy;varyaxisx;varyaxisy; varinterval;var length; functioninit(){ can=document.getElementById('MyCanvasArea'); ctx=can.getContext('2d'); ctx.clearRect(0,0,can.width,can.height); varsel=document.forms['myform'].elements['startvalue']; xMin=sel.value; yMin=xMin; xMax=-xMin; yMax=-xMin; drawXAxis(); drawYAxis(); } functiondrawXAxis(){ //x axis drawing and marking on the same xaxisx=10; xaxisy=can.height/2; ctx.beginPath(); ctx.lineWidth=2; ctx.strokeStyle="black"; ctx.moveTo(xaxisx,xaxisy); xaxisx=can.width-10; ctx.lineTo(xaxisx,xaxisy); ctx.stroke(); ctx.closePath(); length=xaxisx-10; noofxfragments=xMax-xMin; interval=length/noofxfragments; //mark the x-axis xaxisx=10; ctx.beginPath(); ctx.font="bold 10pt Arial"; for(vari=xMin;i<=xMax;i++) { ctx.lineWidth=0.15; ctx.strokeStyle="grey"; ctx.fillText(i,xaxisx-5,xaxisy-10); ctx.moveTo(xaxisx,xaxisy-(can.width/2)); ctx.lineTo(xaxisx,(xaxisy+(can.width/2))); ctx.stroke(); xaxisx=Math.round(xaxisx+interval); } ctx.closePath(); } functiondrawYAxis(){ yaxisx=can.width/2; yaxisy=can.height-10; ctx.beginPath(); ctx.lineWidth=2; ctx.strokeStyle="black"; ctx.moveTo(yaxisx,yaxisy); yaxisy=10 ctx.lineTo(yaxisx,yaxisy); ctx.stroke(); ctx.closePath(); yaxisy=can.height-10; length=yaxisy-10; noofxfragments=yMax-yMin; interval=length/noofxfragments; //mark the y-axis ctx.beginPath(); ctx.font="bold 10pt Arial"; for(vari=yMin;i<=yMax;i++) { ctx.lineWidth=0.15; ctx.strokeStyle="grey"; ctx.fillText(i,yaxisx-20,yaxisy+5); ctx.moveTo(yaxisx-(can.height/2),yaxisy); ctx.lineTo((yaxisx+(can.height/2)),yaxisy); ctx.stroke(); yaxisy=Math.round(yaxisy-interval); } ctx.closePath(); } How it works... There are two functions in the JavaScript code viz. drawXAxis and drawYAxis. A canvas is not calibrated the way a graph paper is. A simple calculation is used to do the same. In both the functions, there are two parts. One part draws the axis and the second marks the axis on regular intervals. These are delimited by ctx.beginPath() and ctx.closePath(). In the first part, the canvas width and height are used to draw the axis. 
In the second part, we do some calculation. The length of the axis is divided by the number of markers to get the interval. If the starting point is -3, then we have -3, -2, -1, 0, 1, 2, and 3 on the axis, which makes 7 marks and 6 parts. The interval is used to generate x and y coordinate value for the starting point and plot the markers. There is more... Try to replace the following: ctx.moveTo(xaxisx,xaxisy-(can.width/2)); (in drawXAxis()) ctx.lineTo(xaxisx,(xaxisy+(can.width/2)));(in drawXAxis()) ctx.moveTo(yaxisx-(can.height/2),yaxisy);(in drawYAxis()) ctx.lineTo((yaxisx+(can.height/2)),yaxisy);(in drawYAxis()) WITH ctx.moveTo(xaxisx,xaxisy-5); ctx.lineTo(xaxisx,(xaxisy+5)); ctx.moveTo(yaxisx-5,yaxisy); ctx.lineTo((yaxisx+5),yaxisy); Also, instead of grey color for markers, you can use red. Drawing a simple equation This recipe is a simple line drawing on a graph using an equation. The output looks like this: How to do it… The HTML code is as follows: <html> <head> <title>Equation</title> <script src="graphaxes.js"></script> <script src="plotequation.js"></script> </head> <body onload=init()> <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> <br> <form id="myform"> Select your starting value <select name="startvalue" onclick="init()"> <option value=-10>-10</option> <option value=-9>-9</option> <option value=-8>-8</option> <option value=-7>-7</option> <option value=-6>-6</option> <option value=-5>-5</option> <option value=-4>-4</option> <option value=-3>-3</option> <option value=-2>-2</option> </select> <br> Enter the coeficient(c) for the equation y=cx <input type="text" size=5 name="coef"> <input type="button" value="Click to plot" onclick="plotEquation()"> <input type="button" value="Reset" onclick="init()"> </form> </body> </html> The JavaScript code is as follows: functionplotEquation(){ varcoef=document.forms['myform'].elements['coef']; var s=document.forms['myform'].elements['startvalue']; var c=coef.value; var x=parseInt(s.value); varxPos; varyPos; while(x<=xMax) { y=c*x; xZero=can.width/2; yZero=can.height/2; if(x!=0) xPos=xZero+x*interval; else xPos=xZero-x*interval; if(y!=0) yPos=yZero-y*interval; else yPos=yZero+y*interval; ctx.beginPath(); ctx.fillStyle="blue"; ctx.arc(xPos,yPos,5,Math.PI/180,360*Math.PI/180,false); ctx.fill(); ctx.closePath(); if(x<xMax) { ctx.beginPath(); ctx.lineWidth=3; ctx.strokeStyle="green"; ctx.moveTo(xPos,yPos); nextX=x+1; nextY=c*nextX; if(nextX!=0) nextXPos=xZero+nextX*interval; else nextXPos=xZero-nextX*interval; if(nextY!=0) nextYPos=yZero-nextY*interval; else nextYPos=yZero+nextY*interval; ctx.lineTo(nextXPos,nextYPos); ctx.stroke(); ctx.closePath(); } x=x+1; } } How it works... We use one more script in this recipe. There are two scripts referred by the HTML file. One is the previous recipe named graphaxes.js, and the other one is the current one named plotequation.js. JavaScript allows you to use the variables created in one file into the other, and this is done in this new recipe. You already know how the axes are drawn. This recipe is to plot an equation y=cx, where c is the coefficient entered by the user. We take the minimum of the x value from the drop-down list and calculate the values for y in a loop. We plot the current and next coordinate and draw a line between the two. This happens till we reach the maximum value of x. Remember that the maximum and minimum value of x and y is same. There is more... 
Try the following: Input positive as well as negative value for coefficient. Drawing a sinusoidal wave This recipe also uses the previous recipe of axes drawing. The output looks like this: How to do it… The HTML code is as follows: <html> <head> <title>Equation</title> <script src="graphaxes.js"></script> <script src="plotSineEquation.js"></script> </head> <body onload=init()> <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> <br> <form id="myform"> Select your starting value <select name="startvalue" onclick="init()"> <option value=-10>-10</option> <option value=-9>-9</option> <option value=-8>-8</option> <option value=-7>-7</option> <option value=-6>-6</option> <option value=-5>-5</option> <option value=-4>-4</option> <option value=-3>-3</option> <option value=-2>-2</option> </select> <br> <input type="button" value="Click to plot a sine wave" onclick="plotEquation()"> <input type="button" value="Reset" onclick="init()"> </form> </body> </html> The JavaScript code is as follows: functionplotEquation() { var s=document.forms['myform'].elements['startvalue']; var x=parseInt(s.value); //ctx.fillText(x,100,100); varxPos; varyPos; varnoofintervals=Math.round((2*Math.abs(x)+1)/2); xPos=10; yPos=can.height/2; xEnd=xPos+(2*interval); yEnd=yPos; xCtrl1=xPos+Math.ceil(interval/2); yCtrl1=yPos-200; xCtrl2=xEnd-Math.ceil(interval/2); yCtrl2=yPos+200; drawBezierCurve(ctx,xPos,yPos,xCtrl1,yCtrl1,xCtrl2,yCtrl2,xEnd,yEnd,"red",2); for(vari=1;i<noofintervals;i++) { xPos=xEnd; xEnd=xPos+(2*interval); xCtrl1=xPos+Math.floor(interval/2)+15; xCtrl2=xEnd-Math.floor(interval/2)-15; drawBezierCurve(ctx,xPos,yPos,xCtrl1,yCtrl1,xCtrl2,yCtrl2,xEnd,yEnd,"red",2); } } function drawBezierCurve(ctx,xstart,ystart,xctrl1,yctrl1,xctrl2,yctrl2,xend,yend,color,width) { ctx.strokeStyle=color; ctx.lineWidth=width; ctx.beginPath(); ctx.moveTo(xstart,ystart); ctx.bezierCurveTo(xctrl1,yctrl1,xctrl2,yctrl2,xend,yend); ctx.stroke(); } How it works... We use the Bezier curve to draw the sine wave along the x axis. A bit of calculation using the interval between two points, which encompasses a phase, is done to achieve this. The number of intervals is calculated in the following statement: varnoofintervals=Math.round((2*Math.abs(x)+1)/2); where x is the value in the drop-down list. One phase is initially drawn before the for loop begins. The subsequent phases are drawn in the for loop. The start and end x coordinate changes in every iteration. The ending coordinate for the first sine wave is the first coordinate for the subsequent sine wave. Drawing a line graph Graphs are always informative. 
The basic graphical representation can be a line graph, which is demonstrated here: How to do it… The HTML code is as follows: <html> <head> <title>A simple Line chart</title> <script src="linechart.js"></script> </head> <body onload=init()> <h1>Your WhatsApp Usage</h1> <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> </body> </html> The JavaScript code is as follows: functioninit() { vargCanvas = document.getElementById('MyCanvasArea'); // Ensure that the element is available within the DOM varctx = gCanvas.getContext('2d'); // Bar chart data var data = new Array(7); data[0] = "1,130"; data[1] = "2,140"; data[2] = "3,150"; data[3] = "4,140"; data[4] = "5,180"; data[5] = "6,240"; data[6] = "7,340"; // Draw the bar chart drawLineGraph(ctx, data, 70, 100, (gCanvas.height - 40), 50); } functiondrawLineGraph(ctx, data, startX, barWidth, chartHeight, markDataIncrementsIn) { // Draw the x axis ctx.lineWidth = "3.0"; var max=0; varstartY = chartHeight; drawLine(ctx, startX, startY, startX, 1); drawLine(ctx, startX, startY, 490, startY); for(vari=0,m=0;i<data.length;i++,m+=60) { ctx.lineWidth=0.3; drawLine(ctx,startX,startY-m,490,startY-m) ctx.font="bold 12pt Arial"; ctx.fillText(m,startX-30,startY-m); } for(vari=0,m=0;i<data.length;i++,m+=61) { ctx.lineWidth=0.3; drawLine(ctx, startX+m, startY, startX+m, 1); var values=data[i].split(","); var day; switch(values[0]) { case "1": day="MO"; break; case "2": day="TU"; break; case "3": day="WE"; break; case "4": day="TH"; break; case "5": day="FR"; break; case "6": day="SA"; break; case "7": day="SU"; break; } ctx.fillText(day,startX+m-10, startY+20); } //plot the points and draw lines between them varstartAngle = 0 * (Math.PI/180); varendAngle = 360 * (Math.PI/180); varnewValues; for(vari=0,m=0;i<data.length;i++,m+=60) { ctx.beginPath(); var values=data[i].split(","); varxPos=startX+parseInt(values[0])+m; varyPos=chartHeight-parseInt(values[1]); ctx.arc(xPos, yPos, 5, startAngle,endAngle, false); ctx.fillStyle="red"; ctx.fill(); ctx.fillStyle="blue"; ctx.fillText(values[1],xPos, yPos); ctx.stroke(); ctx.closePath(); if(i>0){ ctx.strokeStyle="green"; ctx.lineWidth=1.5; ctx.moveTo(oldxPos,oldyPos); ctx.lineTo(xPos,yPos); ctx.stroke(); } oldxPos=xPos; oldyPos=yPos; } } functiondrawLine(ctx, startx, starty, endx, endy) { ctx.beginPath(); ctx.moveTo(startx, starty); ctx.lineTo(endx, endy); ctx.closePath(); ctx.stroke(); } How it works... All the graphs in the subsequent recipes also work on an array named data. The array element has two parts: one indicates the day and the second indicates the usage in minutes. A split function down the code splits the element into two independent elements. The coordinates are calculated using a parameter named m, which is used in calculating the value of the x coordinate. The value in minutes and the chart height is used to calculate the position of y coordinate. Inside the loop, there are two coordinates, which are used to draw a line. One in the moveTo() method and the other in the lineTo() method. However, the coordinates oldxPos and oldyPos are not calculated in the first iteration, for the simple reason that we cannot draw a line with a single coordinate. Next iteration onwards, we have two coordinates and then the line is drawn between the prior and current coordinates. There is more... Use your own data Drawing a bar graph Another typical representation, which is widely used, is the bar graph. 
Here is an output of this recipe: How to do it… The HTML code is as follows: <html> <head> <title>A simple Bar chart</title> <script src="bargraph.js"></script> </head> <body onload=init()> <h1>Your WhatsApp Usage</h1> <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> </body> </html> The JavaScript code is as follows: functioninit(){ vargCanvas = document.getElementById('MyCanvasArea'); // Ensure that the element is available within the DOM varctx = gCanvas.getContext('2d'); // Bar chart data var data = new Array(7); data[0] = "MON,130"; data[1] = "TUE,140"; data[2] = "WED,150"; data[3] = "THU,140"; data[4] = "FRI,170"; data[5] = "SAT,250"; data[6] = "SUN,340"; // Draw the bar chart drawBarChart(ctx, data, 70, 100, (gCanvas.height - 40), 50); } functiondrawBarChart(ctx, data, startX, barWidth, chartHeight, markDataIncrementsIn) { // Draw the x and y axes ctx.lineWidth = "3.0"; varstartY = chartHeight; //drawLine(ctx, startX, startY, startX, 30); drawBarGraph(ctx, startX, startY, startX, 30,data,chartHeight); drawLine(ctx, startX, startY, 570, startY); } functiondrawLine(ctx, startx, starty, endx, endy) { ctx.beginPath(); ctx.moveTo(startx, starty); ctx.lineTo(endx, endy); ctx.closePath(); ctx.stroke(); } functiondrawBarGraph(ctx, startx, starty, endx, endy,data,chartHeight) { ctx.beginPath(); ctx.moveTo(startx, starty); ctx.lineTo(endx, endy); ctx.closePath(); ctx.stroke(); var max=0; //code to label x-axis for(i=0;i<data.length;i++) { varxValues=data[i].split(","); varxName=xValues[0]; ctx.textAlign="left"; ctx.fillStyle="#b90000"; ctx.font="bold 15px Arial"; ctx.fillText(xName,startx+i*50+i*20,chartHeight+15,200); var height=parseInt(xValues[1]); if(parseInt(height)>parseInt(max)) max=height; varcolor='#'+Math.floor(Math.random()*16777215).toString(16); drawBar(ctx,startx+i*50+i*20,(chartHeight-height),height,50,color); ctx.fillText(Math.round(height/60)+" hrs",startx+i*50+i*20,(chartHeight-height-20),200); } //title the x-axis ctx.beginPath(); ctx.fillStyle="black"; ctx.font="bolder 20pt Arial"; ctx.fillText("<------------Weekdays------------>",startx+150,chartHeight+35,200); ctx.closePath(); //y-axis labelling varylabels=Math.ceil(max/60); varyvalue=0; ctx.font="bold 15pt Arial"; for(i=0;i<=ylabels;i++) { ctx.textAlign="right"; ctx.fillText(yvalue,startx-5,(chartHeight-yvalue),50); yvalue+=60; } //title the y-axis ctx.beginPath(); ctx.font = 'bolder 20pt Arial'; ctx.save(); ctx.translate(20,70); ctx.rotate(-0.5*Math.PI); varrText = 'Rotated Text'; ctx.fillText("<--------Time in minutes--------->" , 0, 0); ctx.closePath(); ctx.restore(); } functiondrawBar(ctx,xPos,yPos,height,width,color){ ctx.beginPath(); ctx.fillStyle=color; ctx.rect(xPos,yPos,width,height); ctx.closePath(); ctx.stroke(); ctx.fill(); } How it works... The processing is similar to that of a line graph, except that here there are rectangles drawn, which represent bars. Also, the number 1, 2, 3… are represented as day of the week (for example, 1 means Monday). This line in the code: varcolor='#'+Math.floor(Math.random()*16777215).toString(16); is used to generate random colors for the bars. The number 16777215 is a decimal value for #FFFFF. Note that the value of the control variable i is not directly used for drawing the bar. Rather i is manipulated to get the correct coordinates on the canvas and then the bar is drawn using the drawBar() function. 
drawBar(ctx,startx+i*50+i*20,(chartHeight-height),height,50,color); There is more... Use your own data and change the colors. Drawing a pie chart A share can be easily represented in form of a pie chart. This recipe demonstrates a pie chart: How to do it… The HTML code is as follows: <html> <head> <title>A simple Pie chart</title> <script src="piechart.js"></script> </head> <body onload=init()> <h1>Your WhatsApp Usage</h1> <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> </body> </html> The JavaScript code is as follows: functioninit() { var can = document.getElementById('MyCanvasArea'); varctx = can.getContext('2d'); var data = [130,140,150,140,170,250,340]; varcolors = ["crimson", "blue", "yellow", "navy", "aqua", "purple","red"]; var names=["MON","TUE","WED","THU","FRI","SAT","SUN"]; varcenterX=can.width/2; varcenterY=can.height/2; //varcenter = [can.width/2,can.height / 2]; var radius = (Math.min(can.width,can.height) / 2)-50; varstartAngle=0, total=0; for(vari in data) { total += data[i]; } varincrFactor=-(centerX-centerX/2); var angle=0; for (vari = 0; i<data.length; i++){ ctx.fillStyle = colors[i]; ctx.beginPath(); ctx.moveTo(centerX,centerY); ctx.arc(centerX,centerY,radius,startAngle,startAngle+(Math.PI*2*(data[i]/total)),false); ctx.lineTo(centerX,centerY); ctx.rect(centerX+incrFactor,20,20,10); ctx.fill(); ctx.fillStyle="black"; ctx.font="bold 10pt Arial"; ctx.fillText(names[i],centerX+incrFactor,15); ctx.save(); ctx.translate(centerX,centerY); ctx.rotate(startAngle); var dx=Math.floor(can.width*0.5)-100; vardy=Math.floor(can.height*0.20); ctx.fillText(names[i],dx,dy); ctx.restore(); startAngle += Math.PI*2*(data[i]/total); incrFactor+=50; } } How it works... Again the data here is the same, but instead of bars, we use arcs here. The trick is done by changing the end angle as per the data available. Translation and rotation helps in naming the weekdays for the pie chart. There is more... Use your own data and change the colors to get acquainted. Summary Managers make decisions based on the data representations. The data is usually represented in a report form and in the form of graph or charts. The latter representation plays a major role in providing a quick review of the data. In this article, we represent dummy data in the form of graphs and chart. Resources for Article: Further resources on this subject: HTML5 Canvas[article] HTML5: Developing Rich Media Applications using Canvas[article] Building the Untangle Game with Canvas and the Drawing API[article]

Managing Payment and Shipping with Magento 2

Packt
10 May 2016
24 min read
In this article by Bret Williams, author of the book Learning Magento 2 Administration, we will see how to manage payment gateways, shipping methods, and orders with Magento 2. E-commerce doesn't work unless customers actually purchase a product or service. In order to make that happen on your Magento store, you need to take payments, provide shipping solutions, collect any required taxes, and, of course, process orders. In this article, we're going to:

Understand the checkout and payment process
Discuss various payment methods you can offer your customers
Configure table rate shipping and review other shipping options
Manage the order process

(For more resources related to this topic, see here.)

It's extremely important that you take care to understand and manage these aspects of your online business, as this involves money — the customer's and yours. No matter how great your products or your pricing, if customers cannot purchase easily, understand your shipping and delivery, or feel in the least hesitant about completing their transaction, your customer leaves and neither they nor you achieve satisfactory results. Once an order is placed, you also have steps to take to process the purchase and make good on your obligation to fulfill your customer's request. Fortunately — as with many other aspects of online commerce — Magento has the features and tools in place to create a solid, efficient checkout experience.

Understanding the checkout and payment process

Since most people shopping online today have made at least one e-commerce purchase on a website, the general process of completing an order is fairly well established, although the exact steps will vary somewhat:

Customer reviews their shopping cart, confirming the items they have decided to purchase.
Customer enters their shipping destination information.
Customer chooses a shipping method based on cost, method, and time of delivery.
Customer enters their payment information.
Customer reviews their order and confirms their intent to purchase.
The system (Magento, in our case) queries a payment processor for approval.
The order is completed and ready for processing.

Of course, as we'll explore in this article, there is much more detail related to this process. As online merchants, you want your customers to have a thorough, yet easy, purchasing experience and you want a valid order that can be fulfilled without complications. To achieve both ends, you have to prepare your Magento store to accurately process orders. So, let's jump in.

Payment methods

When a customer places an order on your Magento store, you'll naturally want to provide a means of capturing payment, whether it's immediate (credit card, PayPal, etc.) or delayed (COD, check, money order, credit). The payment methods you choose to provide, of course, are up to you, but you'll want to provide methods that:

Reduce your risk of not getting paid.
Provide convenience to your customers while fulfilling their payment expectations.

Consumers expect to pay by credit card or through a third-party service such as PayPal. Wholesale buyers may expect to purchase using a purchase order or by sending you a check before shipment. As with any business, you have to decide what will best benefit both you and your buyers.

How payment gateways work

If you're new to online payments as a merchant, it's helpful to have an understanding of how payments are approved and captured in e-commerce.
For this explanation, we're focusing on those payment gateways that allow you to accept credit and debit cards in your store. While PayPal Express and Standard work in a similar fashion, the three gateways that are included in the default Magento installation — PayPal Payments, Braintree and Authorize.net — process credit and debit cards similarly:

1. Your customer enters their card information in your website during checkout.
2. When the order is submitted, Magento sends a request to the gateway (PayPal Payments, Braintree or Authorize.net) for authorization of the card.
3. The gateway submits the card information and order amount to a clearinghouse service that determines if the card is valid and the order amount does not exceed the credit limit of the cardholder.
4. A success or failure code is returned to the gateway and on to the Magento store.
5. If the intent is to capture the funds at time of purchase, the gateway will queue the capture into a batch for processing later in the day and notify Magento that the funds are "captured". A successful transaction will commit the order in Magento and a failure will result in a message to the purchaser.

Other payment methods, such as PayPal Standard and PayPal Express, take the customer to the payment provider's website to complete the payment portion of the transaction. Once the payment is completed, the customer is returned to your Magento store front. When properly configured, integrated payment gateways will update Magento orders as they are authorized and/or captured. This automation means you spend less time managing orders and more time fulfilling shipments and satisfying your customers!

PCI Compliance

The protection of your customer's payment information is extremely important. Not only would a breach of security cause damage to your customer's credit and financial accounts, but the publicity of such a breach could be devastating to your business. Merchant account providers will require that your store meet stringent guidelines for PCI Compliance, a set of security requirements called the Payment Card Industry Data Security Standard (PCI DSS). Your ability to be PCI compliant is based on the integrity of your hosting environment and on which methods you allow customers to use to enter credit card information on your site.

Magento 2 no longer offers a Stored Credit Card payment method. It is highly unlikely that you could — or would want to — provide a server configuration secure enough to meet PCI DSS requirements for storing credit card information. You probably don't want the liability exposure, as well.

You can, however, provide SSL Encryption that could satisfy PCI compliance as long as the credit card information is encrypted before being sent to your server, and then from your server to the credit card processor. As long as you're not storing the customer's credit card information on your server, you can meet PCI compliance provided your hosting provider can assure compliance for server and database security. Even with SSL encryption, not all hosting environments will pass PCI DSS standards. It's vital that you work with a hosting company that has real Magento experience and can document proof of PCI compliance.

Therefore, you should decide whether to provide onsite or offsite credit card payments. In other words, do you want to take payment information within your Magento checkout page or redirect the user to a payment service, such as PayPal, to complete their transaction? There are pros and cons of each method.
Onsite transactions may be perceived as less secure and you do have to prove PCI compliance to your merchant account provider on an ongoing basis. However, onsite transactions mean that the customer can complete their transaction without leaving your website. This helps to preserve your brand experience for your customers. Fortunately, Magento is versatile enough to allow you to provide both options to your customers. Personally, we feel that offering multiple payment methods means you're more likely to complete a sale, while also showing your customers that you want to provide the most convenience in purchasing.

Let's now review the various payment methods offered by default in Magento 2. Magento 2 comes with a host of the most popular and common payment methods. However, you should review other possibilities, such as Amazon Payments, Stripe and Moneybookers, depending on your target market. We anticipate that developers will be offering add-ons for these and other payment methods. Note that as you change the Merchant Location at the top of the Payment Methods panel, the payment methods available to you may change.

PayPal all-in-one payment solutions

While PayPal is commonly known for their quick and easy PayPal Express buttons — the ubiquitous yellow buttons you see throughout the web — PayPal can provide you with credit/debit card solutions that allow customers to use their cards without needing a PayPal account. To your customer, the checkout appears no different than if they were using a normal credit card checkout process. The big difference is that you have to set up a business account with PayPal before you can begin accepting non-PayPal account payments.

Proceeds will go almost immediately into your PayPal account (you have to have a PayPal account), but your customers can pay by using a credit/debit card or their own PayPal account. With the all-in-one solution, PayPal approves your application for a merchant account and allows you to accept all popular cards, including American Express, at a flat 2.9% rate, plus $0.30 per transaction. PayPal payments incur normal per-transaction PayPal charges. We like this solution as it keeps all your online receipts in one account, while also giving you fast access to your sales income. PayPal also provides a debit card for its merchants that can earn back 1% on purchases. We use our PayPal debit card for all kinds of business purchases and receive a nice little cash back dividend each month.

PayPal provides two ways to incorporate credit card payment capture on your website:

PayPal Payments Advanced inserts a form on your site that is actually hosted from PayPal's highly secure servers. The form appears as part of your store, but you don't have any PCI compliance concerns.
PayPal Payments Pro allows you to obtain payment information using the normal Magento form, then submit it to PayPal for approval.

The difference to your customer is that for Advanced, there is a slight delay while the credit card form is inserted into the checkout page. You may also have some limitations in terms of styling.

PayPal Standard, also a part of the all-in-one solution, takes your customer to a PayPal site for payment. Unlike PayPal Express, however, you can style this page to better reflect your brand image. Plus, customers do not have to have a PayPal account in order to use this checkout method.
PayPal payment gateways

If you already have a merchant account for collecting online payments, you can still utilize the integration of PayPal and Magento by setting up a PayPal business account that is linked to your merchant account. Instead of paying PayPal a percentage of each transaction — you would pay this to your merchant account provider — you simply pay a small per-transaction fee.

PayPal Express

Offering PayPal Express is as easy as having a PayPal account. It does require some configuration of API credentials, but it provides the simplest means of offering payment services without setting up a merchant account. PayPal Express will add "Buy Now" buttons to your product pages and the cart page of your store, giving shoppers a quick and immediate ability to checkout using their PayPal account.

Braintree

PayPal recently acquired Braintree, a payment services company that adds additional services to merchants. While many of their offerings appear to overlap PayPal's, Braintree brings additional features to the marketplace such as Bitcoin, Venmo, Android Pay and Apple Pay payment methods, recurring billing and fraud protection. Like PayPal Payments, Braintree charges 2.9% + $0.30 per transaction.

A Word about Merchant Fees

After operating our own e-commerce businesses for many years, we have used many different merchant accounts and gateways. At first glance, 2.9% — offered by PayPal, Braintree and Stripe — appears to be an expensive percentage. If you've been solicited by merchant account providers, you no doubt have been quoted rates as low as 1.7%. What is not often disclosed is that this rate only applies to basic cards that do not contain miles or other premiums. Rates for most cards you accept can be quite a bit higher. American Express usually charges more than 3% on transactions. Once you factor in gateway costs, reporting, monthly account costs, etc., you may find, as we did, that total merchant costs using a traditional merchant account average over 3.3%! One cost you may not think to factor in is the expense of set-up and integration. PayPal and Braintree have worked hard to create easy integrations to Magento (Stripe is not yet available for Magento 2 as of this writing).

Check / Money Order

If you have customers for whom you will accept payment by check and/or money order, you can enable this payment method. Be sure to enter all the information fields, especially Make Check Payable to and Send Check to. You will most likely want to keep the New Order Status as Pending, which means the order is not ready for fulfillment until you receive payment and update the order as Paid. As with any payment method, be sure to edit the Title of the method to reflect how you wish to communicate it to your customers. If you only wish to accept Money Orders, for instance, you might change Title to Money Orders (sorry, no checks).

Bank transfer payment

As with Check / Money order, you can allow customers to wire money to your account by providing information to your customers who choose this method.

Cash on Delivery payment

Likewise, you can offer COD payments. We still see this method being made available on wholesale shipments, but very rarely on B2C (Business-to-Consumer) sales. COD shipments usually cost more, so you will need to accommodate this added fee in your pricing or shipping methods. At present, there is no ability to add a COD fee using this payment method panel.
Zero Subtotal Checkout

If your customer, by use of discounts or credits, or selecting free items, owes nothing at checkout, enabling this method will cause Magento to hide payment methods during checkout. The content in the Title field will be displayed in these cases.

Purchase order

In B2B (Business-to-Business) sales, it's quite common to accept purchase orders (POs) from customers with approved credit. If you enable this payment method, an additional field is presented to customers for entering their PO number when ordering.

Authorize.net direct post

Authorize.net — perhaps the largest payment gateway provider in the USA — provides an integrated payment capture mechanism that gives your customers the convenience of entering credit/debit card information on your site, but the actual form submission bypasses your server and goes directly to Authorize.net. This mechanism, as with PayPal Payments Advanced, lessens your responsibility for PCI compliance as the data is communicated directly between your customer and Authorize.net instead of passing through the Magento programming. In Magento 1.x, the regular Authorize.net gateway (AIM) was one of several default payment methods. We're not certain it will be added as a default in Magento 2, although we would imagine someone will build an extension. Regardless, we think Direct Post is a wonderful way to use Authorize.net and meet your PCI compliance obligations.

Shipping methods

Once you get paid for a sale, you need to fulfill the order and that means you have to ship the items purchased. How you ship products is largely a function of what shipping methods you make available to your customers. Shipping is one of the most complex aspects of e-commerce, and one where you can lose money if you're not careful. As you work through your shipping configurations, it's important to keep in mind:

What you charge your customers for shipping does not have to be exactly what you're charged by your carriers. Just as you can offer free shipping, you can also charge flat rates based on weight or quantity, or add a surcharge to live rates.
By default, Magento does not provide you with highly sophisticated shipping rate calculations, especially when it comes to dimensional shipping. Consider shipping rate calculations as estimates only. Consult with whomever is actually doing your shipping to determine if any rate adjustments should be made to accommodate dimensional shipping.

Dimensional shipping refers to a recent change by UPS, FedEx and others to charge you the greater of two rates: the cost based on weight or the cost based on a formula to determine the equivalent weight of a package based on its size: (Length x Width x Height) ÷ 166 (for US domestic shipments; other factors apply for other countries and exports). Therefore, if you have a large package that doesn't weigh much, the live rate quoted in Magento might not be reflective of your actual cost once the dimensional weight is calculated. If your packages may be large and lightweight, consult your carrier representative or shipping fulfillment partner for guidance.

If your shipping calculations need more sophistication than provided natively in Magento 2, consider an add-on. However, remember that what you charge to your customers does not have to be what you pay. For that reason — and to keep it simple for your customers — consider offering Table rates (as described later).
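To make the dimensional weight formula quoted above a little more concrete, here is a small illustrative Python sketch. It uses the US domestic divisor of 166 mentioned in the text; actual carrier rounding rules vary, so treat the result as an estimate only:

# Illustrative only: the dimensional weight rule quoted above, with the
# US domestic divisor of 166. Real carriers apply their own rounding rules.
import math

def billable_weight(actual_lbs, length_in, width_in, height_in, divisor=166):
    dim_weight = math.ceil((length_in * width_in * height_in) / float(divisor))
    # Carriers bill the greater of the scale weight and the dimensional weight
    return int(max(math.ceil(actual_lbs), dim_weight))

# A large but light box: 24" x 18" x 12", weighing 6 lbs on the scale
print(billable_weight(6, 24, 18, 12))    # 32, billed far above its 6 lb scale weight

In other words, a bulky, lightweight package can be billed at several times its scale weight, which is exactly why the live rates quoted in Magento can differ from your actual cost.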
Each method you choose will be displayed to your customers if their cart and shipping destination matches the conditions of the method. Take care not to confuse your customers with too many choices: simpler is better. Keeping these insights in mind, let's explore the various shipping methods available by default in Magento 2. Before we go over the shipping methods, let's go over some basic concepts that will apply to most, if not all, shipping methods.

Origin

From where you ship your products will determine shipping rates, especially for carrier rates (e.g. UPS, FedEx). To set your origin, go to Stores | Configuration | Sales | Shipping Settings and expand the Origin panel. At the very least, enter the Country, Region/State and ZIP/Postal Code fields. The others are optional for rate calculation purposes. At the bottom of this panel is the choice to Apply custom Shipping Policy. If enabled, a field will appear where you can enter text about your overall shipping policy. For instance, you may want to enter: Orders placed by 12:00p CT will be processed for shipping on the same day. Applies only to orders placed Monday-Friday, excluding shipping holidays.

Handling fee

You can add an invisible handling fee to all shipping rate calculations. Invisible in that it does not appear as a separate line item charge to your customers. To add a handling fee to a shipping method:

Choose whether you wish to add a fixed amount or a percentage of the shipping cost
If you choose to add a percentage, enter the amount as a decimal number instead of a percentage (example: 0.06 instead of 6%)

Allowed countries

As you configure your shipping methods, don't forget to designate to which countries you will ship. If you only ship to the US and Canada, for instance, be sure to have only those countries selected. Otherwise, you'll have customers from other countries placing orders you will have to cancel and refund.

Method not available

In some cases, the method you configured may not be applicable to a customer based on destination, type of product, weight or any number of factors. For these instances, you can choose to:

Show the method (e.g. UPS, USPS, DHL, etc.), but with an error message that the method is not applicable
Don't show the method at all

Depending on your shipping destinations and target customers, you may want to show an error message just so the customer knows why no shipping solution is being displayed. If you don't show any error message and the customer doesn't qualify for any shipping method, the customer will be confused.

Free shipping

There are several ways to offer free shipping to your customers. If you want to display a Free Shipping option to all customers whose carts meet a minimum order amount (not including taxes or shipping), enable this panel. However, you may want to be more judicious in how and when you offer free shipping. Other alternatives include:

Creating Shopping Cart Promotions
Including a free shipping method in your Table Rates (see later in this section)
Designating a specific free shipping method and minimum qualifying amount within a carrier configuration (such as UPS and FedEx)

If you choose to use this panel, note that it will apply to all orders. Therefore, if you want to be more selective, consider one of the above methods.

Flat Rate

As with Free Shipping, above, the Flat Rate panel allows you to charge one, singular flat rate for all orders regardless of weight or destination. You can apply the rate on a per item or per order basis, as well.
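Before moving on to table rates, here is a one-line illustration of the handling fee convention described above, where a percentage is entered as a decimal fraction. The numbers are hypothetical:

# Illustrative only: a handling fee expressed as a decimal fraction
# (0.06 means 6%), per the Handling fee panel described above.
shipping_cost = 12.50
handling_percentage = 0.06
total_shipping_charge = round(shipping_cost * (1 + handling_percentage), 2)
print(total_shipping_charge)    # 13.25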
Table Rates

While using live carrier rates can provide more accurate shipping quotes for your customers, you may find it more convenient to offer a series of rates for your customers at certain break points. For example, you might only need something as simple as the following for any domestic destination:

0-5 lbs, $5.99
6-10 lbs, $8.99
11+ lbs, $10.99

Let's assume you're a US-based shipper. While these rates will work for you when shipping to any of the contiguous 48 states, you need to charge more for shipments to Alaska and Hawaii. For our example, let's assume tiered pricing of $7.99, $11.99 and $14.99 at the same weight breaks. All of these conditions can be handled using the Table Rates shipping method. Based on our example, we would first start by creating a spreadsheet (in Excel or Numbers) similar to the following:

Country   Region/State   Zip/Postal Code   Weight (and above)   Shipping Price
USA       *              *                 0                    5.99
USA       *              *                 6                    8.99
USA       *              *                 11                   10.99
USA       AK             *                 0                    7.99
USA       AK             *                 6                    11.99
USA       AK             *                 11                   14.99
USA       HI             *                 0                    7.99
USA       HI             *                 6                    11.99
USA       HI             *                 11                   14.99

Let's review the columns in this chart:

Country. Here, you would enter the 3-character country code (for a list of valid codes, see http://goo.gl/6A1woj).
Region/State. Enter the 2-character code for any state or province.
Zip/Postal Code. Enter any specific postal codes for which you wish the rate to apply.
Weight (and above). Enter the minimum applicable weight for the range. The assigned rate will apply until the weight of the cart products combined equals a higher weight tier.
Shipping Price. Enter the shipping charge you wish to provide to the customer. Do not include the currency prefix (example: "$" or "€").

Now, let's discuss the asterisk (*) and how to limit the scope of your rates. As you can see in the chart, we have only indicated rates for US destinations. That's because there are no rows for any other countries. We could easily add rates for all other countries, simply by adding rows with an asterisk in the first column. By adding those rows, we're telling Magento to use the US rates if the customer's ship-to address is in the US, and to use other rates for all other country destinations. Likewise for the states column: Magento will first look for matches for any state codes listed. If it can't find any, then it will look for any rates with an asterisk. If no asterisk is present for a qualifying weight, then no applicable rate will be provided to the customer. The asterisk in the Zip/Postal Code column means that the rates apply to all postal codes for all states.

To get a sample file with which to configure your rates, you can set your configuration scope to one of your Websites (Furniture or Sportswear in our examples) and click Export CSV in the Table Rates panel.

Quantity and price based rates

In the preceding example, we used the weight of the items in the cart to determine shipping rates. You can also configure table rates to use calculations based on the number of items in the cart or the total price of all items (less taxes and shipping). To set up your chart, simply rename the fourth column "Quantity (and above)" or "Subtotal (and above)."

Save your rate table

To upload your table rates, you'll need to save/export your spreadsheet as a CSV file. You can name it whatever you like. Save it to your computer where you can find it for the next steps.

Table rate settings

Before you upload your new rates, you should first set your Table Rates configurations. To do so, you can set your default settings at the Default configuration scope.
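Before adjusting the settings and uploading, it may help to see how the chart above translates into the CSV file itself. The following Python sketch writes the example rows; the header names are assumed from the chart's column labels, so compare them against a file exported from your own store before importing:

# Illustrative only: writes the example weight-based rate table to tablerates.csv.
# Header names are assumed from the chart above; verify them against an Export CSV
# taken from your own Table Rates panel before importing.
import csv

rows = [
    ("USA", "*",  "*", 0,  5.99),
    ("USA", "*",  "*", 6,  8.99),
    ("USA", "*",  "*", 11, 10.99),
    ("USA", "AK", "*", 0,  7.99),
    ("USA", "AK", "*", 6,  11.99),
    ("USA", "AK", "*", 11, 14.99),
    ("USA", "HI", "*", 0,  7.99),
    ("USA", "HI", "*", 6,  11.99),
    ("USA", "HI", "*", 11, 14.99),
]

with open("tablerates.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["Country", "Region/State", "Zip/Postal Code",
                     "Weight (and above)", "Shipping Price"])
    writer.writerows(rows)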
However, to upload your CSV file, you will need to switch your Store View to the appropriate Website scope. When changing to a Website scope, you will see the Export CSV button and the ability to upload your rate table file. You'll note that all other settings may have Use Default checked. You can, of course, uncheck this box beside any field and adjust the settings according to your preferences. Let's review the unique fields in this panel:

Enabled. Set to "Yes" to enable table rates.
Title. Enter the name you wish displayed to customers when they're presented with a table rate-based shipping charge in the checkout process.
Method Name. This name is presented to the customer in the shopping cart. You should probably change the default "Table Rate" to something more descriptive, as this term is likely irrelevant to customers. We have used terms "Standard Ground," "Economy," or "Saver" as names. The Title should probably be the same, as well, so that the customer, during checkout, has a visual confirmation of their shipping choice.
Condition. This allows you to choose the calculation method you want to use. Your choices, as we described earlier, are "Weight vs. Destination," "Price vs. Destination," and "# of items vs. Destination."
Include Virtual Products in Price Calculation. Since virtual products have no weight, this will have no effect on rate calculations for weight-based rates. However, it will affect rate calculations for price or quantity-based rates.

Once you have your settings, click Save Config.

Upload Rate Table

Once you have saved your settings, you can now click the button next to Import and upload your rate table. Be sure to test your rates to see that you have properly constructed your rate table.

Carrier Methods

The remaining shipping methods involve configuring UPS, USPS, FedEx and/or DHL to provide "live" rate calculations. UPS is the only one that is set to query for live rates without the need for you to have an account with the carrier. This is both good and bad. It's good, as you only have to enable the shipping method to have it begin querying rates for your customers. On the flip side, the rates that are returned are not negotiated rates. Negotiated rates are those you may have been offered as discounted rates based on your shipping volume.

FedEx, USPS and DHL require account-specific information in order to activate. This connection with your account should provide rates based on any discounts you have established with your carrier. If you wish to use negotiated rates for UPS, you may have to find a Magento add-on that will accommodate them, or have your developer extend your Magento installation to make a modified rate query. If you have some history with shipping, you should negotiate rates with the carriers. We have found most are willing to offer some discount from "published rates."

Shipping integrations

Unless you have your own sophisticated warehouse operation, it may be wise to partner with a fulfillment provider that can not only store, pick, pack and ship your orders, but also offers deep discounts on shipping rates due to their large volumes. Amazon FBA (Fulfillment By Amazon) is a very popular solution. Shipping is a low flat rate based on weight (http://goo.gl/UKjg7). ShipWire is another fulfillment provider that is well integrated with Magento. In fact, their integration can provide real-time rate quotes for your customers based on the products selected, warehouse availability and destination (http://www.ShipWire.com).
We have not heard if they have updated their integration for Magento 2, yet, but we suspect they will.

Summary

Selling is the primary purpose of building an online store. As you've seen in this article, Magento 2 arms you with a very rich array of features to help you give your customers the ability to purchase using a variety of payment methods. You're able to customize your shipping options and manage complex tax rules. All of this combines to make it easy for your customers to complete their online purchases.

Resources for Article:

Further resources on this subject:

Social Media and Magento [article]
Creating a Responsive Magento Theme with Bootstrap 3 [article]
Magento 2 – the New E-commerce Era [article]
Read more

article-image-how-to-build-12-factor-design-microservices-on-docker-part-1
Cody A.
26 Jun 2015
9 min read
Save for later

How to Build 12 Factor Microservices on Docker - Part 1

Cody A.
26 Jun 2015
9 min read
As companies continue to reap benefits of the cloud beyond cost savings, DevOps teams are gradually transforming their infrastructure into a self-serve platform. Critical to this effort is designing applications to be cloud-native and antifragile. In this post series, we will examine the 12 factor methodology for application design, how this design approach interfaces with some of the more popular Platform-as-a-Service (PaaS) providers, and demonstrate how to run such microservices on the Deis PaaS.

What began as Service Oriented Architectures in the data center is realizing its full potential as microservices in the cloud, led by innovators such as Netflix and Heroku. Netflix was arguably the first to design their applications to not only be resilient but to be antifragile; that is, by intentionally introducing chaos into their systems, their applications become more stable, scalable, and graceful in the presence of errors. Similarly, by helping thousands of clients build cloud applications, Heroku recognized a set of common patterns emerging and set forth the 12 factor methodology.

ANTIFRAGILITY

You may have never heard of antifragility. This concept was introduced by Nassim Taleb, the author of Fooled by Randomness and The Black Swan. Essentially, antifragility is what gains from volatility and uncertainty (up to a point). Think of the MySQL server that everyone is afraid to touch lest it crash vs the Cassandra ring which can handle the loss of multiple servers without a problem. In terms more familiar to the tech crowd, a "pet" is fragile while "cattle" are antifragile (or at least robust, that is, they neither gain nor lose from volatility).

Adrian Cockcroft seems to have discovered this concept with his team at Netflix. During their transition from a data center to Amazon Web Services, they claimed that "the best way to avoid failure is to fail constantly." (http://techblog.netflix.com/2010/12/5-lessons-weve-learned-using-aws.html) To facilitate this process, one of the first tools Netflix built was Chaos Monkey, the now-infamous tool which kills your Amazon instances to see if and how well your application responds. By constantly injecting failure, their engineers were forced to design their applications to be more fault tolerant, to degrade gracefully, and to be better distributed so as to avoid any Single Points Of Failure (SPOF). As a result, Netflix has a whole suite of tools which form the Netflix PaaS. Many of these have been released as part of the Netflix OSS ecosystem.

12 FACTOR APPS

Because many companies want to avoid relying too heavily on tools from any single third-party, it may be more beneficial to look at the concepts underlying such a cloud-native design. This will also help you evaluate and compare multiple options for solving the core issues at hand. Heroku, being a platform on which thousands or millions of applications are deployed, has had to isolate the core design patterns for applications which operate in the cloud and provide an environment which makes such applications easy to build and maintain. These are described as a manifesto entitled the 12-Factor App.

The first part of this post walks through the first five factors and reworks a simple python webapp with them in mind. Part 2 continues with the remaining seven factors, demonstrating how this design allows easier integration with cloud-native containerization technologies like Docker and Deis.
Let's say we're starting with a minimal python application which simply provides a way to view some content from a relational database. We'll start with a single-file application, app.py.

from flask import Flask
import mysql.connector as db
import json

app = Flask(__name__)

def execute(query):
    con = None
    try:
        con = db.connect(host='localhost', user='testdb', password='t123', database='testdb')
        cur = con.cursor()
        cur.execute(query)
        return cur.fetchall()
    except db.Error, e:
        print "Error %d: %s" % (e.args[0], e.args[1])
        return None
    finally:
        if con:
            con.close()

def list_users():
    users = execute("SELECT id, username, email FROM users") or []
    return [{"id": user_id, "username": username, "email": email}
            for (user_id, username, email) in users]

@app.route("/users")
def users_index():
    return json.dumps(list_users())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000, debug=True)

We can assume you have a simple mysql database setup already.

CREATE DATABASE testdb;
CREATE TABLE users (
    id INT NOT NULL AUTO_INCREMENT,
    username VARCHAR(80) NOT NULL,
    email VARCHAR(120) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE INDEX (username),
    UNIQUE INDEX (email)
);
INSERT INTO users VALUES (1, "admin", "admin@example.com");
INSERT INTO users VALUES (2, "guest", "guest@example.com");

As you can see, the application is currently implemented as about the most naive approach possible and contained within this single file. We'll now walk step-by-step through the 12 Factors and apply them to this simple application.

THE 12 FACTORS: STEP BY STEP

Codebase. A 12-factor app is always tracked in a version control system, such as Git, Mercurial, or Subversion. If there are multiple codebases, it's a distributed system in which each component may be a 12-factor app. There are many deploys, or running instances, of each application, including production, staging, and developers' local environments.

Since many people are familiar with git today, let's choose that as our version control system. We can initialize a git repo for our new project. First ensure we're in the app directory which, at this point, only contains the single app.py file.

cd 12factor
git init .

After adding the single app.py file, we can commit to the repo.

git add app.py
git commit -m "Initial commit"

Dependencies. All dependencies must be explicitly declared and isolated. A 12-factor app never depends on packages to be installed system-wide and uses a dependency isolation tool during execution to stop any system-wide packages from "leaking in." Good examples are Gem Bundler for Ruby (Gemfile provides declaration and `bundle exec` provides isolation) and Pip/requirements.txt and Virtualenv for Python (where pip/requirements.txt provides declaration and `virtualenv --no-site-packages` provides isolation).

We can create and use (source) a virtualenv environment which explicitly isolates the local app's environment from the global "site-packages" installations.

virtualenv env --no-site-packages
source env/bin/activate

A quick glance at the code will show that we're only using two dependencies currently, flask and mysql-connector-python, so we'll add them to the requirements file.

echo flask==0.10.1 >> requirements.txt
echo mysql-connector-python >> requirements.txt

Let's use the requirements file to install all the dependencies into our isolated virtualenv.

pip install -r requirements.txt

Config. An app's config must be stored in environment variables. This config is what may vary between deploys in developer environments, staging, and production.
The most common example is the database credentials or resource handle. We currently have the host, user, password, and database name hardcoded. Hopefully you've at least already extracted this to a configuration file; either way, we'll be moving them to environment variables instead.

import os

DATABASE_CREDENTIALS = {
    'host': os.environ['DATABASE_HOST'],
    'user': os.environ['DATABASE_USER'],
    'password': os.environ['DATABASE_PASSWORD'],
    'database': os.environ['DATABASE_NAME']
}

Don't forget to update the actual connection to use the new credentials object:

con = db.connect(**DATABASE_CREDENTIALS)

Backing Services. A 12-factor app must make no distinction between a service running locally or as a third-party. For example, a deploy should be able to swap out a local MySQL database with a third-party replacement such as Amazon RDS without any code changes, just by updating a URL or other handle/credentials inside the config.

Using a database abstraction layer such as SQLAlchemy (or your own adapter) lets you treat many backing services similarly so that you can switch between them with a single configuration parameter. In this case, it has the added advantage of serving as an Object Relational Mapper to better encapsulate our database access logic. We can replace the hand-rolled execute function and SELECT query with a model object:

from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
db = SQLAlchemy(app)

class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(120), unique=True)

    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        return '<User %r>' % self.username

@app.route("/users")
def users_index():
    to_json = lambda user: {"id": user.id, "name": user.username, "email": user.email}
    return json.dumps([to_json(user) for user in User.query.all()])

Now we set the DATABASE_URL environment property to something like:

export DATABASE_URL=mysql://testdb:t123@localhost/testdb

But it should be easy to switch to Postgres or Amazon RDS (still backed by MySQL).

DATABASE_URL=postgresql://testdb:t123@localhost/testdb

We'll continue this demo using a MySQL cluster provided by Amazon RDS.

DATABASE_URL=mysql://sa:mypwd@mydbinstance.abcdefghijkl.us-west-2.rds.amazonaws.com/mydb

As you can see, this makes attaching and detaching from different backing services trivial from a code perspective, allowing you to focus on more challenging issues. This is important during the early stages of code because it allows you to performance test multiple databases and third-party providers against one another, and in general keeps with the notion of avoiding vendor lock-in.

In Part 2, we'll continue reworking this application so that it fully conforms to the 12 Factors. The remaining factors concern the overall application design and how it interacts with the execution environment in which it's operated. We'll assume that we're operating the app in a multi-container Docker environment. This container-up approach provides the most flexibility and control over your execution environment. We'll then conclude the article by deploying our application to Deis, a vertically integrated Docker-based PaaS, to demonstrate the tradeoff of configuration vs convention in selecting your own PaaS.

About the Author

Cody A. Ray is an inquisitive, tech-savvy, entrepreneurially-spirited dude.
Currently, he is a software engineer at Signal, an amazing startup in downtown Chicago, where he gets to work with a dream team that’s changing the service model underlying the Internet.
Read more
article-image-python-ldap-applications-part-1-installing-and-configuring-python-ldap-library-and-bin
Packt
22 Oct 2009
16 min read
Save for later

Configuring and securing PYTHON LDAP Applications Part 1

Packt
22 Oct 2009
16 min read
This article mini-series by Matt Butcher will look at the Python application programming interface (API) for the LDAP libraries, and using this API, we will connect to our OpenLDAP server and manipulate the directory information tree. More specifically, we will cover the following in this article series:

Installing and configuring the Python-LDAP library.
Binding to an LDAP directory.
Comparing attributes between the client and server.
Performing searches on the directory.
Modifying the directory information tree with add, delete, and modify operations.
Modifying directory passwords.
Working with LDAP schemas.

This first part will deal with installation and configuration of the Python-LDAP library. We will then see how the binding operation is performed.

Installing Python-LDAP

There are a couple of LDAP libraries available for Python, but the most popular is the Python-LDAP module, which (as with the PHP API) uses the OpenLDAP C library as a base for providing network access to an LDAP server. Like OpenLDAP, the Python-LDAP API is Open Source. It works on Linux, Windows, Mac OS X, BSD, and probably other UNIX operating systems as well (platforms that have both Python and OpenLDAP available). The source code is available at the official Python-LDAP website: http://python-ldap.sourceforge.net. Here pre-compiled binaries for many platforms are available, but we will install the version in the Ubuntu repository.

Before installing Python-LDAP, you will need to have the Python scripting language installed. Typically, this is installed by default on Ubuntu (and on most flavors of Linux). Installing Python-LDAP requires only one command:

$ sudo apt-get install python-ldap

This will choose the module or modules that match the installed Python version. That is, if you are running Python 2.4 (the stable version, at the time of writing), this will install the python2.4-ldap package. The library, which consists of several Python packages, will be installed into /usr/lib/python2.4/site-packages/. In Ubuntu, there is no need to run further configuration in order to make use of the Python-LDAP library. We are ready to dive into the API. If you install by hand, either from source or from the binary packages, you may need to add the Python-LDAP library to your Python path. See the Python documentation for details.

The Python-LDAP API is well documented. The documentation is available online at the official Python-LDAP website: http://python-ldap.sourceforge.net/docs.shtml. You may find it more convenient to download a copy of the documentation and use it locally. In previous versions of Ubuntu, the Python-LDAP documentation was available in the package python-ldap-doc, which could be installed with apt-get. Also, many of the Python-LDAP functions and objects have documentation strings that can be accessed from the Python interpreter like this:

>>> print ldap.initialize.__doc__
Return LDAPObject instance by opening LDAP connection to LDAP host specified by LDAP URL

Parameters:
uri
    LDAP URL containing at least connection scheme and hostport, e.g. ldap://localhost:389
trace_level
    If non-zero a trace output of LDAP calls is generated.
trace_file
    File object where to write the trace output to. Default is to use stdout.

The documentation string usually contains a brief description of the function or object, and is a useful quick reference.

A Quick Overview of the Python LDAP API

Now that the package is installed, let's take a quick look at what was installed.
The Python-LDAP package comes with nine different modules:

ldap: This is the main LDAP module. It contains the functions necessary for performing LDAP operations, such as binding, searching, adding, and modifying.
ldap.async: Python can do synchronous and asynchronous transactions. This module provides utilities that are useful when performing asynchronous operations.
ldap.cidict: This contains the cidict class, which is a case-insensitive dictionary. Although LDAP is case-insensitive when it comes to attribute names, it is often necessary to perform case-insensitive operations on dictionary keys.
ldap.modlist: Utility functions for creating modification records (for performing the LDAP modify operation) are in this package.
ldap.filter: This module provides a couple of utility functions for creating LDAP search filters.
ldap.sasl: Python-LDAP's SASL support is partially contained in this package. It is not documented in the online documentation, but there are plenty of notes in the doc strings in this module.
ldap.schema: This module contains classes that describe the subschema subentry records. It can be used to access schema information.
ldapurl: This module provides a class for generating and parsing LDAP URLs.
ldif: This module is used to parse or write LDIF-formatted LDAP records.

Most of the commonly used LDAP features are in the ldap module, and we will be focused mainly on using that. Since many of the submodules have only a couple of functions, we will use them in passing but treat them as separate objects of discussion.

A Note on the Python Examples

The Python interpreter (python) can be run interactively. Running Python in an interactive mode can be very useful for discovery and debugging. Further, since it prints useful information directly to the console, it can be useful for demonstration purposes. In many of the examples below, the code is shown as it would be entered in the interactive shell. Here is an example:

>>> h = "Hello World"
>>> h
'Hello World'
>>> print h
Hello World

Lines that begin with >>> and ... are interpreter prompts (similar to $ in shell). Examples with the >>> are run in the interpreter interactively. Some code, however, will be typed into a file (as usual). This code will not have lines beginning with the interpreter prompt. They tend to look more like this:

h = "Hello World"
print h

Most of the time, features are introduced using the interpreter, but lengthier examples are done in the form of a Python script. Where it might be confusing, I will explicitly say in the text which of the two methods I am using.

Connecting and Binding to the Directory

Now that we have the library installed, we are ready to use the API. The Python-LDAP API connects and binds in two stages. Initializing the LDAP system is done with the ldap.initialize() function. The initialize() method returns an LDAPObject object, which contains methods for performing LDAP operations and retrieving information about the LDAP connection and transactions. A basic initialization is done like this:

>>> import ldap
>>> con = ldap.initialize('ldap://localhost')

The first line of this example imports the ldap module, which contains the initialize() method as well as the LDAPObject that we will make frequent use of. The second line initializes the LDAP code, and returns an LDAPObject that we will use to connect to the server. The initialize() function takes a simple LDAP URL (protocol://host:port) as a parameter. Sometimes, you may prefer to pass in simply host and port information.
This can be done with the connect(host, port) function, that also returns an LDAPObject object. In addition, if you need to check or set any LDAP options, you should use the get_option() and set_option() functions before binding. For instance, we can set the connection to require a TLS certificate by setting the OPT_X_TLS_DEMAND option:

>>> con.get_option(ldap.OPT_X_TLS_DEMAND)
0
>>> con.set_option(ldap.OPT_X_TLS_DEMAND, True)
>>> con.get_option(ldap.OPT_X_TLS_DEMAND)
1

A Safe Connection

In most production environments, security is a major concern. As we have seen in previous chapters, one major component of security in network-based LDAP services is the use of SSL/TLS-based connections. There are two ways to get transport-layer security with the Python-LDAP module. The first is to connect to the LDAPS (LDAP over SSL) port. This is done by passing the correct parameter to the initialize() function. Instead of using the ldap:// protocol, which will make an unverified unencrypted connection to port 389, use an ldaps:// protocol, which will make an SSL connection to port 636 (you can specify an alternate port by appending a colon (:) and then the port number to the end of the URL). Or, instead of using LDAPS, you can perform a Start TLS operation before binding to the server:

>>> import ldap
>>> con = ldap.initialize('ldap://localhost')
>>> con.start_tls_s()

Note that while the call to ldap.initialize() does not actually open a connection, the call to ldap.start_tls_s() does create a connection.

Exceptions

Connecting to an LDAP server may result in the raising of an exception, so in production code, it is best to wrap the connection attempt inside of a try/except block. Here is a fragment of a script:

#!/usr/bin/env python
import ldap, sys

server = 'ldap://localhost'
l = ldap.initialize(server)

try:
    l.start_tls_s()
except ldap.LDAPError, e:
    print e.message['info']
    if type(e.message) == dict and e.message.has_key('desc'):
        print e.message['desc']
    else:
        print e
    sys.exit()

In the case above, if the start_tls_s() method results in an error, it will be caught. The except clause checks if the returned message is a dict (which it should always be), and also checks if it has the description ('desc') field. If so, it prints the description. Otherwise, it prints the entire message. There are a few dozen exceptions that the Python-LDAP library might raise, but all of them are subclasses of the LDAPError class, and can be caught by the line:

except ldap.LDAPError, e:

Within an LDAPError object, there is a dictionary, called message, which contains the 'info' and 'desc' fields. The 'info' field contains the information returned from the server, and the 'desc' field contains a description of the error. In general, it is best to use try/except blocks around LDAP operations in order to catch any errors that might occur during processing.

Binding

Once we have an LDAPObject instance, we can bind to the LDAP directory. The Python-LDAP API supports both simple and SASL binding methods, and there are five different bind methods:

bind(): Takes three required parameters: a DN, a password (or credential, for SASL), and a string indicating what type of bind method to use. Currently, only ldap.AUTH_SIMPLE is supported. This is asynchronous. Example: con.bind(dn, pw, ldap.AUTH_SIMPLE)
bind_s(): This one is the same as above, but it is synchronous, and returns information about the status of the bind.
simple_bind(): This performs a simple bind. This has two optional parameters: DN and password. If no parameter is specified, this will bind as anonymous. This is asynchronous.
simple_bind_s(): This is the synchronous version of the above.
sasl_interactive_bind_s(): This performs an SASL bind, and it takes two parameters: an SASL identifier and an SASL authentication string.

First, for many Python LDAP functions, including almost all of the LDAP operations, there are both synchronous and asynchronous versions. Synchronous versions, which will block until the server returns a result, have method names that end with _s. The other operations – those that do not end with _s – are asynchronous. An asynchronous message will begin an operation, and then return control to the program. The operation will continue in the background. It is the responsibility of the program to periodically check on the operation to see if it has been completed.

Since they wait to return any results until the operation has been completed, synchronous methods will often have different return values than their asynchronous counterparts. Synchronous methods may return the results obtained from the server, or they may have void returns. Asynchronous methods, on the other hand, will always return a message identifier. This identifier can be used to access the results of the operation.

Here's an example of the different results for the two different forms of simple bind. First, the synchronous bind:

>>> dn = "uid=matt,ou=users,dc=example,dc=com"
>>> pw = "secret"
>>> con.simple_bind_s( dn, pw )
(97, [])
>>>

Notice that this method returns a tuple. Now, look at the asynchronous version:

>>> con.simple_bind( dn, pw )
8
>>> con.result(8)
(97, [])

In this case, the simple_bind() method returned 8 – the message identification number for the result. We can use the result() method to fetch the resulting information. The result() method returns a two-item tuple, where the first item is the status code (97 means success), and the second is a list of messages from the server. In this case, the list is empty.

Notes on Getting Results

There are two noteworthy caveats about fetching results. First, a particular result can only be fetched once. You cannot call result() with the same message ID multiple times. Second, you can execute multiple asynchronous operations without checking the results. The consequence of doing this is that all of the results will be stored until they are fetched. This consumes memory, and can lead to confusing results if result() or result( ldap.RES_ANY ) is called.

Later in this chapter, we will see more sophisticated uses of synchronous and asynchronous methods, but for now we will continue looking at methods of binding. The bind() and bind_s() methods work the same way, but they require a third parameter, specifying which sort of authentication mechanism to use. Unfortunately, at the time of this writing, only the AUTH_SIMPLE form of binding (plain old simple bind) is supported by this mechanism:

>>> con.bind_s( dn, pw, ldap.AUTH_SIMPLE )
(97, [])

This performs a simple bind to the server.

Exceptions

A bind can fail for a number of reasons, the most common being that the connection failed (the CONNECT_ERROR exception) or authentication failed (INVALID_CREDENTIALS). In production code, it is a good idea to check for these exceptions using try/except blocks.
By checking for them separately, you can distinguish between, say, authentication failures and other, more serious failures:

l = ldap.initialize(server)
try:
    #l.start_tls_s()
    l.bind_s(user_dn, user_pw)
except ldap.INVALID_CREDENTIALS:
    print "Your username or password is incorrect."
    sys.exit()
except ldap.LDAPError, e:
    if type(e.message) == dict and e.message.has_key('desc'):
        print e.message['desc']
    else:
        print e
    sys.exit()

In this case, if the failure is due to the user entering the wrong DN or password, a message to that effect is printed. Otherwise, the error description provided by the LDAP library is printed.

SASL Interactive Binds

SASL is a robust authentication mechanism, but the flexibility and adaptability of SASL comes at the cost of additional complexity. This additional complexity is evident in the Python-LDAP module. SASL binding is implemented differently than the other bind methods. First, there is no asynchronous version of the SASL bind method (not all thread safety issues have been worked out in this module, yet). Since the SASL code is not as stable as the rest of the API, you may want to stick to simple binding (with SSL/TLS protection) rather than rely upon SASL support.

There is only one SASL binding method, sasl_interactive_bind_s(). This method takes two arguments. The first is a DN string. It is almost always left blank, since with SASL, we usually authenticate with some other identifier. The second argument is an sasl object (or a subclass of an sasl object). The sasl object contains a dictionary of information that the SASL subsystem uses to perform authentication. Each different SASL mechanism is implemented as a class that is a subclass of the sasl object. There are a handful of different subclasses that come with the Python-LDAP module, though you can create your own if you need support for a different mechanism.

cram_md5: This class implements the CRAM-MD5 SASL mechanism. A new cram_md5 object can be created with a constructor that passes in the authentication ID, a password, and an optional authorization ID.
digest_md5: This implements the DIGEST-MD5 SASL mechanism. Like cram_md5(), this object can be constructed with an authentication ID, a password, and an optional authorization ID.
gssapi: This implements the GSSAPI mechanism, and its constructor has only the optional authorization ID. It is used to perform Kerberos V authentication.
external: This implements the EXTERNAL SASL mechanism, that uses an underlying transport security mechanism (like SSL/TLS). Its constructor only takes the optional authorization ID.

Our LDAP server is configured to allow DIGEST-MD5 SASL connections, so we will walk through an example of performing this sort of SASL authentication.

>>> import ldap
>>> import ldap.sasl
>>> user_name = "matt"
>>> pw = "secret"
>>>
>>> con = ldap.initialize("ldap://localhost")
>>> auth_tokens = ldap.sasl.digest_md5( user_name, pw )
>>>
>>> con.sasl_interactive_bind_s( "", auth_tokens )
0

To begin with, we import the ldap and ldap.sasl packages, and we store the user name and password information in a couple of variables. After initializing a connection, we need to create a new sasl object – one that will contain the information necessary to perform DIGEST-MD5 authentication. We do this by constructing a new digest_md5 object:

>>> auth_tokens = ldap.sasl.digest_md5( user_name, pw )

Now, auth_tokens points to our new SASL object. Next, we need to bind.
This is done with the sasl_interactive_bind_s() method of the LDAPObject:

>>> con.sasl_interactive_bind_s( "", auth_tokens )

If a SASL interactive bind is successful, then this method will return an integer. Otherwise, an INVALID_CREDENTIALS exception will be raised:

>>> auth_tokens = ldap.sasl.digest_md5( "foo", pw )
>>> try:
...     con.sasl_interactive_bind_s( "", auth_tokens )
... except ldap.INVALID_CREDENTIALS, e:
...     print e
...
{'info': 'SASL(-13): user not found: no secret in database', 'desc': 'Invalid credentials'}

In this case, the user foo was not found in the SASL DB, and the SASL subsystem returned an error.
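Pulling together the steps covered in this part, a minimal end-to-end script might look like the following sketch. It only uses the calls shown above (initialize, start_tls_s, simple_bind_s and the exception classes); the server URL, DN, and password are placeholders for your own directory:

#!/usr/bin/env python
# A minimal sketch combining the steps in this part: initialize, optionally
# start TLS, then perform a simple bind with error handling.
# The server URL, DN, and password below are placeholders for your own directory.
import ldap, sys

server = 'ldap://localhost'
dn = 'uid=matt,ou=users,dc=example,dc=com'
pw = 'secret'

con = ldap.initialize(server)
try:
    con.start_tls_s()            # omit this line if you connect with an ldaps:// URL instead
    con.simple_bind_s(dn, pw)
    print "Bound successfully as %s" % dn
except ldap.INVALID_CREDENTIALS:
    print "Your username or password is incorrect."
    sys.exit(1)
except ldap.LDAPError, e:
    if type(e.message) == dict and e.message.has_key('desc'):
        print e.message['desc']
    else:
        print e
    sys.exit(1)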
Read more

article-image-chaos-engineering-managing-complexity-by-breaking-things
Richard Gall
20 Apr 2018
7 min read
Save for later

Chaos Engineering: managing complexity by breaking things

Richard Gall
20 Apr 2018
7 min read
Chaos Engineering is based on a fundamental assertion about software infrastructure today: that it is inherently chaotic. Or, to be more specific, it is chaotic because it is complex. Whereas software infrastructure used to be centralized, owned and licensed by large enterprise vendors, today much of the software that comprises infrastructure is open source. This is where we get back to chaos - because software infrastructure is comprised of many different parts, the way these parts interact can be unpredictable. Chaos Engineering is an attempt to acknowledge that fact and develop software accordingly.

Who invented Chaos Engineering?

Chaos Engineering began at Netflix. That makes sense when you consider the complexity of the Netflix technology stack and the way the company has scaled over the last 5 years or so. It built a number of tools to help adopt this chaos-first approach, the most prominent being Chaos Monkey. First launched in 2011 and open-sourced in 2012, Chaos Monkey was a tool that randomly selects instances in production and pulls them down; a little bit like monkeys pulling off your windscreen wipers in a safari park. However, Chaos Monkey became part of a wider suite of tools - called the Simian Army - that were built by Netflix to cause chaos in different parts of its infrastructure. Here are the other two components used to simulate chaos:

Chaos Gorilla causes big trouble by pulling down an entire AWS availability zone
Latency Monkey delays communication, essentially simulating poor network performance

From that point Chaos Engineering grew. A number of large Silicon Valley organizations have adopted similar approaches. For example, Facebook's Project Storm simulates data center failures on a huge scale, while Uber uses a tool called uDestroy. Slack has recently spoken in detail on the importance of stress testing their software too; the company is looking to build an engineering team simply to perform Chaos Engineering and improve Slack's reliability.

One of the most interesting figures in Chaos Engineering is a man called Kolton Andrus. Andrus used to work at Amazon and Google, but today he is the CEO and founder of Gremlin, a startup that "helps engineers build resilient systems". Essentially, Andrus helped to develop the concept of Chaos Engineering while he was working at Netflix. Gremlin is his vehicle that is making it accessible to others.

Chaos Engineering in practice

Now the conceptual stuff is out of the way, here's how chaos engineering works. It's actually quite straightforward: Chaos Engineering simulates all sorts of unpredictable situations and scenarios in order to see how the system responds. It's effectively a form of stress testing. As we've seen, over the past few years companies have built their own tools to allow them to stress test their infrastructure. But Gremlin is taking the approach of offering this as a service. Its product is described as 'resiliency-as-a-service': a whole library of 'attacks' which can replicate different types of outages within a system. These are what it calls 'chaos experiments', which allow you to 'identify weak points in your system and fix them before they become a problem'.

In this sense, Chaos Engineering is a bit like taking the principles of penetration testing and applying them to software testing more broadly. By simulating everything that could possibly go wrong it allows you to make much better optimization decisions. The principles of Chaos Engineering are documented here.
This is effectively its 'manifesto'. There's a lot in there worth reading, but here are the five principles that any sort of testing or experimentation should aspire to:

Base your testing hypothesis on steady state behavior. Consider your infrastructure holistically; making individual parts work is important, but it is not the priority.
Simulate a variety of real-world events. These could be hardware or software failures, or simply external changes like spikes in traffic. What's important is that they're all unpredictable.
Test in production. Your tests should be authentic.
Automate! Testing can be laborious and require a lot of manual work. Make use of automation tools to run lots of different tests without taking up too much of your time.
Don't cause unnecessary pain. While it's important that your stress tests are authentic, the impact must be contained and minimized by the engineer.

Why Chaos Engineering now?

Chaos Engineering isn't particularly new. As you've seen, Netflix has been doing it since 2011. But it does feel more urgent and relevant today. That's because the complexity of the software infrastructure behind many of the biggest Silicon Valley companies is now mainstream. It's normal. Cloud isn't an exotic buzzword any more - it's a reality (a reality that often has failures). Microservices are common - they're a commonsense way of building better applications and websites.

Alongside this increased complexity, there is also a growing awareness of how much software outages can cost businesses. In a white paper, Gremlin makes a big deal out of how much money is lost due to outages. It cites British Airways' system failure in summer 2017, which left passengers stranded all over the world and was estimated to have cost BA $135 million. It also refers to the Amazon S3 outage in March 2017, which is believed to have cost Amazon's customers $150 million.

So - outages cost money. Yes, it's marketing spiel from Gremlin, but it's also true. It doesn't take a genius to work out that if your eCommerce site is down for an hour, you're going to lose a lot of money. Because software performance is so tied up with business performance, it feels incredibly fragile. That's why Chaos Engineering is perhaps more important and popular than ever. It's a way of countering that fragility.

The key challenges of Chaos Engineering

Chaos Engineering poses many challenges to software engineering teams. First and foremost, it requires a big cultural change. If you're intent on breaking everything, there are no rules about how things should work or what you're trying to build. Instead, you're looking for the best way to build software that performs for the user.

More practically, Chaos Engineering isn't that easy to do in a cost-effective manner. Everything Gremlin details in its white paper is very much true - of course outages cost a hell of a lot. But creative destruction and experimentation can feel like an expensive route through software projects. It's not hard to see how it might appear self-indulgent, especially to a company or organization where software isn't properly understood. And more to the point, how often do businesses actually do the smart thing when they're building software? Long-term projects are always difficult. So much software evolves pragmatically - often for the worse. Adding in an extra layer of experimentation and detailed testing is a weird mix of bacchanalian and hyper-organized, something that many organizations just couldn't process or properly understand.
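To make the idea of a chaos experiment concrete, here is a deliberately tiny Node.js sketch of the pattern described above: state a steady-state hypothesis, inject a random failure, and check whether the hypothesis still holds. Everything in it (the service list, the failure injection, the hypothesis check) is illustrative only; it is not based on Chaos Monkey, Gremlin, or any other real tool.

// Purely illustrative chaos experiment; none of these names come from a real tool.
const services = [
  { name: 'web-1', healthy: true },
  { name: 'web-2', healthy: true },
  { name: 'worker-1', healthy: true },
];

// Inject failure: pick a random instance and simulate pulling it down.
function injectFailure(pool) {
  const victim = pool[Math.floor(Math.random() * pool.length)];
  victim.healthy = false;
  console.log(`Chaos experiment: terminated ${victim.name}`);
}

// Steady-state hypothesis: at least one web node keeps serving requests.
function hypothesisHolds(pool) {
  return pool.some((s) => s.name.startsWith('web') && s.healthy);
}

injectFailure(services);
console.log('Steady-state hypothesis holds:', hypothesisHolds(services));

In a real system the "failure" would be an actual terminated instance or degraded network link, and the hypothesis would be checked against production metrics rather than an in-memory flag.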
Chaos Engineering and the future of software development

Chaos Engineering certainly looks like the future of software development. The only question is whether services like those provided by Gremlin will take off. To understand the true value of stress testing your infrastructure, you need at least a modicum of awareness of how complex that infrastructure is. Indeed, you probably need to have a conversation about which services and dependencies are most business critical - or rather, which ones most impact the user. That's something this TechCrunch piece addresses: "Testing can... be very political. Finding the points of failure in a system might force deep conversations about a particular software architecture and its robustness in the face of tough situations. A particular company might be deeply invested in a specific technical roadmap (e.g. microservices) that chaos engineering tests show is not as resilient to failures as originally predicted."

This means there is going to be a question mark over the extent to which Chaos Engineering ever really enters the mainstream. How many businesses want to have these conversations? It's not just about the inclination - it's also about the time and money. Chaos Engineering is an innovative approach that really calls people's bluff when they talk about innovation. It asks difficult questions about how and why you innovate: do you do new things because you think you should? Is this new thing going to be good for the business? And how well will it work for users? Of course these questions are vital when you're building software. But they rarely make building software easier.

Ex-Amazon employee hacks Capital One's firewall to access its Amazon S3 database; 100m US and 6m Canadian users affected

Savia Lobo
30 Jul 2019
8 min read
Update: On 28th August, an indictment was filed in a US federal district court, which mentioned that Thompson allegedly hacked and stole information from an additional 30 AWS-hosted organizations and will face computer abuse charges.

Capital One Financial Corp., one of the largest banks in the United States, has been subject to a massive data breach affecting 100 million customers in the U.S. and an additional 6 million in Canada. Capital One said the hacker exploited a configuration vulnerability in its firewall that allowed access to the data. In its official statement released yesterday, Capital One revealed that on July 19 it determined there had been "unauthorized access by an outside individual who obtained certain types of personal information relating to people who had applied for its credit card products and to Capital One credit card customers."

Paige A. Thompson, 33, the alleged hacker who broke into Capital One's server, was arrested yesterday and appeared in federal court in Seattle. She is a former employee of Amazon's cloud service (AWS), Amazon confirms.

The Capital One hacker, an ex-AWS employee, "left a trail online for investigators to follow"

FBI Special Agent Joel Martini wrote in a criminal complaint filed on Monday that a "GitHub account belonging to Thompson showed that, earlier this year, someone exploited a firewall vulnerability in Capital One's network that allowed an attacker to execute a series of commands on the bank's servers", according to Ars Technica. IP addresses and other evidence ultimately showed that Thompson was the person who exploited the vulnerability and posted the data to GitHub, Martini said. "Thompson allegedly used a VPN from IPredator and Tor in an attempt to cover her tracks. At the same time, Martini said that much of the evidence tying her to the intrusion came directly from things she posted to social media or put in direct messages", Ars Technica reports.

On July 17, a tipster wrote to a Capital One security hotline, warning that some of the bank's data appeared to have been "leaked," the criminal complaint said. According to The New York Times, Thompson "left a trail online for investigators to follow as she boasted about the hacking, according to court documents in Seattle". She is listed as the organizer of a group on Meetup, a social network, called Seattle Warez Kiddies, a gathering for "anybody with an appreciation for distributed systems, programming, hacking, cracking." The F.B.I. noticed her activity on Meetup and used it to trace her other online activities, eventually linking her to posts boasting about the data theft on Twitter and the Slack messaging service. "I've basically strapped myself with a bomb vest, dropping capital ones dox and admitting it," Thompson posted on Slack, prosecutors say.

Highly sensitive financial and social insurance data compromised

The stolen data was stored in Amazon S3. "An AWS spokesman confirmed that the company's cloud had stored the Capital One data that was stolen, and said it wasn't accessed through a breach or vulnerability in AWS systems", Bloomberg reports. Capital One said the largest category of information accessed was information on consumers and small businesses as of the time they applied for one of its credit card products, from 2005 through early 2019.
The breached data included personal information that Capital One routinely collects at the time it receives credit card applications, including names, addresses, zip codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income. The hacker also obtained customer status data such as credit scores, credit limits, balances, payment history, and contact information, as well as fragments of transaction data from a total of 23 days during 2016, 2017, and 2018. For Canadian credit card customers, approximately 1 million Social Insurance Numbers were compromised in this incident. About 140,000 Social Security numbers of Capital One's credit card customers and about 80,000 linked bank account numbers of its secured credit card customers were also compromised.

Richard D. Fairbank, Capital One's chief executive officer, said in a statement, "I am deeply sorry for what has happened. I sincerely apologize for the understandable worry this incident must be causing those affected."

Thompson is charged with computer fraud and faces a maximum penalty of five years in prison and a $250,000 fine. U.S. Magistrate Judge Mary Alice Theiler ordered Thompson to be held. A bail hearing is set for August 1. Capital One said it "will notify affected individuals through a variety of channels. We will make free credit monitoring and identity protection available to everyone affected".

Capital One's justification of "Facts" is unsatisfactory

Users are very skeptical about trusting Capital One with their data going ahead. A user on Hacker News writes, "Obviously this person committed a criminal act, however, Capital One should also shoulder responsibility for not securing customer data. I have a feeling we'd be waiting a long time for accountability on C1's part."

Security experts are also surprised by Capital One's statement of "facts" suggesting that no Social Security numbers were breached, and say this cannot be true.

https://twitter.com/zackwhittaker/status/1156027826912428032
https://twitter.com/DavidAns/status/1156014432511643649
https://twitter.com/GossiTheDog/status/1156232048975273986

There have been other data breaches in the past where companies either agreed on a settlement to help the affected customers, like Equifax, or were levied with huge fines, like Marriott International and British Airways. The Equifax data breach, disclosed on September 7, 2017, affected 143 million U.S. consumers and resulted in a global settlement that includes up to $425 million to help people affected by the breach, approximately $125 per affected victim, should they apply for compensation. The settlement was reached with the Federal Trade Commission, the Consumer Financial Protection Bureau, and 50 U.S. states and territories. The Marriott data breach, revealed on November 19, 2018, occurred in Marriott's Starwood guest database and compromised the data of 383 million guests. Recently, the Information Commissioner's Office (ICO) in the UK announced its plans to impose a fine of more than £99 million ($124 million) under GDPR. The British Airways data breach compromised the personal identification information of over 500,000 customers and is believed to have begun in June 2018. Earlier this month, the ICO also announced it will fine British Airways more than £183 million. As the victim of a major data breach at one of the largest banks, Capital One could feel the pinch from regulators soon.
What sets this case apart from the above breaches is that the affected customers are from the US and Canada, not the EU. In the absence of regulatory action by the ICO or the EU Commission, it remains to be seen whether regulators in the US and Canada will rise to the challenge. Also, now that the alleged hacker has been arrested, does this mean Capital One could slip by without paying any significant fine? Only time will tell whether Capital One will pay a huge sum to the regulators for not being watchful of its customers' data in two different countries. If the Equifax-FTC case and the Facebook-FTC proceedings are any sign of things to come, Capital One does not have much to be concerned about. To know more about this news in detail, read Capital One's official announcement.

Thompson faces additional charges for hacking into the AWS accounts of about 30 organizations

On 28th August, an indictment was filed in a US federal district court, in which the investigators mentioned they have identified most of the companies and institutions allegedly hit by Thompson. The prosecutors said Thompson wrote software that scanned for customer accounts hosted by a "cloud computing company," which is believed to be her former employer, AWS (Amazon Web Services). "It is claimed she specifically looked for accounts that suffered a common security hole – specifically, a particular web application firewall misconfiguration – and exploited this weakness to hack into the AWS accounts of some 30 organizations, and siphon their data to her personal server. She also used the hacked cloud-hosted systems to mine cryptocurrency for herself, it is alleged," The Register reports.

"The object of the scheme was to exploit the fact that certain customers of the cloud computing company had misconfigured web application firewalls on the servers that they rented or contracted from the cloud computing company," the indictment reads. The indictment further reads, "The object was to use that misconfiguration in order to obtain credentials for accounts of those customers that had permission to view and copy data stored by the customers on their cloud computing company servers. The object then was to use those stolen credentials in order to access and copy other data stored by the customers." Thus, she also faces a computer abuse charge over the 30 other AWS-hosted organizations she allegedly hacked and stole information from.

Related reading:

Facebook fails to fend off a lawsuit over a data breach of nearly 30 million users
US Customs and Border Protection reveals data breach that exposed thousands of traveler photos and license plate images
Over 19 years of ANU (Australian National University) students' and staff data breached

Overview of Unreal Engine 4

Packt
18 Sep 2015
2 min read
In this article by Katax Emperor and Devin Sherry, authors of the book Unreal Engine Physics Essentials, we will discuss and evaluate basic 3D physics and mathematics concepts in an effort to gain a basic understanding of Unreal Engine 4 physics and real-world physics. To start with, we will discuss the units of measurement, what they are, and how they are used in Unreal Engine 4. In addition, we will cover the following topics:

The scientific notation
2D and 3D coordinate systems
Scalars and vectors
Newton's laws or Newtonian physics concepts
Forces and energy

For the purpose of this chapter, we will want to open Unreal Engine 4 and create a simple project using the First Person template by following these steps. (For more resources related to this topic, see here.)

Launching Unreal Engine 4

When we first open Unreal Engine 4, we will see the Unreal Engine Launcher, which contains a News tab, a Learn tab, a Marketplace tab, and a Library tab. As the first title suggests, the News tab provides you with the latest news from Epic Games, ranging from Marketplace Content releases to Unreal Dev Grant winners, Twitch Stream Recaps, and so on. The Learn tab provides you with numerous resources to learn more about Unreal Engine 4, such as Written Documentation, Video Tutorials, Community Wikis, Sample Game Projects, and Community Contributions. The Marketplace tab allows you to purchase content, such as FX, Weapons Packs, Blueprint Scripts, Environmental Assets, and so on, from the community and Epic Games. Lastly, the Library tab is where you can download the newest versions of Unreal Engine 4, open previously created projects, and manage your project files.

Let's start by first launching the Unreal Engine Launcher and choosing Launch from the Library tab, as seen in the following image:

For the sake of consistency, we will use the latest version of the editor. At the time of writing this book, the version is 4.7.6. Next, we will select the New Project tab that appears at the top of the window, select the First Person project template with Starter Content, and name the project Unreal_PhyProject:

Summary

In this article, we had an overview of Unreal Engine 4 and learned how to launch it.

Resources for Article:

Further resources on this subject:

Exploring and Interacting with Materials using Blueprints [article]
Unreal Development Toolkit: Level Design HQ [article]
Configuration and Handy Tweaks for UDK [article]

Frontend development with Bootstrap 4

Packt
06 Oct 2016
19 min read
In this article, Bass Jobsen, author of the book Bootstrap 4 Site Blueprints, explains that Bootstrap's popularity as a frontend web development framework is easy to understand. It provides a palette of user-friendly, cross-browser-tested solutions for the most standard UI conventions. Its ready-made, community-tested combination of HTML markup, CSS styles, and JavaScript plugins greatly speeds up the task of developing a frontend web interface, and it yields a pleasing result out of the gate. With the fundamental elements in place, we can customize the design on top of a solid foundation. (For more resources related to this topic, see here.)

However, not all that is popular, efficient, and effective is good. Too often, a handy tool can generate and reinforce bad habits; not so with Bootstrap, at least not necessarily so. Those who have followed it from the beginning know that its first release and early updates occasionally favored pragmatic efficiency over best practices. The fact is that some best practices, including semantic markup, mobile-first design, and performance-optimized assets, require extra time and effort for implementation.

Quantity and quality

If handled well, I feel that Bootstrap is a boon for the web development community in terms of quality and efficiency. Since developers are attracted to the web development framework, they become part of a coding community that draws them increasingly toward current best practices. From the start, Bootstrap has encouraged the implementation of tried, tested, and future-friendly CSS solutions, from Nicolas Gallagher's CSS normalize to CSS3's displacement of image-heavy design elements. It has also supported (if not always modeled) HTML5 semantic markup.

Improving with age

With the release of v2.0, Bootstrap took responsive design into the mainstream, ensuring that its interface elements could travel well across devices, from desktops to tablets to handhelds. With the v3.0 release, Bootstrap stepped up its game again by providing the following features:

The responsive grid was now mobile-first friendly
Icons now utilize web fonts and, thus, are mobile- and retina-friendly
With the drop of support for IE7, markup and CSS conventions became leaner and more efficient
Since version 3.2, autoprefixer was required to build Bootstrap

This article is about the v4.0 release. This release contains many improvements and also some new components, while some other components and plugins are dropped. In the following overview, you will find the most important improvements and changes in Bootstrap 4:

Less (Leaner CSS) has been replaced with Sass.
CSS code has been refactored to avoid tag and child selectors.
There is an improved grid system with a new grid tier to better target mobile devices.
The navbar has been replaced.
It has opt-in flexbox support.
It has a new HTML reset module called Reboot. Reboot extends Nicolas Gallagher's CSS normalize and handles the box-sizing: border-box declarations.
jQuery plugins are now written in ES6 and come with UMD support.
There is improved auto-placement of tooltips and popovers, thanks to the help of a library called Tether.
It has dropped support for Internet Explorer 8, which enables us to swap pixels for rem and em units.
It has added the Card component, which replaces the wells, thumbnails, and panels of earlier versions.
It has dropped the icons in the font format from the Glyphicon Halflings set.
The Affix plugin is dropped; it can be replaced with the position: sticky polyfill (https://github.com/filamentgroup/fixed-sticky).

The power of Sass

When working with Bootstrap, there is the power of Sass to consider. Sass is a preprocessor for CSS. It extends the CSS syntax with variables, mixins, and functions and helps you write your CSS code in a DRY (Don't Repeat Yourself) way. Sass was originally written in Ruby. Nowadays, a fast port of Sass written in C++, called libSass, is available. Bootstrap uses the modern SCSS syntax for Sass instead of the older indented syntax.

Using Bootstrap CLI

You will be introduced to Bootstrap CLI. Instead of using Bootstrap's bundled build process, you can also start a new project by running the Bootstrap CLI. Bootstrap CLI is the command-line interface for Bootstrap 4. It includes some built-in example projects, but you can also use it to employ and deliver your own projects. You'll need the following software installed to get started with Bootstrap CLI:

Node.js 0.12+: Use the installer provided on the Node.js website, which can be found at http://nodejs.org/
With Node installed, run [sudo] npm install -g grunt bower
Git: Use the installer for your OS
Windows users can also try Git

Gulp is another task runner for the Node.js system. Note that if you prefer Gulp over Grunt, you should install gulp instead of grunt with the following command:

[sudo] npm install -g gulp bower

The Bootstrap CLI is installed through npm by running the following command in your console:

npm install -g bootstrap-cli

This will add the bootstrap command to your system.

Preparing a new Bootstrap project

After installing the Bootstrap CLI, you can create a new Bootstrap project by running the following command in your console:

bootstrap new --template empty-bootstrap-project-gulp

Enter the name of your project for the question "What's the project called? (no spaces)". A new folder with the project name will be created. After the setup process, the directory and file structure of your new project folder should look as shown in the following figure:

The project folder also contains a Gulpfile.js file. Now, you can run the bootstrap watch command in your console and start editing the html/pages/index.html file. The HTML templates are compiled with Panini. Panini is a flat file compiler that helps you to create HTML pages with consistent layouts and reusable partials with ease. You can read more about Panini at http://foundation.zurb.com/sites/docs/panini.html.

Responsive features and breakpoints

Bootstrap has four breakpoints at 544, 768, 992, and 1200 pixels by default. At these breakpoints, your design may adapt to and target specific devices and viewport sizes. Bootstrap's mobile-first and responsive grid(s) also use these breakpoints. You can read more about the grids later on. You can use these breakpoints to specify and name the viewport ranges. The extra small (xs) range is for portrait phones with a viewport smaller than 544 pixels, the small (sm) range is for landscape phones with viewports smaller than 768 pixels, the medium (md) range is for tablets with viewports smaller than 992 pixels, the large (lg) range is for desktops with viewports wider than 992 pixels, and finally the extra-large (xl) range is for desktops with a viewport wider than 1200 pixels. The breakpoints are specified in pixel values, as the viewport pixel size does not depend on the font size and modern browsers have already fixed some zooming bugs.
Some people claim that em values should be preferred. To learn more about this, check out the following link: http://zellwk.com/blog/media-query-units/. Those who still prefer em values over pixel values can simply change the $grid-breakpoints variable declaration in the scss/includes/_variables.scss file. To use em values for media queries, the SCSS code should be as follows:

$grid-breakpoints: (
  // Extra small screen / phone
  xs: 0,
  // Small screen / phone
  sm: 34em, // 544px
  // Medium screen / tablet
  md: 48em, // 768px
  // Large screen / desktop
  lg: 62em, // 992px
  // Extra large screen / wide desktop
  xl: 75em // 1200px
);

Note that you also have to change the $container-max-widths variable declaration. You should change or modify Bootstrap's variables in the local scss/includes/_variables.scss file, as explained at http://bassjobsen.weblogs.fm/preserve_settings_and_customizations_when_updating_bootstrap/. This will ensure that your changes are not overwritten when you update Bootstrap.

The new Reboot module and Normalize.css

When talking about cascade in CSS, there will, no doubt, be a mention of the browser default settings getting a higher precedence than the author's preferred styling. In other words, anything that is not defined by the author will be assigned a default styling set by the browser. The default styling may differ for each browser, and this behavior plays a major role in many cross-browser issues. To prevent these sorts of problems, you can perform a CSS reset. CSS or HTML resets set a default author style for commonly used HTML elements to make sure that browser default styles do not mess up your pages or render your HTML elements differently in other browsers.

Bootstrap uses Normalize.css, written by Nicolas Gallagher. Normalize.css is a modern, HTML5-ready alternative to CSS resets and can be downloaded from http://necolas.github.io/normalize.css/. It lets browsers render all elements more consistently and makes them adhere to modern standards. Together with some other styles, Normalize.css forms the new Reboot module of Bootstrap.

Box-sizing

The Reboot module also sets the global box-sizing value from content-box to border-box. The box-sizing property is the one that sets the CSS box model used for calculating the dimensions of an element. In fact, box-sizing is not new in CSS, but nonetheless, switching your code to box-sizing: border-box will make your work a lot easier. When using the border-box setting, the calculation of the width of an element includes border width and padding. So, changing the border width or padding of an element won't break your layouts.

Predefined CSS classes

Bootstrap ships with predefined CSS classes for everything. You can build a mobile-first responsive grid for your project by only using div elements and the right grid classes. CSS classes for styling other elements and components are also available. Consider the styling of a button in the following HTML code:

<button class="btn btn-warning">Warning!</button>

Now, your button will be as shown in the following screenshot:

You will notice that Bootstrap uses two classes to style a single button. The first is the .btn class, which gives the button the general button layout styles. The second is the .btn-warning class, which sets the custom colors of the button.

Creating a local Sass structure

Before we can start compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory.
In the scss directory, create your main project file called app.scss. Then, create a new subdirectory in the scss directory named includes. Now, copy bootstrap.scss and _variables.scss from the Bootstrap source code in the bower_components directory to the new scss/includes directory, as follows:

cp bower_components/bootstrap/scss/bootstrap.scss scss/includes/_bootstrap.scss
cp bower_components/bootstrap/scss/_variables.scss scss/includes/

You will notice that the bootstrap.scss file has been renamed to _bootstrap.scss, starting with an underscore, and has now become a partial file. Import the files you have copied in the previous step into the app.scss file, as follows:

@import "includes/variables";
@import "includes/bootstrap";

Then, open the scss/includes/_bootstrap.scss file and change the import paths for the Bootstrap partial files so that the original code in the bower_components directory will be imported here. Note that we will set the include path for the Sass compiler to the bower_components directory later on. The @import statements should look as shown in the following SCSS code:

// Core variables and mixins
@import "bootstrap/scss/variables";
@import "bootstrap/scss/mixins";
// Reset and dependencies
@import "bootstrap/scss/normalize";

You're importing all of Bootstrap's SCSS code into your project now. When preparing your code for production, you can consider commenting out the partials that you do not require for your project. Modification of scss/includes/_variables.scss is not required, but you can consider removing the !default declarations because the real default values are set in the original _variables.scss file, which is imported after the local one. Note that the local scss/includes/_variables.scss file does not have to contain a copy of all of Bootstrap's variables. Having them all just makes it easier to modify them for customization; it also ensures that your default values do not change when you are updating Bootstrap.

Setting up your project and requirements

For this project, you'll use the Bootstrap CLI again, as it helps you create a setup for your project comfortably. Bootstrap CLI requires you to have Node.js and Gulp already installed on your system. Now, create a new project by running the following command in your console:

bootstrap new

Enter the name of your project and choose the An empty new Bootstrap project. Powered by Panini, Sass and Gulp. template. Now your project is ready for you to start your design work. However, before you start, let's first go through the introduction to Sass and the strategies for customization.

The power of Sass in your project

Sass is a preprocessor for CSS code and is an extension of CSS3, which adds nested rules, variables, mixins, functions, selector inheritance, and more.

Creating a local Sass structure

Before we can start compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory. In the scss directory, create your main project file and name it app.scss.
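The project's Gulp build takes care of compiling app.scss for you, but it helps to know roughly what that step does. The following is a minimal sketch, not the Gulpfile.js shipped with the template: it assumes the gulp-sass plugin and an output folder named css, and it puts bower_components on the Sass include path so that imports such as @import "bootstrap/scss/variables"; resolve, as described earlier.

// Minimal sketch of a Sass compile task (assumed setup, not the template's actual Gulpfile.js)
var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('sass', function () {
  return gulp.src('scss/app.scss')
    // Resolve @import "bootstrap/scss/..." against the bower_components directory
    .pipe(sass({ includePaths: ['bower_components'] }).on('error', sass.logError))
    .pipe(gulp.dest('css')); // the output folder name is an assumption
});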
Using the CLI and running the code from GitHub

Install the Bootstrap CLI using the following commands in your console:

[sudo] npm install -g gulp bower
npm install bootstrap-cli --global

Then, use the following command to set up a Bootstrap 4 weblog project:

bootstrap new --repo https://github.com/bassjobsen/bootstrap-weblog.git

The following figure shows the end result of your efforts:

Turning our design into a WordPress theme

WordPress is a very popular CMS (Content Management System); it now powers 25 percent of all sites across the web. WordPress is free, open source, and based on PHP. To learn more about WordPress, you can also visit Packt Publishing's WordPress Tech Page at https://www.packtpub.com/tech/wordpress.

Now let's turn our design into a WordPress theme. There are many Bootstrap-based themes that we could choose from. We've taken care to integrate Bootstrap's powerful Sass styles and JavaScript plugins with the best practices found for HTML5. It will be to our advantage to use a theme that does the same. We'll use the JBST4 theme for this exercise. JBST4 is a blank WordPress theme built with Bootstrap 4.

Installing the JBST 4 theme

Let's get started by downloading the JBST theme. Navigate to your wordpress/wp-content/themes/ directory and run the following command in your console:

git clone https://github.com/bassjobsen/jbst-4-sass.git jbst-weblog-theme

Then navigate to the new jbst-weblog-theme directory and run the following commands to confirm whether everything is working:

npm install
gulp

Download from GitHub

You can download the newest and updated version of this theme from GitHub too. You will find it at https://github.com/bassjobsen/jbst-weblog-theme.

JavaScript events of the Carousel plugin

Bootstrap provides custom events for most of the plugins' unique actions. The Carousel plugin fires the slide.bs.carousel (at the beginning of the slide transition) and slid.bs.carousel (at the end of the slide transition) events. You can use these events to add custom JavaScript code. You can, for instance, change the background color of the body on these events by adding the following JavaScript to the js/main.js file:

$('.carousel').on('slide.bs.carousel', function () {
  $('body').css('background-color', '#' + (Math.random() * 0xFFFFFF << 0).toString(16));
});

You will notice that the gulp watch task is not set up for the js/main.js file, so you have to run the gulp or bootstrap watch command manually after you are done with the changes. For more advanced changes to the plugin's behavior, you can overwrite its methods by using, for instance, the following JavaScript code:

!function($) {
  var number = 0;
  var tmp = $.fn.carousel.Constructor.prototype.cycle;
  $.fn.carousel.Constructor.prototype.cycle = function (relatedTarget) {
    // custom JavaScript code here
    number = (number % 4) + 1;
    $('body').css('transform', 'rotate(' + number * 90 + 'deg)');
    tmp.call(this); // call the original function
  };
}(jQuery);

The preceding JavaScript sets the transform CSS property without vendor prefixes. The autoprefixer only prefixes your static CSS code. For full browser compatibility, you should add the vendor prefixes in the JavaScript code yourself. Bootstrap exclusively uses CSS3 for its animations, but Internet Explorer 9 doesn't support the necessary CSS properties.
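The slid.bs.carousel event mentioned above works the same way but fires once the slide transition has finished. As a small illustration (the selector and the logged message are mine, not part of the book's project files), you could log which slide the carousel has settled on:

// Illustrative handler for the slid.bs.carousel event; not taken from the book's js/main.js
$('.carousel').on('slid.bs.carousel', function () {
  var index = $(this).find('.carousel-item.active').index();
  console.log('Carousel settled on slide ' + index);
});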
Adding drop-down menus to our navbar

Bootstrap's JavaScript Dropdown plugin enables you to create drop-down menus with ease. You can also add these drop-down menus to your navbar. Open the html/includes/header.html file in your text editor. You will notice that the Gulp build process uses the Panini HTML compiler to compile our HTML templates into HTML pages. Panini is powered by the Handlebars template language. You can use helpers, iterations, and custom data in your templates. In this example, you'll use the power of Panini to build the navbar items with drop-down menus. First, create an html/data/productgroups.yml file that contains the titles of the navbar items:

- Shoes
- Clothing
- Accessories
- Women
- Men
- Kids
- All Departments

The preceding code is written in the YAML format. YAML is a human-readable data serialization language that takes concepts from programming languages and ideas from XML; you can read more about it at http://yaml.org/. Using the data described in the preceding code, you can use the following HTML and template code to build the navbar items:

<ul class="nav navbar-nav navbar-toggleable-sm collapse" id="collapsiblecontent">
  {{#each productgroups}}
  <li class="nav-item dropdown {{#ifCond this 'Shoes'}}active{{/ifCond}}">
    <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false">
      {{ this }}
    </a>
    <div class="dropdown-menu">
      <a class="dropdown-item" href="#">Action</a>
      <a class="dropdown-item" href="#">Another action</a>
      <a class="dropdown-item" href="#">Something else here</a>
      <div class="dropdown-divider"></div>
      <a class="dropdown-item" href="#">Separated link</a>
    </div>
  </li>
  {{/each}}
</ul>

The preceding code uses an each loop to build the seven navbar items; each item gets the same drop-down menu. The Shoes menu gets the active class. Handlebars, and so Panini, does not support conditional comparisons by default. The built-in if statement can only handle a single value, but you can add a custom helper to enable conditional comparisons. The custom helper, which enables us to use the ifCond statement, can be found in the html/helpers/ifCond.js file. Read my blog post, How to set up Panini for different environments, at http://bassjobsen.weblogs.fm/set-panini-different-environments/, to learn more about Panini and custom helpers. The HTML code for the drop-down menu is in accordance with the code for drop-down menus as described for the Dropdown plugin at http://getbootstrap.com/components/dropdowns/. The navbar collapses for smaller screen sizes, and by default, the drop-down menus look the same on all grids.
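The ifCond.js helper itself is not shown in this excerpt. Panini is assumed to register each file in the html/helpers folder as a Handlebars helper under its filename, so a block helper along the following lines would support the {{#ifCond this 'Shoes'}} comparison used above. Treat it as a sketch rather than the exact code from the book's repository:

// Sketch of an equality block helper for Handlebars/Panini.
// Not the exact contents of html/helpers/ifCond.js from the book's repository.
module.exports = function (a, b, options) {
  if (a === b) {
    return options.fn(this);      // render the block body, e.g. 'active'
  }
  return options.inverse(this);   // render the {{else}} block, if present
};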
Now, you will use your Bootstrap skills to build an Angular 2 app. Angular 2 is the successor of AngularJS. You can read more about Angular 2 at https://angular.io/. It is a toolset for building the framework that is most suited to your application development; it lets you extend the HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop. Angular is maintained by Google and a community of individuals and corporations. I have also published the source for an Angular 2 with Bootstrap 4 starting point on GitHub. You will find it at the following URL: https://github.com/bassjobsen/angular2-bootstrap4-website-builder. You can install it by simply running the following command in your console:

git clone https://github.com/bassjobsen/angular2-bootstrap4-website-builder.git yourproject

Next, navigate to the new folder, then run the following commands and verify that it works:

npm install
npm start

Other tools to deploy Bootstrap 4

A Brunch skeleton using Bootstrap 4 is available at https://github.com/bassjobsen/brunch-bootstrap4. Brunch is a frontend web app build tool that builds, lints, compiles, concatenates, and shrinks your HTML5 apps. Read more about Brunch at the official website, which can be found at http://brunch.io/. You can try Brunch by running the following commands in your console:

npm install -g brunch
brunch new -s https://github.com/bassjobsen/brunch-bootstrap4

Notice that the first command requires administrator rights to run. After installing the tool, you can run the following command to build your project:

brunch build

The preceding command will create a new public/index.html file, after which you can open it in your browser. You'll find that it looks like this:

Yeoman

Yeoman is another build tool. It's a command-line utility that allows the creation of projects utilizing scaffolding templates, called generators. A Yeoman generator that scaffolds out a frontend Bootstrap 4 web app can be found at the following URL: https://github.com/bassjobsen/generator-bootstrap4

You can run the Yeoman Bootstrap 4 generator by running the following commands in your console:

npm install -g yo
npm install -g generator-bootstrap4
yo bootstrap4
grunt serve

Again, note that the first two commands require administrator rights. The grunt serve command runs a local web server at http://localhost:9000. Point your browser to that address and check whether it looks as follows:

Summary

Beyond this, there is a plethora of resources available for pushing further with Bootstrap. The Bootstrap community is an active and exciting one. This is truly an exciting point in the history of frontend web development. Bootstrap has made a mark in history, and for good reason. Check out my GitHub pages at http://github.com/bassjobsen for new projects and updated sources, or ask me a question on Stack Overflow (http://stackoverflow.com/users/1596547/bass-jobsen).

Resources for Article:

Further resources on this subject:

Gearing Up for Bootstrap 4 [article]
Creating a Responsive Magento Theme with Bootstrap 3 [article]
Responsive Visualizations Using D3.js and Bootstrap [article]