How-To Tutorials

React Dashboard and Visualizing Data

Xavier Bruhiere
26 Nov 2015
8 min read
I spent the last six months working on data analytics and machine learning to feed my curiosity and prepare for my new job. It is a challenging mission and I chose to give up for a while on my current web projects to stay focused. Back then, I was coding a dashboard for an automated trading system, powered by an exciting new framework from Facebook : React. In my opinion, Web Components was the way to go and React seemed gentler with my brain than, say, Polymer. One just needed to carefully design components boundaries, properties and states and bam, you got a reusable piece of web to plug anywhere. Beautiful. This is quite a naive way to put it of course but, for an MVP, it actually kind of worked. Fast forward to last week, I was needing a new dashboard to monitor various metrics from my shiny new infrastructure. Specialized requirements kept me away from a full-fledged solution like InfluxDB and Grafana combo, so I naturally starred at my old code. Well, it turned out I did not reuse a single line of code. Since the last time I spent in web development, new tools, frameworks and methodologies had taken over the world : es6 (and transpilers), isomorphic applications, one-way data flow, hot reloading, module bundler, ... Even starter kits are remarkably complex (at least for me) and I got overwhelmed. But those new toys are also truly empowering and I persevered. In this post, we will learn to leverage them, build the simplest dashboard possible and pave the way toward modern, real-time metrics monitoring. Tooling & Motivations I think the points of so much tooling are productivity and complexity management. New single page applications usually involve a significant number of moving parts : front and backend development, data management, scaling, appealing UX, ... Isomorphic webapps with nodejs and es6 try to harmonize this workflow sharing one readable language across the stack. Node already sells the "javascript everywhere" argument but here, it goes even further, with code that can be executed both on the server and in the browser, indifferently. Team work and reusability are improved, as well as SEO (Search Engine optimization) when rendering HTML on server-side. Yet, applications' codebase can turn into a massive mess and that's where Web Components come handy. Providing clear contracts between modules, a developer is able to focus on subpart of the UI with an explicit definition of its parameters and states. This level of abstraction makes the application much more easy to navigate, maintain and reuse. Working with React gives a sense of clarity with components as Javascript objects. Lifecycle and behavior are explicitly detailed by pre-defined hooks, while properties and states are distinct attributes. We still need to glue all of those components and their dependencies together. That's where npm, Webpack and Gulp join the party. Npm is the de facto package manager for nodejs, and more and more for frontend development. What's more, it can run for you scripts and spare you from using a task runner like Gulp. Webpack, meanwhile, bundles pretty much anything thanks to its loaders. Feed it an entrypoint which require your js, jsx, css, whatever ... and it will transform and package them for the browser. Given the steep learning curve of modern full-stack development, I hope you can see the mean of those tools. Last pieces I would like to introduce for our little project are metrics-graphics and react-sparklines (that I won't actually describe but worth noting for our purpose). 
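As an aside, since react-sparklines is mentioned above but not described further, here is a minimal sketch of how it is typically used. The Sparklines and SparklinesLine component names come from that package; the Trend wrapper itself is purely illustrative and not part of the dashboard built below.

```jsx
// Trend.jsx -- hypothetical helper component, not part of the dashboard below
import React from 'react';
import { Sparklines, SparklinesLine } from 'react-sparklines';

export default class Trend extends React.Component {
  render () {
    // `data` is expected to be an array of numbers passed in as a prop
    return (
      <Sparklines data={this.props.data} width={120} height={30}>
        <SparklinesLine color="steelblue" />
      </Sparklines>
    );
  }
}
```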
Both are neat frameworks to visualize data and play nicely with React, as we are going to see now. Graph Component When building components-based interfaces, first things to define are what subpart of the UI those components are. Since we start a spartiate implementation, we are only going to define a Graph. // Graph.jsx // new es6 import syntax import React from 'react'; // graph renderer import MG from 'metrics-graphics'; export default class Graph extends React.Component { // called after the `render` method below componentDidMount () { // use d3 to load data from metrics-graphics samples d3.json('node_modules/metrics-graphics/examples/data/confidence_band.json', function(data) { data = MG.convert.date(data, 'date'); MG.data_graphic({ title: {this.props.title}, data: data, format: 'percentage', width: 600, height: 200, right: 40, target: '#confidence', show_secondary_x_label: false, show_confidence_band: ['l', 'u'], x_extended_ticks: true }); }); } render () { // render the element targeted by the graph return <div id="confidence"></div>; } } This code, a trendy combination of es6 and jsx, defines in the DOM a standalone graph from the json data in confidence_band.json I stole on Mozilla official examples. Now let's actually mount and render the DOM in the main entrypoint of the application (I mentioned above with Webpack). // main.jsx // tell webpack to bundle style along with the javascript import 'metrics-graphics/dist/metricsgraphics.css'; import 'metrics-graphics/examples/css/metricsgraphics-demo.css'; import 'metrics-graphics/examples/css/highlightjs-default.css'; import React from 'react'; import Graph from './components/Graph'; function main() { // it is recommended to not directly render on body var app = document.createElement('div'); document.body.appendChild(app); // key/value pairs are available under `this.props` hash within the component React.render(<Graph title={Keep calm and build a dashboard}/>, app); } main(); Now that we defined in plain javascript the web page, it's time for our tools to take over and actually build it. Build workflow This is mostly a matter of configuration. First, create the following structure. $ tree . ├── app │ ├── components │ │ ├── Graph.jsx │ ├── main.jsx ├── build └── package.json Where package.json is defined like below. { "name": "react-dashboard", "scripts": { "build": "TARGET=build webpack", "dev": "TARGET=dev webpack-dev-server --host 0.0.0.0 --devtool eval-source --progress --colors --hot --inline --history-api-fallback" }, "devDependencies": { "babel-core": "^5.6.18", "babel-loader": "^5.3.2", "css-loader": "^0.15.1", "html-webpack-plugin": "^1.5.2", "node-libs-browser": "^0.5.2", "react-hot-loader": "^1.2.7", "style-loader": "^0.12.3", "webpack": "^1.10.1", "webpack-dev-server": "^1.10.1", "webpack-merge": "^0.1.2" }, "dependencies": { "metrics-graphics": "^2.6.0", "react": "^0.13.3" } } A quick npm install will download every package we need for development and production. Two scripts are even defined to build a static version of the site, or serve a dynamic one that will be updated on file changes detection. This formidable feature becomes essential once tasted. But we have yet to configure Webpack to enjoy it. 
var path = require('path'); var HtmlWebpackPlugin = require('html-webpack-plugin'); var webpack = require('webpack'); var merge = require('webpack-merge'); // discern development server from static build var TARGET = process.env.TARGET; // webpack prefers abolute path var ROOT_PATH = path.resolve(__dirname); // common environments configuration var common = { // input main.js we wrote earlier entry: [path.resolve(ROOT_PATH, 'app/main')], // import requirements with following extensions resolve: { extensions: ['', '.js', '.jsx'] }, // define the single bundle file output by the build output: { path: path.resolve(ROOT_PATH, 'build'), filename: 'bundle.js' }, module: { // also support css loading from main.js loaders: [ { test: /.css$/, loaders: ['style', 'css'] } ] }, plugins: [ // automatically generate a standard index.html to attach on the React app new HtmlWebpackPlugin({ title: 'React Dashboard' }) ] }; // production specific configuration if(TARGET === 'build') { module.exports = merge(common, { module: { // compile es6 jsx to standard es5 loaders: [ { test: /.jsx?$/, loader: 'babel?stage=1', include: path.resolve(ROOT_PATH, 'app') } ] }, // optimize output size plugins: [ new webpack.DefinePlugin({ 'process.env': { // This has effect on the react lib size 'NODE_ENV': JSON.stringify('production') } }), new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } }) ] }); } // development specific configuration if(TARGET === 'dev') { module.exports = merge(common, { module: { // also transpile javascript, but also use react-hot-loader, to automagically update web page on changes loaders: [ { test: /.jsx?$/, loaders: ['react-hot', 'babel?stage=1'], include: path.resolve(ROOT_PATH, 'app'), }, ], }, }); } Webpack configuration can be hard to swallow at first but, given the huge amount of transformations to operate, this style scales very well. Plus, once setup, the development environment becomes remarkably productive. To convince yourself, run webpack-dev-server and reach localhost:8080/assets/bundle.js in your browser. Tweak the title argument in main.jsx, save the file and watch the browser update itself. We are ready to build new components and extend our modular dashboard. Conclusion We condensed in a few paragraphs a lot of what makes the current web ecosystem effervescent. I strongly encourage the reader to deepen its knowledge on those matters and consider this post as it is : an introduction. Web components, like micro-services, are fun, powerful and bleeding edges. But also complex, fast-moving and unstable. The tooling, especially, is impressive. Spend a hard time to master them and craft something cool ! About the Author Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.

Go Extensions: Fetching Data and More

Xavier Bruhiere
24 Nov 2015
6 min read
The choice of Go for my last project was driven by its ability to cross-compile code into static binary. A script pushes stable versions on Github releases or Bintray and anyone can wget the package and use it right away. One of the important distinctions between Influx and some other time series solutions is that it doesn’t require any other software to install and run. This is one of the many wins that Influx gets from choosing Go as its implementation language. - Paul Dix This "static linking" awesomeness has a cost though. No evaluation at runtime, every piece of features are frozen once compiled. However a developer might happen to need more flexibility. In this post, we will study several use-cases and implementations where Go dynamic extensions unlock great features for your projects. Configuration Gulp is a great example of the benefits of configuration as code (more control, easier to extend). Thanks to gopher-lua, we're going to implement this behavior. Being our first step, let's write a skeleton for our investigations. package main import ( "log" "github.com/yuin/gopher-lua" ) // LuaPlayground exposes a bridge to Lua. type LuaPlayground struct { VM *lua.LState } func main() { // initialize lua VM 5.1 and compiler L := lua.NewState() defer L.Close() } Gopher-lua let us call Lua code from Go and share information between each environments. The idea is to define the app configuration as a convenient scripting language like the one below. -- save as conf.lua print("[lua] defining configuration") env = os.getenv("ENV") log = "debug" plugins = { "plugin.lua" } Now we can read those variables from Go. // DummyConf is a fake configuration we want to fill type DummyConf struct { Env string LogLevel string Plugins *lua.LTable } // Config evaluates a Lua script to build a configuration structure func (self *LuaPlayground) Config(filename string) *DummyConf { if err := self.VM.DoFile(filename); err != nil { panic(err) } return &DummyConf{ Env: self.VM.GetGlobal("env").String(), LogLevel: self.VM.GetGlobal("log").String(), Plugins: self.VM.GetGlobal("plugins").(*lua.LTable), } } func main() { // [...] playground := LuaPlayground{ VM: L } conf := playground.Config("conf.lua") log.Printf("loaded configuration: %vn", conf) } Using a high level scripting language gives us great flexibility to initialize an application. While we only exposed simple assignments, properties could be fetched from services or computed at runtime. Scripting Heka 's sandbox constitutes a broader approach to Go plugins. It offers an isolated environment where developers have access to specific methods and data to control Heka's behavior. This strategy exposes an higher level interface to contributors without recompilation. The following code snippet extends our existing LuaPlayground structure with such skills. // Log is a go function lua will be able to run func Log(L *lua.LState) int { // lookup the first argument msg := L.ToString(1) log.Println(msg) return 1 } // Scripting exports Go objects to Lua sandbox func (self *LuaPlayground) Scripting(filename string) { // expose the log function within the sandbox self.VM.SetGlobal("log", self.VM.NewFunction(Log)) if err := self.VM.DoFile(filename); err != nil { panic(err) } } func main() { // [...] playground.Scripting("script.lua") } Lua code are now able to leverage the disruptive Go function Log. -- save as script.lua log("Hello from lua !") This is obviously a scarce example intended to show the way. 
Following the same idiom, gopher-lua let us export to Lua runtime complete modules, channels, Go structures. Therefor we can hide and compile implementation details as a Go library, while business logic and data manipulation is left to a productive and safe scripting environment. This idea leads us toward another pattern : hooks. As an illustration, Git is able to execute arbitrary scripts when such files are found under a specific directory, on specific events (like running tests before pushing code). In the same spirit, we could program a routine to list and execute files in a pre-defined directory. Moving a script in this folder, therefore, would activate a new hook. This is also the strategy Dokku leverages. Extensions This section takes things upside down. The next piece of code expects a Lua script to define its methods. Those components become plug-and-play extensions or components one could replace, activate or deactivate. // [...] // Call executes a function defined in Lua namespace func (self *LuaPlayground) Call(method string, arg string) string { if err := self.VM.CallByParam(lua.P{ Fn: self.VM.GetGlobal(method), NRet: 1, Protect: true, }, lua.LString(arg) /* method argument */ ); err != nil { panic(err) } // returned value ret := self.VM.Get(-1) // remove last value self.VM.Pop(1) return ret.String() } // Extend plugs new capabilities to this program by loading the given script func (self *LuaPlayground) Extend(filename string) { if err := self.VM.DoFile(filename); err != nil { panic(err) } log.Printf("Identity: %vn", self.Call("lookupID", "mario")) } func main() { // [...] playground.Extend("extension.lua") } An interesting use-case for such feature would be swappable backend. A service discovery application, for example, might use a key/value storage. One extension would perform requests against Consul, while another one would fetch data from etcd. This setup would allow an easier integration into existing infrastructures. Alternatives Executing arbitrary code at runtime brings the flexibility we can expect from language like Python or Node.js, and popular projects developed their own framework. Hashicorp reuses the same technic throughout its Go projects. Plugins are standalone binaries only a master process can run. Once launched, both parties use RPC to communicate data and commands. This approach proved to be a great fit in the open-source community, enabling experts to contribute drivers for third-party services. An other take on Go plugins was recently pushed by InfluxDB with Telegraf, a server agent for reporting metrics. Much closer to OOP, plugins must implement an interface provided by the project. While we still need to recompile to register new plugins, it eases development by providing a dedicated API. Conclusion The release of Docker A.7 and previous debates show the potential of Go extensions, especially in open-source projects where author wants other developers to contribute features in a manageable fashion. This article skimmed several approach to bypass static go binaries and should feed some further ideas. Being able to just drop-in an executable and instantly use a new tool is a killer feature of the language and one should be careful if scripts became dependencies to make it work. However dynamic code execution and external plugins keep development modular and ease developers on-boarding. Having those trade-off in mind, the use-cases we explored could unlock worthy features for your next Go project. 
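The Extend example above loads an extension.lua that is never shown; a hypothetical version defining the lookupID function it calls might look like this (illustrative contents only, not taken from the original project):

```lua
-- save as extension.lua (illustrative contents only)
local ids = { mario = "42", luigi = "007" }

-- lookupID is the function the Go side invokes through LuaPlayground.Call
function lookupID(name)
  return ids[name] or "unknown"
end
```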
About the Author Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.

The Internet of Peas? Gardening with JavaScript Part 1

Anna Gerber
23 Nov 2015
6 min read
Who wouldn't want an army of robots to help out around the home and garden? It's not science fiction: Robots are devices that sense and respond to the world around us, so with some off-the-shelf hardware, and the power of the Johnny-Five JavaScript Robotics framework, we can build and program simple "robots" to automate every day tasks. In this two part article series, we'll build an internet-connected device for monitoring plants. Bill of materials You'll need these parts to build this project Part Source Particle Core (or Photon) Particle 3xAA Battery holder e.g. with micro USB connector from DF Robot Jumper wires Any electronics supplier e.g. Sparkfun Solderless breadboard Any electronics supplier e.g. Sparkfun Photo resistor Any electronics supplier e.g. Sparkfun 1K resistor Any electronics supplier e.g. Sparkfun Soil moisture sensor e.g. Sparkfun Plants   Particle (formerly known as Spark) is a platform for developing devices for the Internet of Things. The Particle Core was their first generation Wifi development board, and has since been supeceded by the Photon. Johnny-Five supports both of these boards, as well as Arduino, BeagleBone Black, Raspberry Pi, Edison, Galileo, Electric Imp, Tessel and many other device platforms, so you can use the framework with your device of choice. The Platform Support page lists the features currently supported on each device. Any device with Analog Read support is suitable for this project. Setting up the Particle board Make sure you have a recent version of Node.js installed. We're using npm (Node Package Manager) to install the tools and libraries required for this project. Install the Particle command line tools with npm (via the Terminal on Mac, or Command Prompt on Windows): npm install -g particle-cli Particle boards need to be registered with the Particle Cloud service, and you must also configure your device to connect to your wireless network. So the first thing you'll need to do is connect it to your computer via USB and run the setup program. See the Particle Setup docs. The LED on the Particle Core should be blinking blue when you plug it in for the first time (if not, press and hold the mode button). Sign up for a Particle Account and then follow the prompts to setup your device via the Particle website, or if you prefer you can run the setup program from the command line. You'll be prompted to sign in and then to enter your Wifi SSID and password: particle setup After setup is complete, the Particle Core can be disconnected from your computer and powered by batteries or a separate USB power supply - we will connect to the board wirelessly from now on. Flashing the board We also need to flash the board with the Voodoospark firmware. Use the CLI tool to sign in to the Particle Cloud and list your devices to find out the ID of your board: particle cloud login particle list Download the firmware.cpp file and use the flash command to write the Voodoospark firmware to your device: particle cloud flash <Your Device ID> voodoospark.cpp See the Voodoospark Getting Started page for more details. You should see the following message: Flash device OK: Update started The LED on the board will flash magenta. This will take about a minute, and will change back to green when the board is ready to use. Creating a Johnny-Five project We'll be installing a few dependencies from npm, so to help manage these, we'll set up our project as an npm package. Run the init command, filling in the project details at the prompts. 
npm init After init has completed, you'll have a package.json file with the metadata that you entered about your project. Dependencies for the project can also be saved to this file. We'll use the --save command line argument to npm when installing packages to persist dependencies to our package.json file. We'll need the Johnny-Five npm module as well as the Particle-IO IO Plugin for Johnny-Five. npm install johnny-five --save npm install particle-io --save Johnny-Five uses the Firmata protocol to communicate with Arduino-based devices. IO Plugins provide Firmata compatible interfaces to allow Johnny-Five to communicate with non-Arduino-based devices. The Particle-IO Plugin allows you to run Node.js applications on your computer that communicate with the Particle board over Wifi, so that you can read from sensors or control components that are connected to the board. When you connect to your board, you'll need to specify your Device ID and your Particle API Access Token. You can look up your access token under Settings in the Particle IDE. It's a good idea to copy these to environment variables rather than hardcoding them into your programs. If you are on Mac or Linux, you can create a file called .particlerc then run source .particlerc: export PARTICLE_TOKEN=<Your Token Here> export PARTICLE_DEVICE_ID=<Your Device ID Here> Reading from a sensor Now we're ready to get our hands dirty! Let's confirm that we can communicate with our Particle board using Johnny-Five, by taking a reading from our soil moisture sensor. Using jumper wires, connect one pin on the soil sensor to pin A0 (analog pin 0) and the other to GND (ground). The probes go into the soil in your plant pot. Create a JavaScript file named sensor.js using your preferred text editor or IDE. We use require statements to include the Johnny-Five module and the Particle-IO plugin. We're creating an instance of the Particle IO plugin (with our token and deviceId read from our environment variables) and providing this as the io config option when creating our Board object. var five = require("johnny-five"); var Particle = require("particle-io"); var board = new five.Board({ io: new Particle({ token: process.env.PARTICLE_TOKEN, deviceId: process.env.PARTICLE_DEVICE_ID }) }); board.on("ready", function() { console.log("CONNECTED"); var soilSensor = new five.Sensor("A0"); soilSensor.on("change", function() { console.log(this.value); }); }); After the board is ready, we create a Sensor object to monitor changes on pin A0, and then print the value from the sensor to the Node.js console whenever it changes. Run the program using Node.js: node sensor.js Try pulling the sensor out of the soil or watering your plant to make the sensor reading change. See the Sensor API for more methods that you can use with Sensors. You can hit control-C to end the program. In the next installment we'll connect our light sensor and extend our Node.js application to monitor our plant's environment. Continue reading now! About the author Anna Gerber is a full-stack developer with 15 years’ experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearchspecializing in Digital Humanities and Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.
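For reference, after running npm init and the two install commands above with --save, the generated package.json might look roughly like the sketch below; the name, description and version numbers are placeholders, not values from the original project.

```json
{
  "name": "internet-of-peas",
  "version": "1.0.0",
  "description": "Plant monitoring with Johnny-Five and a Particle Core",
  "main": "sensor.js",
  "dependencies": {
    "johnny-five": "^0.9.0",
    "particle-io": "^0.10.0"
  }
}
```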

The Internet of Peas? Gardening with JavaScript Part 2

Anna Gerber
23 Nov 2015
6 min read
In this two-part article series, we're building an internet-connected garden bot using JavaScript. In part one, we set up a Particle Core board, created a Johnny-Five project, and ran a Node.js program to read raw values from a soil moisture sensor. Adding a light sensor Let's connect another sensor. We'll extend our circuit with a photo resistor to measure the ambient light levels around our plants. Connect one lead of the photo resistor to ground, and the other to analog pin 4, with a 1K pull-down resistor from A4 to the 3.3V pin. The value of the pull-down resistor determines the raw readings from the sensor. We're using a 1K resistor so that the sensor values don't saturate under tropical sun conditions. For plants kept inside a dark room, or in a less sunny climate, a 10K resistor might be a better choice. Read more about how pull-down resistors work with photo resistors at Adafruit. Now, in our board's ready callback function, we add another sensor instance, this time on pin A4: var lightSensor = new five.Sensor({ pin: "A4", freq: 1000 }); lightSensor.on("data", function() { console.log("Light reading " + this.value); }); For this sensor we are logging the sensor value every second, not just when it changes. We can control how often sensor events are emitted by specifying the number of milliseconds in the freq option when creating the sensor. The threshold config option can be used to control when the change callback occurs. Calibrating the soil sensor The soil sensor uses the electrical resistance between two probes to provide a measure of the moisture content of the soil. We're using a commercial sensor, but you could make your own simply by using two pieces of wire spaced about an inch apart (using galvanized wire to avoid rust). Water is a good conductor of electricity, so a low reading means that the soil is moist, while a high amount of resistance indicates that the soil is dry. Because these aren't very sophisticated sensors, the readings will vary from sensor to sensor. In order to do anything meaningful with the readings within our application, we'll need to calibrate our sensor. Calibrate by making a note of the sensor values for very dry soil, wet soil, and everything in between, to get a sense of what the optimal range of values should be. For an imprecise sensor like this, it also helps to map the raw readings onto ranges that can be used to display different messages (e.g. very dry, dry, damp, wet) or trigger different actions; a small sketch of such a mapping appears at the end of this section. The scale method on the Sensor class can come in handy for this. For example, we could convert the raw readings from 0 - 1023 to a 0 - 5 scale: soilSensor.scale(0, 5).on("change", function() { console.log(this.value); }); However, the raw readings for this sensor range between about 50 (wet) and 500 (fairly dry soil). If we're only interested in when the soil is dry, i.e. when readings are above 300, we could use a conditional statement within our callback function, or use the within method so that the function is only triggered when the values are inside a range of values we care about: soilSensor.within([ 300, 500 ], function() { console.log("Water me!"); }); Our raw soil sensor values will vary depending on the temperature of the soil, so this type of sensor is best for indoor plants that aren't exposed to weather extremes. If you are installing a soil moisture sensor outdoors, consider adding a temperature sensor and then calibrate for values at different temperature ranges.
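As promised above, here is a minimal sketch of mapping the scaled soil readings onto human-friendly labels. The band boundaries and labels are assumptions for illustration only; calibrate them against your own sensor before relying on them.

```js
// Hypothetical mapping from a scaled 0-5 soil reading to a label.
// Drop this inside the board's "ready" callback next to the other sensor code.
var labels = ["wet", "wet", "damp", "damp", "dry", "very dry"];

soilSensor.scale(0, 5).on("change", function() {
  // Clamp to the last band so a reading of exactly 5 doesn't overflow the array
  var band = Math.min(Math.floor(this.value), labels.length - 1);
  console.log("Soil is " + labels[band] + " (scaled reading: " + this.value + ")");
});
```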
Connecting more sensors We have seven analog and seven digital IO pins on the Particle Core, so we could attach more sensors, perhaps more of the same type to monitor additional planters, or different types of sensors to monitor additional conditions. There are many kinds of environmental sensors available through online marketplaces like AliExpress and ebay. These include sensors for temperature, humidity, dust, gas, water depth, particulate detection etc. Some of these sensors are straightforward analog or digital devices that can be used directly with the Johnny-Five Sensor class, as we have with our soil and light sensors. The Johnny-Five API also includes subclasses like Temperature, with controllers for some widely used sensor components. However, some sensors use protocols like SPI, I2C or OneWire, which are not as well supported by Johnny-Five across all platforms. This is always improving, for example, I2C was added to the Particle-IO plugin in October 2015. Keep an eye on I2C component backpacks which are providing support for additional sensors via secondary microcontrollers. Automation If you are gardening at scale, or going away on extended vacation, you might want more than just monitoring. You might want to automate some basic garden maintenance tasks, like turning on grow lights on overcast days, or controlling a pump to water the plants when the soil moisture level gets low. This can be acheived with relays. For example, we can connect a relay with a daylight bulb to a digital pin, and use it to turn lights on in response to the light readings, but only between certain hours: var five = require("johnny-five"); var Particle = require("particle-io"); var moment = require("moment"); var board = new five.Board({ io: new Particle({ token: process.env.PARTICLE_TOKEN, deviceId: process.env.PARTICLE_DEVICE_ID }) }); board.on("ready", function() { var lightSensor = new five.Sensor("A4"); var lampRelay = new five.Relay(2); lightSensor.scale(0, 5).on("change", function() { console.log("light reading is " + this.value) var now = moment(); var nightCurfew = now.endOf('day').subtract(4,'h'); var morningCurfew = now.startOf('day').add(6,'h'); if (this.value > 4) { if (!lampRelay.isOn && now.isAfter(morningCurfew) && now.isBefore(nightCurfew)) { lampRelay.on(); } } else { lampRelay.off(); } }); }); And beyond... One of the great things about using Node.js with hardware is that we can extend our apps with modules from npm. We could publish an Atom feed of sensor readings over time, push the data to a web UI using socket-io, build an alert system or create a data visualization layer, or we might build an API to control lights or pumps attached via relays to our board. It's never been easier to program your own internet-connected robot helpers and smart devices using JavaScript. Build more exciting robotics projects with servos and motors – click here to find out how. About the author Anna Gerber is a full-stack developer with 15 years’ experience in the university sector, formerly a Technical Project Manager at The University of Queensland ITEE eResearchspecializing in Digital Humanities and Research Scientist at the Distributed System Technology Centre (DSTC). Anna is a JavaScript robotics enthusiast and maker who enjoys tinkering with soft circuits and 3D printers.

Using Node.js dependencies in NW.js

Max Gfeller
19 Nov 2015
6 min read
NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features like real menu bars or desktop notifications. A big advantage of having a Node/io.js runtime is to be able to make use of all the modules that are available for node developers. We can categorize three different types of modules that we can use. Internal modules Node comes with a solid set of internal modules like fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well. Therefore you won't find too much functionality in node core. The following modules are shipped with node: assert: used for writing unit tests buffer: raw memory allocation used for dealing with binary data child_process: spawn and use child processes cluster: take advatage of multi-core systems crypto: cryptographic functions dgram: use datagram sockets - dns: perform DNS lookups domain: handle multiple different IO operations as a single group events: provides the EventEmitter fs: operations on the file system http: perform http queries and create http servers https: perform https queries and create https servers net: asynchronous network wrapper os: basic operating-system related utility functions path: handle and transform file paths punycode: deal with punycode domain names querystring: deal with query strings stream: abstract interface implemented by various objects in Node timers: setTimeout, setInterval etc. tls: encrypted stream communication url: URL resolution and parsing util: various utility functions vm: sandbox to run Node code in zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw Those are documented on the official Node API documentation and can all be used within NW.js. Please take care that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable like crypt rather than crypto: var crypt = require('crypto'); The following example shows how we would read a file and use its contents using Node's modules: var fs = require('fs'); fs.readFile(__dirname + '/file.txt', function (error, contents) {   if (error) returnconsole.error(error);   console.log(contents); }); 3rd party JavaScript modules Soon after Node itself was started, Isaac Schlueter, who was friend of creator Ryan Dahl, started working on a package manager for Node itself. While Nodes's popularity reached new highs, a lot of packages got added to the npm registry and it soon became the fastest growing package registry. To the time of this writing there are over 169'000 packages on the registry and nearly two billion downloads each month. The npm registry is now also slowly evolving from being "only" a package manager for Node into a package manager for all things Javascript. Most of these packages can also be used inside NW.js applications. Your application's dependencies are being defined in your package.json file in the dependencies(or devDependencies) section: {   "name": "my-cool-application",   "version": "1.0.0",   "dependencies": {     "lodash": "^3.1.2"   },   "devDependencies": {     "uglify-js": "^2.4.3"   } } In the dependencies field you find all the modules that are required to run your application while in the devDependencies field only the modules required while developing the application are found. 
Installing a module is fairly easy and the best way to do this is with the npm install command: npm install lodash --save The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency should also directly be written into your package.json file. You can also define a specific version to download by using following notation: npm install lodash@1.* or even npm install lodash@1.1 How does node's require() work? You need to deal with two different contexts in NW.js and it is really important to always know which context you are currently in as it changes the way the require() function works. When you load a moule using Node's require() function, then this module runs in the Node context. That means you have the same globals as you would have in a pure Node script but you can't access the globals from the browser, e.g. document or window. If you write Javascript code inside of a <script> tag in your html, or when you include a script inside your HTML using <script src="">, then this code runs in the webkit context. There you have access to all browsers globals. In the webkit context The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and directly implemented in node core. To offer the same smooth experience you get a modified require() method that works in webkit, too. Whenever you want to include a certain module from the webkit context, e.g. directly from an inline script in your index.html file, you need to specify the path directly from the root of your project. Let's assume the following folder structure: - app/   - app.js   - foo.js   - bar.js   - index.html And you want to include the app/app.js file directly in your index.html you need to include it like this: <script type="text/javascript">   var app = require('./app/app.js'); </script> If you need to use a module from npm then you can simply require() it and NW.js will figure out where the corresponding node_modules/ folder is located. In the node context In node when you use relative paths it will always try to locate this module relative to the file you are requiring it from. If we take the example from above then we could require the foo.js module from app.js like this: var foo = require('./foo'); About the Author Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.
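To tie the two contexts together, here is a small sketch of using an npm module (the lodash dependency installed earlier) directly from the webkit context; the _.capitalize call is just an arbitrary example.

```html
<!-- index.html (sketch): using an npm module from the webkit context -->
<script type="text/javascript">
  // NW.js resolves this against the project's node_modules/ folder
  var _ = require('lodash');
  document.title = _.capitalize('my cool application');
</script>
```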

Building Your App: Creating Executables for NW.js

Adam Lynch
17 Nov 2015
5 min read
How hard can it be to package up your NW.js app into real executables? To be a true desktop app, it should be a self-contained .exe, .app, or similar. There are a few ways to approach this. Let's start with the simplest approach with the least amount of code or configuration. It's possible to run your app by creating a ZIP archive containing your app code, changing the file extension to .nw and then launching it using the official npm module like this: nw myapp.nw. Let's say you wanted to put your app out there as a download. Anyone looking to use it would have to have nw installed globally too. Unless you're making an app for NW.js users, that's not a great idea. Use an existing executable You could substitute one of the official NW.js executables in for the nw module. You could download a ZIP from the NW.js site containing an executable (nw.exe for example) and a few other bits and pieces. If you already have the nw module, then if you go to where it's installed on your machine (e.g. /usr/local/lib/node_modules/nw on Mac OS X), the executable can be found in the nwjs directorty. If you wanted, you could keep things really simple and leave it at that. Just use the official executable to open your .nw archive; i.e. nw.exe myapp.nw. Merging them Ideally though, you want as few files as possible. Think of your potential end users, they deserve better. One way to do this is to mash the NW.js executable and your .nw archive together to produce a single executable. This is achieved differently per platform though. On Windows, you need to run copy /b nw.exe+myapp.nw nw.exe on the command-line. Now we have a single nw.exe. Even though we now have a single executable, it still requires the DLLs and everything else which comes with the official builds to be in the same directory as the .exe for it to work correctly. You could rename nw.exe to something nicer but it's not advised as native modules will not work if the executable isn't named nw.exe. This is expected to be fixed in NW.js 0.13.0 when NW.js will come with an nw.dll (along with nw.exe) which modules will link to instead. On Linux, the command would be cat path/to/nw myapp.nw > myapp && chmod +x myapp (where nw is the NW.js executable). Since .app executables are just directories on Mac OS X, you could just copy the offical nwjs executable and edit it. Rename your .nw archive to app.nw, put it in the Contents/Resources inner directory, and you're done. Actually, a .nw archive isn't even necessarily. You could create an Contents/Resources/app.nw directory and add your raw app files there. Other noteworthy files which you could edit are Contents/Resources/nw.icns which is your app's icon and Contents/Info.plist, Apple's app package description file. nw-builder There are a few downsides to all of that; it's platform-specific, very manual, and is very limited. The nw-builder module will handle all of that for you, and more. Either from the command-line or programmatically, it makes building executables light work. Once you install it globally by running npm install -g nw-builder, then you could run the following command to generate executables: nwbuild your/app/files/directory -o destination/directory nw-builder will go and grab the latest NW.js version and generate self-contained executables for you. You can specify a lot of options here via flags too; the NW.js version you'd like, which platforms to build for, etc. Yes, you can build for multiple platforms. 
By default it builds 32-bit and 64-bit Windows and Mac executables, but Linux 32-bit and 64-bit executables can also be generated. E.g. nwbuild appDirectory -v 0.12.2 -o dest -p linux64. Note: I am a maintainer of nw-builder. Ignoring my bias, that was surprisingly simple. right? Using the API I personally prefer to use it programmatically though. That way I can have a build script which passes all of the options and so on. Let's say you create a simple file called build.js; var NwBuilder = require('nw-builder'); var nw = new NwBuilder({ files: './path/to/app/files/**/**' // use the glob format }); // .build() returns a promise but also supports a plain callback approach as well nw.build().then(function () { console.log('all done!'); }).catch(function (error) { console.error(error); }); Running node build.js will produce your executables. Simples. Gulp If you already use Gulp like I do and would like to slot this into your tasks, it's easy. Just use the same nw-builder module; var gulp = require('gulp'); var NwBuilder = require('nw-builder'); var nw = newNwBuilder({     files: './path/to/app/files/**/**'// use the glob format }); gulp.task('default', function(){     returnnw.build(); }); Grunt Yep, there's a plugin for that; run npm install grunt-nw-builder to get it. Then add something like the following to your Gruntfile: grunt.initConfig({   nwjs: {     options: {},     src: ['./path/to/app/files/**/*']   } }); Then running grunt nwjs will produce your executables. All nw-builder options are available to Grunt users too. Options There are a lot of options which pretty granular control. Aside from the ones I've mentioned already and options already available in the app manifest, there are ones for controlling the structure and or compression of inner files, your executables' icons, Mac OS X specific options concerning the plist file and so on. Go check out nw-builder for yourself and see how quickly you can package your Web app into real executables. About the Author Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010. 
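To make the options discussion above a little more concrete, here is a sketch of a build.js that passes a few of them programmatically; the option names (platforms, version, macIcns, winIco) follow nw-builder's documentation of the time, but the values are placeholders.

```js
// build.js (sketch)
var NwBuilder = require('nw-builder');

var nw = new NwBuilder({
  files: './path/to/app/files/**/**', // glob format, as before
  platforms: ['win64', 'osx64', 'linux64'],
  version: '0.12.2',                  // the NW.js version to bundle
  macIcns: './assets/app.icns',       // Mac OS X icon
  winIco: './assets/app.ico'          // Windows icon
});

nw.build().then(function () {
  console.log('all done!');
}).catch(function (error) {
  console.error(error);
});
```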

How to Make Generic typealiases in Swift

Alexander Altman
16 Nov 2015
4 min read
Swift's typealias declarations are a good way to clean up our code. It's generally considered good practice in Swift to use typealiases to give more meaningful and domain-specific names to types that would otherwise be either too general-purpose or too long and complex. For example, the declaration: typealias Username = String gives a less vague name to the type String, since we're going to be using strings as usernames and we want a more domain-relevant name for that type. Similarly, the declaration: typealias IDMultimap = [Int: Set<Username>] gives a name for [Int: Set<Username>] that is not only more meaningful, but somewhat shorter. However, we run into problems when we want to do something a little more advanced; there is a possible application of typealias that Swift doesn't let us make use of. Specifically, Swift doesn't accept typealiases with generic parameters. If we try it the naïvely obvious way, typealias Multimap<Key: Hashable, Value: Hashable> = [Key: Set<Value>] we get an error at the begining of the type parameter list: Expected '=' in typealias declaration. Swift (as of version 2.1, at least) doesn't let us directly declare a generic typealias. This is quite a shame, as such an ability would be very useful in a lot of different contexts, and languages that have it (such as Rust, Haskell, Ocaml, F♯, or Scala) make use of it all the time, including in their standard libraries. Is there any way to work around this linguistic lack? As it turns out, there is! The Solution It's actually possible to effectively give a typealias type parameters, despite Swift appearing to disallow it. Here's how we can trick Swift into accepting a generic typealias: enum Multimap<Key: Hashable, Value: Hashable> { typealias T = [Key: Set<Value>] } The basic idea here is that we declare a generic enum whose sole purpose is to hold our (now technically non-generic) typealias as a member. We can then supply the enum with its type parameters and project out the actual typealias inside, like so: let idMMap: Multimap<Int, Username>.T = [0: ["alexander", "bob"], 1: [], 2: ["christina"]] func hasID0(user: Username) -> Bool { return idMMap[0]?.contains(user) ?? false } Notice that we used an enum rather than a struct; this is because an enum with no cases cannot have any instances (which is exactly what we want), but a struct with no members still has (at least) one instance, which breaks our layer of abstraction. We are essentially treating our caseless enum as a tiny generic module, within which everything (that is, just the typealias) has access to the Key and Value type parameters. This pattern is used in at least a few libraries dedicated to functional programming in Swift, since such constructs are especially valuable there. Nonetheless, this is a broadly useful technique, and it's the best method currently available for creating generic typealias in Swift. The Applications As sketched above, this technique works because Swift doesn't object to an ordinary typealias nested inside a generic type declaration. However, it does object to multiple generic types being nested inside each other; it even objects to either a generic type being nested inside a non-generic type or a non-generic type being nested inside a generic type. As a result, type-level currying is not possible. 
Despite this limitation, this kind of generic typealias is still useful for a lot of purposes; one big one is specialized error-return types, in which Swift can use this technique to imitate Rust's standard library: enum Result<V, E> { case Ok(V) case Err(E) } enum IO_Result<V> { typealias T = Result<V, ErrorType> } Another use for generic typealiases comes in the form of nested collections types: enum DenseMatrix<Element> { typealias T = [[Element]] } enum FlatMatrix<Element> { typealias T = (width: Int, elements: [Element]) } enum SparseMatrix<Element> { typealias T = [(x: Int, y: Int, value: Element)] } Finally, since Swift is a relatively young language, there are sure to be undiscovered applications for things like this; if you search, maybe you'll find one! Super-charge your Swift development by learning how to use the Flyweight pattern – Click here to read more About the author Alexander Altman (https://pthariensflame.wordpress.com) is a functional programming enthusiast who enjoys the mathematical and ergonomic aspects of programming language design. He's been working with Swift since the language's first public release, and he is one of the core contributors to the TypeLift (https://github.com/typelift) project.
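To round off the examples above, here is a short, hypothetical sketch showing two of those aliases in use (Swift 2 syntax, matching the rest of the article); the function and values are illustrative.

```swift
// Using the aliases defined above; the function and values are illustrative.
func readConfig(path: String) -> IO_Result<String>.T {
    // In a real implementation we would read the file and return .Err on failure
    return .Ok("debug = true")
}

let identity: DenseMatrix<Double>.T = [[1.0, 0.0], [0.0, 1.0]]
```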

How to Write Documentation with Jupyter

Marin Gilles
13 Nov 2015
5 min read
The Jupyter notebook is an interactive notebook allowing you to write documents with embedded code, and execute this code on the fly. It was originally developed as a part of the Ipython project, and could only be used for Python code at that time. Nowadays, the Jupyter notebook integrates multiple languages, such as R, Julia, Haskell and much more - the notebook supports about 50 languages. One of the best features of the notebook is to be able to mix code and markdown with embedded HTML/CSS. It allows an easy creation of beautiful interactive documents, such as documentation examples. It can also help with the writing of full documentation using its export mechanism to HTML, RST (ReStructured Text), markdown and PDF. Interactive documentation examples When writing library documentation, a lot of time should be dedicated to writing examples for each part of the library. However, those examples are quite often static code, each part being explained through comments. To improve the writing of those examples, a solution could be using a Jupyter notebook, which can be downloaded and played with by anyone reading your library documentation. Solutions also exist to have the notebook running directly on your website, as seen on the Jupyter website, where you can try the notebooks. This will not be explained in this post, but the notebook was designed on a server-client pattern, making this easy to get running. Using the notebook cells capabilities, you can separate each part of your code, or each example, and describe it nicely and properly outside the code cell, improving readability. From the Jupyter Python notebook example, we see what the following code does, execute it (and even get graphics back directly on the notebook!). Here is an example of a Jupyter notebook, for the Python language, with matplotlib integration: Even more than that, instead of just having your example code in a file, people downloading your notebook will directly get the full example documentation, giving them a huge advantage in understanding what the example is and what it does when opening the notebook again after six months. And they can just hit the run button, and tinker with your example to understand all its details, without having to go back and forth between the website and their code editor, saving them time. They will love you for that! Generate documentation from your notebooks Developing mainly in Python, I am used to the Sphinx library as a documentation generator. It can export your documentation to HTML from RST files, and scoops your code library to generate documentation from docstring, all with a single command, making it quite useful in the process of writing. As Jupyter notebooks can be exported to RST, why not use this mechanism to create your RST files with Jupyter, then generate your full documentation with Sphinx? To manually convert your notebook, you can click on File -> Download As -> reST. You will be prompted to download the file. That's it! Your notebook was exported. However, while this method is good for testing purposes, this will not be good for an automatic generation of documentation with sphinx. To be able to convert automatically, we are going to use a tool named nbconvert with which can do all the required conversions from the command line. 
To convert your notebook to RST, you just need to do the following $ jupyter nbconvert --to rst *your_notebook*.ipynb or to convert every notebook in the current folder: $ jupyter nbconvert --to rst *.ipynb Those commands can easily be integrated in a Makefile for your documentation, making the process of converting your notebooks completely invisible. If you want to keep your notebooks in a folder notebooks and your generated files in a folder rst, you can run assuming you have the following directory tree: Current working directory | |-rst/ |-notebooks/ |-notebook1.ipynb |-notebook2.ipynb |-... the following commands: $ cd rst $ jupyter nbconvert --to rst ../notebooks/*.ipynb This will convert all the notebooks in notebooks and place them in the rst folder. A Python API is also available if you want to generate your documentation from Python (Documentation). A lot more export options are available on the nbconvert documentation. You can create PDF, HTML or even slides, if you want to make a presentation based on a notebook, and can even pull a presentation from a remote location. Jupyter notebooks are very versatile documents, allowing interactive code exploration, export to a large number of formats, remote work, collaborative work and more. You can find more information on the official Jupyter website where you will also be able to try it. I mainly focused this post on the Python language, in which the IPython notebooks, ancestor of Jupyter were developed, but with the integration of more than 50 languages, it makes it a tool that every developer should be aware of, to create documentation, tutorials, or just to try code and keep notes at the same place. Dive deeper into the Jupyter Notebook by navigating your way around the dashboard and interface in our article. Read now! About the author Marin Gilles is a PhD student in Physics, in Dijon, France. A large part of his work is dedicated to physical simulations for which he developed his own simulation framework using Python, and contributed to open-source libraries such as Matplotlib or Ipython.
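The Python API mentioned above can be sketched as follows; the exporter class name matches nbconvert's documented RSTExporter, but the file paths are placeholders.

```python
# Sketch: converting a notebook to RST from Python instead of the command line
from nbconvert import RSTExporter

exporter = RSTExporter()
body, resources = exporter.from_filename("notebooks/notebook1.ipynb")

with open("rst/notebook1.rst", "w") as output:
    output.write(body)
```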

Creating a simple 3D Multiplayer Game with THREE.js

Marco Stagni
12 Nov 2015
15 min read
This post will teach you how to build a simple 3D multiplayer game using THREE.js and socket.io. This guide is intended to help you create a very simple yet perfectly functional multiplayer FPS (First Person Shooter). Its name is "Dodgem" and it will feature a simple arena, random destructible obstacles and your opponent. I've already done this, and you can check it out on github. The playable version of this project can be found here. Explanation First of all, we need a brief description of how the project is built. Technically speaking, we have a random number of clients (our actual players), and a server, which randomly selects an opponent for each new player. Every player who joins the game is put in a "pending" status, until he/she is randomly selected for a new match. Before the match starts, the player is able to wander around the arena, and when the server announces the beginning of the fight, both clients receive the information needed to create the obstacles in the world. Each player is represented as a blue sphere with two guns on the sides (this has been done for simplicity's sake; we all know a humanoid figure would be more interesting). Every obstacle is destructible if you shoot it, and your life is represented as a red bar at the top of the screen. Once one of the players dies, both will be prompted to join another match, or they can simply continue to move, jump and shoot around. Let's code! We can now start our project. We'll use an easy-to-use "Game Engine" I wrote, since it provides a lot of useful things for our project. Its name is "Wage", it's fully open source (you can check the github repository here) and it's available to install via npm. So, first things first, run this in your shell: npm install -g wage This will install wage as a global package on your machine. You will now be able to create a new project wherever you want, using the "wage" command. Keep in mind that this Game Engine is still an in-development version, so please use it carefully, and submit any issue you find to the repository if needed. Now, run: wage create dodgem This will create a folder named "dodgem" in your current directory, with everything you need to start the project. We're now ready to start. I won't explain every single line, just the basic information required to get going and the skeleton of the app (the entire source code is available on github, and you're free to clone the repo on your machine). Only the server code is fully explained. Now, we can create our server. Server First of all, create a "server" folder beside the "app" folder. Add a "package.json" inside of it, with this content: { "name":"dodgem", "version":"1.0.0", "description":"", "main":"", "author":"Your name goes here, thanks to Marco Stagni", "license":"MIT", "dependencies":{ "socket.io":"*", "underscore":"*" } } This file tells npm that our server uses socket.io and underscore as modules (no matter what version they are), and running npm install inside the "server" folder will install the dependencies into a "node_modules" folder. Speaking about the modules, socket.io is obviously used as the main communication system between server and clients, while underscore is used because it provides a lot of useful tools when you're working with data sets. If you don't know what socket.io and underscore are, just click on the links for a detailed explanation. I won't explain how socket.io works, because I assume that you're already aware of its functionality.
We'll now create the server.js file (you must create it inside the server folder):

//server.js
// server requirements
var util = require("util"),
    io = require("socket.io"),
    _ = require("underscore"),
    Player = require("./Player.js").Player;

// server variables
var socket, players, pendings, matches;

// init method
function init() {
  players = [];
  pendings = [];
  matches = {};
  // socket.io setup
  socket = io.listen(8000);
  // setting socket io transport
  socket.set("transports", ["websocket"]);
  // setting socket io log level
  socket.set("log level", 2);
  setEventHandlers();
}

var setEventHandlers = function() {
  // Socket.IO
  socket.sockets.on("connection", onSocketConnection);
};

The util module is only used for logging purposes, and you don't need to install it via npm, since it's a system module. The Player variable refers to the Player model, which will be explained later. The other variables (socket, players, pendings and matches) will be used to store information about pending players, matches and the socket.io instance.

init and setEventHandlers

The init method instantiates socket.io and sets a few options, such as the transport method (we're using only websocket, but socket.io also provides several transports other than websocket) and the log level. The socket.io server is set to listen on port 8000, but you can choose whatever port you desire. This method also initializes the players, pendings and matches variables, and calls the "setEventHandlers" method, which will attach a "connection" event listener to the socket.io instance. The init method is called at the end of the server code. We can now add a few lines after the "setEventHandlers" method.

socket.io listeners

// New socket connection
function onSocketConnection(client) {
  util.log("New player has connected: " + client.id);
  // Listen for client disconnected
  client.on("disconnect", onClientDisconnect);
  // Listen for new player message
  client.on("new player", onNewPlayer);
  // Listen for move player message
  client.on("move player", onMovePlayer);
  // Listen for shooting player
  client.on("shooting player", onShootingPlayer);
  // Listen for died player
  client.on("Idied", onDeadPlayer);
  // Listen for another match message
  client.on("anothermatch", onAnotherMatchRequested);
};

This function handles the "connection" event of socket.io, and it sets up every event listener we need for our server. The events listed are: "disconnect" (when our client closes his page, or reloads it), "new player" (called when a client connects to the server), "move player" (called every time the player moves around the arena), "shooting player" (the player is shooting), "Idied" (the player who sent this message has died) and finally "anothermatch" (our player is requesting another match from the server). The most important listener is the one which listens for new players.
// New player has joined
function onNewPlayer(data) {
  // Create a new player
  var newPlayer = new Player(data.x, data.y, data.z, this);
  newPlayer.id = this.id;
  console.log("creating new player");
  // Add new player to the players array
  players.push(newPlayer);
  // searching for a pending player
  var id = _.sample(pendings);
  if (!id) {
    // we didn't find a player
    console.log("added " + this.id + " to pending");
    pendings.push(newPlayer.id);
    // sending a pending event to player
    newPlayer.getSocket().emit("pending", {status: "pending", message: "waiting for a new player."});
  } else {
    // creating match
    pendings = _.without(pendings, id);
    matches[id] = newPlayer.id;
    matches[newPlayer.id] = id;
    console.log(matches);
    // generating world for this match
    var numObstacles = _.random(10, 30);
    var height = _.random(70, 100);
    var MAX_X = 490;
    var MINUS_MAX_X = -490;
    var MAX_Z = 990;
    var MINUS_MAX_Z = -990;
    var positions = [];
    for (var i = 0; i < numObstacles; i++) {
      positions.push({
        x: _.random(MINUS_MAX_X, MAX_X),
        z: _.random(MINUS_MAX_Z, MAX_Z)
      });
    }
    console.log(numObstacles, height, positions);
    // sending both players info that they're connected
    newPlayer.getSocket().emit("matchstarted", {status: "matchstarted", message: "Player found!", numObstacles: numObstacles, height: height, positions: positions});
    playerById(id).getSocket().emit("matchstarted", {status: "matchstarted", message: "Player found!", numObstacles: numObstacles, height: height, positions: positions});
  }
};

This creates a new player using the Player module imported at the beginning, storing the socket associated with the client. The new player is then stored inside the players list. The most important thing now is the search for a new match: the server will randomly pick a player from the pendings list, and if it's able to find one, it will create a match. If the server doesn't find a suitable player for the match, the new player is put inside the pendings list. Once the match is created, the server creates the information needed by the clients in order to create a common world. The information is then sent to both clients. As you can see, I'm using a playerById method, which is simply a function that searches the players list for a player whose id is equal to the given one.

// Find player by ID
function playerById(id) {
  var i;
  for (i = 0; i < players.length; i++) {
    if (players[i].id == id) return players[i];
  }
  return false;
};

The other functions used as socket listeners are:

onMovePlayer

This function is called when the "move player" event is received. It will find the player associated with the socket id, find its opponent using the "matches" object, then emit a socket event to the opponent, providing the right information about the player's movement. Using pseudo code, the onMovePlayer function is something like this:

onMovePlayer: function(data) {
  movingPlayer = findPlayerById(this.id)
  opponent = matches[movingPlayer.id]
  if !opponent
    console.log "error"
  else
    opponent.socket.send(data.movement)
}

onShootingPlayer

This function is called when the "shooting player" event is received. It will find the player associated with the socket id, find its opponent using the "matches" object, then emit a socket event to the opponent, providing the right information about the shooting player (such as the starting point of the bullet and the bullet direction).
Using pseudo code, the onShootingPlayer function is something like this:

onShootingPlayer: function(data) {
  shootingPlayer = findPlayerById(this.id)
  opponent = matches[shootingPlayer.id]
  if !opponent
    console.log "error"
  else
    opponent.socket.send(data.startingpoint, data.direction)
}

onDeadPlayer, onAnotherMatchRequested

onDeadPlayer is called every time a player dies. When this happens, the dead user is removed from the matches, players and pendings references, and he/she is prompted to join a new match (his/her opponent is informed that he/she won the match). If another match is requested, the procedure is nearly the same as when the player connects for the first time: another player is randomly picked from the pendings list, and another match is created from scratch.

onClientDisconnect

Last but not least, onClientDisconnect is the function called when a user disconnects from the server: this can happen when the user reloads the page, or when he/she closes the client. The corresponding opponent is informed of the situation, and put back into pending status. We now must see how the "Player" model is implemented, since it's used to create new players, and to retrieve information about their connection, movements or behaviour.

Player

var Player = function(startx, starty, startz, socket) {
  var x = startx,
      y = starty,
      z = startz,
      socket = socket,
      rotx, roty, rotz, id;

  // getters
  var getX = function() { return x; }
  var getY = function() { return y; }
  var getZ = function() { return z; }
  var getSocket = function() { return socket; }
  var getRotX = function() { return rotx; }
  var getRotY = function() { return roty; }
  var getRotZ = function() { return rotz; }

  // setters
  var setX = function(value) { x = value; }
  var setY = function(value) { y = value; }
  var setZ = function(value) { z = value; }
  var setSocket = function(value) { socket = value; }
  var setRotX = function(value) { rotx = value; }
  var setRotY = function(value) { roty = value; }
  var setRotZ = function(value) { rotz = value; }

  return {
    getX: getX, getY: getY, getZ: getZ,
    getRotX: getRotX, getRotY: getRotY, getRotZ: getRotZ,
    getSocket: getSocket,
    setX: setX, setY: setY, setZ: setZ,
    setRotX: setRotX, setRotY: setRotY, setRotZ: setRotZ,
    setSocket: setSocket,
    id: id
  }
};

exports.Player = Player;

The Player model is pretty straightforward: you just have getters and setters for every parameter of the Player object, but not all of them are used inside this project. So, this was the server code. This is obviously not the complete source code, but I explained all of the characteristics it has. For the complete code, you can check the github repository here.

Client

The client is pretty easy to understand. Once you create the project using Wage, you will find a file named "main.js": this is the starting point of your game, and it can contain almost every single aspect of the game logic.
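Before looking at the generated client file, here is a rough sketch of how a client could emit the events that the server above listens for. This is not the actual Dodgem client: the event names ("new player", "move player", "shooting player", "Idied") come from the server code shown earlier, but the payload field names are assumptions made for illustration.

// client-events-sketch.js (illustrative only)
// Assumes the socket.io client script has been loaded, exposing the global io.
var socket = io.connect("http://YOURIP:8000");

// announce ourselves so the server can run its matchmaking
socket.emit("new player", { x: 0, y: 0, z: 0 });

// whenever our player moves, forward the new position and rotation to the server
function sendMovement(position, rotation) {
  socket.emit("move player", {
    x: position.x, y: position.y, z: position.z,
    rotx: rotation.x, roty: rotation.y, rotz: rotation.z
  });
}

// whenever we fire, send the bullet's origin and direction
function sendShot(origin, direction) {
  socket.emit("shooting player", { startingpoint: origin, direction: direction });
}

// tell the server we lost, so it can notify the opponent
function sendDeath() {
  socket.emit("Idied", {});
}

Note that only lightweight data travels over the socket; all rendering stays on each client.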
The very first time you create something with Wage, you will find a file like this:

include("app/scripts/cube/mybox")

Class("MyGame", {
  MyGame: function() {
    App.call(this);
  },
  onCreate: function() {
    var geometry = new THREE.CubeGeometry(20, 20, 20);
    var material = new THREE.MeshBasicMaterial({ color: 0x00ff00, wireframe: true });
    var cube = new Mesh(geometry, material, {script: "mybox", dir: "cube"});
    console.log("Inside onCreate method");
    document.addEventListener('mousemove', app.onDocumentMouseMove, false);
    document.addEventListener('touchstart', app.onDocumentTouchStart, false);
    document.addEventListener('touchmove', app.onDocumentTouchMove, false);
    document.addEventListener('mousewheel', app.onDocumentMouseWheel, false);
    //example for camera movement
    app.camera.addScript("cameraScript", "camera");
  }
})._extends("App");

I know you need a brief description of what is going on inside the main.js file. Wage is using a library I built, which provides an easy-to-use implementation of inheritance: this will allow you to create, extend and implement classes in an easy and readable way. The library is named "classy", and you can find all the information you need on github. A Wage application needs an implementation of the "App" class, as you can see from the example above. The constructor is the function MyGame, and it simply calls the super class "App". The most important method you have to check is "onCreate", because it's the method that you have to use in order to start adding elements to your scene. First, you need a description of what Wage is, and what it is capable of doing for you. Wage is a "Game Engine". It automatically creates a THREE.js Scene for you, and gives you a huge set of features that allow you to easily control your game. The most important ones are complete control of meshes (both animated and not), lights, sounds, shaders, particle effects and physics. Every single mesh and light is "scriptable", since you're able to modify their behaviour by "attaching" a custom script to the object itself (if you know how Unity3D works, then you know what I'm talking about. Maybe you can have a look at this, to better understand.). The scene created by Wage is added to index.html, which is the layout loaded by your app. Of course, index.html behaves like a normal html page, so you can import everything you want, such as stylesheets or external javascript libraries. In this case, you have to import the socket.io library inside index.html, like this:

<head>
...
<script type="text/javascript" src="http://YOURIP:8000/socket.io/socket.io.js"></script>
...
</head>

I will now provide a description of what the client does, describing each feature in pseudo code.

Class("Dodgem", {
  Dodgem: function() {
    super()
  },
  onCreate: function() {
    this.socket = socketio.connect("IPADDRESS");
    // setting listeners
    this.socket.on("shoot", shootListener);
    this.socket.on("move", moveListener);
    this.socket.on(message3, listener3);
    // creating platform
    this.platform = new Platform();
  },
  shootListener: function(data) {
    // someone is shooting
    createBulletAgainstMe(data)
  },
  moveListener: function(data) {
    // our enemy is moving around
    moveEnemyAround(data)
  }
});

Game.update = function() {
  handleBulletsCollisions()
  updateBullets()
  updateHealth()
  if health == 0
    Game.Die()
}

As you can see, the onCreate method takes care of creating the socket.io instance and adding event listeners for every message coming from the game server.
The events we need to handle are: "shooting" (our opponent is shooting at us), "move" (our opponent is moving, we need to update its position), "pending" (we've been added to the pending list, we're waiting for a new player), "matchstarted" (our match is started), "goneplayer" (our opponent is no longer online), "win" (guess what this event means...). Every event has its own listener.

onMatchStarted

onMatchStarted: function(data) {
  alert(data.message)
  app.platform.createObstacles(data)
  app.opponent = new Player()
}

This function creates the arena's obstacles, and creates our opponent. A message is shown to the user, telling them that the match has started.

onShooting

onShooting: function(data) {
  createEnemyBullet()
}

This function only creates enemy bullets with information coming from the server. The created bullet is then handled by the Game.update method, to check collisions with obstacles, the enemy and our own entity.

onMove

onMove: function(data) {
  app.opponent.move(data.movement)
}

This function handles the movements of our opponent. Every time he moves, we need to update his position on our screen. Player movements are updated at the highest rate possible.

onGonePlayer

onGonePlayer: function(data) {
  alert(data.message)
}

This function only shows a message, telling the user that his opponent has just left the arena.

onPending

onPending: function(data) {
  alert(data.message)
}

This function is called when we join the game server for the first time and the server is not able to find a suitable player for our match. We're able to move around the arena, randomly shooting.

Conclusions

Ok, that's pretty much everything you need to know about how to create a simple multiplayer game. I hope this guide gave you the information you need to start creating your very first multiplayer game: the purpose of this guide wasn't to give you every line of code needed, but instead to provide a useful guideline on how to create something fun and nicely playable. I didn't cover the graphic aspect of the game, because it's completely available on the github repository, and it's easily understandable. However, covering the graphic aspect of the game is not the main purpose of this tutorial. This is a guide that will let you understand what my simple multiplayer game does. The playable version of this project can be found here.

About the author

Marco Stagni is an Italian frontend and mobile developer, with a Bachelor's Degree in Computer Engineering. He's completely in love with JavaScript, and he's trying to push his knowledge of the language in every possible direction. After a few years as a frontend and Android developer, working with both Italian startups and web agencies, he's now deepening his knowledge of game programming. His efforts are currently aimed at the completion of his biggest project: a JavaScript Game Engine, built on top of THREE.js and Physijs (the project is still in alpha version, but already downloadable via http://npmjs.org/package/wage). You can also follow him on twitter @marcoponds or on github at http://github.com/marco-ponds. Marco is a big NBA fan.

Using Cloud Applications and Containers

Xavier Bruhiere
10 Nov 2015
7 min read
We can find a certain comfort while developing an application on our local computer. We debug logs in real time. We know the exact location of everything, for we probably started it by ourselves.

Make it work, make it right, make it fast - Kent Beck

Premature optimization is the root of all evil - Donald Knuth

So hey, we hack around until interesting results pop up (ok, that's a bit exaggerated). The point is, when hitting the production server, our code will sail a much different sea. And a much more hostile one. So, how do we connect to third-party resources? How do we get a clear picture of what is really happening under the hood? In this post we will try to answer those questions with existing tools. We won't discuss continuous integration or complex orchestration. Instead, we will focus on what it takes to wrap a typical program to make it run as a public service.

A sample application

Before diving into the real problem, we need some code to throw on remote servers. Our sample application below exposes a random key/value store over http.

// app.js
// use redis for data storage
var Redis = require('ioredis');
// and express to expose a RESTful API
var express = require('express');
var app = express();

// connecting to redis server
var redis = new Redis({
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: process.env.REDIS_PORT || 6379
});

// store random float at the given path
app.post('/:key', function (req, res) {
  var key = req.params.key;
  var value = Math.random();
  console.log('storing', value, 'at', key);
  res.json({set: redis.set(key, value)});
});

// retrieve the value at the given path
app.get('/:key', function (req, res) {
  console.log('fetching value at ', req.params.key);
  redis.get(req.params.key).then(function(err, result) {
    res.json({ result: result || err });
  });
});

var server = app.listen(3000, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('Example app listening at http://%s:%s', host, port);
});

And we define the following package.json and Dockerfile.

{
  "name": "sample-app",
  "version": "0.1.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.12.4",
    "ioredis": "^1.3.6"
  },
  "devDependencies": {}
}

# Given a correct package.json, those two lines alone will properly install and run our code
FROM node:0.12-onbuild
# application's default port
EXPOSE 3000

A Dockerfile? Yeah, here is a first step toward cloud computation under control. Packing our code and its dependencies into a container will allow us to ship and launch the application with a few reproducible commands.

# download official redis image
docker pull redis

# cd to the root directory of the app and build the container
docker build -t article/sample .

# assuming we are logged in to hub.docker.com, upload the resulting image for future deployment
docker push article/sample

Enough for the preparation, time to actually run the code.

Service Discovery

The server code needs a connection to redis. We can't hardcode it because the host and port are likely to change under different deployments. Fortunately, The Twelve-Factor App provides us with an elegant solution.

The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code.

Indeed, this strategy integrates smoothly with an infrastructure composed of containers.
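As a small extension of that idea, the environment reads can be gathered in one module so that a missing variable fails loudly at startup instead of silently falling back to the wrong host. This is a hedged sketch, not part of the sample app; the config.js file name and the requireEnv helper are assumptions.

// config.js (illustrative sketch): centralize environment-driven settings
function requireEnv(name, fallback) {
  var value = process.env[name] !== undefined ? process.env[name] : fallback;
  if (value === undefined) {
    // fail fast: better to crash at startup than to connect to the wrong backend
    throw new Error("Missing required environment variable: " + name);
  }
  return value;
}

module.exports = {
  redis: {
    host: requireEnv("REDIS_HOST", "127.0.0.1"),
    port: parseInt(requireEnv("REDIS_PORT", "6379"), 10)
  },
  httpPort: parseInt(requireEnv("PORT", "3000"), 10)
};

// usage in app.js: var config = require("./config"); var redis = new Redis(config.redis);

The docker run commands that follow then only need to inject those variables when starting the containers.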
docker run --detach --name redis redis
# 7c5b7ff0b3f95e412fc7bee4677e1c5a22e9077d68ad19c48444d55d5f683f79

# fetch redis container virtual ip
export REDIS_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis)

# note: we don't specify REDIS_PORT as the redis container listens on the default port (6379)
docker run -it --rm --name sample --env REDIS_HOST=$REDIS_HOST article/sample
# > sample-app@0.1.0 start /usr/src/app
# > node app.js
# Example app listening at http://:::3000

In another terminal, we can check that everything is working as expected.

export SAMPLE_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' sample)

curl -X POST $SAMPLE_HOST:3000/test
# {"set":{"isFulfilled":false,"isRejected":false}}

curl -X GET $SAMPLE_HOST:3000/test
# {"result":"0.5807915225159377"}

We didn't specify any network information, but even so, the containers can communicate. This method is widely used, and projects like etcd or consul let us automate the whole process.

Monitoring

Performance can be a critical consideration for end-user experience or infrastructure costs. We should be able to identify bottlenecks or abnormal activities, and once again, we will take advantage of containers and open source projects. Without modifying the running server, let's launch three new components to build a generic monitoring infrastructure.

InfluxDB is a fast time series database where we will store container metrics. Since we properly split the application into two single-purpose containers, it will give us an interesting overview of what's going on.

# default parameters
export INFLUXDB_PORT=8086
export INFLUXDB_USER=root
export INFLUXDB_PASS=root
export INFLUXDB_NAME=cadvisor

# Start database backend
docker run --detach --name influxdb --publish 8083:8083 --publish $INFLUXDB_PORT:8086 --expose 8090 --expose 8099 --env PRE_CREATE_DB=$INFLUXDB_NAME tutum/influxdb

export INFLUXDB_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' influxdb)

cAdvisor analyzes resource usage and performance characteristics of running containers. The command flags will instruct it how to use the database above to store metrics.

docker run --detach --name cadvisor --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 google/cadvisor:latest --storage_driver=influxdb --storage_driver_user=$INFLUXDB_USER --storage_driver_password=$INFLUXDB_PASS --storage_driver_host=$INFLUXDB_HOST:$INFLUXDB_PORT --log_dir=/

# A live dashboard is available at $CADVISOR_HOST:8080/containers
# We can also point the browser to $INFLUXDB_HOST:8083, with the credentials above, to inspect container data.
# Query example:
# > list series
# > select time,memory_usage from stats where container_name='cadvisor' limit 1000
# More info: https://github.com/google/cadvisor/blob/master/storage/influxdb/influxdb.go

Grafana is a feature-rich metrics dashboard and graph editor for Graphite, InfluxDB and OpenTSDB. From its web interface, we will query the database and graph the metrics cadvisor collected and stored.

docker run --detach --name grafana -p 8000:80 -e INFLUXDB_HOST=$INFLUXDB_HOST -e INFLUXDB_PORT=$INFLUXDB_PORT -e INFLUXDB_NAME=$INFLUXDB_NAME -e INFLUXDB_USER=$INFLUXDB_USER -e INFLUXDB_PASS=$INFLUXDB_PASS -e INFLUXDB_IS_GRAFANADB=true tutum/grafana

# Get the generated login info
docker logs grafana

Now we can head to localhost:8000 and build a custom dashboard to monitor the server.
I won't repeat the comprehensive documentation, but here is a query example:

# note: cadvisor stores metrics in series named 'stats'
select difference(cpu_cumulative_usage) where container_name='cadvisor' group by time 60s

Grafana's autocompletion feature shows us what we can track: cpu, memory and network usage, among other metrics. We all love screenshots and dashboards, so here is a final reward for our hard work.

Conclusion

Development best practices and a good understanding of powerful tools gave us a rigorous workflow to launch applications with confidence. To sum up:

Containers bundle code and requirements for flexible deployment and execution isolation.
Environment variables store third-party service information, giving developers a predictable and robust way to read it.
InfluxDB + cAdvisor + Grafana form a complete monitoring solution, independent of the project implementation.

We fulfilled our expectations, but there's room for improvement. As mentioned, service discovery could be automated, but we also omitted how to manage logs. There are many discussions around this complex subject, and we can expect new improvements in our toolbox shortly.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.
Adding a Custom Meter to Ceilometer

John Belamaric
09 Nov 2015
9 min read
OpenStack Ceilometer is a useful tool for monitoring your instances. It includes built-in monitoring for basic instance measures like CPU utilization and interface utilization. It includes alarm evaluation and notification infrastructure that works with the Heat orchestration engine’s AutoScalingGroups to enable automatic scaling of services. This all works nicely right out-of-the-box when your measures are already built into Ceilometer. But what if you want to scale on some other criteria? For example, at Infoblox we provide a virtual instance that is serving DNS, and we make the DNS queries/second rate available via SNMP. You may want to provide similar, application-level metrics for other applications – for example, you may want to poll a web application for internal metrics via HTTP. In this blog post, I will show you how to add your own meter in Ceilometer. Let’s start with getting an understanding of the components involved in a meter, and how they interact. The most basic version of the data collection service in Ceilometer consists of agents, a collector service, a message queue, and a database. Typically there is a central agent that runs on the controller, and a compute agent that runs on each compute node. The agents gather the data, and publish it to the message queue. The collector receives this data and stores it into the database. Periodically, the agent attempts to collect each meter. The frequency is controlled by the /etc/ceilometer/pipeline.yaml file, and can be configured on a per meter basis. If a specific meter is not configured, it will use the global interval configured in pipeline.yaml, which by default is 10 minutes. To add a new meter, we will add a package that plugs into one of the agents. Let’s pick the compute agent, which runs locally on each compute node. We will build a Python package that can be installed on each compute node using pip. After installing this package, you simply restart the Ceilometer compute agent, and you will start to see the new meter appear in the database. For reference, you can take a look at https://github.com/infobloxopen/ceilometer-infoblox. This is a package that installs a number of custom, SNMP-based meters for Infoblox virtual appliances (which run the NIOS operating system, which you will see references to in class names below). The package will deliver three basic classes: a discovery class, an inspector, and a pollster. In Ceilometer terminology, “discovery” refers to the process by which the agent identifies the resources to poll, and then each “pollster” will utilize the “inspector” to generate “samples” for those resources. When the agent initiates a polling cycle, it looks through all pollster classes defined for that agent. When you define a new meter that uses a new class for polling, you specify that meter class in your [entry_points] section of the setup.cfg: ceilometer.poll.compute = nios.dns.qps = ceilometer_infoblox.pollsters.dns:QPSPollster Similarly, the discovery class should be registered in setup.cfg: ceilometer.discover = nios_instances = ceilometer_infoblox.discovery:NIOSDiscovery The pollster class really ties the pieces together. It will identify the discovery class to use by specifying one of the values defined in setup.cfg: @property def default_discovery(self): return 'nios_instances' Then, it will directly use an inspector class that was delivered with the package. You can base your discovery, inspector, and pollster classes on those already defined as part of the core Ceilometer project. 
In the case of the ceilometer-infoblox code, the discovery class is based on the core instance discovery code in Ceilometer, as we see in discovery.py:

class NIOSDiscovery(discovery.InstanceDiscovery):
    def __init__(self):
        super(NIOSDiscovery, self).__init__()

The InstanceDiscovery class will use the Nova API to query for all instances defined on this compute node. This makes things very simple, because we are interested in polling the subset of those instances that are Infoblox virtual appliances. In this case, we identify those via a metadata tag. During the discovery process we loop through all the instances, rejecting those without the tag:

def discover(self, manager, param=None):
    instances = super(NIOSDiscovery, self).discover(manager, param)
    username = cfg.CONF['infoblox'].snmp_community_or_username
    password = cfg.CONF['infoblox'].snmp_password
    port = cfg.CONF['infoblox'].snmp_port
    metadata_name = cfg.CONF['infoblox'].metadata_name

    resources = []
    for instance in instances:
        try:
            metadata_value = instance.metadata.get(metadata_name, None)
            if metadata_value is None:
                LOG.debug("Skipping instance %s; not tagged with '%s' "
                          "metadata tag." % (instance.id, metadata_name))
                continue

This code first calls the superclass to get all the Nova instances on this host, and then pulls in some necessary configuration data. The meat of the method starts with the loop through the instances; it rejects those that are not appropriately tagged. In the end, the discover method is expected to return an array of dictionary objects, containing the resources to poll, all information needed to poll them, and any metadata that should be included in the sample.

If you follow along in the code, you will see another issue that needs to be dealt with in this case. Since we are polling SNMP from the instances, we need to be able to access the IP address of the instance via UDP. But the compute agent is running in the network namespace for the host, not for the instances. This means we need a floating IP to poll the instance; the code in the _instance_ip method figures out which IP to use in the polling. This will likely be important for any application-based meter, which will face a similar problem, even if you are not using SNMP. For example, if you use an HTTP method to gather data about the internal performance of a web application, you will still need to directly access the IP of the instance. If a floating IP is out of the question, the polling agent would have to utilize the appropriate namespace; this is possible but much more complex.

Ok, let's review the process and see where we are. First, the agent looks at the installed pollster list. It finds our pollster, and calls the discovery process. This produces a list of resources. The next step is to use those resources to generate samples, using the get_samples method of the pollster. This method will loop through the resource list provided by the discover method, calling the inspector for each of those resources, resulting in one or more samples. In the SNMP case, we inherit most of the functionality from the parent class, ceilometer.hardware.plugin.HardwarePollster. The get_samples method in that class handles calling the inspector and then calls a generate_samples method to convert the data returned by the inspector into a Sample object, which in turn calls generate_one_sample. This is pretty typical through the Ceilometer code, and makes it easy to override and customize the behavior - we simply needed to override the generate_one_sample method.
The inspector class in our case was also largely provided by the existing Ceilometer code. We simply subclass that, and define the specific SNMP OIDs to poll, and make sure that the _get_inspector call of the pollster returns our custom inspector. If you are using another method like HTTP, you may have to define a truly new inspector. So, that is all there is to it: define a discovery class, an inspector class, and a pollster class. Register those in a setup.cfg for your package, and it can be installed and start polling new data from your instances. That data will show up via the normal Ceilometer API and CLI calls – for example, here is a call returning the queries/sec meter: dev@ubuntu:~/devstack$ ceilometer sample-list -m nios.dns.qps -l 10 +--------------------------------------+--------------+-------+--------+-----------+----------------------------+ | Resource ID | Name | Type | Volume | Unit | Timestamp | +--------------------------------------+--------------+-------+--------+-----------+----------------------------+ | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T23:01:53.779767 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 303.0 | queries/s | 2015-10-23T23:01:53.779680 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T23:01:00.138366 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 366.0 | queries/s | 2015-10-23T23:01:00.138267 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T23:00:58.571506 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 366.0 | queries/s | 2015-10-23T23:00:58.571431 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:58:25.940403 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:58:25.940289 | | e5611555-df6b-4c34-a16e-5bca04ada36c | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:57:55.988727 | | ccb589ca-72a9-4ebe-85d3-914832ea0e81 | nios.dns.qps | gauge | 0.0 | queries/s | 2015-10-23T22:57:55.988633 | +--------------------------------------+--------------+-------+--------+-----------+----------------------------+    Click here to further your OpenStack skillset by setting up VPNaaS with our new article.   About the author John Belamaric is a software and systems architect with nearly 20 years of software design and development experience, his current focus is on cloud network automation. He is a key architect of the Infoblox Cloud products, concentrating on OpenStack integration and development. He brings to this his experience as the lead architect for the Infoblox Network Automation product line, along with a wealth of networking, network management, software, and product design knowledge. He is a contributor to both the OpenStack Neutron and Designate projects. He lives in Bethesda, Maryland with his wife Robin and two children, Owen and Audrey.

Overview of TDD

Packt
06 Nov 2015
11 min read
In this article by Ravi Gupta, Harmeet Singh, and Hetal Prajapati, authors of the book Test-Driven JavaScript Development, we explain how testing is one of the most important phases in the development of any project. In the traditional software development model, testing is usually executed after the code for a piece of functionality is written. Test-driven development (TDD) makes a big difference by writing tests before the actual code. You are going to learn TDD for JavaScript and see how this approach can be utilized in your projects. In this article, you are going to learn the following:

Complexity of web pages
Understanding TDD
Benefits of TDD and common myths

(For more resources related to this topic, see here.)

Complexity of web pages

When Tim Berners-Lee wrote the first ever web browser around 1990, it was supposed to run HTML, with neither CSS nor JavaScript. Who knew that the WWW would become the most powerful communication medium? Since then, a number of technologies and tools have emerged that help us write code and run it for our needs. We do a lot these days with the help of the Internet. We shop, read, learn, share, and collaborate... well, a few words are not going to suffice to explain what we do on the Internet, are they? Over time, our needs have grown to a very complex level, and so has the complexity of the code written for websites. It's not plain HTML anymore, not some CSS style, not some basic JavaScript tweaks. That time has passed. Pick any site you visit daily, view the source by opening the developer tools of the browser, and look at the source code of the site. What do you see? Too much code? Too many styles? Too many scripts? The JavaScript and CSS code is too large to keep inline, so we need to keep it in different files, sometimes even different folders, to keep it organized. Now, what happens before you publish all the code live? You test it. You test each line and see if it works fine. Well, that's a programmer's job. Zero defects, that's what every organization tries to achieve. When that is in focus, testing comes into the picture, and more importantly, a development style which is essentially test driven. As the title of this article says, we're going to keep our focus on test-driven JavaScript development.

Understanding Test-driven development

TDD, short for Test-driven development, is a process for software development. Kent Beck, who is known for the development of TDD, refers to this as "rediscovery." Kent's answer to a question on Quora can be found at https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development.

"The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD."

If you go and try to find references to TDD, you will even find references from 1968. It's not a new technique, though it did not get much attention for a long time. Recently, interest in TDD has been growing, and as a result, there are a number of tools on the Web. For example, Jasmine, Mocha, DalekJS, JsUnit, QUnit, and Karma are among these popular tools and frameworks.
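To make the test-first idea concrete before going further, here is a minimal sketch of one red-green cycle, written in a Mocha-style describe/it syntax with Node's built-in assert module. The add function and file names are invented for illustration; they do not come from the book.

// test/add.spec.js : written BEFORE the production code exists
var assert = require("assert");
var add = require("../src/add");

describe("add", function () {
  it("returns the sum of two numbers", function () {
    // this stays red until src/add.js exists and returns the right value
    assert.strictEqual(add(2, 3), 5);
  });
});

// src/add.js : the smallest amount of code that makes the test pass
module.exports = function add(a, b) {
  return a + b;
};

Running the test first, watching it fail, and then writing just enough code to turn it green is the rhythm that the rest of this article builds on.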
More specifically, test-driven JavaScript development is getting popular these days. Test-driven development is a software development process which requires a developer to write tests before the production code. A developer writes a test, expects a behavior, and writes code to make the test pass. It is needless to mention that the test will always fail at the start.

Need for testing

To err is human. As developers, it's not easy to find defects in our own code, and often we think that our code is perfect. But there is always a chance that a defect is present in the code. Every organization or individual wants to deliver the best software they can. This is one major reason that every piece of software, every piece of code, is well tested before its release. Testing helps to detect and correct defects. There are a number of reasons why testing is needed. They are as follows:

To check if the software is functioning as per the requirements
There will not be just one device or one platform to run your software
The end user will perform actions that you, as a programmer, never expected

A study conducted by the National Institute of Standards and Technology (NIST) in 2002 reported that software bugs cost the U.S. economy around $60 billion annually. With better testing, more than one-third of that cost could be avoided. The earlier a defect is found, the cheaper it is to fix. A defect found post release would cost 10-100 times more to fix than if it had already been detected and fixed. The report of the study performed by NIST can be found at http://www.nist.gov/director/planning/upload/report02-3.pdf. If we draw a curve for the cost, it grows exponentially over time. The following figure clearly shows that the cost increases as the project matures with time. Sometimes, it's not possible to fix a defect without making changes in the architecture. In those cases, the cost is sometimes so high that developing the software from scratch seems like a better option.

Benefits of TDD and common myths

Every methodology has its own benefits and myths among people. The following sections will analyze the key benefits and the most common myths of TDD.

Benefits

TDD has its own advantages over regular development approaches. There are a number of benefits which help make the decision to use TDD over the traditional approach.

Automated testing: If you have ever looked at a website's code, you know that it's not easy to maintain and test all the scripts manually and keep them working. A tester may miss a few checks, but automated tests won't. Manual testing is error prone and slow.

Lower cost of overall development: With TDD, the amount of debugging is significantly decreased. You develop some code and run the tests; if they fail, re-doing the development is significantly faster than debugging and fixing the code later. TDD aims at detecting defects and correcting them at an early stage, which is much cheaper than detecting and correcting them at a later stage or post release. Also, debugging becomes much less frequent and a significant amount of time is saved. With the help of tools/test runners like Karma, JSTestDriver, and so on, manually running every JavaScript test in a browser is not needed, which saves significant time in validation and verification while the development goes on.

Increased productivity: Apart from time and financial benefits, TDD helps to increase productivity since the developer becomes more focused and tends to write quality code that passes the tests and fulfills the requirements.
Clean, maintainable, and flexible code: Since tests are written first, production code is often very neat and simple. When a new piece of code is added, all the tests can be run at once to see if anything failed with the change. Since we try to keep our tests atomic, and our methods also address a single goal, the code automatically becomes clean. At the end of the application development, there will be thousands of test cases which guarantee that every piece of logic can be tested. The same test cases also act as documentation for users who are new to the development of the system, since these tests act as an example of how the code works.

Improved quality and reduced bugs: Complex code invites bugs. When developers change anything in neat and simple code, they tend to introduce few or no bugs at all. They tend to focus on purpose and write code to fulfill the requirement.

Keeps technical debt to a minimum: This is one of the major benefits of TDD. Not writing unit tests and documentation is a big contributor to the technical debt of a software project. Since TDD encourages you to write tests first, and if they are well written they act as documentation, you keep this kind of technical debt to a minimum. As Wikipedia says, technical debt can be defined as tasks to be performed before a unit can be called complete. If the debt is not repaid, interest also adds up and makes it harder to make changes at a later stage. More about technical debt can be found at https://en.wikipedia.org/wiki/Technical_debt.

Myths

Along with the benefits, TDD has some myths as well. Let's check a few of them:

Complete code coverage: TDD enforces writing tests first, and developers write the minimum amount of code to pass the tests, so almost 100% code coverage is achieved. But that does not guarantee that nothing is missed and the code is bug free. Code coverage tools do not cover all the paths. There can be infinite possibilities in loops. Of course it's not possible or feasible to check all the paths, but a developer is supposed to take care of the major and critical paths. A developer is supposed to take care of business logic, flow, and process code most of the time. There is no need to test integration parts, setter/getter methods for properties, configurations, UI, and so on. Mocking and stubbing are to be used for integrations.

No need of debugging the code: Though test-first development makes one think that debugging is not needed, this is not always true. You need to know the state of the system when a test fails. That will help you correct the code and write it further.

No need of QA: TDD cannot always cover everything. QA plays a very important role in testing. UI defects and integration defects are more likely to be caught by QA. Even though developers are excellent, there are chances of errors. QA will try every kind of input and unexpected behavior that even a programmer did not cover with test cases. They will always try to crash the system with random inputs and discover defects.

I can code faster without tests and can also validate for zero defects: While this may hold true for very small software and websites, where the code is small and writing test cases may increase the overall time of development and delivery of the product, for bigger products it helps a lot to identify defects at a very early stage and gives you a chance to correct them at a very low cost. As seen in the previous figures showing the cost of fixing defects across phases and testing types, the cost of correcting a defect increases with time.
Truly, whether TDD is required for a project or not depends on the context.

TDD ensures a good design and architecture: TDD encourages developers to write quality code, but it is not a replacement for good design practices. Will a team of developers be enough to ensure a stable and scalable architecture? Design should still be done by following the standard practices.

You need to write all tests first: Another myth says that you need to write all the tests first and then the actual production code. Actually, an iterative approach is generally used. Write some tests first, then some code, run the tests, fix the code, run the tests, write more tests, and so on. With TDD, you always test parts of the software and keep developing.

There are many myths, and covering all of them is not possible. The point is, TDD offers developers a better opportunity to deliver quality code. TDD helps organizations by delivering close to zero-defect products.

Summary

In this article, you learned what TDD is. You learned about the benefits and myths of TDD.

Resources for Article:

Further resources on this subject:

Understanding outside-in [article]
Jenkins Continuous Integration [article]
Understanding TDD [article]

Task Automation

Packt
05 Nov 2015
33 min read
In this article by Kerri Shotts, author of Mastering PhoneGap Mobile Application Development, you will learn the following topics:

Logology, our demonstration app
Why Gulp for task automation
Setting up your app's directory structure
Installing Gulp
Creating your first Gulp configuration file
Performing substitutions
Executing Cordova tasks
Managing version numbers
Supporting ES2015
Linting your code
Minifying/uglifying your code

(For more resources related to this topic, see here.)

Before we begin

Before you continue with this article, ensure that you have the following tools installed. The version that was used in this article is listed as well, for your reference:

Git (http://git-scm.com, v2.8.3)
Node.js (http://nodejs.org, v0.12.2)
npm, short for Node Package Manager (typically installed with Node.js, v2.7.4)
Cordova 5.x (http://cordova.apache.org, v5.2.0) or PhoneGap 5.x (http://www.phonegap.com, v5.2.2)

You'll need to execute the following in each directory in order to build the projects:

# On Linux / Mac OS X
$ npm install && gulp init

# On Windows
> npm install
> gulp init

If you're not intending to build the sample application in the code bundle, be sure to create a new directory that can serve as a container for all the work you'll be doing in this article. Just remember, each time you create a new directory and copy the prior version to it, you'll need to execute npm install and gulp init to set things up.

About Logology

I'm calling it Logology—and if you're familiar with any Greek words, you might have already guessed what the app will be: a dictionary. Now, I understand that this is not necessarily the coolest app, but it is sufficient for our purposes. It will help you learn how advanced mobile development is done. By the time we're done, the app will have the following features:

Search: The user will be able to search for a term
Browse: The user will be able to browse the dictionary
Responsive design: The app will size itself appropriately to any display size
Accessibility: The app will be usable even if the user has visual difficulties
Persistent storage: The app will persist settings and other user-generated information
File downloads: The app will be able to download new content

Although the app sounds relatively simple, it's complex enough to benefit from task automation. Since it is useful to have task automation in place from the very beginning, we'll install Gulp and verify that it is working with some simple files first, before we really get to the meat of implementing Logology. As such, the app we build in this article is very simple: it exists to verify that our tasks are working correctly. Once we have verified our workflow, we can go on to the more complicated project at hand. You may think that working through this is very time-consuming, but it pays off in the long run. Once you have a workflow that you like, you can take that workflow and apply it to the other apps you may build in the future. This means that future apps can be started almost immediately (just copy the configuration from a previous app). Even if you don't write other apps, the time you save from having a task runner outweighs the initial setup time.

Why Gulp for task automation?

Gulp (http://gulpjs.com) is a task automation utility using the Node.js platform. Unlike some other task runners, you configure Gulp by writing JavaScript code. The configuration for Gulp is just like any other JavaScript file, which means that if you know JavaScript, you can start defining tasks quickly.
Gulp also uses the concept of "streams" (again, from Node.js). This makes Gulp very efficient. Plugins can be inserted within these streams to perform many different transformations, including beautification or uglification, transpilation (for example, ECMAScript 2015 to ECMAScript 5), concatenation, packaging, and much more. If you've performed any sort of piping on the command line, Gulp should feel familiar, because it operates on a similar concept. The output from one process is piped to the next process, which performs any number of transformations, and so on, until the final output is written to another location.

Gulp also tries to run as many dependent tasks in parallel as possible. Ideally, this makes running Gulp tasks faster, although it really depends on how your tasks are structured. Other task runners such as Grunt will perform their task steps in sequence, which may result in slower output, although tracing the steps from input to output may be easier to follow when the steps are performed sequentially. That's not to say that Gulp is the best task runner—there are many that are quite good, and you may find that you prefer one of them over Gulp. The skills you learn in this article can easily be transferred to other task running and build systems. Here are some other task runners that are useful:

Grunt (http://www.gruntjs.com): The configuration is specified through settings, not code. Tasks are performed sequentially.
Cake (http://coffeescript.org/documentation/docs/cake.html): This uses CoffeeScript, and the configuration is specified via code, as with Gulp. If you like using CoffeeScript, you might prefer this over Gulp.
Broccoli (https://github.com/broccolijs/broccoli): This also specifies the configuration through code.

Installing Gulp

Installing Gulp is easy, but it is actually a two-step process. The first step is to install Gulp globally. This installs the command-line utility, but Gulp actually won't work without also being installed locally within our project.

If you aren't familiar with Node.js, packages can be installed locally and/or globally. A locally installed package is local to the project's root directory, while a globally installed package is specific to the developer's machine. Project dependencies are tracked in package.json, which makes it easy to replicate your development setup on another machine.

Assuming you have Node.js installed and package.json created in your project directory, the installation of Gulp will go very easily. Be sure to be positioned in your project's root directory and then execute the following:

$ npm install -g gulp
$ npm install --save-dev gulp

If you receive an error while running these commands on OS X, you may need to run them with sudo. For example: sudo npm install -g gulp. You can usually ignore any WARN messages.

It's a good idea to be positioned in your project's root directory any time you execute an npm or gulp command. On Linux and OS X, these commands generally will locate the project's root directory automatically, but this isn't guaranteed on all platforms, so it's better to be safe than sorry.

That's it! Gulp itself is very easy to install, but most workflows will require additional plugins that work with Gulp. In addition, we'll also install the Cordova dependencies for this project. First, let's install the Cordova dependencies:

$ npm install --save-dev cordova-lib cordova-ios cordova-android

cordova-lib allows us to programmatically interact with Cordova.
We can create projects, build them, and emulate them—everything we can do with the Cordova command line we can do with cordova-lib. cordova-ios and cordova-android refer to the iOS and Android platforms that cordova platform add ios android would add. We've made them dependencies for our project, so we can easily control the version we build with. While starting a new project, it's wise to start with the most recent version of Cordova and the requisite platforms. Once you begin, it's usually a good practice to stick with a specific platform version unless there are serious bugs or the like. Next, let's install the Gulp plugins we'll need: $ npm install --save-dev babel-eslint cordova-android cordova-ios cordova-lib cordova-tasks gulp gulp-babel gulp-bump gulp-concat gulp-eslint gulp-jscs gulp-notify gulp-rename gulp-replace-task gulp-sourcemaps gulp-uglify gulp-util               merge-stream rimraf These will take a few moments to install; but when you're done, take a look in package.json. Notice that all the dependencies we added were also added to the devDependencies. This makes it easy to install all the project's dependencies at a later date (say, on a new machine) simply by executing npm install. Before we go on, let's quickly go over what each of the above utility does. We'll go over them in more detail as we progress through the remainder of this article. gulp-babel: Converts ES2015 JavaScript into ES5. If you aren't familiar with ES2015, it has several new features and an improved syntax that makes writing mobile apps that much easier. Unfortunately, because most browsers don't yet natively support the ES2015 features and syntax, it must be transpiled to ES5 syntax. Of course, if you prefer other languages that can be compiled to ES5 JavaScript, you could use those as well (these would include CoffeeScript and similar). gulp-bump: This small utility manages version numbers in package.json. gulp-concat: Concatenates streams together. We can use this to bundle files together. gulp-jscs: Performs the JavaScript code style checks against your code. Supports ES2015. gulp-eslint: Lints your JavaScript code. Supports ES2015. babel-eslint: Provides ES2015 support to gulp-eslint. gulp-notify: This is an optional plugin, but it is handy especially when some of your tasks take a few seconds to run. This plugin will send a notification to your computer's notification panel when something of import occurs. If the plugin can't send it to your notification panel, it logs to the console. gulp-rename: Renames streams. gulp-replace-task: Performs search and replace within streams. gulp-sourcemaps: When transpiling ES2015 to ES5, it can be helpful to have a map between the original source and the transpiled source. This plugin creates them as a part of the workflow. gulp-uglify: Uglifies/minifies code. While useful for code obfuscation, it also reduces the size of your code. gulp-util: Additional utilities for Gulp, such as logging. merge-stream: Merges multiple tasks. rimraf: Easy file deletion. Akin to rm on the command line. Creating your first Gulp configuration file Gulp tasks are defined by the contents of the project's gulpfile.js file. This is a JavaScript program, so the same skills you have with JavaScript will apply here. Furthermore, it's executed by Node.js, so if you have any Node.js knowledge, you can use it to your advantage. This file should be placed in the root directory of your project, and must be named gulpfile.js. 
The first few lines of your Gulp configuration file will require the Gulp plugins that you'll need in order to complete your tasks. The following lines then specify how to perform various tasks. For example, a very simple configuration might look like this:

var gulp = require("gulp");

gulp.task("copy-files", function () {
    return gulp.src(["./src/**/*"])
        .pipe(gulp.dest("./build"));
});

This configuration only performs one task: it copies all the files contained within src/ to build/. In many ways, this is the simplest form of a build workflow, but it's a bit too simple for our purposes. Note the pattern we use to match all the files. If you need to see the documentation on what patterns are supported, see https://www.npmjs.com/package/glob.

To execute the task, run gulp copy-files. Gulp would then execute the task and copy all the files from src/ to build/.

What makes Gulp so powerful is the concept of task composition. Tasks can depend on any number of other tasks, and those tasks can depend on yet more tasks. This makes it easy to create complex workflows out of simpler pieces. Furthermore, each task is asynchronous, so it is possible for many tasks with no shared dependencies to operate in parallel.

Each task, as you can see in the prior code, consists of selecting a series of source files (src()), optionally performing some additional processing on each file (via pipe()), and then writing those files to a destination path (dest()). If no additional processing is specified (as in the prior example), Gulp will simply copy the files that match the wildcard pattern. The beauty of streams, however, is that one can execute any number of transformations before the final data is saved to storage, and so workflows can become very complex.

Now that you've seen a simple task, let's get into some more complicated tasks in the next section.
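Before we do, here is a short, hedged sketch of the task-composition idea just described; the task names and the clean step are made up purely for illustration, and our real clean, copy, and build tasks appear later in this article:

var rimraf = require("rimraf");

// "clean-example" must finish before "copy-example" runs, and "copy-example"
// must finish before "build-example" runs. Gulp resolves this chain for us.
gulp.task("clean-example", function (cb) {
    rimraf("./build", cb);   // delete build/ and signal completion via the callback
});

gulp.task("copy-example", ["clean-example"], function () {
    return gulp.src(["./src/**/*"])
        .pipe(gulp.dest("./build"));
});

gulp.task("build-example", ["copy-example"], function () {
    // further processing would go here; it only runs after copy-example
    // (and therefore clean-example) have completed
});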
How to execute Cordova tasks

It's tempting to use the Cordova command-line interface directly, but there's a problem with this: there's no great way to ensure that what you write will work across multiple platforms. If you are certain you'll only work with a specific platform, you can go ahead and execute shell commands instead; but what we're going to do is a bit more flexible. The code in this section is inspired by https://github.com/kamrik/CordovaGulpTemplate.

The Cordova CLI is really just a thin wrapper around the cordova-lib project. Everything the Cordova CLI can do, cordova-lib can do as well. Because the Cordova project will be a build artifact, we need to be able to create a Cordova project in addition to building the project. We'll also need to emulate and run the app.

To do this, we first require cordova-lib at the top of our Gulp configuration file (following the other require statements):

var cordovaLib = require("cordova-lib");
var cordova = cordovaLib.cordova.raw;
var rimraf = require("rimraf");

Next, let's create the code to create a new Cordova project in the build directory:

var cordovaTasks = {
    // CLI: cordova create ./build com.example.app app_name
    //              --copy-from template_path
    create: function create() {
        return cordova.create(BUILD_DIR, pkg.cordova.id, pkg.cordova.name,
            { lib: { www: { url: path.join(__dirname, pkg.cordova.template),
                            link: false } } });
    }
}

Although it's a bit more complicated than cordova create is on the command line, you should be able to see the parallels. The lib object that is passed simply provides a template for the project (equivalent to --copy-from on the command line). In our case, package.json specifies that this should come from the blank/ directory. If we don't do this, all our apps would be created with the sample Hello World app that Cordova installs by default.

Our blank project template resides in ../blank, relative to the project root. Yours may reside elsewhere (since you're apt to reuse the same template), so package.json can use whatever path you need. Or, you might want the template to be within your project's root; in which case, package.json should use a path inside your project's root directory.

We won't create a task to use this just yet; we need to define several other methods to build and emulate Cordova apps:

var gutil = require("gulp-util");

var PLATFORM = gutil.env.platform ? gutil.env.platform : "ios";        // or android
var BUILD_MODE = gutil.env.mode ? gutil.env.mode : "debug";            // or release
var BUILD_PLATFORMS = (gutil.env.for ? gutil.env.for : "ios,android").split(",");
var TARGET_DEVICE = gutil.env.target ? "--target=" + gutil.env.target : "";

var cordovaTasks = {
    create: function create() { /* as above */ },
    cdProject: function cdProject() {
        process.chdir(path.join(BUILD_DIR, "www"));
    },
    cdUp: function cdUp() {
        process.chdir("..");
    },
    copyConfig: function copyConfig() {
        return gulp.src([path.join(SOURCE_DIR, "config.xml")])
            .pipe(performSubstitutions())
            .pipe(gulp.dest(BUILD_DIR));
    },
    // cordova plugin add ...
    addPlugins: function addPlugins() {
        cordovaTasks.cdProject();
        return cordova.plugins("add", pkg.cordova.plugins)
            .then(cordovaTasks.cdUp);
    },
    // cordova platform add ...
    addPlatforms: function addPlatforms() {
        cordovaTasks.cdProject();
        function transformPlatform(platform) {
            return path.join(__dirname, "node_modules", "cordova-" + platform);
        }
        return cordova.platforms("add", pkg.cordova.platforms.map(transformPlatform))
            .then(cordovaTasks.cdUp);
    },
    // cordova build <platforms> --release|debug
    //                           --target=...|--device
    build: function build() {
        var target = TARGET_DEVICE;
        cordovaTasks.cdProject();
        if (!target || target === "" || target === "--target=device") {
            target = "--device";
        }
        return cordova.build({ platforms: BUILD_PLATFORMS,
                               options: ["--" + BUILD_MODE, target] })
            .then(cordovaTasks.cdUp);
    },
    // cordova emulate ios|android --release|debug
    emulate: function emulate() {
        cordovaTasks.cdProject();
        return cordova.emulate({ platforms: [PLATFORM],
                                 options: ["--" + BUILD_MODE, TARGET_DEVICE] })
            .then(cordovaTasks.cdUp);
    },
    // cordova run ios|android --release|debug
    run: function run() {
        cordovaTasks.cdProject();
        return cordova.run({ platforms: [PLATFORM],
                             options: ["--" + BUILD_MODE, "--device", TARGET_DEVICE] })
            .then(cordovaTasks.cdUp);
    },
    init: function() {
        return this.create()
            .then(cordovaTasks.copyConfig)
            .then(cordovaTasks.addPlugins)
            .then(cordovaTasks.addPlatforms);
    }
};

Place cordovaTasks prior to projectTasks in your Gulp configuration. If you aren't familiar with promises, you might want to learn more about them; http://www.html5rocks.com/en/tutorials/es6/promises/ is a fantastic resource.

Before we explain the preceding code, there's another change you need to make, and that's to projectTasks.copyConfig, because we moved copyConfig to cordovaTasks:

var projectTasks = {
    ...,
    copyConfig: function() {
        return cordovaTasks.copyConfig();
    },
    ...
}

Most of the earlier mentioned tasks should be fairly self-explanatory; they correspond directly with their Cordova CLI counterparts. A few, however, need a little more explanation.

cdProject / cdUp: These change the current working directory. All the cordova-lib commands after create need to be executed from within the Cordova project directory, not our project's root directory. You should notice them in several of the tasks.
addPlatforms: The platforms are added directly from our project's dependencies, rather than from the Cordova CLI. This allows us to control the platform versions we are using. As such, addPlatforms has to do a little more work to specify the actual directory name of each platform.
build: This executes the cordova build command. By default, the CLI will build every platform, but it's possible that we might want to control the platforms that are built, hence the use of BUILD_PLATFORMS. On iOS, the build for an emulator is different from the build for a physical device, so we also need a way to specify that, which is what TARGET_DEVICE is for. This will look for an emulator with the name specified by TARGET_DEVICE, but we might want to build for a physical device; in that case, we will look for device (or no target specified at all) and switch over to the --device flag, which forces Cordova to build for a physical device.
init: This does the hard work of creating the Cordova project, copying the configuration file (and performing substitutions), adding plugins to the Cordova project, and then adding the platforms.
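The copyConfig method shown earlier pipes config.xml through performSubstitutions(), a helper defined earlier in the article. In case you are wiring this up yourself, here is a hedged sketch of what such a helper could look like using the gulp-replace-task plugin we installed. The {{{NAME}}} and {{{VERSION}}} tokens and the pkg variable are assumptions based on how they are used later in this article; the real helper may differ, so adjust the patterns to match your actual template placeholders:

var replace = require("gulp-replace-task");
var pkg = require("./package.json");   // assumed to be loaded near the top of gulpfile.js

function performSubstitutions() {
    // Returns a stream transform that swaps template tokens for real values.
    return replace({
        patterns: [
            { match: /{{{NAME}}}/g, replacement: pkg.cordova.name },   // hypothetical token
            { match: /{{{VERSION}}}/g, replacement: pkg.version }      // hypothetical token
        ]
    });
}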
Now is also a good time to mention that we can specify various settings with switches on the Gulp command line. In the earlier snippet, we're supporting the use of --platform to specify the platform to emulate or run, --mode to specify the build mode (debug or release), --for to determine which platforms Cordova will build for, and --target to specify the target device. The code specifies sane defaults if these switches aren't given, but they also allow the developer extra control over the workflow, which is very useful. For example, we'll be able to use commands like these:

$ gulp build --for ios,android --target device
$ gulp emulate --platform ios --target iPhone-6s
$ gulp run --platform ios --mode release

Next, let's write the code to actually perform various Cordova tasks; it's pretty simple:

var projectTasks = {
    ...,
    init: function init() {
        return cordovaTasks.init();
    },
    emulateCordova: function emulateCordova() {
        return cordovaTasks.emulate();
    },
    runCordova: function runCordova() {
        return cordovaTasks.run();
    },
    buildCordova: function buildCordova() {
        return cordovaTasks.build();
    },
    clean: function clean(cb) {
        rimraf(BUILD_DIR, cb);
    },
    ...
}
...
gulp.task("clean", projectTasks.clean);
gulp.task("init", ["clean"], projectTasks.init);
gulp.task("build", ["copy"], projectTasks.buildCordova);
gulp.task("emulate", ["copy"], projectTasks.emulateCordova);
gulp.task("run", ["copy"], projectTasks.runCordova);

There's a catch with the cordovaTasks.create method: it will fail if anything is already in the build/ directory. As you can guess, this could easily happen, so we also created a projectTasks.clean method. This clean method uses rimraf to delete a specified directory. This is equivalent to using rm -rf build. We then build a Gulp task named init that depends on clean. So, whenever we execute gulp init, the old Cordova project will be removed and a new one will be created for us.

Finally, note that the build (and other) tasks all depend on copy. This means that all our files in src/ will be copied (and transformed, if necessary) to build/ prior to executing the desired Cordova command. As you can see, our tasks are already becoming quite complex, while each individual task remains easy to grasp. This means we can now use the following tasks in Gulp:

$ gulp init                   # create the cordova project; cleaning first if needed
$ gulp clean                  # remove the cordova project
$ gulp build                  # copy src to build; apply transformations; cordova build
$ gulp build --mode release   # do the above, but build in release mode
$ gulp build --for ios        # only build for iOS
$ gulp build --target=device  # build device versions instead of emulator versions
$ gulp emulate --platform ios # copy src to build; apply transformations; cordova emulate ios
$ gulp emulate --platform ios --target iPhone-6
                              # same as above, but open the iPhone 6 emulator
$ gulp run --platform ios     # copy src to build; apply transformations; cordova run ios --device

Now, you're welcome to use the earlier code as it is, or you can use an NPM package that takes care of the cordovaTasks portion for you.
This has the benefit of drastically shortening your Gulp configuration. We've already included this package in our package.json file; it's named cordova-tasks, was created by the author, and shares a lot of similarities with the earlier code. To use it, the following needs to go at the top of our configuration file, below all the other require statements:

var cordova = require("cordova-tasks");
var cordovaTasks = new cordova.CordovaTasks({
    pkg: pkg, basePath: __dirname, buildDir: "build",
    sourceDir: "src", gulp: gulp, replace: replace });

Then, you can remove the entire cordovaTasks object from your configuration file as well. The projectTasks section needs to change only slightly:

var projectTasks = {
    init: function init() {
        return cordovaTasks.init();
    },
    emulateCordova: function emulateCordova() {
        return cordovaTasks.emulate({ buildMode: BUILD_MODE, platform: PLATFORM,
                                      options: [TARGET_DEVICE] });
    },
    runCordova: function runCordova() {
        return cordovaTasks.run({ buildMode: BUILD_MODE, platform: PLATFORM,
                                  options: [TARGET_DEVICE] });
    },
    buildCordova: function buildCordova() {
        var target = TARGET_DEVICE;
        if (!target || target === "" || target === "--target=device") {
            target = "--device";
        }
        return cordovaTasks.build({ buildMode: BUILD_MODE, platforms: BUILD_PLATFORMS,
                                    options: [target] });
    },
    ...
}

There's one last thing to do: in copyCode, change .pipe(performSubstitutions()) to .pipe(cordovaTasks.performSubstitutions()). This is because the cordova-tasks package automatically takes care of all of the substitutions that we need, including version numbers, plugins, platforms, and more.

This was one of the more complex sections, so if you've come this far, take a coffee break. Next, we'll worry about managing version numbers.

Supporting ES2015

We've already mentioned ES2015 (or ECMAScript 2015) in this article. Now is the moment we actually get to start using it. First, though, we need to modify our copy-code task to transpile from ES2015 to ES5, or our code won't run on any browser that doesn't support the new syntax (which is still quite a few mobile platforms). There are several transpilers available; I prefer Babel (https://babeljs.io). There is a Gulp plugin that makes this transpilation step extremely simple. To do this, we need to add the following to the top of our Gulp configuration:

var babel = require("gulp-babel");
var sourcemaps = require("gulp-sourcemaps");

Source maps are an important piece of the debugging puzzle. Because our code will have been transformed by the time it is running on our device, debugging is a little more difficult since line numbers and the like don't match. Source maps provide the browser with a map between your ES2015 code and the final result so that debugging is a lot easier.

Next, let's modify our projectTasks.copyCode method:

var projectTasks = {
    ...,
    copyCode: function copyCode() {
        var isRelease = (BUILD_MODE === "release");
        return gulp.src(CODE_FILES)
            .pipe(cordovaTasks.performSubstitutions())
            .pipe(isRelease ? gutil.noop() : sourcemaps.init())
            .pipe(babel())
            .pipe(concat("app.js"))
            .pipe(isRelease ? gutil.noop() : sourcemaps.write())
            .pipe(gulp.dest(CODE_DEST));
    },
    ...
}

Our task is now a little more complex, but that's only because we want to control when the source maps are generated. When babel() is called, it will convert ES2015 code to ES5 and also generate a sourcemap of those changes.
This makes debugging easier, but it also increases the file size by quite a large amount. As such, when we're building in release mode, we don't want to include the sourcemaps, so we call gutil.noop instead, which will just do nothing.

The sourcemap functionality requires us to call sourcemaps.init prior to any Gulp plugin that might generate sourcemaps. After the plugin that creates the sourcemaps executes, we also have to call sourcemaps.write to save the sourcemap back to the stream. We could also write the sourcemap to a separate .map file by calling sourcemaps.write("."), but you do need to be careful about cleaning that file up while creating a release build.

babel is what is doing the actual hard work of converting ES2015 code to ES5. But it does need a little help in the form of a small support library. We'll add this library to src/www/js/lib/ by copying it from the babel-core module (which gulp-babel pulls in):

$ cp node_modules/babel-core/browser-polyfill.js src/www/js/lib

If you don't have the src/www/js/lib/ directory yet, you'll need to create it before executing the previous command.

Next, we need to edit src/www/index.html to include this script. While we're at it, let's make a few other changes:

<!DOCTYPE html>
<html>
<head>
<script src="cordova.js" type="text/javascript"></script>
<script src="./js/lib/browser-polyfill.js" type="text/javascript"></script>
<script src="./js/app/app.js" type="text/javascript"></script>
</head>
<body>
<p>This is static content..., but below is dynamic content.</p>
<div id="demo"></div>
</body>
</html>

Finally, let's write some ES2015 code in src/www/js/app/index.js:

function h(elType, ...children) {
    let el = document.createElement(elType);
    for (let child of children) {
        if (typeof child !== "object") {
            el.textContent = child;
        } else if (child instanceof Array) {
            child.forEach(el.appendChild.bind(el));
        } else {
            el.appendChild(child);
        }
    }
    return el;
}

function startApp() {
    document.querySelector("#demo").appendChild(
        h("div",
            h("ul",
                h("li", "Some information about this app..."),
                h("li", "App name: {{{NAME}}}"),
                h("li", "App version: {{{VERSION}}}")
            )
        )
    );
}

document.addEventListener("deviceready", startApp, false);

This article isn't about how to write ES2015 code, so I won't bore you with all the details. Suffice it to say, the previous code generates a few list items when the app is run, using a very simple form of DOM templating. But it does so using the ... (rest parameter) syntax for variable parameters, the for...of loop, and let instead of var. Although it looks a lot like JavaScript, it's definitely different enough that it will take some time to learn how best to use the new features.
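If you are curious what the transpilation step actually does, here is a rough, hand-written illustration of how the ES2015 constructs used above could be expressed in ES5. Babel's real output differs (it injects helper functions and additional checks), so treat this purely as a mental model:

// ES2015 input: rest parameters, let, and for...of
function greet(name, ...rest) {
    let message = "Hello, " + name;
    for (let extra of rest) {
        message += " " + extra;
    }
    return message;
}

// Roughly equivalent ES5 output
function greet(name) {
    var rest = Array.prototype.slice.call(arguments, 1);
    var message = "Hello, " + name;
    for (var i = 0; i < rest.length; i++) {
        var extra = rest[i];
        message += " " + extra;
    }
    return message;
}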
Linting your code

You could execute a gulp emulate --platform ios (or android) right now, and the app should work. But how do we know our code will work when built? Better yet, how can we prevent a build if the code isn't valid? We do this by adding lint tasks to our Gulp configuration file. Linting is a lot like compiling: the linter checks your code for obvious errors and aborts if it finds any.

There are various linters available (some better than others), but not all of them support ES2015 syntax yet. The best one that does is ESLint (http://www.eslint.org). Thankfully, there's a very simple Gulp plugin that uses it. We could stop at linting and be done, but code style is also important and can catch potentially serious issues as well. As such, we're also going to be using the JavaScript Code Style checker, or JSCS (https://github.com/jscs-dev/node-jscs).

Let's create tasks to lint and check our coding style. First, add the following to the top of our Gulp configuration:

var eslint = require("gulp-eslint");
var jscs = require("gulp-jscs");

var CONFIG_DIR = path.join(__dirname, "config");
var CODE_STYLE_FILES = [path.join(SOURCE_DIR, "www", "js", "app", "**", "*.js")];
var CODE_LINT_FILES = [path.join(SOURCE_DIR, "www", "js", "app", "**", "*.js")];

Now, let's create the tasks:

var projectTasks = {
    ...,
    checkCodeStyle: function checkCodeStyle() {
        return gulp.src(CODE_STYLE_FILES)
            .pipe(jscs({ configPath: path.join(CONFIG_DIR, "jscs.json"),
                         esnext: true }));
    },
    lintCode: function lintCode() {
        return gulp.src(CODE_LINT_FILES)
            .pipe(eslint(path.join(CONFIG_DIR, "eslint.json")))
            .pipe(eslint.format())
            .pipe(eslint.failOnError());
    }
}
...
gulp.task("lint", projectTasks.lintCode);
gulp.task("code-style", projectTasks.checkCodeStyle);

Now, before you run this, you'll need two configuration files to tell each task what should be an error and what shouldn't be. If you want to change the settings, you can do so; the sites for ESLint and JSCS have information on how to modify the configuration files. config/eslint.json must contain "parser": "babel-eslint" in order to force it to use ES2015 syntax. This is set for JSCS in the Gulp configuration, however. config/jscs.json must exist and must not be empty. If you don't need to specify any rules, use an empty JSON object ({}).

Now, if you were to execute gulp lint and our source code had a syntax error, you would receive an error message. The same goes for code style: gulp code-style would generate an error if it didn't like the look of the code.

Modify the build, emulate, and run tasks in the Gulp configuration as follows:

gulp.task("build", ["lint", "code-style", "copy"], projectTasks.buildCordova);
gulp.task("emulate", ["lint", "code-style", "copy"], projectTasks.emulateCordova);
gulp.task("run", ["lint", "code-style", "copy"], projectTasks.runCordova);

Now, if you execute gulp build and there is a linting or code style error, the build will fail with an error. This gives a little more assurance that our code is at least syntactically valid prior to distributing or running the code.

Linting and style checks do not guarantee that your code works logically. They just ensure that there are no syntax or style errors. If your program responds incorrectly to a gesture or processes some data incorrectly, a linter won't necessarily catch those issues.

Uglifying your code

Code uglification or minification sounds a bit painful, but it's a really simple step we can add to our workflow that will reduce the size of our applications when we build in release mode. Uglification also tends to obfuscate our code a little bit, but don't rely on this for any security; obfuscation can be easily undone.

To add code uglification, add the following to the top of our Gulp configuration:

var uglify = require("gulp-uglify");

We can then uglify our code by adding the following line immediately after .pipe(concat("app.js")) in our projectTasks.copyCode method:

.pipe(isRelease ? uglify({preserveComments: "some"}) : gutil.noop())

Notice that we added the uglify method call, but only if the build mode is release. This means that we'll only trigger it if we execute gulp build --mode release. You can, of course, specify additional options.
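For example (a hedged sketch; mangle and drop_console are standard UglifyJS options passed through by gulp-uglify, so check the documentation linked below before relying on them):

.pipe(isRelease ? uglify({
    preserveComments: "some",           // keep license-style comments
    mangle: true,                       // shorten variable and function names
    compress: { drop_console: true }    // strip console.* calls from release builds
}) : gutil.noop())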
If you want to see all the documentation, visit https://github.com/mishoo/UglifyJS2/. Our preserveComments: "some" setting keeps the comments that are most likely to be license-related while stripping out all the others.

Putting it all together

You've accomplished quite a bit, but there's one last thing we want to mention: the default task. If gulp is run with no parameters, it looks for a default task to perform. This can be anything you want. To specify this, just add the following to your Gulp configuration:

gulp.task("default", ["build"]);

Now, if you execute gulp with no specific task, you'll actually start the build task instead. What you want to use for your default task is largely dependent upon your preferences.

Your Gulp configuration is now quite large and complex. We've added a few additional features to it (mostly for config.xml), as well as several others that you might want to investigate further:

BrowserSync for rapid iteration and testing
The ability to control whether or not errors prevent further tasks from being executed
Help text

Summary

In this article, you've learned why a task runner is useful, how to install Gulp, and how to create several tasks of varying complexity to automate building your project and other useful tasks.

Resources for Article:

Further resources on this subject:

Getting Ready to Launch Your PhoneGap App in the Real World [article]
Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [article]
Using Location Data with PhoneGap [article]

Transforming data with the Pivot transform in Data Services

Packt
05 Nov 2015
7 min read
In this article by Iwan Shomnikov, author of the book SAP Data Services 4.x Cookbook, you will learn that the Pivot transform belongs to the Data Integrator group of transform objects in Data Services, which are usually all about the generation or transformation (meaning a change in the structure) of data. Simply inserting the Pivot transform allows you to convert columns into rows. The Pivot transformation increases the number of rows in a dataset because, for every column converted into a row, an extra row is created for each key (non-pivoted column) combination. Converted columns are called pivot columns.

Pivoting rows to columns or columns to rows is quite a common transformation operation in data migration tasks, and traditionally, the simplest way to perform it with standard SQL is to use the decode() function inside your SELECT statements. Depending on the complexity of the source and target datasets before and after pivoting, the SELECT statement can be extremely heavy and difficult to understand. Data Services provides a simple and flexible way of pivoting data inside the ETL code using the Pivot and Reverse_Pivot dataflow object transforms. The following steps show exactly how you can create, configure, and use these transforms in Data Services in order to pivot your data.

(For more resources related to this topic, see here.)

Getting ready

We will use a SQL Server database for the source and target objects that demonstrate the Pivot transform available in Data Services. The steps in this section describe the preparation of a source table and the data required in it for a demonstration of the Pivot transform in the Data Services development environment:

Create a new database or import the existing test database, AdventureWorks OLTP, available for download and free test use at https://msftdbprodsamples.codeplex.com/releases/view/55330. We will download the database file from the preceding link and deploy it to SQL Server, naming our database AdventureWorks_OLTP.

Run the following SQL statements against the AdventureWorks_OLTP database to create a source table and populate it with data:

create table Sales.AccountBalance (
    [AccountID] integer,
    [AccountNumber] integer,
    [Year] integer,
    [Q1] decimal(10,2),
    [Q2] decimal(10,2),
    [Q3] decimal(10,2),
    [Q4] decimal(10,2));

-- Row 1
insert into Sales.AccountBalance
    ([AccountID],[AccountNumber],[Year],[Q1],[Q2],[Q3],[Q4])
    values (1,100,2015,100.00,150.00,120.00,300.00);

-- Row 2
insert into Sales.AccountBalance
    ([AccountID],[AccountNumber],[Year],[Q1],[Q2],[Q3],[Q4])
    values (2,100,2015,50.00,350.00,620.00,180.00);

-- Row 3
insert into Sales.AccountBalance
    ([AccountID],[AccountNumber],[Year],[Q1],[Q2],[Q3],[Q4])
    values (3,200,2015,333.33,440.00,12.00,105.50);

So, the source table would look similar to the one in the following figure:

Create an OLTP datastore in Data Services referencing the AdventureWorks_OLTP database, and import the AccountBalance table created in the previous step into it.

Create the DS_STAGE datastore in Data Services pointing to the same OLTP database. We will use this datastore as a target staging area in our environment, where we insert the resulting pivoted dataset extracted from the OLTP system.

How to do it…

This section describes the ETL development process, which takes place in the Data Services Designer application.
We will not create any workflow or script object in our test jobs; we will keep things simple and have only one batch job object with a dataflow object inside it performing the migration and pivoting of data from the ACCOUNTBALANCE source table of our OLTP database. Here are the steps to do this: Create a new batch job object and place the new dataflow in it, naming it DF_OLTP_Pivot_STAGE_AccountBalance. Open the dataflow in the workspace window to edit it, and place the ACCOUNTBALANCE source table from the OLTP datastore created in the preparation steps. Link the source table to the Extract query transform, and propagate all the source columns to the target schema. Place the new Pivot transform object in a dataflow and link the Extract query to it. The Pivot transform can be found by navigating to Local Object Library | Transforms | Data Integrator. Open the Pivot transform in the workspace to edit it, and configure its parameters according to the following screenshot:   Close the Pivot transform and link it to another query transform named Prepare_to_Load. Propagate all the source columns to the target schema of the Prepare_to_Load transform, and finally link it to the target ACCOUNTBALANCE template table created in the DS_STAGE datastore. Choose the dbo schema when creating the ACCOUNTBALANCE template table object in this datastore. Before executing the job, open the Prepare_to_Load query transform in a workspace window, double-click on the PIVOT_SEQ column, and select the Primary key checkbox to specify the additional column as being the primary key column for the migrated dataset. Save and run the job, selecting the default execution options. Open the dataflow again and import the target table, putting the Delete data from table before loading flag in the target table loading options. How it works… Pivot columns are columns whose values are merged in one column after the pivoting operation, thus producing an extra row for every pivoted column. Non-pivot columns are columns that are not affected by the pivot operation. As you can see, the pivoting operation denormalizes the dataset, generating more rows. This is why ACCOUNTID does not define the uniqueness of the record anymore, and we have to specify the extra key column, PIVOT_SEQ.   So, you may wonder, why pivot? Why don't we just use data as it is and perform the required operation on data from the columns Q1-Q4? The answer, in the given example, is very simple—it is much more difficult to perform an aggregation when the amounts are spread across different columns. Instead of summarizing using a single column with the sum(AMOUNT) function, we would have to write the sum(Q1 + Q2 + Q3 + Q4) expression every time. Quarters are not the worst part yet; imagine a situation where a table has huge amounts of data stored in columns defining month periods and you have to filter data by these time periods. Of course, the opposite case exists as well; storing data across multiple columns instead of just in one is justified. In this case, if your data structure is not similar to this, you can use the Reverse_Pivot transform, which does exactly the opposite—it converts rows into columns. Look at the following example of a Reverse_Pivot configuration: Reverse pivoting or transformation of rows into columns leads us to introduce another term—Pivot axis column. This is a column that holds categories defining different columns after a reverse pivot operation. It corresponds to the Header column option in the Pivot transform configuration. 
Summary

As you have seen in this article, the Pivot and Reverse_Pivot transform objects available in Data Services Designer are a simple and easily configurable way to pivot data of any complexity. The GUI of the Designer tool makes the ETL process developed in Data Services easy to maintain and keeps it readable. If you make any changes to the pivot configuration options, Data Services automatically regenerates the output schema of the pivot transforms accordingly.

Resources for Article:

Further resources on this subject:

Sabermetrics with Apache Spark [article]
Understanding Text Search and Hierarchies in SAP HANA [article]
Meeting SAP Lumira [article]

E-commerce with MEAN

Packt
05 Nov 2015
8 min read
These days, e-commerce platforms are widely available. However, as common as they might be, there are instances where, after investing a significant amount of time in learning how to use a specific tool, you might realize that it cannot fit your unique e-commerce needs as it promised. Hence, a great advantage of building your own application with an agile framework is that you can quickly meet your immediate and future needs with a system that you fully understand. Adrian Mejia Rosario, the author of the book Building an E-Commerce Application with MEAN, shows us how the MEAN stack (MongoDB, ExpressJS, AngularJS, and NodeJS) is a killer JavaScript full-stack combination. It provides agile development without compromising on performance and scalability. It is ideal for the purpose of building responsive applications with a large user base, such as e-commerce applications. Let's have a look at a project using MEAN.

(For more resources related to this topic, see here.)

Understanding the project structure

The applications built with the angular-fullstack generator have many files and directories. Some code goes in the client, some executes in the backend, and another portion is only needed during development, such as the test suites. It's important to understand the layout to keep the code organized.

The Yeoman generators are time savers! They are created and maintained by the community following current best practices. A generator creates many directories and a lot of boilerplate code to get you started. The number of unfamiliar files in there might be overwhelming at first.

On reviewing the directory structure created, we see that there are three main directories: client, e2e, and server:

The client folder will contain the AngularJS files and assets.
The server directory will contain the NodeJS files, which handle ExpressJS and MongoDB.
Finally, the e2e files will contain the AngularJS end-to-end tests.

File Structure

This is the overview of the file structure of this project:

meanshop
├── client
│   ├── app - App specific components
│   ├── assets - Custom assets: fonts, images, etc…
│   └── components - Non-app specific/reusable components
│
├── e2e - Protractor end to end tests
│
└── server
    ├── api - Apps server API
    ├── auth - Authentication handlers
    ├── components - App-wide/reusable components
    ├── config - App configuration
    │   ├── local.env.js - Environment variables
    │   └── environment - Node environment configuration
    └── views - Server rendered views

Components

You might already be familiar with a number of the tools used in this project. If that's not the case, you can read the brief descriptions here.

Testing

AngularJS comes with a default test runner called Karma, and we are going to leverage its default choices:

Karma: JavaScript unit test runner.
Jasmine: It's a BDD framework to test JavaScript code. It is executed with Karma.
Protractor: It runs end-to-end tests for AngularJS. These are the highest level of testing; they run in the browser and simulate user interactions with the app.

Tools

The following are some of the tools/libraries that we are going to use in order to increase our productivity:

GruntJS: It's a tool that serves to automate repetitive tasks, such as CSS/JS minification, compilation, unit testing, and JS linting.
Yeoman (yo): It's a CLI tool to scaffold web projects. It automates directory and file creation through generators and also provides command lines for common tasks.
Travis CI: Travis CI is a continuous integration tool that runs your test suites every time you commit to the repository.
EditorConfig: EditorConfig is an IDE plugin that loads its configuration from a .editorconfig file. For example, you can set indent_size = 2, indent with spaces or tabs, and so on. It's a time saver and helps maintain consistency across multiple IDEs/teams.
SocketIO: It's a library that enables real-time bidirectional communication between the server and the client.
Bootstrap: It's a frontend framework for web development. We are going to use it to build the theme throughout this project.
AngularJS full-stack: It's a generator for Yeoman that provides useful command lines to quickly generate server/client code and deploy it to Heroku or OpenShift.
BabelJS: It's a JS-to-JS compiler that allows you to use features from the next generation of JavaScript (ECMAScript 6) right now, without waiting for browser support.
Git: It's a distributed version control system.

Package managers

We have package managers for our third-party backend and frontend modules. They are as follows:

NPM: It is the default package manager for NodeJS.
Bower: It is the frontend package manager that can be used to handle versions and dependencies of libraries and assets used in a web project. The file bower.json contains the packages and versions to install, and the file .bowerrc contains the path where those packages are to be installed. The default directory is ./bower_components.

Bower packages

If you have followed the exact steps to scaffold our app, you will have the following frontend components installed:

angular
angular-cookies
angular-mocks
angular-resource
angular-sanitize
angular-scenario
angular-ui-router
angular-socket-io
angular-bootstrap
bootstrap
es5-shim
font-awesome
json3
jquery
lodash

Previewing the final e-commerce app

Let's take a pause from the terminal. In any project, before we start coding, we need to spend some time planning and visualizing what we are aiming for. That's exactly what we are going to do: draw some wireframes that walk us through the app. Our e-commerce app, MEANshop, will have three main sections:

Homepage
Marketplace
Back-office

Homepage

The home page will contain featured products, navigation, menus, and basic information, as you can see in the following image:

Figure 2 - Wireframe of the homepage

Marketplace

This section will show all the products, categories, and search results.

Figure 3 - Wireframe of the products page

Back-office

You need to be a registered user to access the back-office section, as shown in the following figure:

Figure 4 - Wireframe of the login page

After you log in, it will present you with different options depending on your role. If you are a seller, you can create new products, such as the following:

Figure 5 - Wireframe of the Product creation page

If you are an admin, you can do everything that a seller does (create products), plus you can manage all the users and delete/edit products.

Understanding requirements for e-commerce applications

There's no better way to learn new concepts and technologies than by developing something useful with them. This is why we are building a real-time e-commerce application from scratch. However, there are many kinds of e-commerce apps, so in the following sections we will delimit what we are going to do.

Minimum viable product for an e-commerce site

Even the largest applications that we see today started small and grew their way up.
The minimum viable product (MVP) is strictly the minimum set of features that an application needs in order to work. In the e-commerce example, it will be:

Add products with title, price, description, photo, and quantity.
A guest checkout page for products.
One payment integration (for example, PayPal).

This is strictly the minimum requirement to get an e-commerce site working. We are going to start with these, but by no means will we stop there. We will keep adding features as we go and build a framework that will allow us to extend the functionality with high quality.

Defining the requirements

We are going to capture our requirements for the e-commerce application with user stories. A user story is a brief description of a feature told from the perspective of a user, who expresses their desire and benefit in the following format:

As a <role>, I want <desire> [so that <benefit>]

User stories and many other concepts were introduced with the Agile Manifesto. Learn more at https://en.wikipedia.org/wiki/Agile_software_development

Here are the features that we are planning to develop through this book, captured as user stories:

As a seller, I want to create products.
As a user, I want to see all published products and their details when I click on them.
As a user, I want to search for a product so that I can find what I'm looking for quickly.
As a user, I want to have a category navigation menu so that I can narrow down the search results.
As a user, I want to have real-time information so that I can know immediately if a product just got sold out or became available.
As a user, I want to check out products as a guest user so that I can quickly purchase an item without registering.
As a user, I want to create an account so that I can save my shipping addresses, see my purchase history, and sell products.
As an admin, I want to manage user roles so that I can create new admins and sellers, and remove seller permissions.
As an admin, I want to manage all the products so that I can ban them if they are not appropriate.
As an admin, I want to see a summary of the activities and order statuses.

All these stories might seem verbose, but they are useful in capturing requirements in a consistent way. They are also handy for developing test cases against.

Summary

Now that we have the gist of an e-commerce app with MEAN, let's build a full-fledged e-commerce project with Building an E-Commerce Application with MEAN.

Resources for Article:

Further resources on this subject:

Introduction to Couchbase [article]
Protecting Your Bitcoins [article]
DynamoDB Best Practices [article]