
How-To Tutorials

Build a Universal JavaScript App, Part 1

John Oerter
27 Sep 2016
8 min read
In this two-part series, we will walk through how to write a universal (or isomorphic) JavaScript app. This first part covers what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating the app: serving post data and adding React. In Part 2 of this series we walk through steps 3-6: client-side routing with React Router, server rendering, data flow refactoring, and data loading.

What is a Universal JavaScript app?

To put it simply, a universal JavaScript app is an application that can render itself on the client and the server. It combines the features of traditional server-side MVC frameworks (Rails, ASP.NET MVC, and Spring MVC), where markup is generated on the server and sent to the client, with the features of SPA frameworks (Angular, Ember, Backbone, and so on), where the server is only responsible for the data and the client generates markup.

Universal or Isomorphic?

There has been some debate in the JavaScript community over the terms "universal" and "isomorphic" to describe apps that can run on the client and server. I personally prefer the term "universal," simply because it's a more familiar word and makes the concept easier to understand. If you're interested in this discussion, you can read the articles below:

Isomorphic JavaScript: The Future of Web Apps by Spike Brehm popularizes the term "isomorphic".
Universal JavaScript by Michael Jackson puts forth the term "universal" as a better alternative.
Is "Isomorphic JavaScript" a good term? by Dr. Axel Rauschmayer suggests that maybe certain applications should be called isomorphic and others should be called universal.

What are the advantages?

Switching between one language on the server and JavaScript on the client can harm your productivity. JavaScript is a unique language that, for better or worse, behaves in a very different way from most server-side languages. Writing universal JavaScript apps allows you to simplify your workflow and immerse yourself in JavaScript. If you're writing a web application today, chances are that you're writing a lot of JavaScript anyway. Why not dive in? Node continues to improve with better performance and more features thanks to V8 and its well-run community, and npm is a fantastic package manager with thousands of quality packages available. There is tremendous brain power being devoted to JavaScript right now. Take advantage of it!

On top of that, maintainability of a universal app is better because it allows more code reuse. How many times have you implemented the same validation logic in your server and front-end code? Or rewritten utility functions? With some careful architecture and decoupling, you can write and test code once that will work on both the server and the client.

Performance

SPAs are great because they allow the user to navigate applications without waiting for full pages to be sent down from the server. The cost, however, is a longer wait for the application to initialize on the first load, because the browser needs to receive all the assets needed to run the full app up front. What if there are rarely visited areas in your app? Why should every client have to wait for the logic and assets needed for those areas? This was the problem Netflix solved using universal JavaScript. MVC apps have the inverse problem: each page only has the markup, assets, and JavaScript needed for that page, but the trade-off is a round trip to the server for every page.
SEO

Another disadvantage of SPAs is their weakness when it comes to SEO. Although web crawlers are getting better at understanding JavaScript, a site generated on the server will always be superior. With universal JavaScript, any public-facing page on your site can be easily requested and indexed by search engines.

Building an Example Universal JavaScript App

Now that we've gained some background on universal JavaScript apps, let's walk through building a very simple blog website as an example. Here are the tools we'll use:

Express
React
React Router
Babel
Webpack

I've chosen these tools because of their popularity and the ease of accomplishing our task. I won't be covering how to use Redux or other Flux implementations because, while useful in a production application, they are not necessary for demoing how to create a universal app. To keep things simple, we will forgo a database and just store our data in a flat file. We'll also keep the Webpack shenanigans to a minimum and only do what is necessary to transpile and bundle our code. You can grab the code for this walkthrough here and follow along. There are branches for each step along the way. Be sure to run npm install for each step. Let's get started!

Step 1: Serving Post Data

git checkout serving-post-data && npm install

We're going to start off slow, and simply set up the data we want to serve. Our posts are stored in the posts.js file, and we just have a simple Express server in server.js that takes requests at /api/post/{id}. Snippets of these files are below.

// posts.js
module.exports = [
  ...
  {
    id: 2,
    title: 'Expert Node',
    slug: 'expert-node',
    content: 'Street art 8-bit photo booth, aesthetic kickstarter organic raw denim hoodie non kale chips pour-over occaecat. Banjo non ea, enim assumenda forage excepteur typewriter dolore ullamco. Pickled meggings dreamcatcher ugh, church-key brooklyn portland freegan normcore meditation tacos aute chicharrones skateboard polaroid. Delectus affogato assumenda heirloom sed, do squid aute voluptate sartorial. Roof party drinking vinegar franzen mixtape meditation asymmetrical. Yuccie flexitarian est accusamus, yr 3 wolf moon aliqua mumblecore waistcoat freegan shabby chic. Irure 90\'s commodo, letterpress nostrud echo park cray assumenda stumptown lumbersexual magna microdosing slow-carb dreamcatcher bicycle rights. Scenester sartorial duis, pop-up etsy sed man bun art party bicycle rights delectus fixie enim. Master cleanse esse exercitation, twee pariatur venmo eu sed ethical. Plaid freegan chambray, man braid aesthetic swag exercitation godard schlitz. Esse placeat VHS knausgaard fashion axe cred. In cray selvage, waistcoat 8-bit excepteur duis schlitz. Before they sold out bicycle rights fixie excepteur, drinking vinegar normcore laboris 90\'s cliche aliqua 8-bit hoodie post-ironic. Seitan tattooed thundercats, kinfolk consectetur etsy veniam tofu enim pour-over narwhal hammock plaid.'
  },
  ...
]

// server.js
...
app.get('/api/post/:id?', (req, res) => {
  const id = req.params.id
  if (!id) {
    res.send(posts)
  } else {
    const post = posts.find(p => p.id == id);
    if (post) res.send(post)
    else res.status(404).send('Not Found')
  }
})
...

You can start the server by running node server.js, and then request all posts by going to localhost:3000/api/post, or a single post by id, such as localhost:3000/api/post/0. Great! Let's move on.

Step 2: Add React

git checkout add-react && npm install

Now that we have the data exposed via a simple web service, let's use React to render a list of posts on the page.
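Before doing that, a quick aside: you don't have to use the browser to check the Step 1 endpoints. If you happen to have Python around, a few lines using the requests library will exercise the same routes. This is only a hypothetical test script, not part of the walkthrough's repository, and it assumes the Express server above is running locally on port 3000.

import requests

BASE_URL = "http://localhost:3000/api/post"

# Fetch every post served by the Express route.
all_posts = requests.get(BASE_URL).json()
print("Fetched %d posts" % len(all_posts))

# Fetch a single post by id; id 2 is the 'Expert Node' sample above.
response = requests.get(BASE_URL + "/2")
if response.status_code == 200:
    print(response.json()["title"])
else:
    print("Post not found: %d" % response.status_code)

Now, on to rendering those posts with React.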
Before we get there, however, we need to set up webpack to transpile and bundle our code. Below is our simple webpack.config.js to do this:

// webpack.config.js
var webpack = require('webpack')

module.exports = {
  entry: './index.js',
  output: {
    path: 'public',
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /.js$/,
        exclude: /node_modules/,
        loader: 'babel-loader?presets[]=es2015&presets[]=react'
      }
    ]
  }
}

All we're doing is bundling our code with index.js as an entry point and writing the bundle to a public folder that will be served by Express. Speaking of index.js, here it is:

// index.js
import React from 'react'
import { render } from 'react-dom'
import App from './components/app'

render(
  <App />,
  document.getElementById('app')
)

And finally, we have App.js:

// components/App.js
import React from 'react'

const allPostsUrl = '/api/post'

class App extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      posts: []
    }
  }
  componentDidMount() {
    const request = new XMLHttpRequest()
    request.open('GET', allPostsUrl, true)
    request.setRequestHeader('Content-type', 'application/json');
    request.onload = () => {
      if (request.status === 200) {
        this.setState({ posts: JSON.parse(request.response) });
      }
    }
    request.send();
  }
  render() {
    const posts = this.state.posts.map((post) => {
      return <li key={post.id}>{post.title}</li>
    })
    return (
      <div>
        <h3>Posts</h3>
        <ul>
          {posts}
        </ul>
      </div>
    )
  }
}

export default App

Once the App component is mounted, it sends a request for the posts and renders them as a list. To see this step in action, build the webpack bundle first with npm run build:client. Then, you can run node server.js just like before. http://localhost:3000 will now display a list of our posts.

Conclusion

Now that React has been added, take a look at Part 2, where we cover client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app.

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs here.


Deep Learning and Image generation: Get Started with Generative Adversarial Networks

Mohammad Pezeshki
27 Sep 2016
5 min read
In machine learning, a generative model is one that captures the observable data distribution. The objective of deep neural generative models is to disentangle different factors of variation in data and be able to generate new or similar-looking samples of the data. For example, an ideal generative model of face images disentangles all the different factors of variation, such as illumination, pose, gender, skin color, and so on, and is also able to generate a new face by combining those factors in a very non-linear way. Figure 1 shows a trained generative model that has learned different factors, including pose and the degree of smiling. On the x-axis, as we go to the right, the pose changes, and on the y-axis, as we move upwards, smiles turn to frowns. Usually these factors are orthogonal to one another, meaning that changing one while keeping the others fixed leads to a single change in data space; for example, in the first row of Figure 1, only the pose changes, with no change in the degree of smiling. The figure is adapted from here.

Based on the assumption that these underlying factors of variation have a very simple distribution (unlike the data itself), to generate a new face we can simply sample a random number from the assumed simple distribution (such as a Gaussian). In other words, if there are k different factors, we randomly sample from a k-dimensional Gaussian distribution (aka noise).

In this post, we will take a look at one of the recent models in the area of deep learning and generative models, called the generative adversarial network (GAN). This model can be seen as a game between two agents: the generator and the discriminator. The generator generates images from noise, and the discriminator discriminates between real images and those generated by the generator. The objective is then to train the model such that while the discriminator tries to distinguish generated images from real images, the generator tries to fool the discriminator.

To train the model, we need to define a cost. In the case of a GAN, the errors made by the discriminator are considered as the cost. Consequently, the objective of the discriminator is to minimize the cost, while the objective of the generator is to fool the discriminator by maximizing the cost. A graphical illustration of the model is shown in Figure 2.

Formally, we define the discriminator as a binary classifier D : R^m -> {0, 1} and the generator as the mapping G : R^k -> R^m, in which k is the dimension of the latent space that represents all of the factors of variation. Denoting the data by x and a point in the latent space by z, the model can be trained by playing the minimax game over the value function given below. Note that the first term encourages the discriminator to discriminate between generated images and real ones, while the second term encourages the generator to come up with images that would fool the discriminator. In practice, the log in the second term can saturate, which would hurt the flow of the gradient; as a result, the cost may be reformulated equivalently, as also shown below.

At generation time, we can sample from a simple k-dimensional Gaussian distribution with zero mean and unit variance and pass it on to the generator. Among the different models that can be used as the discriminator and generator, we use deep neural networks, with parameters θ_D and θ_G for the discriminator and generator, respectively.
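For reference, the minimax game and its reformulation referred to above are the standard GAN objective from Goodfellow et al. (2014). In LaTeX form, the value function is

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

and the equivalent non-saturating reformulation keeps the discriminator's objective unchanged while the generator, instead of minimizing \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))], maximizes

\mathbb{E}_{z \sim p_z(z)}\big[\log D(G(z))\big]

which provides stronger gradients early in training, when the discriminator can easily reject the generated samples.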
Since training boils down to updating the parameters using the backpropagation algorithm, the two networks are updated in alternation: the discriminator's parameters θ_D are moved along the gradient that increases the objective, while the generator's parameters θ_G are moved along the gradient that decreases it (or, equivalently, increases the generator's reformulated term). If we use a convolutional network as the discriminator and another convolutional network with fractionally strided convolution layers as the generator, the model is called a DCGAN (Deep Convolutional Generative Adversarial Network). Some samples of bedroom image generation from this model are shown in Figure 3.

The generator can also be a sequential model, meaning that it can generate an image through a sequence of lower-resolution images that are progressively refined with more detail. A few examples of the images generated using such a model are shown in Figure 4. The GAN and later variants such as the DCGAN are currently considered to be among the best when it comes to the quality of the generated samples. The images look so realistic that you might assume that the model has simply memorized instances of the training set, but a quick KNN search reveals this not to be the case.

About the author

Mohammad Pezeshki is a master's student in the LISA lab of Université de Montréal, working under the supervision of Yoshua Bengio and Aaron Courville. He obtained his bachelor's in computer engineering from Amirkabir University of Technology (Tehran Polytechnic) in July 2014 and then started his master's in September 2014. His research interests lie in the fields of artificial intelligence, machine learning, probabilistic models, and specifically deep learning.


Connecting Arduino to the Web

Packt
27 Sep 2016
6 min read
In this article by Marco Schwartz, author of Internet of Things with Arduino Cookbook, we will focus on getting you started by connecting an Arduino board to the web. This article will really be the foundation of the rest of the article, so make sure to carefully follow the instructions so you are ready to complete the exciting projects we'll see in the rest of the article. (For more resources related to this topic, see here.) You will first learn how to set up the Arduino IDE development environment, and add Internet connectivity to your Arduino board. After that, we'll see how to connect a sensor and a relay to the Arduino board, for you to understand the basics of the Arduino platform. Then, we are actually going to connect an Arduino board to the web, and use it to grab the content from the web and to store data online. Note that all the projects in this article use the Arduino MKR1000 board. This is an Arduino board released in 2016 that has an on-board Wi-Fi connection. You can make all the projects in the article with other Arduino boards, but you might have to change parts of the code. Setting up the Arduino development environment In this first recipe of the article, we are going to see how to completely set up the Arduino IDE development environment, so that you can later use it to program your Arduino board and build Internet of Things projects. How to do it… The first thing you need to do is to download the latest version of the Arduino IDE from the following address: https://www.arduino.cc/en/Main/Software This is what you should see, and you should be able to select your operating system: You can now install the Arduino IDE, and open it on your computer. The Arduino IDE will be used through the whole article for several tasks. We will use it to write down all the code, but also to configure the Arduino boards and to read debug information back from those boards using the Arduino IDE Serial monitor. What we need to install now is the board definition for the MKR1000 board that we are going to use in this article. To do that, open the Arduino boards manager by going to Tools | Boards | Boards Manager. In there, search for SAMD boards: To install the board definition, just click on the little Install button next to the board definition. You should now be able to select the Arduino/GenuinoMKR1000 board inside the Arduino IDE: You are now completely set to develop Arduino projects using the Arduino IDE and the MKR1000 board. You can, for example, try to open an example sketch inside the IDE: How it works... The Arduino IDE is the best tool to program a wide range of boards, including the MKR1000 board that we are going to use in this article. We will see that it is a great tool to develop Internet of Things projects with Arduino. As we saw in this recipe, the board manager makes it really easy to use new boards inside the IDE. See also These are really the basics of the Arduino framework that we are going to use in the whole article to develop IoT projects. Options for Internet connectivity with Arduino Most of the boards made by Arduino don't come with Internet connectivity, which is something that we really need to build Internet of Things projects with Arduino. We are now going to review all the options that are available to us with the Arduino platform, and see which one is the best to build IoT projects. How to do it… The first option, that has been available since the advent of the Arduino platform, is to use a shield. 
A shield is basically an extension board that can be placed on top of the Arduino board. There are many shields available for Arduino. Inside the official collection of shields, you will find motor shields, prototyping shields, audio shields, and so on. Some shields will add Internet connectivity to the Arduino boards, for example the Ethernet shield or the Wi-Fi shield. This is a picture of the Ethernet shield: The other option is to use an external component, for example a Wi-Fi chip mounted on a breakout board, and then connect this shield to Arduino. There are many Wi-Fi chips available on the market. For example, Texas Instruments has a chip called the CC3000 that is really easy to connect to Arduino. This is a picture of a breakout board for the CC3000 Wi-Fi chip: Finally, there is the possibility of using one of the few Arduino boards that has an onboard Wi-Fi chip or Ethernet connectivity. The first board of this type introduced by Arduino was the Arduino Yun board. It is a really powerful board, with an onboard Linux machine. However, it is also a bit complex to use compared to other Arduino boards. Then, Arduino introduced the MKR1000 board, which is a board that integrates a powerful ARM Cortex M0+ process and a Wi-Fi chip on the same board, all in the small form factor. Here is a picture of this board: What to choose? All the solutions above would work to build powerful IoT projects using Arduino. However, as we want to easily build those projects and possibly integrate them into projects that are battery-powered, I chose to use the MKR1000 board for all the projects in this article. This board is really simple to use, powerful, and doesn't required any connections to hook it up with a Wi-Fi chip. Therefore, I believe this is the perfect board for IoT projects with Arduino. There's more... Of course, there are other options to connect Arduino boards to the Web. One option that's becoming more and more popular is to use 3G or LTE to connect your Arduino projects to the Web, again using either shields or breakout boards. This solution has the advantage of not requiring an active Internet connection like a Wi-Fi router, and can be used anywhere, for example outdoors. See also Now we have chosen a board that we will use in our IoT projects with Arduino, you can move on to the next recipe to actually learn how to use it. Resources for Article: Further resources on this subject: Building a Cloud Spy Camera and Creating a GPS Tracker [article] Internet Connected Smart Water Meter [article] Getting Started with Arduino [article]


Getting Started with BeagleBone

Packt
27 Sep 2016
36 min read
In this article by Jayakarthigeyan Prabakar, author of BeagleBone By Example, we will discuss the steps to get started with your BeagleBone board and build real-time physical computing systems using the board and the Python programming language. This article will teach you how to set up your BeagleBone board for the first time and write your first few Python programs on it. (For more resources related to this topic, see here.)

By the end of this article, you will have learned the basics of interfacing electronics to BeagleBone boards and coding for them in Python, which will allow you to build almost anything, from a home automation system to a robot, through the examples given here.

Firstly, you will learn how to set up your BeagleBone board for the first time with a new operating system, followed by the usage of some basic Linux shell commands that will help you out while we work on the shell terminal to write and execute Python code and do much more, like installing different libraries and software on your BeagleBone board. Once you get familiar with the Linux terminal, you will write your first Python code that runs on your BeagleBone board. Most of the time, we will be using freely available open-source code and libraries from the Internet and building our programs on top of them to meet our requirements, instead of writing everything from scratch to build our embedded systems on the BeagleBone board.

The contents of this article are divided into:

Prerequisites
About the single board computer - BeagleBone board
Know your BeagleBone board
Setting up your BeagleBone board
Working on Linux Shell
Coding on Python in BeagleBone Black

Prerequisites

This topic will cover which parts you need to get started with BeagleBone Black. You can buy them online or pick them up from any electronics store in your locality. The following is the list of materials needed to get started:

1x BeagleBone Black
1x miniUSB type B to type A cable
1x microSD Card (4 GB or more)
1x microSD Card Reader
1x 5V DC, 2A Power Supply
1x Ethernet Cable

There are different variants of BeagleBone boards, like the BeagleBone, BeagleBone Black, BeagleBone Green, and some older variants. This article will mostly show the BeagleBone Black in the pictures. Note that the BeagleBone Black can replace any of the other BeagleBone boards, such as the BeagleBone or BeagleBone Green, for most of the projects. These boards have their own extra features, so to say. For example, the BeagleBone Black has more RAM (almost double the RAM available on the BeagleBone) and an in-built eMMC to store the operating system, instead of booting only from an operating system installed on a microSD card as on the BeagleBone White. Keeping in mind that this article should be able to guide people with most of the BeagleBone board variants, the tutorials in this article will be based on an operating system booted from a microSD card inserted in the BeagleBone board. We will discuss this in detail in the Setting up your BeagleBone board and Installing operating systems topics of this article.
All these devices run custom firmware developed for specific applications based on the different Linux and Unix kernels. When you hear the word Linux and if you are familiar with Linux, you will get in your mind that it's nothing but an operating system, just like Windows or Mac OS X that runs on desktops and server computers. But in the recent years the Linux kernel is being used in most of the embedded systems including consumer electronics such as your Smartphones, smart TVs, set-top boxes, and much more. Most people know android and iOS as an operating system on their smart phones. But only a few know that, both these operating systems are based on Linux and Unix kernels. Did you ever question how they would develop such devices? There should be a development board right? What are they? This is where Linux Development boards like our BeagleBone boards are used. By adding peripherals such as touch screens, GSM modules, microphones, and speakers to these single board computers and writing the software that is the operating system with graphical user interface to make them interact with the physical world, we have so many smart devices now that people use every day. Nowadays you have proximity sensors, accelerometers, gyroscopes, cameras, IR blasters, and much more on your smartphones. These sensors and transmitters are connected to the CPU on your phone through the Input Output ports on the CPU, and there is a small piece of software that is running to communicate with these electronics when the whole operating system is running in the Smartphone to get the readings from these sensors in real-time. Like the autorotation of screen on the latest smartphones. There is a small piece of software that is reading the data from accelerometer and gyroscope sensors on the phone and based on the orientation of the phone it turns the graphical display. So, all these Linux Development boards are tools and base boards using which you can build awesome real world smart devices or we can call them physical computing systems as they interact with the physical world and respond with an output. Getting to know your board – BeagleBone Black BeagleBone Black can be described as low cost single board computer that can be plugged into a computer monitor or TV via a HDMI cable for output and uses standard keyboard and mouse for input. It's capable of doing everything you'd expect a desktop computer to do, like playing videos, word processing, browsing the Internet, and so on. You can even setup a web server on your BeagleBone Black just like you would do if you want to set up a webserver on a Linux or Windows PC. But, differing from your desktop computer, the BeagleBone boards has the ability to interact with the physical world using the GPIO pins available on the board, and has been used in various physical computing applications. Starting from Internet of Things, Home Automation projects, to Robotics, or tweeting shark intrusion systems. The BeagleBone boards are being used by hardware makers around the world to build awesome physical computing systems which are turning into commercial products also in the market. OpenROV, an underwater robot being one good example of what someone can build using a BeagleBone Black that can turn into a successful commercial product. Hardware specification of BeagleBone Black A picture is worth a thousand words. The following picture describes about the hardware specifications of the BeagleBone Black. 
But you will get some more details about every part of the board as you read the content in the following picture. If you are familiar with the basic setup of a computer. You will know that it has a CPU with RAM and Hard Disk. To the CPU you can connect your Keyboard, Mouse, and Monitor which are powered up using a power system. The same setup is here in BeagleBone Black also. There is a 1GHz Processor with 512MB of DDR3 RAM and 4GB on board eMMC storage, which replaces the Hard Disk to store the operating system. Just in case you want more storage to boot up using a different operating system, you can use an external microSD card that can have the operating system that you can insert into the microSD card slot for extra storage. As in every computer, the board consists of a power button to turn on and turn off the board and a reset button to reset the board. In addition, there is a boot button which is used to boot the board when the operating system is loaded on the microSD card instead of the eMMC. We will be learning about usage of this button in detail in the installing operating systems topic of this article. There is a type A USB Host port to which you can connect peripherals such as USB Keyboard, USB Mouse, USB Camera, and much more, provided that the Linux drivers are available for the peripherals you connect to the BeagleBone Black. It is to be noted that the BeagleBone Black has only one USB Host Port, so you need to get an USB Hub to get multiple USB ports for connecting more number of USB devices at a time. I would recommend using a wireless Keyboard and Mouse to eliminate an extra USB Hub when you connect your BeagleBone Black to monitor using the HDMI port available. The microHDMI port available on the BeagleBone Black gives the board the ability to give output to HDMI monitors and HDMI TVs just like any computer will give. You can power up the BeagleBone Black using the DC Barrel jack available on the left hand side corner of the board using a 5V DC, 2A adapter. There is an option to power the board using USB, although it is not recommended due to the current limit on USB ports. There are 4 LEDs on board to indicate the status of the board and help us for identifications to boot up the BeagleBone Black from microSD card. The LEDs are linked with the GPIO pins on the BeagleBone Black which can be used whenever needed. You can connect the BeagleBone Black to the LAN or Internet using the Ethernet port available on the board using an Ethernet cable. You can even use a USB Wi-Fi module to give Internet access to your BeagleBone Black. The expansion headers which are in general called the General Purpose Input Output (GPIO) pins include 65 digital pins. These pins can be used as digital input or output pins to which you can connect switches, LEDs and many more digital input output components, 7 analog inputs to which you can connect analog sensors like a potentiometer or an analog temperature sensor, 4 Serial Ports to which you can connect Serial Bluetooth or Xbee Modules for wireless communication or anything else, 2 SPI and 2 I2C Ports to connect different modules such as sensors or any other modules using SPI or I2C communication. We also have the serial debugging port to view the low-level firmware pre-boot and post-shutdown/reboot messages via a serial monitor using an external serial to USB converter while the system is loading up and running. After booting up the operating system, this also acts as a fully interactive Linux console. 
Setting up your BeagleBone board Your first step to get started with BeagleBone Boards with your hands on will be to set it up and test it as suggested by the BeagleBone Community with the Debian distribution of Linux running on BeagleBone Black that comes preloaded on the eMMC on board. This section will walk you through that process followed by installing different operating system on your BeagleBone board and log in into it. And then get into start working with files and executing Linux Shell commands via SSH. Connect your BeagleBone Black using the USB cable to your Laptop or PC. This is the simplest method to get your BeagleBone Black up and running. Once you connect your BeagleBone Black, it will start to boot using the operating system on the eMMC storage. To log in into the operating system and start working on it, the BeagleBone Black has to connect to a network and the drivers that are provided by the BeagleBoard manufacturers allow us to create a local network between your BeagleBone Black and your computer when you connect it via the USB cable. For this, you need to download and install the device drivers provided by BeagleBone board makers on your PC as explained in step 2. Download and install device drivers. Goto http://beagleboard.org/getting-started Click and download the driver package based on your operating system. Mine is Windows (64-bit), so I am going to download that Once the installation is complete, click on Finish. It is shown in the following screenshot: Once the installation is done, restart your PC. Make sure that the Wi-Fi on your laptop is off and also there is no Ethernet connected to your Laptop. Because now the BeagleBone Black device drivers will try to create a LAN connection between you laptop and BeagleBone Black so that you can access the webserver running by default on the BeagleBone Black to test it's all good, up, and running. Once you reboot your PC, get to step 3. Connect to the Web Server Running on BeagleBone Black. Open your favorite web browser and enter the IP address 192.168.7.2 on the URL bar, which is the default static IP assigned to BeagleBone Black.This should open up the webpage as shown in the following screenshot: If you get a green tick mark with the message your board is connected. You can make sure that you got the previous steps correct and you have successfully connected to your board. If you don't get this message, try removing the USB cable connected to the BeagleBone Black, reconnect it and check again. If you still don't get it. Then check whether you did the first two steps correctly. Play with on board LEDs via the web server. If you Scroll down on the web page to which we got connected, you will find the section as shown in the following screenshot: This is a sample setup made by BeagleBone makers as the first time interaction interface to make you understand what is possible using BeagleBone Black. In this section of the webpage, you can run a small script. When you click on Run, the On board status LEDs that are flashing depending on the status of the operating system will stop its function and start working based on the script that you see on the page. The code is running based on a JavaScript library built by BeagleBone makers called the BoneScript. We will not look into this in detail as we will be concentrating more on writing our own programs using python to work with GPIOs on the board. 
But to make you understand, here is a simple explanation on what is there on the script and what happens when you click on the run button on the webpage. The pinMode function defines the on board LED pins as outputs and the digitalWrite function sets the state of the output either as HIGH or LOW. And the setTimeout function will restore the LEDs back to its normal function after the set timeout, that is, the program will stop running after the time that was set in the setTimeout function. Say I modify the code to what is shown in the following screenshot: You can notice that, I have changed the states of two LEDs to LOW and other two are HIGH and the timeout is set to 10,000 milliseconds. So when you click on the Run Button. The LEDs will switch to these states and stay like that for 10 seconds and then restore back to its normal status indication routine, that is, blinking. You can play around with different combinations of HIGH and LOW states and setTimeout values so that you can see and understand what is happening. You can see the LED output state of BeagleBone Black in the following screenshot for the program we executed earlier: You can see that the two LEDs in the middle are in LOW state. It stays like this for 10 seconds when you run the script and then it will restore back to its usual routine. You can try with different timeout values and states of LEDs on the script given in the webpage and try clicking on the run button to see how it works. Like this we will be writing our own python programs and setting up servers to use the GPIOs available on the BeagleBone Black to make them work the way we desire to build different applications in each project that is available in this article. Installing operating systems We can make the BeagleBone Black boot up and run using different operating systems just like any computer can do. Mostly Linux is used on these boards which is free and open source, but it is to be noted that specific distributions of Linux, Android, and Windows CE have been made available for these boards as well which you can try out. The stable versions of these operating systems are made available at http://beagleboard.org/latest-images. By default, the BeagleBone Black comes preloaded with a Debian distribution of Linux on the eMMC of the board. However, if you want, you can flash this eMMC just like you do to your Hard Drive on your computer and install different operating systems on it. As mentioned in the note at the beginning of this article, considering all the tutorials in this articleshould be useful to people who own BeagleBone as well as the BeagleBone Black. We will choose the recommended Debian package by www.BeagleBoard.org Foundation and we will boot the board every time using the operating system on microSD card. Perform the following steps to prepare the microSD card and boot BeagleBone using that: Goto: http://beagleboard.org/latest-images. Download the latest Debian Image. The following screenshot highlights the latest Debian Image available for flashing on microSD card: Extract the image file inside the RAR file that was downloaded: You might have to install WinRAR or any .rar file extracting software if it is not available in your computer already. Install Win32 Disk Imager software. To write the image file to a microSD card, we need this software. 
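As a preview of that Python work, here is a rough idea of what the same on-board LED experiment can look like in Python. This is only an illustrative sketch and not code from the article: it assumes the Adafruit_BBIO Python library is available on the board (it comes preinstalled on many of the official Debian images, or can be installed with pip) and that the library exposes the on-board user LEDs under the names "USR0" through "USR3", as in Adafruit's BeagleBone tutorials.

import time
import Adafruit_BBIO.GPIO as GPIO

leds = ["USR0", "USR1", "USR2", "USR3"]

# Configure the four on-board user LEDs as outputs.
for led in leds:
    GPIO.setup(led, GPIO.OUT)

# Reproduce the pattern from the modified BoneScript example:
# outer LEDs on, inner LEDs off, held for 10 seconds.
GPIO.output("USR0", GPIO.HIGH)
GPIO.output("USR1", GPIO.LOW)
GPIO.output("USR2", GPIO.LOW)
GPIO.output("USR3", GPIO.HIGH)
time.sleep(10)

# Release the pins when done. The LEDs may not resume their default
# status-blinking behaviour until their kernel triggers are restored
# or the board is rebooted.
GPIO.cleanup()

Now, back to preparing the board: to write the operating system image to a microSD card, we need the Win32 Disk Imager utility mentioned above.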
You can go to Google or any other search engine and type win32 disk imager as keyword and search to get the web link to download this software as shown in the following screenshot: The web link, where you can find this software is http://sourceforge.net/projects/win32diskimager/. But this keeps changing often that's why I suggest you can search it via any search engine with the keyword. Once you download the software and install it. You should be able to see the window as shown in the following screenshot when you open the Win32 Disk Imager: Now that you are all set with the software, using which you can flash the operating system image that we downloaded. Let's move to the next step where you can use Win32 Disk Imager software to flash the microSD card. Flashing the microSD card. Insert the microSD into a microSD card reader and plug it onto your computer. It might take some time for the card reader to show up your microSD card. Once it shows up, you should be able to select the USB drive as shown in the following screenshot on the Win32 Disk Imager software. Now, click on the icon highlighted in the following screenshot to open the file explorer and select the image file that we extracted in Step 3: Go to the folder where you extracted the latest Debian image file and select it. Now you can write the image file to microSD card by clicking on the Write button on the Win32 Disk Imager. If you get a prompt as shown in the following screenshot, click on Yes and continue: Once you click on Yes, the flashing process will start and the image file will be written on to the microSD card. The following screenshot shows the flashing process progressing: Once the flashing is completed, you will get a message as shown in the following screenshot: Now you can click onOKexit the Win32 Disk Imager software and safely remove the microSD card from your computer. Now you have successfully prepared your microSD card with the latest Debian operating system available for BeagleBone Black. This process is same for all other operating systems that are available for BeagleBone boards. You can try out different operating systems such as the Angstrom Linux, Android, or Windows CE others, once you get familiar with your BeagleBone board by end of this article. For Mac users, you can refer to either https://learn.adafruit.com/ssh-to-beaglebone-black-over-usb/installing-drivers-mac or https://learn.adafruit.com/beaglebone-black-installing-operating-systems/mac-os-x. Booting your BeagleBone board from a SD card Since you have the operating system on your microSD card now, let us go ahead and boot your BeagleBone board from that microSD card and see how to login and access the filesystem via Linux Shell. You will need your computer connected to your Router either via Ethernet or Wi-Fi and an Ethernet cable which you should connect between your Router and the BeagleBone board. The last but most important thing is an External Power Supply using which you will power up your BeagleBone board because power supply via a USB will be not be enough to run the BeagleBone board when it is booted from a microSD card. Insert the microSD card into BeagleBone board. Now you should insert the microSD card that you have prepared into the microSD card slot available on your BeagleBone board. Connect your BeagleBone to your LAN. Now connect your BeagleBone board to your Internet router using an Ethernet cable. You need to make sure that your BeagleBone board and your computer are connected to the same router to follow the next steps. 
Connect external power supply to your BeagleBone board. Boot your BeagleBone board from microSD card. On BeagleBone Black and BeagleBone Green, you have a Boot Button which you need to hold on while turning on your BeagleBone board so that it starts booting from the microSD card instead of the default mode where it starts to boot from the onboard eMMC storage which holds the operating system. In case of BeagleBone White, you don't have this button, it starts to boot from the microSD card itself as it doesn't have onboard eMMC storage. Depending on the board that you have, you can decide whether to boot the board from microSD card or eMMC. Consider you have a BeagleBone Black just like the one I have shown in the preceding picture. You hold down the User Boot button that is highlighted on the image and turn on the power supply. Once you turn on the board while holding the button down, the four on-board LEDs will light up and stay HIGH as shown in the following picture for 1 or 2 seconds, then they will start to blink randomly. Once they start blinking, you can leave the button. Now your BeagleBone board must have started Booting from the microSD card, so our next step will be to log in to the system and start working on it. The next topic will walk you through the steps on how to do this. Logging into the board via SSH over Ethernet If you are familiar with Linux operations, then you might have guessed what this section is about. But for those people who are not daily Linux users or have never heard the term SSH, Secure Shell (SSH) is a network protocol that allows network services and remote login to be able to operate over an unsecured network in a secure manner. In basic terms, it's a protocol through which you can log in to a computer and assess its filesystem and also work on that system using specific commands to create and work with files on the system. In the steps ahead, you will work with some Linux commands that will make you understand this method of logging into a system and working on it. Setup SSH Software. To get started, log in to your BeagleBone board now, from a Windows PC, you need to install any SSH terminal software for windows. My favorite is PuTTY, so I will be using that in the steps ahead. If you are new to using SSH, I would suggest you also get PuTTY. The software interface of PuTTY will be as shown in the following screenshot: You need to know the IP address or the Host Name of your BeagleBone Black to log in to it via SSH. The default Host Name is beaglebone but in some routers, depending on their security settings, this method of login doesn't work with Host Name. So, I would suggest you try to login entering the hostname first. If you are not able to login, follow Step 2. If you successfully connect and get the login prompt with Host Name, you can skip Step 2 and go to Step 3. But if you get an error as shown in the following screenshot, perform Step 2. Find an IP address assigned to BeagleBone board. Whenever you connect a device to your Router, say your computer, printer, or mobile phone. The router assigns a unique IP to these devices. The same way, the router must have assigned an IP to your BeagleBone board also. We can get this detail on the router's configuration page of your router from any browser of a computer that is connected to that router. In most cases, the router can be assessed by entering the IP 192.168.1.1 but some router manufacturers have a different IP in very rare cases. 
If you are not able to access your router using the IP 192.168.1.1, refer to your router's manual for getting access to this page. The images that are shown in this section are to give you an idea about how to log in to your router and get the details of the IP address assigned to your BeagleBone board. The configuration pages and how the devices are shown will look different depending on the router that you own.

Enter the 192.168.1.1 address in your browser. When it asks for the User Name and Password, enter admin as the User Name and password as the Password. These are the most commonly used default credentials on most routers. Just in case you fail in this step, check your router's user manual.

Considering you logged into your router configuration page successfully, you will see the screen with details as shown in the following screenshot: If you click on the highlighted part, Attached Devices, you will be able to see the list of devices with their IPs as shown in the following screenshot, where you can find the details of the IP address that is assigned to your BeagleBone board. So now you can note down the IP that has been assigned to your BeagleBone board. You can see that it is 192.168.1.14 in the preceding screenshot for my BeagleBone board. We will be using this IP address to connect to the BeagleBone board via SSH in the next step.

Connect via SSH using the IP address. Once you click on Open, you might get a security prompt as shown in the following screenshot. Click on Yes and continue. Now you will get the login prompt on the terminal screen as shown in the following screenshot: If you got this far successfully, then it is time to log in to your BeagleBone board and start working on it via the Linux shell.

Log in to the BeagleBone board. When you get the login prompt as shown in the preceding screenshot, you need to enter the default username, which is debian, and the default password, which is temppwd. Now you should be logged into the Linux shell of your BeagleBone board as the user debian.

Now that you have successfully logged into your BeagleBone board's Linux shell, you can start working on it using Linux shell commands, like anyone does on any computer running Linux. The next section will walk you through some basic Linux shell commands that will come in handy when working with any Linux system.

Working on Linux shell

Simply put, the shell is a program that takes commands from the keyboard and gives them to the operating system to perform. Since we will be using the BeagleBone board as a development board to build electronics projects, plugging it into a monitor, keyboard, and mouse every time we want to work on it like a computer is often unwanted, and it would also consume resources unnecessarily. So we will be using the shell command-line interface to work on the BeagleBone boards. If you want to learn more about the Linux command-line interface, I would suggest you visit http://linuxcommand.org/.

Now let's go ahead and try some basic shell commands and do something on your BeagleBone board. You can see the kernel version using the command uname -r. Just type the command and hit Enter on your keyboard; the command will get executed and you will see the output as shown here: Next, let us check the date on your BeagleBone board: Like this, the shell will execute your commands and you can work on your BeagleBone boards via the shell. Getting the kernel version and date was just a sample test.
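As a side note, since this article ends up doing everything in Python anyway, you can also drive the SSH session itself from a Python script on your PC instead of typing into PuTTY. The sketch below is only an illustration and is not part of the article's setup: it assumes the paramiko library is installed on your computer, and it uses the board's IP address and the default credentials from the steps above.

import paramiko

# Connect to the BeagleBone board over SSH. The IP, username and password
# are the example values used earlier; adjust the IP to match whatever
# your own router assigned.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.14", username="debian", password="temppwd")

# Run a command on the board and print its output, e.g. the kernel version.
stdin, stdout, stderr = client.exec_command("uname -r")
print(stdout.read().decode().strip())

client.close()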
Now let's move ahead and start working with the filesystem. ls: This stands for list command. This command will list out and display the names of folders and files available in the current working directory on which you are working. pwd: This stands for print working directory command. This command prints the current working directory in which you are present. mkdir: This stands for make directory command. This command will create a directory in other words a folder, but you need to mention the name of the directory you want to create in the command followed by it. Say I want to create a folder with the name WorkSpace, I should enter the command as follows: When you execute this command, it will create a folder named WorkSpace inside the current working directory you are in, to check whether the directory was created. You can try the ls command again and see that the directory named WorkSpace has been created. To change the working directory and go inside the WorkSpace directory, you can use the next command that we will be seeing. cd: This stands for change directory command. This command will help you switch between directories depending on the path you provide along with this command. Now to switch and get inside the WorkSpace directory that you created, you can type the command as follows: cd WorkSpace You can note that whenever you type a command, it executes in the current working that you are in. So the execution of cd WorkSpace now will be equivalent to cd /home/debian/WorkSpace as your current working directory is /home/debian. Now you can see that you have got inside the WorkSpace folder, which is empty right now, so if you type the ls command now, it will just go to the next line on the shell terminal, it will not output anything as the folder is empty. Now if you execute the pwd command, you will see that your current working directory has changed. cat: This stands for the cat command. This is one of the most basic commands that is used to read, write, and append data to files in shell. To create a text file and add some content to it, you just need to type the cat command cat > filename.txt Say I want to create a sample.txt file, I would type the command as shown next: Once you type, the cursor will be waiting for the text you want to type inside the text file you created. Now you can type whatever text you want to type and when you are done press Ctrl + D. It will save the file and get back to the command-line interface. Say I typed This is a test and then pressed Ctrl + D. The shell will look as shown next. Now if you type ls command, you can see the text file inside the WorkSpace directory. If you want to read the contents of the sample.txt file, again you can use the cat command as follows: Alternatively, you can even use the more command which we will be using mostly: Now that we saw how we can create a file, let's see how to delete what we created. rm: This stands for remove command. This will let you delete any file by typing the filename or filename along with path followed by the command. Say now we want to delete the sample.txt file we created, the command can be either rm sample.txt which will be equivalent to rm /home/debian/WorkSpace/sample.txt as your current working directory is /home/debian/Workspace. After you execute this command, if you try to list the contents of the directory, you will notice that the file has been deleted and now the directory is empty. 
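Incidentally, every one of these filesystem operations can also be done from Python using only the standard library, which will come in handy once we start writing scripts on the board. The snippet below is just an illustrative aside showing the Python equivalents of the shell commands above; nothing extra needs to be installed.

import os

print(os.getcwd())          # like pwd: print the current working directory
print(os.listdir("."))      # like ls: list the current directory

os.mkdir("WorkSpace")       # like mkdir WorkSpace
os.chdir("WorkSpace")       # like cd WorkSpace

with open("sample.txt", "w") as f:   # like cat > sample.txt
    f.write("This is a test\n")

with open("sample.txt") as f:        # like cat sample.txt (or more)
    print(f.read())

os.remove("sample.txt")              # like rm sample.txt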
Like this, you can make use of the shell commands work on your BeagleBone board via SSH over Ethernet or Wi-Fi. Now that you have got a clear idea and hands-on experience on using the Linux Shell, let's go ahead and start working with python and write a sample program on a text editor on Linux and test it in the next and last topic of this article. Writing your own Python program on BeagleBone board In this section, we will write our first few Python codes on your BeagleBone board. That will take an input and make a decision based on the input and print out an output depending on the code that we write. There are three sample codes in this topic as examples, which will help you cover some fundamentals of any programming language, including defining variables, using arithmetic operators, taking input and printing output, loops and decision making algorithm. Before we write and execute a python program, let us get into python's interactive shell interface via the Linux shell and try some basic things like creating a variable and performing math operations on those variables. To open the python shell interface, you just have the type python on the Linux shell like you did for any Linux shell command in the previous section of this article. Once you type python and hit Enter, you should be able to see the terminal as shown in the following screenshot: Now you are into python's interactive shell interface where every line that you type is the code that you are writing and executing simultaneously in every step. To learn more about this, visit https://www.python.org/shell/ or to get started and learn python programming language you can get our python by example articlein our publication. Let's execute a series of syntax in python's interactive shell interface to see whether it's working. Let's create a variable A and assign value 20 to it: Now let's print A to check what value it is assigned: You can see that it prints out the value that we stored on it. Now let's create another variable named B and store value 30 to it: Let's try adding these two variables and store the result in another variable named C. Then print C where you can see the result of A+B, that is, 50. That is the very basic calculation we performed on a programming interface. We created two variables with different values and then performed an arithmetic operation of adding two values on those variables and printed out the result. Now, let's get a little more ahead and store string of characters in a variable and print them. Wasn't that simple. Like this you can play around with python's interactive shell interface to learn coding. But any programmer would like to write a code and execute the program to get the output on a single command right. Let's see how that can be done now. To get out of the Python's Interactive Shell and get back to the current working directory on Linux Shell, just hold the Ctrl button and press D, that is, Ctrl + D on the keyboard. You will be back on the Linux Shell interface as shown next: Now let's go ahead and write the program to perform the same action that we tried executing on python's interactive shell. That is to store two values on different variables and print out the result when both of them are added. Let's add some spice to it by doing multiple arithmetic operations on the variables that we create and print out the values of addition and subtraction. You will need a text editor to write programs and save them. You can do it using the cat command also. 
But in future when you use indentation and more editing on the programs, the basic cat command usage will be difficult. So, let's start using the available text editor on Linux named nano, which is one of my favorite text editors in Linux. If you have a different choice and you are familiar with other text editors on Linux, such as vi or anything else, you can go ahead and continue the next steps using them to write programs. To create a python file and start writing the code on it using nano, you need to use the nano command followed by the filename with extension .py. Let's create a file named ArithmeticOperations.py. Once you type this command, the text editor will open up. Here you can type your code and save it using the keyboard command Ctrl + X. Let's go ahead and write the code which is shown in the following screenshot and let's save the file using Ctrl + X: Then type Y when it prompts to save the modified file. Then if you want to change the file with a different name, you can change it in the next step before you save it. Since we created the file now only, we don't need to do it. In case if you want to save the modified copy of the file in future with a different name, you can change the filename in this step: For now we will just hit enter that will take us back to the Linux Shell and the file AritmeticOperations.py will be created inside the current working directory, which you can see by typing the ls command. You can also see the contents of the file by typing the more command that we learned in the previous section of this article. Now let's execute the python script and see the output. To do this, you just have to type the command python followed by the python program file that we created, that is, the ArithmeticOperations.py. Once you run the python code that you wrote, you will see the output as shown earlier with the results as output. Now that you have written and executed your first code on python and tested it working on your BeagleBone board, let's write another python code, which is shown in the following screenshot where the code will ask you to enter an input and whatever text you type as input will be printed on the next line and the program will run continuously. Let's save this python code as InputPrinter.py: In this code, we will use a while loop so that the program runs continuously until you break it using the Ctrl + D command where it will break the program with an error and get back to Linux Shell. Now let's try out our third and last program of this section and article, where when we run the code, the program asks the user to type the user's name as input and if they type a specific name that we compare, it prints and says Hello message or if a different name was given as input, it prints go away; let's call this code Say_Hello_To_Boss.py. Instead of my name Jayakarthigeyan, you can replace it with your name or any string of characters on comparing which, the output decision varies. When you execute the code, the output will look as shown in the following screenshot: Like we did in the previous program, you can hit Ctrl + D to stop the program and get back to Linux Shell. In this way, you can work with python programming language to create codes that can run on the BeagleBone boards in the way you desire. Since we have come to the end of this article, let's give a break to our BeagleBone board. Let's power off our BeagleBone board using the command sudo poweroff, which will shut down the operating system. 
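For reference, here is roughly what those three example programs can look like. The article shows the actual code only as screenshots, so the versions below are hypothetical reconstructions that match the behaviour described in the text (the file names and the compared name are taken from the article). They target the Python 2 interpreter that the stock Debian image launches with the python command, with notes for Python 3.

# ArithmeticOperations.py - store two values and print the results of
# adding and subtracting them.
a = 20
b = 30
print("Sum: " + str(a + b))
print("Difference: " + str(a - b))

# InputPrinter.py - echo whatever the user types, forever; press Ctrl + D
# to break out of the loop (the program exits with an EOFError).
while True:
    text = raw_input()        # on Python 3, use input() instead
    print(text)

# Say_Hello_To_Boss.py - greet one specific person and turn everyone else
# away; replace the compared name with your own.
while True:
    name = raw_input("What is your name? ")   # input() on Python 3
    if name == "Jayakarthigeyan":
        print("Hello, boss!")
    else:
        print("Go away")

Once you have tried these out, go ahead and shut the board down with sudo poweroff as described above.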
After you execute this command, if you get the error message shown in the following screenshot, it means the BeagleBone board has powered off. You can now turn off the power supply that was connected to your BeagleBone board.

Summary

So here we are at the end of this article. In this article, you have learned how to boot your BeagleBone board from a different operating system on a microSD card, log in to it, and start coding in Python to run a routine and make decisions.

As an extra note, you can take this article to another level by connecting your BeagleBone board to an HDMI monitor using a microHDMI cable, connecting a USB keyboard and mouse to the USB host port of the BeagleBone board, powering the monitor and the board from an external power supply, and booting from the microSD card. You should then see a GUI and be able to use the BeagleBone board like a normal Linux computer: you can access and manage files and use a shell terminal from the GUI as well. If you own a BeagleBone Black or BeagleBone Green, you can try flashing the onboard eMMC with the latest Debian operating system and repeat what we did with the operating system booted from the microSD card.

Resources for Article: Further resources on this subject: Learning BeagleBone [article] Learning BeagleBone Python Programming [article] Protecting GPG Keys in BeagleBone [article]


Approaching a Penetration Test Using Metasploit

Packt
26 Sep 2016
17 min read

"In God I trust, all others I pen-test" - Binoj Koshy, cyber security expert

In this article by Nipun Jaswal, author of Mastering Metasploit, Second Edition, we will discuss penetration testing, which is an intentional attack on a computer-based system with the intention of finding vulnerabilities, figuring out security weaknesses, certifying that a system is secure, and gaining access to the system by exploiting these vulnerabilities. A penetration test will advise an organization whether it is vulnerable to an attack, whether the implemented security is enough to oppose any attack, which security controls can be bypassed, and so on. Hence, a penetration test focuses on improving the security of an organization. (For more resources related to this topic, see here.)

Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most effective auditing tools for carrying out penetration testing today. Metasploit offers a wide variety of exploits, an extensive exploit development environment, information gathering and web testing capabilities, and much more. This article has been written so that it will not only cover the frontend perspectives of Metasploit, but also focus on the development and customization of the framework. This article assumes that the reader has basic knowledge of the Metasploit framework; however, some sections will help you recall the basics as well. While covering Metasploit from the very basics to the elite level, we will stick to a step-by-step approach, as shown in the following diagram.

This article will help you recall the basics of penetration testing and Metasploit, which will help you warm up to the pace of this article. In this article, you will learn about the following topics:
The phases of a penetration test
The basics of the Metasploit framework
The workings of exploits
Testing a target network with Metasploit
The benefits of using databases

An important point to note here is that we will not become expert penetration testers in a single day. It takes practice, familiarization with the work environment, the ability to perform in critical situations, and most importantly, an understanding of how we have to cycle through the various stages of a penetration test. When we think about conducting a penetration test on an organization, we need to make sure that everything is set up perfectly and is in accordance with a penetration test standard. Therefore, if you feel you are new to penetration testing standards or uncomfortable with the term Penetration Testing Execution Standard (PTES), please refer to http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines to become more familiar with penetration testing and vulnerability assessments. According to PTES, the following diagram explains the various phases of a penetration test. Refer to the http://www.pentest-standard.org website to set up the hardware and systematic phases to be followed in a work environment; these setups are required to perform a professional penetration test.

Organizing a penetration test

Before we start firing sophisticated and complex attack vectors with Metasploit, we must get ourselves comfortable with the work environment.
Gathering knowledge about the work environment is a critical factor that comes into play before conducting a penetration test. Let us understand the various phases of a penetration test before jumping into Metasploit exercises and see how to organize a penetration test on a professional scale. Preinteractions The very first phase of a penetration test, preinteractions, involves a discussion of the critical factors regarding the conduct of a penetration test on a client's organization, company, institute, or network; this is done with the client. This serves as the connecting line between the penetration tester and the client. Preinteractions help a client get enough knowledge on what is about to be done over his or her network/domain or server. Therefore, the tester will serve here as an educator to the client. The penetration tester also discusses the scope of the test, all the domains that will be tested, and any special requirements that will be needed while conducting the test on the client's behalf. This includes special privileges, access to critical systems, and so on. The expected positives of the test should also be part of the discussion with the client in this phase. As a process, preinteractions discuss some of the following key points: Scope: This section discusses the scope of the project and estimates the size of the project. Scope also defines what to include for testing and what to exclude from the test. The tester also discusses ranges and domains under the scope and the type of test (black box or white box) to be performed. For white box testing, what all access options are required by the tester? Questionnaires for administrators, the time duration for the test, whether to include stress testing or not, and payment for setting up the terms and conditions are included in the scope. A general scope document provides answers to the following questions: What are the target organization's biggest security concerns? What specific hosts, network address ranges, or applications should be tested? What specific hosts, network address ranges, or applications should explicitly NOT be tested? Are there any third parties that own systems or networks that are in the scope, and which systems do they own (written permission must have been obtained in advance by the target organization)? Will the test be performed against a live production environment or a test environment? Will the penetration test include the following testing techniques: ping sweep of network ranges, port scan of target hosts, vulnerability scan of targets, penetration of targets, application-level manipulation, client-side Java/ActiveX reverse engineering, physical penetration attempts, social engineering? Will the penetration test include internal network testing? If so, how will access be obtained? Are client/end-user systems included in the scope? If so, how many clients will be leveraged? Is social engineering allowed? If so, how may it be used? Are Denial of Service attacks allowed? Are dangerous checks/exploits allowed? Goals: This section discusses various primary and secondary goals that a penetration test is set to achieve. The common questions related to the goals are as follows: What is the business requirement for this penetration test? This is required by a regulatory audit or standard Proactive internal decision to determine all weaknesses What are the objectives? 
Map out vulnerabilities Demonstrate that the vulnerabilities exist Test the incident response Actual exploitation of a vulnerability in a network, system, or application All of the above Testing terms and definitions: This section discusses basic terminologies with the client and helps him or her understand the terms well Rules of engagement: This section defines the time of testing, timeline, permissions to attack, and regular meetings to update the status of the ongoing test. The common questions related to rules of engagement are as follows: At what time do you want these tests to be performed? During business hours After business hours Weekend hours During a system maintenance window Will this testing be done on a production environment? If production environments should not be affected, does a similar environment (development and/or test systems) exist that can be used to conduct the penetration test? Who is the technical point of contact? For more information on preinteractions, refer to http://www.pentest-standard.org/index.php/File:Pre-engagement.png. Intelligence gathering / reconnaissance phase In the intelligence-gathering phase, you need to gather as much information as possible about the target network. The target network could be a website, an organization, or might be a full-fledged fortune company. The most important aspect is to gather information about the target from social media networks and use Google Hacking (a way to extract sensitive information from Google using specialized queries) to find sensitive information related to the target. Footprinting the organization using active and passive attacks can also be an approach. The intelligence phase is one of the most crucial phases in penetration testing. Properly gained knowledge about the target will help the tester to stimulate appropriate and exact attacks, rather than trying all possible attack mechanisms; it will also help him or her save a large amount of time as well. This phase will consume 40 to 60 percent of the total time of the testing, as gaining access to the target depends largely upon how well the system is foot printed. It is the duty of a penetration tester to gain adequate knowledge about the target by conducting a variety of scans, looking for open ports, identifying all the services running on those ports and to decide which services are vulnerable and how to make use of them to enter the desired system. The procedures followed during this phase are required to identify the security policies that are currently set in place at the target, and what we can do to breach them. Let us discuss this using an example. Consider a black box test against a web server where the client wants to perform a network stress test. Here, we will be testing a server to check what level of bandwidth and resource stress the server can bear or in simple terms, how the server is responding to the Denial of Service (DoS) attack. A DoS attack or a stress test is the name given to the procedure of sending indefinite requests or data to a server in order to check whether the server is able to handle and respond to all the requests successfully or crashes causing a DoS. A DoS can also occur if the target service is vulnerable to specially crafted requests or packets. In order to achieve this, we start our network stress-testing tool and launch an attack towards a target website. However, after a few seconds of launching the attack, we see that the server is not responding to our browser and the website does not open. 
Additionally, a page shows up saying that the website is currently offline. So what does this mean? Did we successfully take out the web server we wanted? Nope! In reality, it is a sign of protection mechanism set by the server administrator that sensed our malicious intent of taking the server down, and hence resulting in a ban of our IP address. Therefore, we must collect correct information and identify various security services at the target before launching an attack. The better approach is to test the web server from a different IP range. Maybe keeping two to three different virtual private servers for testing is a good approach. In addition, I advise you to test all the attack vectors under a virtual environment before launching these attack vectors onto the real targets. A proper validation of the attack vectors is mandatory because if we do not validate the attack vectors prior to the attack, it may crash the service at the target, which is not favorable at all. Network stress tests should generally be performed towards the end of the engagement or in a maintenance window. Additionally, it is always helpful to ask the client for white listing IP addresses used for testing. Now let us look at the second example. Consider a black box test against a windows 2012 server. While scanning the target server, we find that port 80 and port 8080 are open. On port 80, we find the latest version of Internet Information Services (IIS) running while on port 8080, we discover that the vulnerable version of the Rejetto HFS Server is running, which is prone to the Remote Code Execution flaw. However, when we try to exploit this vulnerable version of HFS, the exploit fails. This might be a common scenario where inbound malicious traffic is blocked by the firewall. In this case, we can simply change our approach to connecting back from the server, which will establish a connection from the target back to our system, rather than us connecting to the server directly. This may prove to be more successful as firewalls are commonly being configured to inspect ingress traffic rather than egress traffic. Coming back to the procedures involved in the intelligence-gathering phase when viewed as a process are as follows: Target selection: This involves selecting the targets to attack, identifying the goals of the attack, and the time of the attack. Covert gathering: This involves on-location gathering, the equipment in use, and dumpster diving. In addition, it covers off-site gathering that involves data warehouse identification; this phase is generally considered during a white box penetration test. Foot printing: This involves active or passive scans to identify various technologies used at the target, which includes port scanning, banner grabbing, and so on. Identifying protection mechanisms: This involves identifying firewalls, filtering systems, network- and host-based protections, and so on. For more information on gathering intelligence, refer to http://www.pentest-standard.org/index.php/Intelligence_Gathering Predicting the test grounds A regular occurrence during penetration testers' lives is when they start testing an environment, they know what to do next. If they come across a Windows box, they switch their approach towards the exploits that work perfectly for Windows and leave the rest of the options. An example of this might be an exploit for the NETAPI vulnerability, which is the most favorable choice for exploiting a Windows XP box. 
Suppose a penetration tester needs to visit an organization, and before going there, they learn that 90 percent of the machines in the organization are running on Windows XP, and some of them use Windows 2000 Server. The tester quickly decides that they will be using the NETAPI exploit for XP-based systems and the DCOM exploit for Windows 2000 server from Metasploit to complete the testing phase successfully. However, we will also see how we can use these exploits practically in the latter section of this article. Consider another example of a white box test on a web server where the server is hosting ASP and ASPX pages. In this case, we switch our approach to use Windows-based exploits and IIS testing tools, therefore ignoring the exploits and tools for Linux. Hence, predicting the environment under a test helps to build the strategy of the test that we need to follow at the client's site. For more information on the NETAPI vulnerability, visit http://technet.microsoft.com/en-us/security/bulletin/ms08-067. For more information on the DCOM vulnerability, visit http://www.rapid7.com/db/modules/exploit/Windows /dcerpc/ms03_026_dcom. Modeling threats In order to conduct a comprehensive penetration test, threat modeling is required. This phase focuses on modeling out correct threats, their effect, and their categorization based on the impact they can cause. Based on the analysis made during the intelligence-gathering phase, we can model the best possible attack vectors. Threat modeling applies to business asset analysis, process analysis, threat analysis, and threat capability analysis. This phase answers the following set of questions: How can we attack a particular network? To which crucial sections do we need to gain access? What approach is best suited for the attack? What are the highest-rated threats? Modeling threats will help a penetration tester to perform the following set of operations: Gather relevant documentation about high-level threats Identify an organization's assets on a categorical basis Identify and categorize threats Mapping threats to the assets of an organization Modeling threats will help to define the highest priority assets with threats that can influence these assets. Now, let us discuss a third example. Consider a black box test against a company's website. Here, information about the company's clients is the primary asset. It is also possible that in a different database on the same backend, transaction records are also stored. In this case, an attacker can use the threat of a SQL injection to step over to the transaction records database. Hence, transaction records are the secondary asset. Mapping a SQL injection attack to primary and secondary assets is achievable during this phase. Vulnerability scanners such as Nexpose and the Pro version of Metasploit can help model threats clearly and quickly using the automated approach. This can prove to be handy while conducting large tests. For more information on the processes involved during the threat modeling phase, refer to http://www.pentest-standard.org/index.php/Threat_Modeling. Vulnerability analysis Vulnerability analysis is the process of discovering flaws in a system or an application. These flaws can vary from a server to web application, an insecure application design for vulnerable database services, and a VOIP-based server to SCADA-based services. This phase generally contains three different mechanisms, which are testing, validation, and research. Testing consists of active and passive tests. 
Validation consists of dropping the false positives and confirming the existence of vulnerabilities through manual validations. Research refers to verifying a vulnerability that is found and triggering it to confirm its existence. For more information on the processes involved during the threat-modeling phase, refer to http://www.pentest-standard.org/index.php/Vulnerability_Analysis. Exploitation and post-exploitation The exploitation phase involves taking advantage of the previously discovered vulnerabilities. This phase is considered as the actual attack phase. In this phase, a penetration tester fires up exploits at the target vulnerabilities of a system in order to gain access. This phase is covered heavily throughout the article. The post-exploitation phase is the latter phase of exploitation. This phase covers various tasks that we can perform on an exploited system, such as elevating privileges, uploading/downloading files, pivoting, and so on. For more information on the processes involved during the exploitation phase, refer to http://www.pentest-standard.org/index.php/Exploitation. For more information on post exploitation, refer to http://www.pentest-standard.org/index.php/Post_Exploitation. Reporting Creating a formal report of the entire penetration test is the last phase to conduct while carrying out a penetration test. Identifying key vulnerabilities, creating charts and graphs, recommendations, and proposed fixes are a vital part of the penetration test report. An entire section dedicated to reporting is covered in the latter half of this article. For more information on the processes involved during the threat modeling phase, refer to http://www.pentest-standard.org/index.php/Reporting. Mounting the environment Before going to a war, the soldiers must make sure that their artillery is working perfectly. This is exactly what we are going to follow. Testing an environment successfully depends on how well your test labs are configured. Moreover, a successful test answers the following set of questions: How well is your test lab configured? Are all the required tools for testing available? How good is your hardware to support such tools? Before we begin to test anything, we must make sure that all the required set of tools are available and that everything works perfectly. Summary Throughout this article, we have introduced the phases involved in penetration testing. We have also seen how we can set up Metasploit and conduct a black box test on the network. We recalled the basic functionalities of Metasploit as well. We saw how we could perform a penetration test on two different Linux boxes and Windows Server 2012. We also looked at the benefits of using databases in Metasploit. After completing this article, we are equipped with the following: Knowledge of the phases of a penetration test The benefits of using databases in Metasploit The basics of the Metasploit framework Knowledge of the workings of exploits and auxiliary modules Knowledge of the approach to penetration testing with Metasploit The primary goal of this article was to inform you about penetration test phases and Metasploit. We will dive into the coding part of Metasploit and write our custom functionalities to the Metasploit framework. Resources for Article: Further resources on this subject: Introducing Penetration Testing [article] Open Source Intelligence [article] Ruby and Metasploit Modules [article]


How to add Unit Tests to a Sails Framework Application

Luis Lobo
26 Sep 2016
8 min read
There are different ways to implement unit tests for a Node.js application. Most of them use Mocha as the test framework and Chai as the assertion library, and some of them include Istanbul for code coverage. We will be using those tools, not going into deep detail on how to use them, but rather on how to successfully configure and implement them for a Sails project.

1) Creating a new application from scratch (if you don't have one already)

First of all, let's create a Sails application from scratch. The Sails version in use for this article is 0.12.3. If you already have a Sails application, then you can continue to step 2. Issuing the following command creates the new application:

$ sails new sails-test-article

Once we create it, we will have the following file structure:

./sails-test-article
├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   └── templates
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views

2) Create a basic test structure

We want a folder structure that contains all our tests. For now we will only add unit tests. In this project we want to test only services and controllers.

Add the necessary modules:

npm install --save-dev mocha chai istanbul supertest

Folder structure. Let's create the test folder structure that supports our tests:

mkdir -p test/fixtures test/helpers test/unit/controllers test/unit/services

After the creation of the folders, we will have this structure:

./sails-test-article
├── api [...]
├── test
│   ├── fixtures
│   ├── helpers
│   └── unit
│       ├── controllers
│       └── services
└── views

We now create a mocha.opts file inside the test folder. It contains Mocha options, such as a timeout per test run, that will be passed by default to Mocha every time it runs. One option goes per line, as described in the Mocha documentation for mocha.opts:

--require chai
--reporter spec
--recursive
--ui bdd
--globals sails
--timeout 5s
--slow 2000

Up to this point, we have all our tools set up. We can do a very basic test run:

mocha test

It prints out this:

0 passing (2ms)

Normally, Node.js applications define a test script in the package.json file. Edit it so that it now looks like this:

"scripts": {
  "debug": "node debug app.js",
  "start": "node app.js",
  "test": "mocha test"
}

We are ready for the next step.

3) Bootstrap file

The bootstrap.js file is the one that defines the environment that all tests use. Inside it, we define before and after events. In them, we are starting and stopping (or 'lifting' and 'lowering' in Sails language) our Sails application. Since Sails makes models, controllers, and services globally available at runtime, we need to start them here:

var sails = require('sails');
var _ = require('lodash');
global.chai = require('chai');
global.should = chai.should();

before(function (done) {
  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(5000);
  sails.lift({
    log: { level: 'silent' },
    hooks: { grunt: false },
    models: {
      connection: 'unitTestConnection',
      migrate: 'drop'
    },
    connections: {
      unitTestConnection: {
        adapter: 'sails-disk'
      }
    }
  }, function (err, server) {
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function (done) {
  // here you can clear fixtures, etc.
  if (sails && _.isFunction(sails.lower)) {
    sails.lower(done);
  }
});

This file will be required in each of our tests. That way, each test can be run individually if needed, or they can all be run as a whole.
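Because bootstrap.js is required from each spec file rather than configured globally, you can also point Mocha at a single spec while still picking up test/mocha.opts automatically. The path below assumes the PostService spec that we create in the next step, and uses the locally installed Mocha binary so nothing needs to be installed globally:

./node_modules/.bin/mocha test/unit/services/PostService.spec.js

On newer npm versions, npx mocha test/unit/services/PostService.spec.js does the same thing.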
4) Services tests

We are now adding two models and one service to show how to test services.

Create a Comment model in /api/models/Comment.js:

/**
 * Comment.js
 */
module.exports = {
  attributes: {
    comment: {type: 'string'},
    timestamp: {type: 'datetime'}
  }
};

Create a Post model in /api/models/Post.js:

/**
 * Post.js
 */
module.exports = {
  attributes: {
    title: {type: 'string'},
    body: {type: 'string'},
    timestamp: {type: 'datetime'},
    comments: {model: 'Comment'}
  }
};

Create a Post service in /api/services/PostService.js:

/**
 * PostService
 *
 * @description :: Service that handles posts
 */
module.exports = {
  getPostsWithComments: function () {
    return Post
      .find()
      .populate('comments');
  }
};

To test the Post service, we need to create a test for it in /test/unit/services/PostService.spec.js. In the case of services, we want to test business logic. So basically, you call your service methods and evaluate the results using an assertion library. In this case, we are using Chai's should.

/* global PostService */
// Here is where we init our 'sails' environment and application
require('../../bootstrap');

// Here we have our tests
describe('The PostService', function () {
  before(function (done) {
    Post.create({})
      .then(Post.create({})
        .then(Post.create({})
          .then(function () {
            done();
          })
        )
      );
  });

  it('should return all posts with their comments', function (done) {
    PostService
      .getPostsWithComments()
      .then(function (posts) {
        posts.should.be.an('array');
        posts.should.have.length(3);
        done();
      })
      .catch(done);
  });
});

We can now test our service by running:

npm test

The result should be similar to this one:

> sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article
> mocha test

  The PostService
    ✓ should return all posts with their comments

  1 passing (979ms)

5) Controllers tests

In the case of controllers, we want to validate that our requests are working, and that they are returning the correct error codes and the correct data. In this case, we make use of the SuperTest module, which provides HTTP assertions.
We add now a Post controller with this content in /api/controllers/PostController.js: /** * PostController */ module.exports = { getPostsWithComments: function (req, res) { PostService.getPostsWithComments() .then(function (posts) { res.ok(posts); }) .catch(res.negotiate); } }; And now we create a Post controller test in: /test/unit/controllers/PostController.spec.js: // Here is were we init our 'sails' environment and application var supertest = require('supertest'); require('../../bootstrap'); describe('The PostController', function () { var createdPostId = 0; it('should create a post', function (done) { var agent = supertest.agent(sails.hooks.http.app); agent .post('/post') .set('Accept', 'application/json') .send({"title": "a post", "body": "some body"}) .expect('Content-Type', /json/) .expect(201) .end(function (err, result) { if (err) { done(err); } else { result.body.should.be.an('object'); result.body.should.have.property('id'); result.body.should.have.property('title', 'a post'); result.body.should.have.property('body', 'some body'); createdPostId = result.body.id; done(); } }); }); it('should get posts with comments', function (done) { var agent = supertest.agent(sails.hooks.http.app); agent .get('/post/getPostsWithComments') .set('Accept', 'application/json') .expect('Content-Type', /json/) .expect(200) .end(function (err, result) { if (err) { done(err); } else { result.body.should.be.an('array'); result.body.should.have.length(1); done(); } }); }); it('should delete post created', function (done) { var agent = supertest.agent(sails.hooks.http.app); agent .delete('/post/' + createdPostId) .set('Accept', 'application/json') .expect('Content-Type', /json/) .expect(200) .end(function (err, result) { if (err) { returndone(err); } else { returndone(null, result.text); } }); }); }); After running the tests again: npm test We can see that now we have 4 tests: > sails-test-article@0.0.0 test /home/lobo/dev/luislobo/sails-test-article > mocha test The PostController ✓ should create a post ✓ should get posts with comments ✓ should delete post created The PostService ✓ should return all posts with their comments 4 passing (1s) 6) Code Coverage Finally, we want to know if our code is being covered by our unit tests, with the help of Istanbul. To generate a report, we just need to run: istanbul cover _mocha test Once we run it, we will have a result similar to this one: The PostController ✓ should create a post ✓ should get posts with comments ✓ should delete post created The PostService ✓ should return all posts with their comments 4 passing (1s) ============================================================================= Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json] Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage] ============================================================================= =============================== Coverage summary =============================== Statements : 26.95% ( 45/167 ) Branches : 3.28% ( 4/122 ) Functions : 35.29% ( 6/17 ) Lines : 26.95% ( 45/167 ) ================================================================================ In this case, we can see that the percentages are not very nice. We don't have to worry much about these since most of the “not covered” code is in /api/policies and /api/responses. You can check that result in a file that was created after istanbul ran, in ./coverage/lcov-report/index.html. 
If you remove those folders and run it again, you will see the difference. rm -rf api/policies api/responses istanbul cover _mocha test ⬡ 4.4.2 [±master ●●●] Now the result is much better: 100% coverage! The PostController ✓ should create a post ✓ should get posts with comments ✓ should delete post created The PostService ✓ should return all posts with their comments 4 passing (1s) ============================================================================= Writing coverage object [/home/lobo/dev/luislobo/sails-test-article/coverage/coverage.json] Writing coverage reports at [/home/lobo/dev/luislobo/sails-test-article/coverage] ============================================================================= =============================== Coverage summary =============================== Statements : 100% ( 24/24 ) Branches : 100% ( 0/0 ) Functions : 100% ( 4/4 ) Lines : 100% ( 24/24 ) ================================================================================ Now if you check the report again, you will see a different picture: Coverage report You can get the source code for each of the steps here. I hope you enjoyed the post! Reference Sails documentation on Testing your code Follows recommendations from Sails author, Mike McNeil, Adds some extra stuff based on my own experience developing applications using Sails Framework. About the author Luis Lobo Borobia is the CTO at FictionCity.NET, mentor and advisor, independent software engineer, consultant, and conference speaker. He has a background as a software analyst and designer—creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things using the latest bleeding-edge software and hardware technologies available.

How to Denoise Images with Neural Networks

Graham Annett
26 Sep 2016
8 min read
The premise of denoising images is very useful and can be applied to images, sounds, texts, and more. While deep learning is possibly not the best approach, it is an interesting one, and shows how versatile deep learning can be. Get The Data The data we will be using is a dataset of faces from github user hromi. It's a fun dataset to play around with because it has both smiling and non-smiling images of faces and it’s good for a lot of different scenarios, such as training to find a smile or training to fill missing parts of images. The data is neatly packaged in a zip and is easily accessed with the following: import os import numpy as np import zipfile from urllib import request import matplotlib.pyplot as plt import matplotlib.image as mpimg import random %matplotlib inline url = 'https://github.com/hromi/SMILEsmileD/archive/master.zip' request.urlretrieve(url, 'data.zip') zipfile.ZipFile('data.zip').extractall() This will download all of the images to a folder with a variety of peripheral information we will not be using, but would be incredibly fun to incorporate into a model in other ways. Preview images First, let’s load all of the data and preview some images: x_pos = [] base_path = 'SMILEsmileD-master/SMILEs/' positive_smiles = base_path + 'positives/positives7/' negative_smiles = base_path + 'SMILEsmileD-master/SMILEs/negatives/negatives7/' for img in os.listdir(positive_smiles): x_pos.append(mpimg.imread(positive_smiles + img)) # change into np.array and scale to 255. which is max x_pos = np.array(x_pos)/255. # reshape which is explained later x_pos = x_pos.reshape(len(x_pos),1,64,64) # plot 3 random images plt.figure(figsize=(8, 6)) n = 3 for i in range(n): ax = plt.subplot(2, 3, i+1) # using i+1 since 0 is deprecated in future matplotlib plt.imshow(random.choice(x_pos), cmap=plt.cm.gray) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) Below is what you should get: Visualize Noise From here let's add a random amount of noise and visualize it. plt.figure(figsize=(8, 10)) plt.subplot(3,2,1).set_title('normal') plt.subplot(3,2,2).set_title('noisy') plt.tight_layout() n = 6 for i in range(1,n+1,2): # 2 columns with good on left side, noisy on right side ax = plt.subplot(3, 2, i) rand_img = random.choice(x_pos)[0] random_factor = 0.05 * np.random.normal(loc=0., scale=1., size=rand_img.shape) # plot normal images plt.imshow(rand_img, cmap=plt.cm.gray) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot noisy images ax = plt.subplot(3,2,i+1) plt.imshow(rand_img + random_factor, cmap=plt.cm.gray) ax.get_yaxis().set_visible(False) ax.get_xaxis().set_visible(False) Below is comparison of normal image on the left and a noisy image on the right:   As you can see, the images are still visually similar to the normal images but this technique can be very useful if an image is blurry or very grainy due to the high ISO in traditional cameras. Prepare the Dataset From here it's always good practice to split the dataset if we intend to evaluate our model later, so we will split the data into a train and a test set. We will also shuffle the images, since I am unaware of any requirement for order to the data. 
# shuffle the images in case there was some underlying order np.random.shuffle(x_pos) # split into test and train set, but we will use keras built in validation_size x_pos_train = x_pos[int(x_pos.shape[0]* .20):] x_pos_test = x_pos[:int(x_pos.shape[0]* .20)] x_pos_noisy = x_pos_train + 0.05 * np.random.normal(loc=0., scale=1., size=x_pos_train.shape) Training Model The model we are using is based off of the new Keras functional API with a Sequential comparison as well. Quick intro to Keras Functional API While previously there was the graph and sequential model, almost all models used the Sequential form. This is the standard type of modeling in deep learning and consists of a linear ordering of layer to layer (that is, no merges or splits). Using the Sequential model is the same as before and is incredibly modular and understandable since the model is composed by adding layer upon layer. For example, our keras model in Sequential form will look like the following: from keras.models import Sequential from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, UpSampling2D seqmodel = Sequential() seqmodel.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(1, 64,64))) seqmodel.add(Activation('relu')) seqmodel.add(MaxPooling2D((2, 2), border_mode='same') seqmodel.add(Convolution2D(32, 3, 3, border_mode='same')) seqmodel.add(Activation('relu')) seqmodel.add(UpSampling2D((2, 2)) seqmodel.add(Convolution2D(1, 3, 3, border_mode='same')) seqmodel.add(Activation('sigmoid')) seqmodel.compile(optimizer='adadelta', loss='binary_crossentropy') Versus the Functional Model format: from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, UpSampling2D from keras.models import Model input_img = Input(shape=(1, 64, 64)) x = Convolution2D(32, 3, 3, border_mode='same')(input_img) x = Activation('relu')(x) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Convolution2D(32, 3, 3, border_mode='same')(x) x = Activation('relu')(x) x = UpSampling2D((2, 2))(x) x = Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same')(x) funcmodel = Model(input_img, x) funcmodel.compile(optimizer='adadelta', loss='binary_crossentropy') While these models look very similar, the functional form is more versatile at the cost of being more confusing. Let's fit these and compare the results to show that they are equivalent: seqmodel.fit(x_pos_noisy, x_pos_train, nb_epoch=10, batch_size=32, shuffle=True, validation_split=.20) funcmodel.fit(x_pos_noisy, x_pos_train, nb_epoch=10, batch_size=32, shuffle=True, validation_split=.20) Following the training time and loss functions should net near-identical results. For the sake of argument, we will plot outputs from both models and show how they result in near identical results. 
# create noisy test set and create predictions from sequential and function x_noisy_test = x_pos_test + 0.05 * np.random.normal(loc=0., scale=1., size=x_pos_test.shape) f1 = funcmodel.predict(x_noisy_test) s1 = seqmodel.predict(x_noisy_test) plt.figure(figsize=(12, 12)) plt.subplot(3,4,1).set_title('normal') plt.subplot(3,4,2).set_title('noisy') plt.subplot(3,4,3).set_title('denoised-functional') plt.subplot(3,4,4).set_title('denoised-sequential') n = 3 for i in range(1,12,4): img_index = random.randint(0,len(x_noisy_test)) # plot original image ax = plt.subplot(3, 4, i) plt.imshow(x_pos_test[img_index][0], cmap=plt.cm.gray) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot noisy images ax = plt.subplot(3,4,i+1) plt.imshow(x_noisy_test[img_index][0], cmap=plt.cm.gray) ax.get_yaxis().set_visible(False) ax.get_xaxis().set_visible(False) # plot denoised functional ax = plt.subplot(3,4,i+2) plt.imshow(f1[img_index][0], cmap=plt.cm.gray) ax.get_yaxis().set_visible(False) ax.get_xaxis().set_visible(False) # plot denoised sequential ax = plt.subplot(3,4,i+3) plt.imshow(s1[img_index][0], cmap=plt.cm.gray) ax.get_yaxis().set_visible(False) ax.get_xaxis().set_visible(False) plt.tight_layout() The result will be something like this.   Since we only trained the net with 10 epochs and it was very shallow, we also could add more layers, use more epochs, and see if it nets in better results: seqmodel = Sequential() seqmodel.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(1, 64,64))) seqmodel.add(Activation('relu')) seqmodel.add(MaxPooling2D((2, 2), border_mode='same')) seqmodel.add(Convolution2D(32, 3, 3, border_mode='same')) seqmodel.add(Activation('relu')) seqmodel.add(MaxPooling2D((2, 2), border_mode='same')) seqmodel.add(Convolution2D(32, 3, 3, border_mode='same')) seqmodel.add(Activation('relu')) seqmodel.add(UpSampling2D((2, 2))) seqmodel.add(Convolution2D(32, 3, 3, border_mode='same')) seqmodel.add(Activation('relu')) seqmodel.add(UpSampling2D((2, 2))) seqmodel.add(Convolution2D(1, 3, 3, border_mode='same')) seqmodel.add(Activation('sigmoid')) seqmodel.compile(optimizer='adadelta', loss='binary_crossentropy') seqmodel.fit(x_pos_noisy, x_pos_train, nb_epoch=50, batch_size=32, shuffle=True, validation_split=.20, verbose=0) s2 = seqmodel.predict(x_noisy_test)plt.figure(figsize=(10, 10)) plt.subplot(3,3,1).set_title('normal') plt.subplot(3,3,2).set_title('noisy') plt.subplot(3,3,3).set_title('denoised') for i in range(1,9,3): img_index = random.randint(0,len(x_noisy_test)) # plot original image ax = plt.subplot(3, 3, i) plt.imshow(x_pos_test[img_index][0], cmap=plt.cm.gray) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot noisy images ax = plt.subplot(3,3,i+1) plt.imshow(x_noisy_test[img_index][0], cmap=plt.cm.gray) ax.get_yaxis().set_visible(False) ax.get_xaxis().set_visible(False) # plot denoised functional ax = plt.subplot(3,3,i+2) plt.imshow(s2[img_index][0], cmap=plt.cm.gray) ax.get_yaxis().set_visible(False) ax.get_xaxis().set_visible(False) plt.tight_layout() While this is a small example, it's easily extendable to other scenarios. The ability to denoise an image is by no means new and unique to neural networks, but is an interesting experiment about one of the many uses that show potential for deep learning. About the author Graham Annett is an NLP Engineer at Kip (Kipthis.com).  He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras.  He can be found on GitHub or via here .


Using model serializers to eliminate duplicate code

Packt
23 Sep 2016
12 min read
In this article by Gastón C. Hillar, author of, Building RESTful Python Web Services, we will cover the use of model serializers to eliminate duplicate code and use of default parsing and rendering options. (For more resources related to this topic, see here.) Using model serializers to eliminate duplicate code The GameSerializer class declares many attributes with the same names that we used in the Game model and repeats information such as the types and the max_length values. The GameSerializer class is a subclass of the rest_framework.serializers.Serializer, it declares attributes that we manually mapped to the appropriate types, and overrides the create and update methods. Now, we will create a new version of the GameSerializer class that will inherit from the rest_framework.serializers.ModelSerializer class. The ModelSerializer class automatically populates both a set of default fields and a set of default validators. In addition, the class provides default implementations for the create and update methods. In case you have any experience with Django Web Framework, you will notice that the Serializer and ModelSerializer classes are similar to the Form and ModelForm classes. Now, go to the gamesapi/games folder folder and open the serializers.py file. Replace the code in this file with the following code that declares the new version of the GameSerializer class. The code file for the sample is included in the restful_python_chapter_02_01 folder. from rest_framework import serializers from games.models import Game class GameSerializer(serializers.ModelSerializer): class Meta: model = Game fields = ('id', 'name', 'release_date', 'game_category', 'played') The new GameSerializer class declares a Meta inner class that declares two attributes: model and fields. The model attribute specifies the model related to the serializer, that is, the Game class. The fields attribute specifies a tuple of string whose values indicate the field names that we want to include in the serialization from the related model. There is no need to override either the create or update methods because the generic behavior will be enough in this case. The ModelSerializer superclass provides implementations for both methods. We have reduced boilerplate code that we didn’t require in the GameSerializer class. We just needed to specify the desired set of fields in a tuple. Now, the types related to the game fields is included only in the Game class. Press Ctrl + C to quit Django’s development server and execute the following command to start it again. python manage.py runserver Using the default parsing and rendering options and move beyond JSON The APIView class specifies default settings for each view that we can override by specifying appropriate values in the gamesapi/settings.py file or by overriding the class attributes in subclasses. As previously explained the usage of the APIView class under the hoods makes the decorator apply these default settings. Thus, whenever we use the decorator, the default parser classes and the default renderer classes will be associated with the function views. By default, the value for the DEFAULT_PARSER_CLASSES is the following tuple of classes: ( 'rest_framework.parsers.JSONParser', 'rest_framework.parsers.FormParser', 'rest_framework.parsers.MultiPartParser' ) When we use the decorator, the API will be able to handle any of the following content types through the appropriate parsers when accessing the request.data attribute. 
application/json application/x-www-form-urlencoded multipart/form-data When we access the request.data attribute in the functions, Django REST Framework examines the value for the Content-Type header in the incoming request and determines the appropriate parser to parse the request content. If we use the previously explained default values, Django REST Framework will be able to parse the previously listed content types. However, it is extremely important that the request specifies the appropriate value in the Content-Type header. We have to remove the usage of the rest_framework.parsers.JSONParser class in the functions to make it possible to be able to work with all the configured parsers and stop working with a parser that only works with JSON. The game_list function executes the following two lines when request.method is equal to 'POST': game_data = JSONParser().parse(request) game_serializer = GameSerializer(data=game_data) We will remove the first line that uses the JSONParser and we will pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines: game_serializer = GameSerializer(data=request.data) The game_detail function executes the following two lines when request.method is equal to 'PUT': game_data = JSONParser().parse(request) game_serializer = GameSerializer(game, data=game_data) We will make the same edits done for the code in the game_list function. We will remove the first line that uses the JSONParser and we will pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines: game_serializer = GameSerializer(game, data=request.data) By default, the value for the DEFAULT_RENDERER_CLASSES is the following tuple of classes: ( 'rest_framework.renderers.JSONRenderer', 'rest_framework.renderers.BrowsableAPIRenderer', ) When we use the decorator, the API will be able to render any of the following content types in the response through the appropriate renderers when working with the rest_framework.response.Response object. application/json text/html By default, the value for the DEFAULT_CONTENT_NEGOTIATION_CLASS is the rest_framework.negotiation.DefaultContentNegotiation class. When we use the decorator, the API will use this content negotiation class to select the appropriate renderer for the response based on the incoming request. This way, when a request specifies that it will accept text/html, the content negotiation class selects the rest_framework.renderers.BrowsableAPIRenderer to render the response and generate text/html instead of application/json. We have to replace the usages of both the JSONResponse and HttpResponse classes in the functions with the rest_framework.response.Response class. The Response class uses the previously explained content negotiation features, renders the received data into the appropriate content type and returns it to the client. Now, go to the gamesapi/games folder folder and open the views.py file. Replace the code in this file with the following code that removes the JSONResponse class, uses the @api_view decorator for the functions and the rest_framework.response.Response class. The modified lines are highlighted. The code file for the sample is included in the restful_python_chapter_02_02 folder. 
from rest_framework.parsers import JSONParser from rest_framework import status from rest_framework.decorators import api_view from rest_framework.response import Response from games.models import Game from games.serializers import GameSerializer @api_view(['GET', 'POST']) def game_list(request): if request.method == 'GET': games = Game.objects.all() games_serializer = GameSerializer(games, many=True) return Response(games_serializer.data) elif request.method == 'POST': game_serializer = GameSerializer(data=request.data) if game_serializer.is_valid(): game_serializer.save() return Response(game_serializer.data, status=status.HTTP_201_CREATED) return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST) @api_view(['GET', 'PUT', 'POST']) def game_detail(request, pk): try: game = Game.objects.get(pk=pk) except Game.DoesNotExist: return Response(status=status.HTTP_404_NOT_FOUND) if request.method == 'GET': game_serializer = GameSerializer(game) return Response(game_serializer.data) elif request.method == 'PUT': game_serializer = GameSerializer(game, data=request.data) if game_serializer.is_valid(): game_serializer.save() return Response(game_serializer.data) return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST) elif request.method == 'DELETE': game.delete() return Response(status=status.HTTP_204_NO_CONTENT) After you save the previous changes, run the following command: http OPTIONS :8000/games/ The following is the equivalent curl command: curl -iX OPTIONS :8000/games/ The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/. The request will match and run the views.game_list function, that is, the game_list function declared within the games/views.py file. We added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs, parsing and rendering capabilities. The following lines show the output: HTTP/1.0 200 OK Allow: GET, POST, OPTIONS, PUT Content-Type: application/json Date: Thu, 09 Jun 2016 21:35:58 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "description": "", "name": "Game Detail", "parses": [ "application/json", "application/x-www-form-urlencoded", "multipart/form-data" ], "renders": [ "application/json", "text/html" ] } The response header includes an Allow key with a comma-separated list of HTTP verbs supported by the resource collection as its value: GET, POST, OPTIONS. As our request didn’t specify the allowed content type, the function rendered the response with the default application/json content type. The response body specifies the Content-type that the resource collection parses and the Content-type that it renders. Run the following command to compose and send and HTTP request with the OPTIONS verb for a game resource. Don’t forget to replace 3 with a primary key value of an existing game in your configuration: http OPTIONS :8000/games/3/ The following is the equivalent curl command: curl -iX OPTIONS :8000/games/3/ The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/3/. The request will match and run the views.game_detail function, that is, the game_detail function declared within the games/views.py file. We also added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs, parsing and rendering capabilities. 
The following lines show the output: HTTP/1.0 200 OK Allow: GET, POST, OPTIONS Content-Type: application/json Date: Thu, 09 Jun 2016 20:24:31 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "description": "", "name": "Game List", "parses": [ "application/json", "application/x-www-form-urlencoded", "multipart/form-data" ], "renders": [ "application/json", "text/html" ] } The response header includes an Allow key with comma-separated list of HTTP verbs supported by the resource as its value: GET, POST, OPTIONS, PUT. The response body specifies the content-type that the resource parses and the content-type that it renders, with the same contents received in the previous OPTIONS request applied to a resource collection, that is, to a games collection. When we composed and sent POST and PUT commands, we had to use the use the -H "Content-Type: application/json" option to indicate curl to send the data specified after the -d option as application/json instead of the default application/x-www-form-urlencoded. Now, in addition to application/json, our API is capable of parsing application/x-www-form-urlencoded and multipart/form-data data specified in the POST and PUT requests. Thus, we can compose and send a POST command that sends the data as application/x-www-form-urlencoded with the changes made to our API. We will compose and send an HTTP request to create a new game. In this case, we will use the -f option for HTTPie that serializes data items from the command line as form fields and sets the Content-Type header key to the application/x-www-form-urlencoded value. http -f POST :8000/games/ name='Toy Story 4' game_category='3D RPG' played=false release_date='2016-05-18T03:02:00.776594Z' The following is the equivalent curl command. Notice that we don’t use the -H option and curl will send the data in the default application/x-www-form-urlencoded: curl -iX POST -d '{"name":"Toy Story 4", "game_category":"3D RPG", "played": "false", "release_date": "2016-05-18T03:02:00.776594Z"}' :8000/games/ The previous commands will compose and send the following HTTP request: POST http://localhost:8000/games/ with the Content-Type header key set to the application/x-www-form-urlencoded value and the following data: name=Toy+Story+4&game_category=3D+RPG&played=false&release_date=2016-05-18T03%3A02%3A00.776594Z The request specifies /games/, and therefore, it will match '^games/$' and run the views.game_list function, that is, the updated game_detail function declared within the games/views.py file. As the HTTP verb for the request is POST, the request.method property is equal to 'POST', and therefore, the function will execute the code that creates a GameSerializer instance and passes request.data as the data argument for its creation. The rest_framework.parsers.FormParser class will parse the data received in the request, the code creates a new Game and, if the data is valid, it saves the new Game. If the new Game was successfully persisted in the database, the function returns an HTTP 201 Created status code and the recently persisted Game serialized to JSON in the response body. 
The following lines show an example response for the HTTP request, with the new Game object in the JSON response:

HTTP/1.0 201 Created
Allow: OPTIONS, POST, GET
Content-Type: application/json
Date: Fri, 10 Jun 2016 20:38:40 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "game_category": "3D RPG",
    "id": 20,
    "name": "Toy Story 4",
    "played": false,
    "release_date": "2016-05-18T03:02:00.776594Z"
}

After the changes we made in the code, we can run the following command to see what happens when we compose and send an HTTP request with an HTTP verb that is not supported:

http PUT :8000/games/

The following is the equivalent curl command:

curl -iX PUT localhost:8000/games/

The previous command will compose and send the following HTTP request: PUT http://localhost:8000/games/. The request will match and try to run the views.game_list function, that is, the game_list function declared within the games/views.py file. The @api_view decorator we added to this function doesn't include 'PUT' in the string list with the allowed HTTP verbs, and therefore, the default behavior returns a 405 Method Not Allowed status code. The following lines show the output with the response from the previous request. The JSON content provides a detail key with a string value that indicates that the PUT method is not allowed:

HTTP/1.0 405 Method Not Allowed
Allow: GET, OPTIONS, POST
Content-Type: application/json
Date: Sat, 11 Jun 2016 00:49:30 GMT
Server: WSGIServer/0.2 CPython/3.5.1
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN

{
    "detail": "Method \"PUT\" not allowed."
}

Summary

This article covered the use of model serializers and how they are effective in removing duplicate code.

Resources for Article:

Further resources on this subject:
  • Making History with Event Sourcing [article]
  • Implementing a WCF Service in the Real World [article]
  • WCF – Windows Communication Foundation [article]


Buildbox 2 Game Development: peek-a-boo

Packt
23 Sep 2016
20 min read
In this article, Ty Audronis, author of the book Buildbox 2 Game Development, teaches the reader the Buildbox 2 game development environment by example. The following excerpts from the book should help you gain an understanding of the teaching style and the feel of the book. The largest example we give is a game called Ramblin' Rover (a motocross-style game that uses some of the most basic to the most advanced features of Buildbox). Let's take a quick look.

(For more resources related to this topic, see here.)

Making the Rover Jump

As we've mentioned before, we're making a hybrid game. That is, it's a combination of a motocross game, a platformer, and a side-scrolling shooter game. Our initial rover will not be able to shoot at anything (we'll save this feature for the next upgraded rover that anyone can buy with in-game currency). But this rover will need to jump in order to make the game more fun.

As we know, NASA has never made a rover for Mars that jumps. But if they did, how would they do it? The surface of Mars is a combination of dust and rocks, so the surface conditions vary greatly in both traction and softness. One viable way is to make the rover move in the same way a spacecraft manoeuvres (using little gas jets). And since the gravity on Mars is lower than that on Earth, this seems legit enough to include it in our game.

While in our Mars Training Ground world, open the character properties for Training Rover. Drag the animated PNG sequence located in our Projects/RamblinRover/Characters/Rover001-Jump folder (a small four-frame animation) into the Jump Animation field. Now we have an animation of a jump-jet firing when we jump. We just need to make our rover actually jump. Your Properties window should look like the following screenshot:

The preceding screenshot shows the relevant sections of the character's properties window.

We're now going to revisit the Character Gameplay Settings section. Scroll the Properties window all the way down to this section. Here's where we actually configure a few settings in order to make the rover jump. The previous screenshot shows the section as we're going to set it up. You can configure your settings similarly.

The first setting we are considering is Jump Force. You may notice that the vertical force is set to 55. Since our gravity is -20 in this world, we need enough force to not only counteract the gravity, but also to give us a decent height (about half the screen). A good rule is to just make our Jump Force 2x our Gravity.

Next is Jump Counter. We've set it to 1. By default, it's set to 0. This actually means infinity. When Jump Counter is set to 0, there is no limit to how many times a player can use the jump boost… they could effectively ride the top of the screen using the jump boost, such as a flappy-bird control. So, we set it to 1 in order to limit the jumps to one at a time.

There is also a strange oddity with Buildbox that we can exploit with this. The jump counter resets only after the rover hits the ground. But there's a funny thing… the rover itself never actually touches the ground (unless it crashes), only the wheels do. There is one other way to reset the jump counter: by doing a flip. What this means is that once players use their jump up, the only way to reset it is to do a flip-trick off a ramp. Add a level of difficulty and excitement to the game using a quirk of the development software!
We could trick the software into believing that the character is simply close enough to the ground to reset the counter by increasing Ground Threshold to the distance that the body is from the ground when the wheels have landed. But why do this? It's kind of cool that a player has to do a trick to reset the jump jets.

Finally, let's untick the Jump From Ground checkbox. Since we're using jets for our boost, it makes sense that the driver could activate them while in the air. Plus, as we've already said, the body never meets the ground. Again, we could raise the ground threshold, but let's not (for the reasons stated previously).

Awesome! Go ahead and give it a try by previewing the level. Try jumping on the small ramp that we created, which is used to get on top of our cave. Now, instead of barely clearing it, the rover will easily clear it, and the player can then reset the counter by doing a flip off the big ramp on top.

Making a Game Over screen

This exercise will show you how to make some connections and new nodes using Game Mind Map. The first thing we're going to want is an event listener to sense when a character dies. It sounds complex, and if we were coding a game, this would take several lines of code to accomplish. In Buildbox, it's a simple drag-and-drop method.

If you double-click on the Game Field UI node, you'll be presented with the overlay for the UI and controls during gameplay. Since this is a basic template, you are actually presented with a blank screen. This template is for you to play around with on a computer, so no controls are on the screen. Instead, it is assumed that you would use keyboard controls to play the demo game. This is why the screen looks blank.

There are some significant differences between the UI editor and the World editor. You'll notice that the Character tab from the asset library is missing, and there is a timeline editor on the bottom. We'll get into how to use this timeline later. For now, let's keep things simple and add our Game Over sensor.

If you expand the Logic tab in the asset library, you'll find the Event Observer object. You can drag this object anywhere onto the stage. It doesn't even have to be in the visible window (the dark area in the center of the stage). So long as it's somewhere on the stage, the game can use this logic asset. If you do put it on the visible area of the stage, don't worry; it's an invisible asset, and won't show in your game.

While the Event Observer is selected on the stage, you'll notice that its properties pop up in the properties window (on the right side of the screen). By default, the Game Over type of event is selected. But if you select this drop-down menu, you'll notice a ton of different event types that this logic asset can handle. Let's leave all of the properties at their default values (except the name; change this to Game Over) and go back to Game Mind Map (the top-left button).

Do you notice anything different? The Game Field UI node now has a Game Over output. Now, we just need a place to send this output. Right-click on the blank space of the grid area. Now you can either create a new world or a new UI. Select Add New UI and you'll see a new green node that is titled New UI1. This new UI will be your Game Over screen when a character dies. Before we can use this new node, it needs to be connected to the Game Over output of Game Field UI. This process is exceedingly simple.
Just hold down your left mouse button on the Game Over output's dark dot, and drag it to the New UI1's Load dark dot (on the left side of the New UI1 node). Congratulations, you've just created your first connected node.

We're not done yet, though. We need to make this Game Over screen link back to restart the game. First, by selecting the New UI1 node, change its name using the parameters window (on the right of the screen) to Game Over UI. Make sure you hit your Enter key; this will commit the changed name. Now double-click on the Game Over UI node so we can add some elements to the screen. You can't have a Game Over screen without the words Game Over, so let's add some text.

So, we've pretty much completed the game field (except for some minor items that we'll address quite soon). But believe it or not, we're only halfway there! In this article, we're going to finally create our other two rovers, and we'll test and tweak our scenes with them. We'll set up all of our menus, information screens, and even a coin shop where we can use in-game currency to buy the other two rovers, or even use some real-world currency to short-cut and buy more in-game currency. And speaking of monetization, we'll set up two different types of advertising from multiple providers to help us make some extra cash. Or, in the coin shop, players can pay a modest fee to remove all advertising! Ready? Well, here we go!

We got a fever, and the only cure is more rovers!

So now that we've created other worlds, we definitely need to set up some rovers that are capable of traversing them. Let's begin with the optimal rover for Gliese. This one is called the K.R.A.B.B. (no, it doesn't actually stand for anything… but the rover looks like a crab, and acronyms look more military-like).

Go ahead and drag all of the images in the Rover002-Body folder as characters. Don't worry about the error message. This just tells you that only one character can be on the stage at a time. The software still loads this new character into the library, and that's all we really want at this time anyway. Of course, drag the images in the Rover002-Jump folder to the Jump Animation field, and the LaserShot.png file to the Bullet Animation field. Set up your K.R.A.B.B. with the following settings:

For Collision Shape, match this:

In the Asset Library, drag the K.R.A.B.B. above the Mars Training Rover. This will make it the default rover. Now, you can test your Gliese level (by soloing each scene) with this rover to make sure it's challenging, yet attainable. You'll notice some problems with the gun destroying ground objects, but we'll solve that soon enough.

Now, let's do the same with Rover 003. This one uses a single image for the Default Animation, but an image sequence for the jump. We'll get to the bullet for this one in a moment, but set it up as follows:

Collision Shape should look as follows:

You'll notice that a lot of the settings are different on this character, and you may wonder what the advantage of this is (since it doesn't lean as much as the K.R.A.B.B.). Well, it's a tank, so the damage it can take will be higher (which we'll set up shortly), and it can do multiple jumps before recharging (five, to be exact). This way, this rover can fly using flappy-bird style controls for short distances. It's going to take a lot more skill to pilot this rover, but once mastered, it'll be unstoppable. Let's move on to the bullet for this rover.
Click on the Edit button (the little pencil icon) inside Bullet Animation (once you've dragged the missile.png file into the field), and let's add a flame trail. Set up a particle emitter on the missile, and position it as shown in the following screenshots:

The image on the left shows the placement of the missile and the particle emitter. On the right, you can see the flame set up. You may wonder why it is pointed in the opposite direction. This will actually make the flames look more realistic (as if they're drifting behind the missile).

Preparing graphic assets for use in Buildbox

Okay, so as I said before, the only graphic assets that Buildbox can use are PNG files. If this was just a simple tutorial on how to make Ramblin' Rover, we could leave it there. But it's not just that. Ramblin' Rover is just an example of how a game is made, but we want to give you all of the tools and base knowledge you need to create all of your own games from scratch. Even if you don't create your own graphic assets, you need to be able to tell anybody creating them for you how you want them. And more importantly, you need to know why.

Graphics are absolutely the most important thing in developing a game. After all, you saw how just some eyes and sneakers made a cute character that people would want to see. Graphics create your world. They create characters that people want to succeed. Most importantly, graphics create the feel of your game, and differentiate it from other games on the market.

What exactly is a PNG file?

Anybody remember GIF files? No, not the animated GIFs that you see on most chat-rooms and on Facebook (although they are related). Back in the 1990s, a still-frame GIF file was the best way to have a graphics file with a transparent background. GIFs can be used for animation, and can have a number of different purposes. However, GIFs were clunky. How so? Well, converting an image to GIF was effectively lossy. This just means that information was lost in the conversion, and artifacts and noise could pop up and be present. Furthermore, GIFs used indexed colors. This means that anywhere from 2 to 256 colors could be used, and that's why you see something known as banding in GIF imagery. Banding shows up where something in real life goes from dark to light because of lighting and shadows. In real life, it's a smooth transition known as a gradient. With indexed colors, banding can occur when these various shades are outside of the index. In this case, the colors of these pixels are quantized (or snapped) to the nearest color in the index. The images here show a noisy and banded GIF (left) versus the original picture (right).

So, along came PNGs (Portable Network Graphics is what it stands for). Originally, the PNG format was what a program called Macromedia Fireworks used to save projects. Now, the same software is called Adobe Fireworks and is part of the Creative Cloud. Fireworks would cut up a graphics file into a table or image map and make areas of the image clickable via hyperlink for HTML web files. PNGs were still not widely supported by web browsers, so it would export the final web files as GIFs or JPEGs. But somewhere along the line, someone realized that the PNG image itself was extremely bandwidth efficient. So, in the 2000s, PNGs started to see some support in browsers. Up until around 2008, though, Microsoft's Internet Explorer still did not support PNGs with transparency, so some strange CSS hacks needed to be done to utilize them.
Today, though, the PNG file is the most widely used network-based image file. It's lossless, has great transparency, and is extremely efficient. PNGs are very widely used, and this is probably why Buildbox restricts compatibility to this format. Remember, Buildbox can export for multiple mobile and desktop platforms. Alright, so PNGs are great and very compatible. But there are multiple flavours of PNG files. So, what differentiates them?

What do bit-ratings mean?

When dealing with bit-ratings, you have to understand that when you hear 8-bit image and 24-bit image, they may be talking about two different types of rating, or even exactly the same type of image. Confused? Good, because when dealing with a graphics professional to create your assets, you're going to have to be a lot more specific, so let's give you a brief education in this.

Your typical image is 8 bits per channel (8 bpc), or 24 bits total (because there are three channels: red, green, and blue). This is also what they mean by a 16.7 million-color image. The math is pretty simple. A bit is either 0 or 1. 8 bits may look something like 01100110. This means that there are 256 possible combinations on that channel. Why? Because to calculate the number of possibilities, you take the number of possible values per slot to the power of the number of slots. 0 or 1 is 2 possibilities, and 8 bits is 8 slots. 2x2x2x2x2x2x2x2 (2 to the 8th power) is 256. To combine colors on a pixel, you multiply the possibilities of the three channels: 256x256x256, which is about 16.7 million. This is how they know that there are 16.7 million possible colors in an 8 bpc or 24-bit image. So saying 8-bit may mean per channel or overall. This is why it's extremely important to add the word "channel" if that's what you mean.

Finally, there is a fourth channel called alpha. The alpha channel is the transparency channel. So when you're talking about a 24-bit PNG with transparency, you're really talking about a 32-bit image. Why is this important to know? This is because some graphics programs (such as Photoshop) have 24-bit PNG as an option with a checkbox for transparency. But some other programs (such as the 3D software we used, called Lightwave) have an option for a 24-bit PNG and a 32-bit PNG. This is essentially the same as the Photoshop options, but with different names. By understanding what these bits per channel are and what they do, you can navigate your image-creating software options better.

So, what's an 8-bit PNG, and why is it so important to differentiate it from an 8-bit per channel PNG (or 24-bit PNG)? It is because an 8-bit PNG is highly compressed. Much like a GIF, it uses indexed colors. It also uses a great algorithm to "dither" or blend the colors to fill them in and avoid banding. 8-bit PNG files are extremely efficient on resources (that is, they are much smaller files), but they still look good, unless they have transparency. Because they are so highly compressed, the alpha channel is included in the 8 bits. So, if you use 8-bit PNG files for objects that require transparency, they will end up with a white-ghosting effect around them and look terrible on screen, much like a weather report where the weather reporter's green screen is bad.

So, the rule is…

What all this means to you is pretty simple. For objects that require transparency channels, always use 24-bit PNG files with transparency (also called 8 bits per channel, or 32-bit images). For objects that have no transparency (such as block-shaped obstacles and objects), use 8-bit PNG files.
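The bit arithmetic above is easy to verify, and you can also check which flavour of PNG a file actually is before dropping it into Buildbox. The snippet below is a small illustrative sketch (not from the book) that assumes the third-party Pillow imaging library is installed; the file path is a placeholder:

# pip install Pillow
from PIL import Image

# The color math from the text: 8 bits -> 256 values per channel,
# three channels -> roughly 16.7 million colors.
values_per_channel = 2 ** 8            # 256
colors_24_bit = values_per_channel ** 3
print(colors_24_bit)                   # 16777216, i.e. ~16.7 million

# Inspect a PNG's mode: 'P' means palette/indexed (8-bit PNG),
# 'RGB' is 24-bit, and 'RGBA' is 24-bit plus an alpha channel (32-bit).
image = Image.open('Rover001-body_001.png')   # placeholder path
print(image.mode)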
By following this rule, you'll keep your game looking great while avoiding bloating your project files. In the end, Buildbox repacks all of the images in your project into atlases (which we'll cover later) that are 32 bit. However, it's always a good practice to stay lean. If you were a Buildbox 1.x user, you may remember that Buildbox had some issues with DPI (dots per inch) between the standard 72 and 144 on retina displays. This issue is a thing of the past with Buildbox 2.

Image sequences

Think of a film strip. It's just a sequence of still images known as frames. A standard United States film runs at 24 frames per second (well, really 23.976, but let's just round up for our purposes). Also, in the US, television runs at 30 frames per second (again, 29.97, but whatever… let's round up). Remember that each image in our sequence is a full image with all of the resources associated with it. We can quite literally cut our necessary resources in half by cutting this to 15 frames per second (fps).

If you open the content you downloaded and navigate to Projects/RamblinRover/Characters/Rover001-Body, you'll see that the images are named Rover001-body_001.png, Rover001-body_002.png, and so on. The final number indicates the position in the sequence (first 001, then 002, and so on). The animation is really just the satellite dish rotating, and the scanner light in the window rotating as well. But what you'll really notice is that this animation is loopable. All loopable means is that the animation can loop (play over and over again) without you noticing a bump in the footage (the final frame leads seamlessly back to the first). If you're not creating these animations yourself, you'll need to specify to your graphics professional that these animations should be loopable at 15 fps. They should understand exactly what you mean, and if they don't… you may consider finding a new animator.

Recommended software for graphics assets

For the purposes of context (now that you understand more about graphics and Buildbox), a bit of reinforcement couldn't hurt. A key piece of graphics software is the Adobe Creative Cloud subscription (http://www.adobe.com/CreativeCloud). Given its bang for the buck, it just can't be beaten. With it, you'll get Photoshop (which can be used for all graphics assets from your game's icon to obstacles and other objects), Illustrator (which is great for navigational buttons), After Effects (very useful for animated image sequences), Premiere Pro (a video editing application for marketing videos from screen-captured gameplay), and Audition (for editing all your sound).

You may also want some 3D software, such as Lightwave, 3D Studio Max, or Maya. This can greatly improve your ability to make characters and enemies, and to create still renders for menus and backgrounds. Most of the assets in Ramblin' Rover were created with the 3D software Lightwave.

There are free options for all of these tools. However, there are not nearly as many tutorials and resources available on the web to help you learn and create using these. One key thing to remember when using free software: if it's free… you're the product. In other words, some benefits come with paid software, such as better support, and being part of the industry standard. Free software seems to be in a perpetual state of "beta testing." If using free software, read your End User License Agreement (known as a EULA) very carefully.
Some software may require you to credit them in some way for the privilege of using their software for profit. They may even lay claim to part of your profits.

Okay, let's get to actually using our graphics in Ramblin' Rover…

Summary

See? It's not that tough to follow. By using plain-English explanations combined with demonstrations of some significant and intricate processes, you'll be taken on a journey meant to stimulate your imagination and educate you on how to use the software. Along with the book come the complete project files and assets to help you follow along the entire way through the build process. You'll be making your own games in no time!

Resources for Article:

Further resources on this subject:
  • What Makes a Game a Game? [article]
  • Alice 3: Controlling the Behavior of Animations [article]
  • Building a Gallery Application [article]


Language Modeling with Deep Learning

Mohammad Pezeshki
23 Sep 2016
5 min read
Language modeling means defining a joint probability distribution over a sequence of tokens (words or characters). Consider a sequence of tokens {x_1, ..., x_T}. A language model defines P(x_1, ..., x_T), which can be used in many areas of natural language processing. For example, a language model can significantly improve the accuracy of a speech recognition system. In the case of two words that have the same sound but different meanings, a language model can fix the problem of recognizing the right word. In Figure 1, the speech recognizer (aka acoustic model) has assigned the same high probabilities to the words "meet" and "meat". It is even possible that the speech recognizer assigns a higher probability to "meet" rather than "meat". However, by conditioning the language model on the first three tokens ("I cooked some"), the next word could be "fish", "pasta", or "meat" with a reasonable probability, higher than the probability of "meet". To get the final answer, we can simply multiply the two tables of probabilities and normalize them. Now the word "meat" has a very high relative probability!

One family of deep learning models that is capable of modeling sequential data (such as language) is Recurrent Neural Networks (RNNs). RNNs have recently achieved impressive results on different problems such as language modeling. In this article, we briefly describe RNNs and demonstrate how to code them using the Blocks library on top of Theano.

Consider a sequence of T input elements x_1, ..., x_T. An RNN models the sequence by applying the same operation in a recursive way:

h_t = f(h_{t-1}, x_t)    (1)
y_t = g(h_t)             (2)

where h_t is the internal hidden representation of the RNN and y_t is the output at the t-th time-step. For the very first time-step, we also have an initial state h_0. f and g are two functions which are shared across the time axis. In the simplest case, f and g can be a linear transformation followed by a non-linearity. There are more complicated forms of f and g, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). Here we skip the exact formulations of f and g and use LSTM as a black box.

Consequently, suppose we have B sequences, each with a length of T, such that each time-step is represented by a vector of size F. So the input can be seen as a 3D tensor of size T x B x F, the hidden representation as a tensor of size T x B x F', and the output as a tensor of size T x B x F''.

Let's build a character-level language model that can model the joint probability P(x_1, ..., x_T) using the chain rule:

P(x_1, ..., x_T) = P(x_1) P(x_2 | x_1) P(x_3 | x_1, x_2) ... P(x_T | x_1, ..., x_{T-1})    (3)

We can model P(x_t | x_1, ..., x_{t-1}) using an RNN by predicting x_t given x_1, ..., x_{t-1}. In other words, given a sequence {x_1, ..., x_T}, the input sequence is {x_1, ..., x_{T-1}} and the target sequence is {x_2, ..., x_T}. To define the model, we need a linear transformation from the input to the LSTM, and from the LSTM to the output. To train the model, we use the cross entropy between the model output and the true target. Now, assuming that data is provided to us through a data stream, we can start training by initializing the model and tuning the parameters. After the model is trained, we can condition the model on an initial sequence and start generating the next token.
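The original post shows these steps as Blocks/Theano code screenshots. As a rough, framework-free illustration of equations (1) and (2) and of the generation loop described next, here is a small NumPy sketch of a single-layer character RNN (a plain tanh cell instead of the LSTM used in the post, with randomly initialized weights, so it only shows the mechanics, not a trained model):

import numpy as np

vocab_size, hidden_size = 50, 64
rng = np.random.default_rng(0)

# Randomly initialized parameters (a trained model would learn these).
W_xh = rng.normal(scale=0.1, size=(hidden_size, vocab_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
W_hy = rng.normal(scale=0.1, size=(vocab_size, hidden_size))
b_h = np.zeros(hidden_size)
b_y = np.zeros(vocab_size)

def step(h_prev, x_index):
    """One recursion: h_t = f(h_{t-1}, x_t), y_t = g(h_t)."""
    x = np.zeros(vocab_size)
    x[x_index] = 1.0                             # one-hot input token
    h = np.tanh(W_xh @ x + W_hh @ h_prev + b_h)  # equation (1)
    logits = W_hy @ h + b_y                      # equation (2)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over the vocabulary
    return h, probs

# Generation: feed each sampled token back in as the next input.
h = np.zeros(hidden_size)
token = 0
generated = []
for _ in range(20):
    h, probs = step(h, token)
    token = rng.choice(vocab_size, p=probs)      # sample the next token from P(x | history)
    generated.append(int(token))
print(generated)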
We can repeatedly feed the predicted token into the model and get the next token. We can even just start from the initial state and ask the model to hallucinate! Here is a sample generated text from a model trained on 96 MB of text data from Wikipedia (figure adapted from here).

Here is a visualization of the model's output. The first line is the real data and the next six lines are the candidates with the highest output probability for each character. The more red a cell is, the higher the probability the model assigns to that character. For example, as soon as the model sees "ttp://ww", it is confident that the next character is also a "w" and the one after that is a ".". But at this point, there is no more clue about the next character, so the model assigns almost the same probability to all the characters (figure adapted from here).

In this post we learned about language modeling and one of its applications in speech recognition. We also learned how to code a recurrent neural network in order to train such a model. You can find the complete code and experiments on a bunch of datasets, such as Wikipedia, at Github. The code is written by my close friend Eloi Zablocki and me.

About the author

Mohammad Pezeshki is a master's student in the LISA lab of Universite de Montreal, working under the supervision of Yoshua Bengio and Aaron Courville. He obtained his bachelor's in computer engineering from Amirkabir University of Technology (Tehran Polytechnic) in July 2014 and then started his master's in September 2014. His research interests lie in the fields of artificial intelligence, machine learning, probabilistic models, and specifically deep learning.

JIRA 101

Packt
22 Sep 2016
15 min read
In this article by Ravi Sagar, author of the book Mastering Jira 7 - Second Edition, you will learn the basics of JIRA. We will look into the components available in the product, their applications, and how to use them. We will try our hands at a few JQL queries, after which we will create project reports in JIRA for issue tracking, and then derive information from them using the various built-in reports that JIRA comes with. We will also take a look at the gadgets that JIRA provides, which are helpful for reporting purposes. Finally, we will take a look at the migration options that JIRA provides to fully restore a JIRA instance or a specific project.

(For more resources related to this topic, see here.)

Product introduction

Atlassian JIRA is a proprietary issue tracking system. It is used for tracking bugs, issues, and project management. There are many such tools available, but the best thing about JIRA is that it can be configured very easily and it offers a wide range of customizations. Out of the box, JIRA offers a defect/bug tracking functionality, but it can be customized to act like a helpdesk system, a simple test management suite, or a project management system with end-to-end traceability. The much awaited JIRA 7 was released in October 2015 and it is now offered in the following three different application variants:

  • JIRA Core
  • JIRA Software
  • JIRA Service Desk

Let us discuss each one of them separately.

JIRA Core

This comprises the base application of JIRA that you may be familiar with, of course with some new features. JIRA Core is a simplified version of the JIRA features that we have used up to the 6.x versions.

JIRA Software

This comprises all the features of JIRA Core + JIRA Agile. From JIRA 7 onwards, JIRA Agile will no longer be offered as an add-on. You will not be able to install JIRA Agile from the marketplace.

JIRA Service Desk

This comprises all the features of JIRA Core + JIRA Service Desk. Just like JIRA Software, JIRA Service Desk will no longer be offered as an add-on and you cannot install it from the marketplace.

Applications, uses, and examples

The ability to customize JIRA is what makes it popular among the various companies who use it.
The following are the various applications of JIRA:

  • Defect/bug tracking
  • Change requests
  • Helpdesk/support tickets
  • Project management
  • Test-case management
  • Requirements management
  • Process management

Let's take a look at an implementation of test-case management.

The issue types:
  • Test campaign: This will be the standard issue type
  • Test case: This will be a subtask

The workflow for a test campaign:
  • New states: Published, Under Execution
  • Conditions: A test campaign will only pass when all the test cases are passed; only the reporter can move the test campaign to Closed
  • Post function: When the test campaign is closed, send an email to everyone in a particular group

The workflow for a test case:
  • New states: Blocked, Passed, Failed, In Review
  • Condition: Only the assigned user can move the test case to the Passed state
  • Post function: When the test case is moved to the Failed state, change the issue priority to major

Custom fields (name, type, values, and field configuration):
  • Category: Select list
  • Customer name: Select list
  • Steps to reproduce: Text area (Mandatory)
  • Expected input: Text area (Mandatory)
  • Expected output: Text area (Mandatory)
  • Precondition: Text area
  • Postcondition: Text area
  • Campaign type: Select list (values: Unit, Functional, Endurance, Benchmark, Robustness, Security, Backward compatibility, Certification with baseline)
  • Automation status: Select list (values: Automatic, Manual, Partially automatic)

JIRA core concepts

Let's take a look at the architecture of JIRA; it will help you understand the core concepts:

  • Project Categories: When there are too many projects in JIRA, it becomes important to segregate them into various categories. JIRA will let you create several categories that can represent the business units, clients, or teams in your company.
  • Projects: A JIRA project is a collection of issues. Your team can use a JIRA project to coordinate the development of a product, track a project, manage a help desk, and so on, depending on your requirements.
  • Components: Components are subsections of a project. They are used to group issues within a project into smaller parts.
  • Versions: Versions are points in time for a project. They help you schedule and organize your releases.
  • Issue Types: JIRA will let you create more than one issue type; issue types differ from each other in terms of what kind of information they store. JIRA comes with default issue types, such as bug, task, and subtask, but you can create more issue types that can follow their own workflow as well as have a different set of fields.
  • Subtasks: Issue types are of two kinds, namely standard and subtasks, which are children of a standard task. For instance, you can have test campaigns as a standard issue type and test cases as subtasks.

Introduction to JQL

JIRA Query Language (JQL) is one of the best features in JIRA. It lets you search issues efficiently and offers lots of handy features. The best part about JQL is that it is very easy to learn, thanks to the autocomplete functionality in the Advanced search, which helps the user with suggestions based on the keywords typed. JQL consists of questions, whether single or multiple, that can be combined together to form complex questions.

Basic JQL syntax

A JQL query has a field followed by an operator.
For instance, to retrieve all the issues of the CSTA project, you can use a simple query like this:

project = CSTA

Now, within this project, if you want to find the issues assigned to a specific user, use the following query:

project = CSTA and assignee = ravisagar

There may be several hundred issues assigned to a user and, maybe, we just want to focus on issues whose priority is either Critical or Blocker. You can use the following query:

project = CSTA and assignee = ravisagar and priority in (Blocker, Critical)

What if, instead of issues assigned to a specific user, we want to find the issues assigned to all other users except one? It can be achieved in the following way:

project = CSTA and assignee != ravisagar and priority in (Blocker, Critical)

So, you can see that JQL consists of one or more queries.

Project reports

Once you start using JIRA for issue tracking of any type, it becomes imperative to derive useful information out of it. JIRA comes with built-in reports that show real-time statistics for projects, users, and other fields. Let's take a look at each of these reports. Open any project in JIRA that contains a lot of issues and has around 5 to 10 users who are either assignees or reporters. When you open any project page, the default view is the Summary view, which contains a 30 day summary report and an Activity Stream that shows whatever is happening in the project, such as the creation of new issues, updates of status, comments, and basically any change in the project. On the left-hand side of the project summary page, there are links for Issues and Reports.

Average Age Report

This report displays the average number of days for which issues are in an unresolved state on a given date.

Created vs. Resolved Issues Report

This report displays the number of issues that were created over a period of time versus the number of issues that were resolved in that period.

Pie Chart Report

This chart shows the breakup of data. For instance, in your project, if you are interested in finding out the issue count for all the issue types, then this report can be used to fetch this information.

Recently Created Issues Report

This report displays statistical information on the number of issues created for the selected period and days. The report also displays the status of the issues.

Resolution Time Report

There are cases when you are interested in understanding the speed of your team every month. How soon can your team resolve the issues? This report displays the average resolution time of the issues in a given month.

Single Level Group By Report

It is a simple report that just lists the issues grouped by a particular field, such as Assignee, Issue Type, Resolution, Status, Priority, and so on.

Time Since Issues Report

This report is useful in finding out how many issues were created in a specific quarter over the past year, and there are various date-based fields supported by this report.

Time Tracking Report

This comprehensive report displays the estimated effort and remaining effort of all the issues. Not only that, the report will also give you an indication of the overall progress of the project.

User Workload Report

This report can tell us about the occupancy of the resources in all the projects. It really helps in distributing the tasks among users.

Version Workload Report

If your project has various versions that are related to the actual releases or fixes, then it becomes important to understand the status of all such issues.
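The same JQL queries, and the data behind reports like these, can also be fetched programmatically, which is handy when the built-in reports are not enough and you want to post-process the results yourself. The following is a hedged sketch using the third-party jira Python client; the server URL, credentials, and project key are placeholders:

# pip install jira
from jira import JIRA

# Placeholder connection details for your own JIRA instance.
jira = JIRA(server='https://jira.example.com',
            basic_auth=('ravisagar', 'api-token-or-password'))

# The same query used above in the Advanced search.
jql = 'project = CSTA AND assignee = ravisagar AND priority in (Blocker, Critical)'
issues = jira.search_issues(jql, maxResults=50)

# Build a tiny "issues per status" summary, similar to the Issue Statistics gadget.
counts = {}
for issue in issues:
    status = issue.fields.status.name
    counts[status] = counts.get(status, 0) + 1

for status, count in sorted(counts.items()):
    print(status, count)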
Gadgets for reporting purposes

JIRA comes with a lot of useful gadgets that you can add to the dashboard and use for reporting purposes. Additional gadgets can be added to JIRA by installing add-ons. Let's take a look at some of these gadgets.

Activity Stream

This gadget will display all the latest updates in your JIRA instance. It's also possible to limit this stream to a particular filter. This gadget is quite useful because it displays up-to-date information on the dashboard.

Created vs. Resolved Chart

The project summary page has a chart to display all the issues that were created and resolved in the past 30 days. There is a similar gadget to display this information. You can also change the duration from 30 days to whatever you like. This gadget can be created for a specific project.

Pie Chart

Just like the pie chart that is there in the project reports, there is a similar gadget that you can add to the dashboard. For instance, for a particular project, you can generate a Pie Chart based on Priority.

Issue Statistics

This gadget is quite useful for generating simple statistics for various fields. Here, we are interested in finding out the breakup of the project in terms of Issue Statistics.

Two Dimensional Filter Statistics

The Issue Statistics gadget can display the breakup of project issues for every Status. What if you want to further segregate this information? For instance, how many issues are open and which Issue Type do they belong to? In such scenarios, Two Dimensional Filter Statistics can be used. You just need to select two fields that will be used to generate this report, one for the x axis and another for the y axis.

These are certain common gadgets that can be used in the dashboard; however, there are many more gadgets. Click on the Add Gadget option in the top-right corner to see all such gadgets in your JIRA instance. Some gadgets come out of the box with JIRA and others are part of add-ons that you can install. After you select all these gadgets in your dashboard, this is how it looks:

This is the new dashboard that we have just created and configured for a specific project, but it's also possible to create more than one dashboard. To add another dashboard, just click on the Create Dashboard option under Tools in the top-right corner. If you have more than one dashboard, then you can switch between them using the links on the top-left corner of the screen, as shown in the following screenshot:

The simple CSV import

Let's understand how to perform a simple import of CSV data. The first thing to do is prepare the CSV file that can be imported into JIRA. For this exercise, we will import issues into a particular project; these issues will have data such as issue Summary, Status, Dates, and a few other fields.

Preparing the CSV file

We'll use MS Excel to prepare the CSV file with the following data. If your existing tool has the option to export directly into a CSV file, then you can skip this step, but we recommend reviewing your data before importing it into JIRA. Usually, the CSV import will not work if the format of the CSV file and the data is not correct. It's very easy to generate a CSV file from an Excel file. Perform the following step:

Go to File | Save As | File name: and select Save as type: as CSV (comma delimited).
If you don't have Microsoft Excel installed, you can use LibreOffice Calc, which is an open source alternative to Microsoft Office Excel. You can open the CSV file to verify its format too.

Our CSV file has the following fields:

  • Project: JIRA's project key needs to be specified in this field
  • Summary: This field is mandatory and needs to be specified in the CSV file
  • Issue Type: This specifies the issue type
  • Status: This is the status of the issue; these are workflow states that need to exist in JIRA, and the project workflow should have the states that will be imported from the CSV file
  • Priority: The priorities mentioned here should exist in JIRA before the import
  • Resolution: The resolutions mentioned here should exist in JIRA before the import
  • Assignee: This specifies the assignee of the issue
  • Reporter: This specifies the reporter of the issue
  • Created: This is the issue creation date
  • Resolved: This is the issue resolution date

Performing the CSV import

Once your CSV file is prepared, you are ready to perform the import in JIRA:

  1. Navigate to JIRA Administration | System | External System Import | CSV (under IMPORT & EXPORT).
  2. On the File import screen, in the CSV Source File field, click on the Browse… button to select the CSV file that you just prepared on your machine. Once you select the CSV file, the Next button will be enabled.
  3. On the Setup screen, select Import to Project as DOPT, which is the name of our project. Verify the Date format; it should match the format of the date values in the CSV file. Click on the Next button to continue.
  4. On the Map fields screen, we need to map the fields in the CSV file to JIRA fields. This step is crucial because in your old system a field name can be different from the JIRA field name; so, in this step, map these fields to the respective JIRA fields. Click on the Next button to continue.
  5. On the Map values screen, map the values of Status; in fact, this mapping of field values can be done for any field. In our case, the values in the status field are the same as in JIRA, so click on the Begin Import button.

You will finally get a confirmation that the issues were imported successfully. If you encounter any errors during the CSV import, they are usually due to some problem with the CSV format. Read the error messages carefully and correct these issues. As mentioned earlier, the CSV import needs to be performed on the test environment first.

Migrate JIRA configurations using the Configuration Manager add-on

JIRA has provision to fully restore a JIRA instance from a backup file, restore a specific project, and use the CSV import functionality to import data into it. These utilities are quite important as they really make the life of JIRA administrators a lot easier; they can perform these activities right from the JIRA user interface. The project-import utility and CSV import are used to migrate one or more projects from one instance of JIRA to another, but the target instance should have the required configuration in place, otherwise these utilities will not work. For instance, if there is a project in the source instance with custom workflow states along with a few custom fields, then exactly similar configurations of the workflow and custom fields should already exist in the target instance. Recreating these configurations and schemes can be a time-consuming and error-prone process.
Additionally, in various organizations, there is a test environment or a staging server for JIRA where all the new configurations are first tested before they are rolled out to the production instance. Currently, there is no built-in way to selectively migrate configurations from one instance to another; it has to be done manually on the target instance. Configuration Manager is an add-on that does this job. Using this add-on, project-specific configuration can be migrated from one instance to another.

Summary

In this article we looked at the different products offered by JIRA. Then we learned about the core concepts of JIRA. We then tried our hands at JQL and a few examples of it. We saw the different types of reports provided by JIRA and the various gadgets available for reporting purposes. Finally we saw how to migrate JIRA configurations using the Configuration Manager add-on.

Resources for Article:

Further resources on this subject:
  • Working with JIRA [article]
  • JIRA Workflows [article]
  • JIRA – an Overview [article]


OpenStack Networking in a Nutshell

Packt
22 Sep 2016
13 min read
Information technology (IT) applications are rapidly moving from dedicated infrastructure to cloud based infrastructure. This move to cloud started with server virtualization, where a hardware server ran as a virtual machine on a hypervisor. The adoption of cloud based applications has accelerated due to factors such as globalization and outsourcing, where diverse teams need to collaborate in real time. Server hardware connects to network switches using Ethernet and IP to establish network connectivity. However, as servers move from physical to virtual, the network boundary also moves from the physical network to the virtual network.

Traditionally, applications, servers, and networking were tightly integrated. But modern enterprises and IT infrastructure demand flexibility in order to support complex applications. The flexibility of cloud infrastructure requires networking to be dynamic and scalable. Software Defined Networking (SDN) and Network Functions Virtualization (NFV) play a critical role in data centers in order to deliver the flexibility and agility demanded by cloud based applications. By providing practical management tools and abstractions that hide the underlying physical network's complexity, SDN allows operators to build complex networking capabilities on demand.

OpenStack is an open source cloud platform that helps build public and private clouds at scale. Within OpenStack, the name of the OpenStack Networking project is Neutron. The functionality of Neutron can be classified as core and service.

This article by Sriram Subramanian and Sreenivas Voruganti, authors of the book Software Defined Networking (SDN) with OpenStack, aims to provide a short introduction to OpenStack Networking. We will cover the following topics in this article:

  • Understanding traffic flows between virtual and physical networks
  • Neutron entities that support Layer 2 (L2) networking
  • Layer 3 (L3) or routing between OpenStack Networks
  • Securing OpenStack network traffic
  • Advanced networking services in OpenStack
  • OpenStack and SDN

The terms Neutron and OpenStack Networking are used interchangeably throughout this article.

(For more resources related to this topic, see here.)

Virtual and physical networking

Server virtualization led to the adoption of virtualized applications and workloads running inside physical servers. While physical servers are connected to the physical network equipment, modern networking has pushed the boundary of networking into the virtual domain as well. Virtual switches, firewalls, and routers play a critical role in the flexibility provided by cloud infrastructure.

Figure 1: Networking components for server virtualization

The preceding figure describes a typical virtualized server and the various networking components. The virtual machines are connected to a virtual switch inside the compute node (or server). The traffic is secured using virtual routers and firewalls. The compute node is connected to a physical switch, which is the entry point into the physical network.

Let us now walk through different traffic flow scenarios using the preceding figure as the background. In Figure 2, traffic from one VM to another on the same compute node is forwarded by the virtual switch itself. It does not reach the physical network. You can even apply firewall rules to traffic between the two virtual machines.

Figure 2: Traffic flow between two virtual machines on the same server

Next, let us have a look at how traffic flows between virtual machines across two compute nodes.
In Figure 3, the traffic comes out of the first compute node and reaches the physical switch. The physical switch forwards the traffic to the second compute node, and the virtual switch within the second compute node steers the traffic to the appropriate VM.

Figure 3: Traffic flow between two virtual machines on different servers

Finally, here is the depiction of traffic flow when a virtual machine sends or receives traffic from the Internet. The physical switch forwards the traffic to the physical router and firewall, which is presumed to be connected to the Internet.

Figure 4: Traffic flow from a virtual machine to an external network

As seen from the preceding diagrams, the physical and the virtual network components work together to provide connectivity to virtual machines and applications.

Tenant isolation

As a cloud platform, OpenStack supports multiple users grouped into tenants. One of the key requirements of a multi-tenant cloud is to provide isolation of data traffic belonging to one tenant from the rest of the tenants that use the same infrastructure. OpenStack supports different ways of achieving isolation, and it is the responsibility of the virtual switch to implement the isolation.

Layer 2 (L2) capabilities in OpenStack

The connectivity to a physical or virtual switch is also known as Layer 2 (L2) connectivity in networking terminology. Layer 2 connectivity is the most fundamental form of network connectivity needed for virtual machines. As mentioned earlier, OpenStack supports core and service functionality. The L2 connectivity for virtual machines falls under the core capability of OpenStack Networking, whereas Router, Firewall, and so on fall under the service category. The L2 connectivity in OpenStack is realized using two constructs, called Network and Subnet. Operators can use the OpenStack CLI or the web interface to create Networks and Subnets. As virtual machines are instantiated, the operators can associate them with the appropriate Networks.

Creating a network using OpenStack CLI

A Network defines the Layer 2 (L2) boundary for all the instances that are associated with it. All the virtual machines within a Network are part of the same L2 broadcast domain. The Liberty release has introduced a new OpenStack CLI (Command Line Interface) for different services. We will use the new CLI and see how to create a Network.

Creating a Subnet using OpenStack CLI

A Subnet is a range of IP addresses that are assigned to virtual machines on the associated network. OpenStack Neutron configures a DHCP server with this IP address range, and it starts one DHCP server instance per Network by default. We will now show you how to create a Subnet using the OpenStack CLI. Note: unlike Network, for Subnet we need to use the regular neutron CLI command in the Liberty release.

Associating a Network and Subnet to a virtual machine

To give a complete perspective, we will create a virtual machine using the OpenStack web interface and show you how to associate a Network and Subnet to a virtual machine. In your OpenStack web interface, navigate to Project | Compute | Instances. Click on the Launch Instances action on the right-hand side, as highlighted above. In the resulting window, enter the name for your instance and how you want to boot your instance. To associate a network and a subnet with the instance, click on the Networking tab. If you have more than one tenant network, you will be able to choose the network you want to associate with the instance. If you have exactly one network, the web interface will automatically select it.
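The Network and Subnet creation steps can also be driven from Python rather than from the CLI commands shown in the book's screenshots. The following is a rough sketch using the openstacksdk library; the cloud name, network name, and CIDR are placeholders, and the exact calls may differ slightly between SDK releases:

# pip install openstacksdk
import openstack

# 'mycloud' must be defined in clouds.yaml (credentials, auth URL, and so on).
conn = openstack.connect(cloud='mycloud')

# A Network defines the L2 broadcast domain for the instances attached to it.
network = conn.network.create_network(name='tenant-net')

# A Subnet is the IP range handed out by the DHCP server for that Network.
subnet = conn.network.create_subnet(
    name='tenant-subnet',
    network_id=network.id,
    ip_version=4,
    cidr='192.168.10.0/24')

print(network.id, subnet.cidr)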
As mentioned earlier, providing isolation for tenant network traffic is a key requirement for any cloud. OpenStack Neutron uses Network and Subnet to define the boundaries and isolate data traffic between different tenants. Depending on the Neutron configuration, the actual isolation of traffic is accomplished by the virtual switches. VLAN and VXLAN are common networking technologies used to isolate traffic.

Layer 3 (L3) capabilities in OpenStack

Once L2 connectivity is established, the virtual machines within one Network can send and receive traffic between themselves. However, two virtual machines belonging to two different Networks will not be able to communicate with each other automatically. This is done to provide privacy and isolation for tenant networks. In order to allow traffic from one Network to reach another Network, OpenStack Networking supports an entity called a Router. The default implementation of OpenStack uses namespaces to support L3 routing capabilities.

Creating a Router using OpenStack CLI

Operators can create Routers using the OpenStack CLI or web interface. They can then add more than one Subnet as an interface to the Router. This allows the Networks associated with the Router to exchange traffic with one another. The command to create a Router is as follows: This command creates a Router with the specified name.

Associating a Subnet to a Router

Once a Router is created, the next step is to associate one or more Subnets to the Router. The command to accomplish this is: The Subnet represented by subnet1 is now associated to the Router router1.

Securing network traffic in OpenStack

The security of network traffic is critical, and OpenStack supports two mechanisms to secure it. Security Groups allow traffic within a tenant's network to be secured; Linux iptables on the compute nodes are used to implement OpenStack security groups. The traffic that goes outside of a tenant's network, to another Network or the Internet, is secured using the OpenStack Firewall Service functionality. Like Routing, Firewall is a service within Neutron. The Firewall service also uses iptables, but the scope of the iptables rules is limited to the OpenStack Router used as part of the Firewall Service.

Using security groups to secure traffic within a network

In order to secure traffic going from one VM to another within a given Network, we must create a security group. The command to create a security group is: The next step is to create one or more rules within the security group. As an example, let us create a rule which allows only UDP, incoming traffic on port 8080, from any source IP address. The final step is to associate this security group and its rules to a virtual machine instance. We will use the nova boot command for this: Once the virtual machine instance has a security group associated with it, the incoming traffic will be monitored and, depending upon the rules inside the security group, data traffic may be blocked or permitted to reach the virtual machine. Note: it is possible to control both ingress and egress traffic using security groups.

Using firewall service to secure traffic

We have seen that security groups provide fine-grained control over what traffic is allowed to and from a virtual machine instance. Another layer of security supported by OpenStack is Firewall as a Service (FWaaS). FWaaS enforces security at the Router level, whereas security groups enforce security at the virtual machine interface level.
The main use case of FWaaS is to protect all virtual machine instances within a Network from threats and attacks from outside the Network. This could be virtual machines that are part of another Network in the same OpenStack cloud, or some entity on the Internet trying to make an unauthorized access. Let us now see how FWaaS is used in OpenStack.

In FWaaS, a set of firewall rules is grouped into a firewall policy, and then a firewall is created that implements one policy at a time. This firewall is then associated to a Router. A firewall rule can be created using the neutron firewall-rule-create command, as follows: This rule blocks the ICMP protocol, so applications like ping will be blocked by the firewall. The next step is to create a firewall policy. In real-world scenarios, the security administrators will define several rules and consolidate them under a single policy. For example, all rules that block various types of traffic can be combined into a single policy. The command to create a firewall policy is: The final step is to create a firewall and associate it with a router. The command to do this is: In the command above, we did not specify any Routers, and the OpenStack behavior is to associate the firewall (and in turn the policy and rules) to all the Routers available for that tenant. The neutron firewall-create command supports an option to pick a specific Router as well.

Advanced networking services

Besides routing and firewall, there are a few other commonly used networking technologies supported by OpenStack. Let's take a quick look at these without delving deep into the respective commands.

Load Balancing as a Service (LBaaS)

Virtual machine instances created in OpenStack are used to run applications. Most applications are required to support redundancy and concurrent access. For example, a web server may be accessed by a large number of users at the same time. One of the common strategies to handle scale and redundancy is to implement load balancing for incoming requests. In this approach, a Load Balancer distributes an incoming service request onto a pool of servers, which processes the request, thus providing higher throughput. If one of the servers in the pool fails, the Load Balancer removes it from the pool and the subsequent service requests are distributed among the remaining servers. Users of the application use the IP address of the Load Balancer to access the application and are unaware of the pool of servers. OpenStack implements the Load Balancer using HAProxy software and a Linux namespace.

Virtual Private Network as a Service (VPNaaS)

As mentioned earlier, tenant isolation requires data traffic to be segregated and secured within an OpenStack cloud. However, there are times when external entities need to be part of the same Network without removing the firewall based security. This can be accomplished using a Virtual Private Network (VPN). A VPN connects two endpoints on different networks over a public Internet connection, such that the endpoints appear to be directly connected to each other. VPNs also provide confidentiality and integrity of transmitted data. Neutron provides a service plugin that enables OpenStack users to connect two networks using a VPN. The reference implementation of the VPN plugin in Neutron uses Openswan to create an IPSec based VPN. IPSec is a suite of protocols that provides a secure connection between two endpoints by encrypting each IP packet transferred between them.
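The router and security group commands referenced above (whose exact CLI syntax appears as screenshots in the book) have rough Python equivalents in openstacksdk as well. The sketch below is illustrative only; the cloud name, resource names, and the assumption that the tenant-subnet from the earlier sketch already exists are all placeholders:

# pip install openstacksdk
import openstack

conn = openstack.connect(cloud='mycloud')             # placeholder cloud name
subnet = conn.network.find_subnet('tenant-subnet')    # subnet created earlier

# Create a router and attach the subnet so its Network can reach other Networks.
router = conn.network.create_router(name='router1')
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

# Security group with one rule: allow incoming UDP on port 8080 from any source IP,
# mirroring the example rule described in the text.
sec_group = conn.network.create_security_group(name='udp-8080')
conn.network.create_security_group_rule(
    security_group_id=sec_group.id,
    direction='ingress',
    protocol='udp',
    port_range_min=8080,
    port_range_max=8080,
    remote_ip_prefix='0.0.0.0/0')

print(router.id, sec_group.id)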
OpenStack and SDN context

So far in this article, we have seen the different networking capabilities provided by OpenStack. Let us now look at two capabilities in OpenStack that enable SDN to be leveraged effectively.

Choice of technology

Being an open source platform, OpenStack bundles open source networking solutions as the default implementations of these networking capabilities. For example, routing is supported using Namespaces, security using iptables, and load balancing using HAProxy. Historically, these networking capabilities were implemented using customized hardware and software, most of them proprietary solutions. These custom solutions are capable of much higher performance and are well supported by their vendors, and hence they have a place in the OpenStack and SDN ecosystem. From its initial releases, OpenStack has been designed for extensibility. Vendors can write their own extensions and then easily configure OpenStack to use those extensions instead of the default solutions. This allows operators to deploy the networking technology of their choice.

OpenStack API for networking

One of the most powerful capabilities of OpenStack is its extensive support for APIs. All OpenStack services interact with one another using well-defined RESTful APIs. This allows custom implementations and pluggable components to provide powerful enhancements for practical cloud implementations. For example, when a Network is created using the OpenStack web interface, a RESTful request is sent to the Horizon service. Horizon in turn invokes a RESTful API to validate the user against the Keystone service. Once the user is validated, Horizon sends another RESTful API request to Neutron to actually create the Network.

Summary

As seen in this article, OpenStack supports a wide variety of networking functionality right out of the box. The importance of isolating tenant traffic and the need to allow customized solutions require OpenStack to support flexible configuration. We also highlighted some key aspects of OpenStack that will play a key role in deploying Software Defined Networking in datacenters, thereby supporting powerful cloud architectures and solutions.

Resources for Article:

Further resources on this subject:
Setting Up a Network Backup Server with Bacula [article]
Jenkins 2.0: The impetus for DevOps Movement [article]
Integrating Accumulo into Various Cloud Platforms [article]

String management in Swift

Jorge Izquierdo
21 Sep 2016
7 min read
One of the most common tasks when building a production app is translating the user interface into multiple languages. I won't go into much detail explaining this or how to set it up, because there are lots of good articles and tutorials on the topic. As a summary, the default system is pretty straightforward: you have a file named Localizable.strings with a set of keys and then different values depending on the file's language. To use these strings from within your app, there is a simple macro in Foundation, NSLocalizedString(key, comment: comment), that will take care of looking up that key in your localizable strings and returning the value for the user's device language.

Magic numbers, magic strings

The problem with this handy macro is that, because you can add a new string inline anywhere, you will presumably end up with dozens of NSLocalizedString calls in the middle of your app's code, resulting in something like this:

mainLabel.text = NSLocalizedString("Hello world", comment: "")

Or maybe you will write a simple String extension so that you don't have to write it every time. That extension would be something like:

extension String {
    var localized: String {
        return NSLocalizedString(self, comment: "")
    }
}

mainLabel.text = "Hello world".localized

This is an improvement, but you still have the problem that the strings are all over the place in the app, and it is difficult to maintain a scalable format for them because there is no central repository of strings that follows the same structure.

The other problem with this approach is that you have plain strings inside your code, where you could change a character and not notice it until you see a weird string in the user interface. To prevent that, you can take advantage of Swift's strongly typed nature and make the compiler catch these errors with your strings, so that nothing unexpected happens at runtime.

Writing a Swift strings file

So that is what we are going to do. The goal is to have a data structure that holds all the strings in your app. The idea is to have something like this:

enum Strings {
    case Title

    enum Menu {
        case Feed
        case Profile
        case Settings
    }
}

And then, whenever you want to display a string in the app, you just do:

Strings.Menu.Feed // "Feed"
Strings.Menu.Feed.localized // "Feed" or the value for "Feed" in Localizable.strings

This system is not likely to scale when you have dozens of strings in your app, so you need to add some sort of organization for the keys. The basic approach would be to just set the value of each enum case to its key:

enum Strings: String {
    case Title = "app.title"

    enum Menu: String {
        case Feed = "app.menu.feed"
        case Profile = "app.menu.profile"
        case Settings = "app.menu.settings"
    }
}

But you can see that this is very repetitive and verbose. Also, whenever you add a new string, you need to write its key in this file and then add it to the Localizable.strings file. We can do better than this.

Autogenerating the keys

Let's look into how you can automate this process so that you end up with something similar to the first example, where you didn't write the key, but with an outcome like the second example, where you get a reasonable key organization that scales as you add more and more strings during development. We will take advantage of protocol extensions to do this.
For starters, you will define a Localizable protocol to make the string enums conform to:

protocol Localizable {
    var rawValue: String { get }
}

enum Strings: String, Localizable {
    case Title
    case Description
}

And now, with the help of a protocol extension, you can get a better key organization:

extension Localizable {
    var localizableKey: String {
        return self.dynamicType.entityName + "." + rawValue
    }

    static var entityName: String {
        return String(self)
    }
}

With that key, you can fetch the localized string in a similar way as we did with the String extension:

extension Localizable {
    var localized: String {
        return NSLocalizedString(localizableKey, comment: "")
    }
}

What you have done so far allows you to do Strings.Title.localized, which will look in the localizable strings file for the key Strings.Title and return the value for that language.

Polishing the solution

This works great when you only have one level of strings, but if you want to group a bit more, say Strings.Menu.Home.Title, you need to make some changes. The first one is that each child needs to know who its parent is in order to generate a full key. That is impossible to do in Swift in an elegant way today, so what I propose is to explicitly have a variable that holds the type of the parent. This way, you can recurse back up the strings tree until the parent is nil, which we assume is the root node. For this to happen, you need to change your Localizable protocol a bit:

public protocol Localizable {
    static var parent: LocalizeParent { get }
    var rawValue: String { get }
}

public typealias LocalizeParent = Localizable.Type?

Now that you have the parent idea in place, the key generation needs to recurse up the tree in order to find the full path for the key:

private let stringSeparator: String = "."

private extension Localizable {
    static func concatComponent(parent parent: String?, child: String) -> String {
        // snakeCaseString is a small String helper provided in the article's full source
        // that normalizes each key component (for example, Settings becomes settings)
        guard let p = parent else {
            return child.snakeCaseString
        }
        return p + stringSeparator + child.snakeCaseString
    }

    static var entityName: String {
        return String(self)
    }

    static var entityPath: String {
        return concatComponent(parent: parent?.entityName, child: entityName)
    }

    var localizableKey: String {
        return self.dynamicType.concatComponent(parent: self.dynamicType.entityPath, child: rawValue)
    }
}

And to finish, you have to make the enums conform to the updated protocol:

enum Strings: String, Localizable {
    case Title

    enum Menu: String, Localizable {
        case Feed
        case Profile
        case Settings

        static let parent: LocalizeParent = Strings.self
    }

    static let parent: LocalizeParent = nil
}

With all this in place, you can do the following in your app:

label.text = Strings.Menu.Settings.localized

And the label will have the value for the "strings.menu.settings" key in Localizable.strings.

Source code

The final code for this article is available on GitHub, along with instructions for using it within your project. You can also just add Localize.swift and modify it according to your project's needs, or check out a simple example project to see the whole solution together.

Next time

The next step we would need to take in order to have a full solution is a way for the Localizable.strings file to autogenerate.
The solution for this, at the current state of Swift, wouldn't be very elegant, because it would require either inspecting the objects using the ObjC runtime (which would be difficult to do since we are dealing with pure Swift types here) or defining all the children of a given object explicitly, in the same way that the open source XCTest does, where each test case defines all of its tests in a static property.

About the author

Jorge Izquierdo has been developing iOS apps for 5 years. The day Swift was released, he started hacking around with the language and built the first Swift HTTP server, Taylor. He has worked on several projects and right now works as an iOS development contractor.

How to Build and Deploy a Node App with Docker

John Oerter
20 Sep 2016
7 min read
How many times have you deployed your app that was working perfectly in your local environment to production, only to see it break? Whether it was directly related to the bug or feature you were working on, or another random issue entirely, this happens all too often for most developers. Errors like this not only slow you down, but they're also embarrassing.

Why does this happen? Usually, it's because your development environment on your local machine is different from the production environment you're deploying to. The tenth factor of the Twelve-Factor App is Dev/prod parity. This means that your development, staging, and production environments should be as similar as possible. The authors of the Twelve-Factor App spell out three "gaps" that can be present. They are:

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
The personnel gap: Developers write code, ops engineers deploy it.
The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deployment uses Apache, MySQL, and Linux. (Source)

In this post, we will mostly focus on the tools gap, and how to bridge that gap in a Node application with Docker.

The Tools Gap

In the Node ecosystem, the tools gap usually manifests itself either in differences in Node and npm versions, or in differences in package dependency versions. If a package author publishes a breaking change in one of your dependencies or your dependencies' dependencies, it is entirely possible that your app will break on the next deployment (assuming you reinstall dependencies with npm install on every deployment), while it runs perfectly on your local machine. Although you can work around this issue using tools like npm shrinkwrap, adding Docker to the mix will streamline your deployment lifecycle and minimize broken deployments to production.

Why Docker?

Docker is unique because it can be used the same way in development and production. When you enable the architecture of your app to run inside containers, you can easily scale out and create small containers that can be composed together to make one awesome system. Then, you can mimic this architecture in development so you never have to guess how your app will behave in production. In regard to the time gap and the personnel gap, Docker makes it easier for developers to automate deployments, thereby decreasing time to production and making it easier for full-stack teams to own deployments.

Tools and Concepts

When developing inside Docker containers, the two most important concepts are docker-compose and volumes. docker-compose helps define multi-container environments and gives you the ability to run them with one command. Here are some of the more often used docker-compose commands:

docker-compose build: builds images for services defined in docker-compose.yml
docker-compose up: creates and starts services; this is the same as running docker-compose create && docker-compose start
docker-compose run: runs a one-off command inside a container

Volumes allow you to mount files from the host machine into the container. When the files on your host machine change, they change inside the container as well. This is important so that we don't have to constantly rebuild containers during development every time we make a change. You can also use a tool like nodemon to automatically restart the node app on changes.

Let's walk through some tips and tricks for developing Node apps inside Docker containers; a quick sketch of a typical working session follows, and then we will set up the files it relies on.
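The sketch below is illustrative only; it assumes the docker-compose.yml and Dockerfile defined in the next sections, with a single service named web:

docker-compose build                        # build the image for the web service from the Dockerfile
docker-compose up                           # create and start the services defined in docker-compose.yml
docker-compose run --rm web /bin/bash       # open a throwaway shell inside a fresh container
docker-compose run --rm web npm test        # run the test suite as a one-off command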
Set up Dockerfile and docker-compose.yml

When you start a new project with Docker, you'll first want to define a barebones Dockerfile and docker-compose.yml to get you started. Here's an example Dockerfile:

FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

USER app-user
WORKDIR $HOME/app

This Dockerfile displays two best practices:

Favor exact version tags over floating tags such as latest. Node releases come often these days, and you don't want to implicitly upgrade when building your container on another machine. By specifying a version such as 6.2.1, you ensure that anyone who builds the image will always be working from the same Node version.
Create a new user to run the app inside the container. Without this step, everything would run under root in the container. You certainly wouldn't do that on a physical machine, so don't do it in Docker containers either.

Here's an example starter docker-compose.yml:

web:
  build: .
  volumes:
    - .:/home/app-user/app

Pretty simple, right? Here we are telling Docker to build the web service based on our Dockerfile and create a volume from our current host directory to /home/app-user/app inside the container. This simple setup lets you build the container with docker-compose build and then run bash inside it with docker-compose run --rm web /bin/bash. Now, it's essentially the same as if you were SSH'd into a remote server or working off a VM, except that any file you create inside the container will be on your host machine and vice versa.

With that in mind, you can bootstrap your Node app from inside your container using npm init -y and npm shrinkwrap. Then, you can install any modules you need, such as Express.

Install node modules on build

With that done, we need to update our Dockerfile to install dependencies from npm when the image is built. Here is the updated Dockerfile:

FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*

USER app-user
WORKDIR $HOME/app
RUN npm install

Notice that we had to change the ownership of the copied files to app-user. This is because files copied into a container are automatically owned by root.

Add a volume for the node_modules directory

We also need to make an update to our docker-compose.yml to make sure that our modules are installed inside the container properly:

web:
  build: .
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules

Without adding a data volume to /home/app-user/app/node_modules, the node_modules wouldn't exist at runtime in the container, because our host directory, which won't contain the node_modules directory, would be mounted and hide the node_modules directory that was created when the container was built. For more information, see this Stack Overflow post.

Running your app

Once you've got an entry point to your app ready to go, simply add it as a CMD in your Dockerfile:

CMD ["node", "index.js"]

This will automatically start your app on docker-compose up.

Running tests inside your container is easy as well:

docker-compose run --rm web npm test

You could easily hook this into CI.

Production

Now going to production with your Docker-powered Node app is a breeze! Just use docker-compose again. You will probably want to define another docker-compose.yml that is written especially for production use. This means removing volumes, binding to different ports, setting NODE_ENV=production, and so on; a sketch of such an override file follows.
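As an illustration only (the file name matches the command shown next, but the port mapping and values are assumptions rather than something prescribed by the article), a production override might look like this:

# docker-compose.production.yml
web:
  build: .
  ports:
    - '80:3000'
  environment:
    - NODE_ENV=production

Note that the host-directory volumes are gone, so the code and node_modules baked into the image at build time are what actually run in production.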
Once you have a production config file, you can tell docker-compose to use it, like so:

docker-compose -f docker-compose.yml -f docker-compose.production.yml up

The -f flag lets you specify a list of files that are merged in the order specified.

Here is a complete Dockerfile and docker-compose.yml for reference:

# Dockerfile
FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*

USER app-user
WORKDIR $HOME/app
RUN npm install

CMD ["node", "index.js"]

# docker-compose.yml
web:
  build: .
  ports:
    - '3000:3000'
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs here.

Jenkins 2.0: The impetus for DevOps Movement

Packt
19 Sep 2016
15 min read
In this article, Mitesh Soni, the author of the book DevOps for Web Development, provides some insight into the DevOps movement, the benefits of a DevOps culture, the DevOps lifecycle, how Jenkins 2.0 bridges the gap between Continuous Integration and Continuous Delivery with new features and UI improvements, and the installation and configuration of Jenkins 2.0. (For more resources related to this topic, see here.)

Understanding the DevOps movement

Let's try to understand what DevOps is. Is it a real, technical word? No, because DevOps is not just about technical stuff. It is neither simply a technology nor an innovation. In simple terms, DevOps is a blend of complex terminologies. It can be considered a concept, a culture, a development and operational philosophy, or a movement.

To understand DevOps, let's revisit the old days of any IT organization. Consider that there are multiple environments where an application is deployed. The following sequence of events takes place when any new feature is implemented or a bug is fixed:

The development team writes code to implement a new feature or fix a bug. This new code is deployed to the development environment and is generally tested by the development team.
The new code is deployed to the QA environment, where it is verified by the testing team.
The code is then provided to the operations team for deployment to the production environment.
The operations team is responsible for managing and maintaining the code.

Let's list the possible issues in this approach:

The transition of the current application build from the development environment to the production environment takes weeks or months.
The priorities of the development team, the QA team, and the IT operations team are different in an organization, and effective, efficient coordination becomes a necessity for smooth operations.
The development team is focused on the latest development release, while the operations team cares about the stability of the production environment.
The development and operations teams are not aware of each other's work and work culture.
Both teams work in different types of environments; there is a possibility that the development team has resource constraints and therefore uses a different kind of configuration. It may work on the localhost or in the dev environment.
The operations team works on production resources, so there will be a huge gap in the configuration and deployment environments. It may not work where it needs to run: the production environment.
Assumptions are key in such a scenario, and it is improbable that both teams will work under the same set of assumptions.
There is manual work involved in setting up the runtime environment and in the configuration and deployment activities. The biggest issue with a manual application-deployment process is that it is not repeatable and is error-prone.
The development team has the executable files, configuration files, database scripts, and deployment documentation, and provides them to the operations team. All these artifacts are verified in the development environment, not in production or staging.
Each team may take a different approach to setting up the runtime environment and the configuration and deployment activities, considering resource constraints and resource availability.
In addition, the deployment process needs to be documented for future use; maintaining that documentation is a time-consuming task that requires collaboration between different stakeholders.
Both teams work separately, and hence there can be a situation where both use different automation techniques.
Both teams are unaware of the challenges faced by the other and hence may not be able to visualize or understand an ideal scenario in which the application works.
While the operations team is busy with deployment activities, the development team may get another request for a feature implementation or a bug fix; in such a case, if the operations team faces any issues in deployment, they may try to consult the development team, who are already occupied with the new implementation request. This results in communication gaps, and the required collaboration may not happen.
There is hardly any collaboration between the development team and the operations team. Poor collaboration causes many issues in the application's deployment to different environments, resulting in back-and-forth communication through e-mail, chat, calls, meetings, and so on, and it often ends in quick fixes.

Challenges for the development team:

The competitive market creates pressure for on-time delivery.
They have to take care of production-ready code management and new feature implementation.
The release cycle is often long, and hence the development team has to make assumptions before the application deployment finally takes place. In such a scenario, it takes more time to fix the issues that occur during deployment in the staging or production environment.

Challenges for the operations team:

Resource contention: it is difficult to handle increasing resource demands.
Redesigning or tweaking: this is needed to run the application in the production environment.
Diagnosing and rectifying: they are supposed to diagnose and rectify issues after application deployment, in isolation.

The benefits of DevOps

Collaboration among different stakeholders brings many business and technical benefits that help organizations achieve their business goals.

The DevOps lifecycle – it's all about "continuous"

Continuous Integration (CI), Continuous Testing (CT), and Continuous Delivery (CD) are significant parts of the DevOps culture. CI includes automating builds, unit tests, and packaging processes, while CD is concerned with the application delivery pipeline across different environments. CI and CD accelerate the application development process through automation across different phases, such as build, test, and code analysis, and enable users to achieve end-to-end automation in the application delivery lifecycle.

Continuous integration and continuous delivery or deployment are well supported by cloud provisioning and configuration management. Continuous monitoring helps identify issues or bottlenecks in the end-to-end pipeline and helps make the pipeline effective. Continuous feedback is an integral part of this pipeline; it tells the stakeholders whether they are close to the required outcome or heading in a different direction.

"Continuous effort – not strength or intelligence – is the key to unlocking our potential"
                                                                                           -Winston Churchill

Continuous integration

What is continuous integration?
In simple words, CI is a software engineering practice where each check-in made by a developer is verified by either of the following:

Pull mechanism: executing an automated build at a scheduled time
Push mechanism: executing an automated build when changes are saved in the repository

This step is followed by executing a unit test against the latest changes available in the source code repository. The main benefit of continuous integration is quick feedback based on the result of build execution. If it is successful, all is well; otherwise, assign responsibility to the developer whose commit broke the build, notify all stakeholders, and fix the issue. Read more about CI at http://martinfowler.com/articles/continuousIntegration.html.

So why is CI needed? Because it makes things simple and helps us identify bugs or errors in the code at a very early stage of development, when it is relatively easy to fix them. Just imagine the same scenario taking place after a long duration, with too many dependencies and complexities to manage. In the early stages, it is far easier to cure and fix issues; consider health issues as an analogy, and things will be clearer in this context.

Continuous integration is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. CI is a significant part of, and in fact a base for, the release-management strategy of any organization that wants to develop a DevOps culture.

Following are the immediate benefits of CI:

Automated integration with a pull or push mechanism
A repeatable process without any manual intervention
Automated test case execution
Coding standard verification
Execution of scripts based on requirements
Quick feedback: build status notification to stakeholders via e-mail
Teams focused on their work and not on managing processes

Jenkins, Apache Continuum, Buildbot, GitLab CI, and so on are some examples of open source CI tools. AnthillPro, Atlassian Bamboo, TeamCity, Team Foundation Server, and so on are some examples of commercial CI tools.

Continuous integration tools – Jenkins

Jenkins was originally open source continuous integration software written in Java under the MIT License. Jenkins 2, however, is an open source automation server that focuses on any kind of automation, including continuous integration and continuous delivery.

Jenkins can be used across different platforms, such as Windows, Ubuntu/Debian, Red Hat/Fedora, Mac OS X, openSUSE, and FreeBSD. Jenkins enables users to utilize continuous integration services for software development in an agile environment. It can be used to build freestyle software projects based on Apache Ant and Maven 2/Maven 3, and it can also execute Windows batch commands and shell scripts.

Jenkins can be easily customized with the use of plugins. There are different kinds of plugins available for customizing Jenkins based on specific needs for setting up continuous integration. Categories of plugins include source code management (the Git, CVS, and Bazaar plugins), build triggers (the Accelerated Build Now and Build Flow plugins), build reports (the Code Scanner and Disk Usage plugins), authentication and user management (the Active Directory and GitHub OAuth plugins), and cluster management and distributed builds (the Amazon EC2 and Azure Slave plugins). To know more about all the plugins, visit https://wiki.jenkins-ci.org/display/JENKINS/Plugins.
To explore how to create a new plugin, visit https://wiki.jenkins-ci.org/display/JENKINS/Plugin+tutorial. To download different versions of plugins, visit https://updates.jenkins-ci.org/download/plugins/. Visit the Jenkins website at http://jenkins.io/.

Jenkins accelerates the software development process through automation.

Key features and benefits

Here are some striking benefits of Jenkins:

Easy installation, upgrade, and configuration.
Supported platforms: Windows, Ubuntu/Debian, Red Hat/Fedora/CentOS, Mac OS X, openSUSE, FreeBSD, OpenBSD, Solaris, and Gentoo.
Manages and controls development lifecycle processes.
Support for non-Java projects, such as .NET, Ruby, PHP, Drupal, Perl, C++, Node.js, Python, Android, and Scala.
A development methodology of daily integrations verified by automated builds; every commit can trigger a build.
Jenkins is a fully featured technology platform that enables users to implement CI and CD. Its use is not limited to CI and CD: because it supports shell and Windows batch command execution, it is possible to model and orchestrate an entire deployment or delivery pipeline with Jenkins.
Jenkins 2.0 supports a delivery pipeline that uses a Domain-Specific Language (DSL) for modeling entire deployments or delivery pipelines. Pipeline as code provides a common language, the DSL, to help the development and operations teams collaborate in an effective manner.
Jenkins 2 brings a new GUI with a stage view to observe progress across the delivery pipeline.
Jenkins 2.0 is fully backward compatible with the Jenkins 1.x series.
Jenkins 2 now requires Servlet 3.1 to run. You can use the embedded Winstone-Jetty or a container that supports Servlet 3.1 (such as Tomcat 8).
GitHub, CollabNet, SVN, TFS code repositories, and so on are supported by Jenkins for collaborative development.
Continuous integration: automated build and test (continuous testing), packaging, and static code analysis.
Supports common test frameworks such as the HP ALM tools, JUnit, Selenium, and MSTest. For continuous testing, Jenkins provides plugins, and Jenkins slaves can execute test suites on different platforms.
Jenkins supports static code analysis tools, such as code verification by CheckStyle and FindBugs, and it also integrates with Sonar.
Continuous delivery and continuous deployment: Jenkins automates the application deployment pipeline, integrates with popular configuration management tools, and automates environment provisioning. To achieve continuous delivery and deployment, Jenkins supports automatic deployment and provides a plugin for direct integration with IBM uDeploy.
Highly configurable: a plugin-based architecture that provides support for many technologies, repositories, build tools, and test tools; the open source CI server provides over 400 plugins to achieve extensibility.
Supports distributed builds: Jenkins supports a "master/slave" mode, where the workload of building projects is delegated to multiple slave nodes.
It has a machine-consumable remote access API to retrieve information from Jenkins for programmatic consumption, to trigger a new build, and so on.
It helps deliver better applications faster by automating the application development lifecycle.
The Jenkins build pipeline (quality gate system) provides a build pipeline view of upstream and downstream connected jobs, as a chain of jobs, each one subjecting the build to quality-assurance steps.
It has the ability to define manual triggers for jobs that require intervention prior to execution, such as an approval process outside of Jenkins.

(Diagram: Quality Gates and Orchestration of the Build Pipeline)

Jenkins can be used with the following tools in different categories (Java and .Net variants are listed where they differ):

Language: Java; .Net
Code repositories: Subversion, Git, CVS, StarTeam
Build tools: Ant, Maven; NAnt, MS Build
Code analysis tools: Sonar, CheckStyle, FindBugs; NCover, Visual Studio Code Metrics, PowerTool
Continuous integration: Jenkins
Continuous testing: Jenkins plugins (HP Quality Center 10.00 with the QuickTest Professional add-in, HP Unified Functional Testing 11.5x and 12.0x, HP Service Test 11.20 and 11.50, HP LoadRunner 11.52 and 12.0x, HP Performance Center 12.xx, HP QuickTest Professional 11.00, HP Application Lifecycle Management 11.00, 11.52, and 12.xx, HP ALM Lab Management 11.50, 11.52, and 12.xx, JUnit, MSTest, and VsTest)
Infrastructure provisioning: configuration management tool (Chef)
Virtualization/cloud service provider: VMware, AWS, Microsoft Azure (IaaS), traditional environment
Continuous delivery/deployment: Chef, deployment plugins, shell scripting, PowerShell scripts, Windows batch commands

Installing Jenkins

Jenkins provides us with multiple ways to install it for all types of users. We can install it on at least the following operating systems: Ubuntu/Debian, Windows, Mac OS X, OpenBSD, FreeBSD, openSUSE, Gentoo, and CentOS/Fedora/Red Hat.

One of the easiest options I recommend is to use a WAR file. A WAR file can be used with or without a container or web application server. Having Java installed is a must before we try to use the WAR file for Jenkins, which can be done as follows:

Download the jenkins.war file from https://jenkins.io/.
Open a command prompt in Windows or a terminal in Linux, go to the directory where the jenkins.war file is stored, and execute the following command:
java -jar jenkins.war
Once Jenkins is fully up and running, explore it in the web browser by visiting http://localhost:8080. By default, Jenkins works on port 8080. To use a different port, execute the following command from the command line:
java -jar jenkins.war --httpPort=9999
For HTTPS, use the following command:
java -jar jenkins.war --httpsPort=8888
Once Jenkins is running, visit the Jenkins home directory. In our case, we have installed Jenkins 2 on a CentOS 6.7 virtual machine; go to /home/<username>/.jenkins. If you can't see the .jenkins directory, make sure hidden files are visible. In CentOS, press Ctrl+H to make hidden files visible.

Setting up Jenkins

Now that we have installed Jenkins, let's verify whether it is running. Open a browser and navigate to http://localhost:8080 or http://<IP_ADDRESS>:8080. If you've used Jenkins earlier and recently downloaded the Jenkins 2 WAR file, it will ask for a security setup. To unlock Jenkins, follow these steps:

Go to the .jenkins directory and open the initialAdminPassword file from the secrets subdirectory.
Copy the password in that file, paste it in the Administrator password box, and click on Continue.
Clicking on Continue will redirect you to the Customize Jenkins page. Click on Install suggested plugins.
The installation of the plugins will start; make sure that you have a working Internet connection.
Once all the required plugins have been installed, you will see the Create First Admin User page. Provide the required details, and click on Save and Finish.

Jenkins is ready!
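To recap the command-line part of the installation and first-run steps above, here is a minimal sketch; it assumes a Linux shell, that jenkins.war was downloaded from https://jenkins.io/ into the current directory, and that Jenkins keeps its data in ~/.jenkins as described earlier:

java -jar jenkins.war --httpPort=9999          # start Jenkins on a custom HTTP port
cat ~/.jenkins/secrets/initialAdminPassword    # the password the setup wizard asks for on first login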
Our Jenkins setup is complete. Click on Start using Jenkins. You can get Jenkins plugins from https://wiki.jenkins-ci.org/display/JENKINS/Plugins.

Summary

We have covered some brief details on the DevOps culture and on Jenkins 2.0 and its new features. DevOps for Web Development provides more details on extending Continuous Integration to Continuous Delivery and Continuous Deployment using configuration management tools such as Chef and cloud computing platforms such as Microsoft Azure (App Services) and AWS (Amazon EC2 and AWS Elastic Beanstalk); refer to https://www.packtpub.com/networking-and-servers/devops-web-development. To get more details on Jenkins, refer to Jenkins Essentials, https://www.packtpub.com/application-development/jenkins-essentials.

Resources for Article:

Further resources on this subject:
Setting Up and Cleaning Up [article]
Maven and Jenkins Plugin [article]
Exploring Jenkins [article]