Setting up MongoDB

Packt
12 Aug 2016
10 min read
In this article by Samer Buna, author of the book Learning GraphQL and Relay, we'll see that an API is nothing without access to a database. Let's set up a local MongoDB instance, add some data there, and make sure we can access that data through our GraphQL schema. (For more resources related to this topic, see here.)

MongoDB can be installed locally on multiple platforms. Check the documentation site for instructions for your platform (https://docs.mongodb.com/manual/installation/). For Mac, the easiest way is probably Homebrew:

~ $ brew install mongodb

Create a db folder inside a data folder. The default location is /data/db:

~ $ sudo mkdir -p /data/db

Change the owner of the /data folder to be the currently logged-in user:

~ $ sudo chown -R $USER /data

Start the MongoDB server:

~ $ mongod

If everything worked correctly, we should be able to open a new terminal and test the mongo CLI:

~/graphql-project $ mongo
MongoDB shell version: 3.2.7
connecting to: test
> db.getName()
test

We're using MongoDB version 3.2.7 here. Make sure that you have this version or a newer one. Let's go ahead and create a new collection to hold some test data, and name that collection users:

> db.createCollection("users")
{ "ok" : 1 }

Now we can use the users collection to add documents that represent users. We can use the MongoDB insertOne() function for that:

> db.users.insertOne({
    firstName: "John",
    lastName: "Doe",
    email: "john@example.com"
  })

We should see an output like:

{
  "acknowledged" : true,
  "insertedId" : ObjectId("56e729d36d87ae04333aa4e1")
}

Let's go ahead and add another user:

> db.users.insertOne({
    firstName: "Jane",
    lastName: "Doe",
    email: "jane@example.com"
  })

We can now verify that we have two user documents in the users collection:

> db.users.count()
2

MongoDB has a built-in unique object ID, which you can see in the output for insertOne().
Now that we have a running MongoDB with some test data in it, it's time to see how we can read this data using a GraphQL API. To communicate with MongoDB from a Node.js application, we need to install a driver. There are many options to choose from, but GraphQL requires a driver that supports promises. We will use the official MongoDB Node.js driver, which does. Instructions on how to install and run the driver can be found at https://docs.mongodb.com/ecosystem/drivers/node-js/.

To install the official MongoDB Node.js driver under our graphql-project app, we do:

~/graphql-project $ npm install --save mongodb
└─┬ mongodb@2.2.4

We can now use this mongodb npm package to connect to our local MongoDB server from within our Node application. In index.js:

const mongodb = require('mongodb');
const assert = require('assert');

const MONGO_URL = 'mongodb://localhost:27017/test';

mongodb.MongoClient.connect(MONGO_URL, (err, db) => {
  assert.equal(null, err);
  console.log('Connected to MongoDB server');

  // The readline interface code
});

The MONGO_URL variable value should not be hardcoded like this. Instead, we can use a Node process environment variable to set it to a certain value before executing the code. On a production machine, we would be able to use the same code and set the process environment variable to a different value. Use the export command to set the environment variable value:

export MONGO_URL=mongodb://localhost:27017/test

Then in the Node code, we can read the exported value with:

process.env.MONGO_URL

If we now execute the node index.js command, we should see the Connected to MongoDB server line right before the Client Request prompt. At this point, the Node.js process will not exit after our interaction with it; we'll need to force-exit the process with Ctrl + C to restart it.

Let's start our database API with a simple field that can answer this question: how many total users do we have in the database?
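The environment-variable pattern described above can be sketched as a small standalone snippet. Note that the hardcoded fallback default here is our own assumption for local development, not something the article prescribes:

```javascript
// Read the MongoDB URL from a process environment variable, falling
// back to a local default when the variable is not set. The fallback
// value is an assumption for local development only.
const MONGO_URL =
  process.env.MONGO_URL || 'mongodb://localhost:27017/test';

console.log(MONGO_URL);
```

On a production machine, exporting MONGO_URL before running node index.js makes the same unchanged code connect to the production database.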
The query could be something like:

{ usersCount }

To be able to use a MongoDB driver call inside our schema main.js file, we need access to the db object that the MongoClient.connect() function exposed for us in its callback. We can use the db object to count the user documents by simply running the promise:

db.collection('users').count()
  .then(usersCount => console.log(usersCount));

Since we only have access to the db object in index.js, within the connect() function's callback, we need to pass a reference to that db object to our graphql() function. We can do that using the fourth argument of the graphql() function, which accepts a contextValue object of globals; the GraphQL engine will pass this context object to all the resolver functions as their third argument. Modify the graphql function call within the readline interface in index.js to be:

graphql.graphql(mySchema, inputQuery, {}, { db }).then(result => {
  console.log('Server Answer:', result.data);
  db.close(() => rli.close());
});

The third argument to the graphql() function is called the rootValue, which gets passed as the first argument to the resolver function on the top-level type. We are not using that feature here.

We passed the connected database object db as part of the global context object. This will enable us to use db within any resolver function. Note also how we're now closing the rli interface within the callback for the operation that closes the db. We should not leave any open db connections behind.

Here's how we can now use the resolver's third argument to resolve our usersCount top-level field with the db count() operation:

fields: {
  // "hello" and "diceRoll"...
  usersCount: {
    type: GraphQLInt,
    resolve: (_, args, { db }) =>
      db.collection('users').count()
  }
}

A couple of things to notice about this code: we destructured the db object from the third argument of the resolve() function so that we can use it directly (instead of context.db).
We also returned the promise itself from the resolve() function. The GraphQL executor has native support for promises: any resolve() function that returns a promise will be handled by the executor itself, which will either resolve the promise and then resolve the query field with the promise-resolved value, or reject the promise and return an error to the user.

We can test our query now:

~/graphql-project $ node index.js
Connected to MongoDB server
Client Request: { usersCount }
Server Answer: { usersCount: 2 }

*** #GitTag: chapter1-setting-up-mongodb ***

Setting up an HTTP interface

Let's now see how we can use the graphql() function under another interface, an HTTP one. We want our users to be able to send us a GraphQL request via HTTP. For example, to ask for the same usersCount field, we want the users to do something like:

/graphql?query={usersCount}

We can use the Express.js Node framework to handle and parse HTTP requests, and within an Express.js route, we can use the graphql() function. For example (don't add these lines yet):

const app = express();

app.use('/graphql', (req, res) => {
  // use graphql.graphql() to respond with JSON objects
});

However, instead of manually handling the req/res objects, there is a GraphQL Express.js middleware that we can use, express-graphql. This middleware wraps the graphql() function and prepares it to be used by Express.js directly. Let's go ahead and bring in both the Express.js library and this middleware:

~/graphql-project $ npm install --save express express-graphql
├─┬ express@4.14.0
└─┬ express-graphql@0.5.3

In index.js, we can now import both express and the express-graphql middleware:

const graphqlHTTP = require('express-graphql');
const express = require('express');

const app = express();

With these imports, the middleware main function will now be available as graphqlHTTP(). We can now use it in an Express route handler.
Inside the MongoClient.connect() callback, we can do:

app.use('/graphql', graphqlHTTP({
  schema: mySchema,
  context: { db }
}));

app.listen(3000, () =>
  console.log('Running Express.js on port 3000')
);

Note that at this point we can remove the readline interface code, as we are no longer using it. Our GraphQL interface from now on will be an HTTP endpoint.

The app.use line defines a route at /graphql and delegates the handling of that route to the express-graphql middleware that we imported. We pass two objects to the middleware: the mySchema object and the context object. We're not passing any input query here because this code just prepares the HTTP endpoint; we will be able to read the input query directly from a URL field. The app.listen() function is the call we need to start our Express.js app. Its first argument is the port to use, and its second argument is a callback we can use after Express.js has started. We can now test our HTTP-mounted GraphQL executor with:

~/graphql-project $ node index.js
Connected to MongoDB server
Running Express.js on port 3000

In a browser window, go to:

http://localhost:3000/graphql?query={usersCount}

*** #GitTag: chapter1-setting-up-an-http-interface ***

The GraphiQL editor

The graphqlHTTP() middleware function accepts another property on its parameter object, graphiql; let's set it to true:

app.use('/graphql', graphqlHTTP({
  schema: mySchema,
  context: { db },
  graphiql: true
}));

When we restart the server now and navigate to http://localhost:3000/graphql, we'll get an instance of the GraphiQL editor running locally against our GraphQL schema. GraphiQL is an interactive playground where we can explore our GraphQL queries and mutations before we officially use them. GraphiQL is written in React and GraphQL, and it runs completely within the browser. It has many powerful editor features such as syntax highlighting, code folding, and error highlighting and reporting.
Thanks to GraphQL's introspective nature, GraphiQL also has intelligent type-ahead of fields, arguments, and types. Put the cursor in the left editor area and type a selection set:

{

}

Place the cursor inside that selection set and press Ctrl + space. You should see a list of all the fields that our GraphQL schema supports, which are the three fields that we have defined so far (hello, diceRoll, and usersCount). If Ctrl + space does not work, try Cmd + space, Alt + space, or Shift + space.

The __schema and __type fields can be used to introspectively query the GraphQL schema about what fields and types it supports. When we start typing, this list starts to get filtered accordingly. The list respects the context of the cursor; if we place the cursor inside the arguments of diceRoll(), we'll get the only argument we defined for diceRoll, the count argument. Go ahead and read all the root fields that our schema supports, and see how the data gets reported on the right side as a formatted JSON object.

*** #GitTag: chapter1-the-graphiql-editor ***

Summary

In this article, we learned how to set up a local MongoDB instance and add some data to it, so that we can access that data through our GraphQL schema.

Resources for Article:

Further resources on this subject:

Apache Solr and Big Data – integration with MongoDB [article]
Getting Started with Java Driver for MongoDB [article]
Documents and Collections in Data Modeling with MongoDB [article]


Laravel 5.0 Essentials

Packt
12 Aug 2016
9 min read
In this article by Alfred Nutile, from the book Laravel 5.x Cookbook, we will learn the following topics:

Setting up Travis to Auto Deploy when all is Passing
Working with Your .env File
Testing Your App on Production with Behat

(For more resources related to this topic, see here.)

Setting up Travis to Auto Deploy when all is Passing

Level 0 of any work should be getting a deployment workflow set up. What that means in this case is that a push to GitHub will trigger our Continuous Integration (CI), and then from the CI, if the tests are passing, we trigger the deployment. In this example I am not going to hit the URL Forge gives you; instead I am going to send an artifact to S3 and then have it call CodeDeploy to deploy this artifact.

Getting ready

You really need to see the section before this one; otherwise, continue knowing this will make no sense.

How to do it…

Install the travis command line tool in Homestead, as noted in their docs: https://github.com/travis-ci/travis.rb#installation. Make sure to use Ruby 2.x:

sudo apt-get install ruby2.0-dev
sudo gem install travis -v 1.8.2 --no-rdoc --no-ri

Then in the recipe folder I run the command:

> travis setup codedeploy

I answer all the questions, keeping in mind:

The KEY and SECRET are the ones we made for the IAM user in the section before this.
The S3 KEY is the filename, not the KEY we used above. So in my case I just use the name of the file again, latest.zip, since it sits inside the recipe-artifact bucket.

Finally, I open the .travis.yml file, which the above command modifies, and update the before-deploy area so the zip command ignores my .env file; otherwise it would overwrite the file on the server.

How it works…

Well, if you did the CodeDeploy section before this one, you will know this is not as easy as it looks. After all the previous work we are able to, with the one command travis setup codedeploy, securely punch in all the needed info to get this passing build to deploy.
So after phpunit reports that things are passing, we are ready. That said, we had to have a lot of things in place: an S3 bucket to put the artifact in, permissions with the KEY and SECRET to access the artifact and CodeDeploy, and a CodeDeploy Group and Application to deploy to. All of this is covered in the previous section. After that, it is just the magic of Travis and CodeDeploy working together to make this look so easy.

See also…

Travis Docs: https://docs.travis-ci.com/user/deployment/codedeploy
https://github.com/travis-ci/travis.rb
https://github.com/travis-ci/travis.rb#installation

Working with Your .env File

The workflow around this can be tricky. Going from local, to TravisCI, to CodeDeploy, and then to AWS without storing your info in .env on GitHub can be a challenge. What I will show here are some tools and techniques to do this well.

Getting ready

A base install is fine; I will use the existing install to show some tricks around this.

How to do it…

Minimize by using conventions as much as possible:

config/queue.php: I can do this to have one or more queues
config/filesystems.php: use the config file as much as possible

For example, if I add config/marvel.php, my .env can be trimmed down to KEY=VALUE pairs, and later on I can call to those:

Config::get('marvel.MARVEL_API_VERSION')
Config::get('marvel.MARVEL_API_BASE_URL')

Now, to easily send settings to staging or production, I use the EnvDeployer library:

> composer require alfred-nutile-inc/env-deployer:dev-master

Follow the readme.md for that library. Then, as it says in the docs, set up your config file so that it matches the destination IP/URL, username, and path for those services. I end up with the config file config/envdeployer.php.

Now the trick to this library is that you start to enter KEY=VALUE pairs into your .env file, stacked on top of each other. For example, my database settings might look like this.
So now I can type:

> php artisan envdeployer:push production

This will push your .env over SSH to production and swap in the related @production values for each KEY they are placed above.

How it works…

The first mindset to follow is conventions: before you put a new KEY=VALUE into the .env file, sit back and figure out defaults and conventions around what you already must have in this file. For example, the must-haves are APP_ENV, and then I always have APP_NAME, so those two together do a lot to name databases, queues, buckets, and so on around those existing KEYs. It really does add up; whether you are working alone or on a team, focus on these conventions and then use the config/some.php file workflow to set up defaults. Then libraries like the one I use above push this info around with ease. Kind of like Heroku, you can command-line these settings up to the servers as needed.

See also…

Laravel Validator for the .env file: https://packagist.org/packages/mathiasgrimm/laravel-env-validator
Laravel 5 Fundamentals: Environments and Configuration: https://laracasts.com/series/laravel-5-fundamentals/episodes/6

Testing Your App on Production with Behat

So your app is now on production! Start clicking away at hundreds of little and big features so you can make sure everything went okay, or better yet, run Behat! Behat on production? Sounds crazy, but I will cover some tips on how to do this, including how to set up some remote conditions and clean up when you are done.

Getting ready

Any app will do. In my case I am going to hit production with some tests I made earlier.

How to do it…

Tag a Behat test @smoke, or just a Scenario that you know is safe to run on production, for example features/home/search.feature. Update behat.yml, adding a profile called production. Then run:

> vendor/bin/behat -shome_ui --tags=@smoke --profile=production

(I run an Artisan command to run all of these.) Then you will see it hit the production URL and run only the Scenarios you feel are safe for Behat.
Another method is to log in as a demo user; after logging in as that user, you can see data that is related to that user only, so you can test the authenticated level of data and interactions. For example, in database/seeds/UserTableSeeder.php, add the demo user to the run method. Then update your .env and push that setting up to production:

> php artisan envdeployer:push production

Then we update our behat.yml file to run this test even on production: features/auth/login.feature. Now we need to commit our work and push to GitHub so TravisCI can deploy the changes. Since this is a seed and not a migration, I need to rerun the seeds on production. Since this is a new site and no one has used it, this is fine, but of course this would have been a migration if I had to do it later in the application's life.

Now let's run this test from our vagrant box:

> vendor/bin/behat -slogin_ui --profile=production

But it fails, because I am setting up the start of this test for my local database, not the remote one, in features/bootstrap/LoginPageUIContext.php. So I need a way to set up the state of the world on the remote server:

> php artisan make:controller SetupBehatController

Update that controller to do the setup, and make the route in app/Http/routes.php. Then update the behat test features/bootstrap/LoginPageUIContext.php.

And we should do some cleanup! First, add a new method to features/bootstrap/LoginPageUIContext.php, then add that tag to the Scenarios it relates to in features/auth/login.feature, then add the controller like before, app/Http/Controllers/CleanupBehatController.php, and its route. Then push, and we are ready to test this user with fresh state and clean up when we are done! In this case I could test editing the Profile from one state to another.

How it works…

Not too hard! Now we have a workflow that can save us a ton of clicking around production after every deployment. To begin with, I add the tag @smoke to tests I consider safe for production.
What does safe mean? Basically, read-only tests that I know will not affect the site's data. Using the @smoke tag, I have a consistent way to mark Suites or Scenarios as safe to run on production. But then I take it a step further and create a way to test authenticated state, like making a Favorite or updating a Profile! By using some simple routes and a user, I can begin to test many other things on my long list of features I need to consider after every deploy. All of this happens thanks to the configurability of Behat and how it allows me to manage different Profiles and Suites in the behat.yml file!

Lastly, I tie into the fact that Behat has hooks. In this case I tie into the @AfterScenario hook by adding that to my annotation, and I add another tag, @profile, so it only runs if the Scenario has that tag. That is it! Thanks to Behat, hooks, and how easy it is to make routes in Laravel, I can easily take care of a large percentage of what would otherwise be a tedious process after every deployment.

See also…

Behat docs on hooks: http://docs.behat.org/en/v3.0/guides/3.hooks.html
Saucelabs (set it up later in behat.yml, and you can test your site on numerous devices): https://saucelabs.com/

Summary

This article gave a summary of setting up Travis, working with .env files, and testing with Behat.

Resources for Article:

Further resources on this subject:

CRUD Applications using Laravel 4 [article]
Laravel Tech Page [article]
Eloquent… without Laravel! [article]


Open Source Project Royalty Scheme

Packt
11 Aug 2016
3 min read
Open Source Project Royalty Scheme Packt believes in Open Source and helping to sustain and support its unique projects and communities. Therefore, when we sell a book written on an Open Source project, we pay a royalty directly to that project. As a result of purchasing one of our Open Source books, Packt will have given some of the money received to the Open Source project. In the long term, we see ourselves and yourselves, as customers and readers of our books, as part of the Open Source ecosystem, providing sustainable revenue for the projects we publish on. Our aim at Packt is to establish publishing royalties as an essential part of the service and support business model that sustains Open Source.   Some of the things people have said about the Open Source Project Royalty Scheme: “Moodle is grateful for the royalty donations that Packt have volunteered to send us as part of their Open Source Project Royalty Scheme. The money donated helps us fund a developer for a few months a year and thus contributes directly towards Moodle core development, support and improvements in the future.”  - Martin Dougiamas, founder of acclaimed Open-Source e-learning software, Moodle. “Most of the money that we've used, donated from Packt has gone towards running jQuery conferences for the community and bringing together the jQuery team to do development work together. The financial contributions have been very valuable and in that regard, have resulted in a team that's able to operate much more efficiently and effectively.”  - John Resig, the founder of the popular JavaScript library, jQuery "The Drupal project and its community have grown sharply over the last couple of years. The support that Packt has shown, through its book royalties and awards, has contributed to that success and helped the project handle its growth. The Drupal Association uses the money that Packt donates on a number of things including, server infrastructure and the organization of events." 
- Dries Buytaert, founder of renowned Content Management System, Drupal

To read up on the projects that are supported by the Packt Open Source Project Royalty Scheme, click the appropriate categories below:

All Open Source Projects
Content Management System (CMS)
Customer Relationship Management (CRM)
e-Commerce
e-Learning
Networking and Telephony
Web Development
Web Graphics and Video

Are you part of an Open Source project that Packt has published a book on? Packt believes in Open Source, and your project may be able to receive support through the Open Source Project Royalty Scheme. Simply contact Packt: royalty@packtpub.com.


Migrating from Version 3

Packt
11 Aug 2016
11 min read
In this article, Matt Lambert, the author of the book Learning Bootstrap 4, covers how to migrate your Bootstrap 3 project to version 4. Bootstrap 4 is a major update: almost the entire framework has been rewritten to improve code quality, add new components, simplify complex components, and make the tool easier to use overall. We've seen the introduction of new components like Cards and the removal of a number of basic components that weren't heavily used. In some cases, Cards present a better way of assembling a layout than a number of the removed components. Let's jump into this article by showing some specific class and behavioral changes to Bootstrap in version 4. (For more resources related to this topic, see here.)

Browser support

Before we jump into the component details, let's review the new browser support. If you are currently running on version 3 and support some older browsers, you may need to adjust your support level when migrating to Bootstrap 4. For desktop browsers, Internet Explorer version 8 support has been dropped; the new minimum Internet Explorer version that is supported is version 9. Switching to mobile, iOS version 6 support has been dropped, and the minimum iOS supported is now version 7. The Bootstrap team has also added support for Android v5.0 Lollipop's browser and WebView. Earlier versions of the Android Browser and WebView are not officially supported by Bootstrap.

Big changes in version 4

Let's continue by going over the biggest changes to the Bootstrap framework in version 4.

Switching to Sass

Perhaps the biggest change in Bootstrap 4 is the switch from Less to Sass. This will also likely be the biggest migration job you will need to take care of. The good news is that you can use the sample code we've created in the book as a starting place. Luckily, the syntax for the two CSS pre-processors is not that different, and if you haven't used Sass before, there isn't a huge learning curve that you need to worry about.
Let's cover some of the key things you'll need to know when updating your stylesheets for Sass.

Updating your variables

The main difference in variables is the symbol used to denote one. In Less we use the @ symbol for variables, while in Sass we use the $ symbol. Here are a couple of examples:

/* LESS */
@red: #c00;
@black: #000;
@white: #fff;

/* SASS */
$red: #c00;
$black: #000;
$white: #fff;

As you can see, that is pretty easy to do; a simple find and replace should do most of the work for you. However, if you are using @import in your stylesheets, make sure there remains an @ symbol.

Updating @import statements

Another small change in Sass is how you import different stylesheets using the @import keyword. First, let's take a look at how you do this in Less:

@import "components/_buttons.less";

Now let's compare how we do this using Sass:

@import "components/_buttons.scss";

As you can see, it's almost identical. You just need to make sure you name all your files with the .scss extension, and then update the file names in your @import statements to use .scss instead of .less.

Updating mixins

One of the biggest differences between Less and Sass is mixins. Here we'll need to do a little more heavy lifting when we update the code to work with Sass. First, let's take a look at how we would create a border-radius, or rounded corners, mixin in Less:

.border-radius (@radius: 2px) {
  -moz-border-radius: @radius;
  -ms-border-radius: @radius;
  border-radius: @radius;
}

In Less, all elements that use the border-radius mixin will have a border radius of 2px by default. The mixin is added to a component like this:

button {
  .border-radius;
}

Now let's compare how you would do the same thing using Sass.
Check out the mixin code:

@mixin border-radius($radius) {
  -webkit-border-radius: $radius;
  -moz-border-radius: $radius;
  -ms-border-radius: $radius;
  border-radius: $radius;
}

There are a few differences here that you need to note:

You need to use the @mixin keyword to initialize any mixin.
We don't actually define a global default value to use with the mixin.

To use the mixin with a component, you would code it like this:

button {
  @include border-radius(2px);
}

This is also different from Less in a few ways:

First, you need to insert the @include keyword to call the mixin.
Next, you use the mixin name you defined earlier, in this case border-radius.
Finally, you need to set the value of the border-radius for each element, in this case 2px.

Personally, I prefer the Less method, as you can set the value once and then forget about it. However, since Bootstrap has moved to Sass, we have to learn and use the new syntax. That concludes the main differences that you will likely encounter. There are other differences, and if you would like to research them more, check out this page: http://sass-lang.com/guide.

Additional global changes

The change to Sass is one of the bigger global differences in version 4 of Bootstrap. Let's take a look at a few others you should be aware of.

Using REM units

In Bootstrap 4, px has been replaced with rem as the primary unit of measure. If you are unfamiliar with rem, it stands for root em. Rem is a relative unit of measure, whereas pixels are fixed. Rem looks at the value of font-size on the root element in your stylesheet, then uses your value declaration, in rems, to determine the computed pixel value. Let's use an example to make this easier to understand:

html {
  font-size: 24px;
}

p {
  font-size: 2rem;
}

In this case, the computed font-size for the <p> tag would be 48px. This is different from the em unit, because ems are affected by wrapping elements that may have a different size.
Rem takes a simpler approach and just calculates everything from the root HTML element, removing the size cascading that can occur when you use ems with nested, complicated elements. This may sound confusing, but it is actually easier than using em units: just remember your root font-size and use it when figuring out your rem values. What this means for migration is that you will need to go through your stylesheet and change any px or em values to use rems. You'll need to recalculate everything to make sure it fits the new format if you want to maintain the same look and feel for your project.

Other font updates

The trend for a long while has been to make text on a screen larger and easier to read for all users. In the past, we used tons of small typefaces that might have looked cool but were hard to read for anyone visually challenged. To that end, the base font-size for Bootstrap has been changed from 14px to 16px. This is also the standard size for most browsers, and it makes text more readable. From a migration standpoint, you'll need to review your components to ensure they still look correct with the increased font size. You may need to make some changes if you have components that were based on the 14px default font-size in Bootstrap 3.

New grid size

With the increased use of mobile devices, Bootstrap 4 includes a new, smaller grid tier for small-screen devices. The new grid tier is called extra small and is configured for devices under 480px in width. For the migration story, this shouldn't have a big effect; what it does give you is a new breakpoint if you want to further optimize your project for smaller screens. That concludes the main global changes to Bootstrap that you should be aware of when migrating your projects. Next, let's take a look at components.
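Before moving on, the rem arithmetic described above can be double-checked with a tiny helper (illustrative JavaScript of our own, not part of Bootstrap):

```javascript
// Compute the pixel value of a rem declaration from the root
// font-size, mirroring the earlier example: with a 24px root
// font-size, a 2rem paragraph computes to 48px.
function remToPx(rem, rootFontSizePx) {
  return rem * rootFontSizePx;
}

console.log(remToPx(2, 24)); // 48
```

The same helper is handy when recalculating old px values: divide the target pixel size by your root font-size to get the rem value to write in the stylesheet.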
The most significant change is around the new Cards component, so let's start by breaking down this new option.

Migrating to the Cards component

With the release of the Cards component, the Panels, Thumbnails, and Wells components have been removed from Bootstrap 4. Cards combines the best of these elements into one and even adds some new functionality that is really useful. If you are migrating from a Bootstrap 3 project, you'll need to update any Panels, Thumbnails, or Wells to use the Cards component instead. Since the markup is a bit different, I would recommend removing the old components altogether and then recoding them, with the same content, as Cards.

Using icon fonts

The Glyphicons icon font has been removed from Bootstrap 4. I'm guessing this is due to licensing reasons, as the library was not fully open source. If you don't want to update your icon code, simply download the library from the Glyphicons website at http://glyphicons.com/. The other option is to change the icon library to a different one, like Font Awesome. If you go down this route, you'll need to update all of your <i> tags to use the proper CSS class to render each icon. There is a quick reference tool called GlyphSearch that will help you do this. It supports a number of icon libraries, and I use it all the time. Check it out at http://glyphsearch.com/. Those are the key components you need to be aware of. Next, let's go over what's different in JavaScript.

Migrating JavaScript

The JavaScript components have been totally rewritten in Bootstrap 4. Everything is now coded in ES6 and compiled with Babel, which makes it easier and faster to use. On the component side, the biggest difference is the Tooltips component. The Tooltip is now dependent on an external library called Tether, which you can download from http://github.hubspot.com/tether/. If you are using Tooltips, make sure you include this library in your template.
The actual markup for calling a Tooltip looks to be the same, but you must include the new library when migrating from version 3 to 4.

Miscellaneous migration changes

Aside from what I've gone over already, there are a number of other changes you need to be aware of when migrating to Bootstrap 4. Let's go through them all below.

Migrating typography

The .page-header class has been dropped from version 4. Instead, you should look at using the new display CSS classes on your headers if you want to give them a heading look and feel.

Migrating images

If you've ever used responsive images in the past, the class name has changed. Previously, the class name was .img-responsive, but it is now named .img-fluid. You'll need to update that class anywhere it is used.

Migrating tables

For the table component, a few class names have changed and there are some new classes you can use. If you would like to create a responsive table, you can now simply add the class .table-responsive to the <table> tag. Previously, you had to wrap the <table> tag in an element with that class. If migrating, you'll need to update your HTML markup to the new format. The .table-condensed class has been renamed to .table-sm. You'll need to update that class anywhere it is used. There are also a couple of new table styles you can add, called .table-inverse and .table-reflow.

Migrating forms

Forms are always a complicated component to code. In Bootstrap 4, some of the class names have changed to be more consistent. Here's a list of the differences you need to know about:

- .control-label is now .form-control-label
- .input-lg and .input-sm are now .form-control-lg and .form-control-sm
- The .form-group class has been dropped and you should instead use .form-control

You likely have these classes throughout most of your forms. You'll need to update them anywhere they are used.
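Since these are mechanical renames, a script can handle much of the churn. A sketch using sed — the scratch directory and sample file are invented for the example, and you should back up your real templates first:

```shell
# Work in a scratch directory with a sample Bootstrap 3 snippet
mkdir -p /tmp/bs4-migration
printf '<label class="control-label">Name</label>\n<input class="form-control input-lg">\n' \
  > /tmp/bs4-migration/form.html

# Apply the Bootstrap 4 form class renames in place (a .bak backup is kept)
sed -i.bak \
  -e 's/control-label/form-control-label/g' \
  -e 's/input-lg/form-control-lg/g' \
  -e 's/input-sm/form-control-sm/g' \
  /tmp/bs4-migration/form.html

cat /tmp/bs4-migration/form.html
```

Run the same pattern across your template folder once you've verified the substitutions on a copy.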
Migrating buttons

There are some minor CSS class name changes that you need to be aware of:

- .btn-default is now .btn-secondary
- The .btn-xs class has been dropped from Bootstrap 4

Again, you'll need to update these classes when migrating to the new version of Bootstrap. There are some other minor changes to components that aren't as commonly used. I'm confident this explanation covers the majority of use cases for Bootstrap 4. However, if you would like to see the full list of changes, please visit: http://v4-alpha.getbootstrap.com/migration/.

Summary

That brings this article to a close! I hope you are now able to migrate your Bootstrap 3 project to Bootstrap 4.

Resources for Article:
Further resources on this subject:
Responsive Visualizations Using D3.js and Bootstrap [article]
Advanced Bootstrap Development Tools [article]
Deep Customization of Bootstrap [article]
Raka Mahesa
11 Aug 2016
5 min read
AWS for Mobile Developers - Getting Started with Mobile Hub

Amazon Web Services, also known as AWS, is a go-to destination for developers to host their server-side apps. AWS isn't exactly beginner-friendly, however. So if you're a mobile developer who doesn't have much knowledge of the backend field, AWS, with its countless weirdly-named services, may look like a complex beast. Well, Amazon has decided to rectify that. In late 2015, they rolled out the AWS Mobile Hub, a new dashboard for managing Amazon services on a mobile platform. The most important part about it is that it's easy to use. AWS Mobile Hub features the following services:

- User authentication via Amazon Cognito
- Data, file, and resource storage using Amazon S3 and CloudFront
- Backend code with AWS Lambda
- Push notifications using Amazon SNS
- Analytics using Amazon Analytics

After seeing this list of services, you may still think it's complicated and that you only need one or two services from it. The good news is that the hub allows you to cherry-pick the services you want instead of forcing you to use all of them. So, if your mobile app only needs to access some files on the Internet, you can choose to use only the resource storage functionality and skip the other features. So, is AWS Mobile Hub basically a more focused version of the AWS dashboard? Well, it's more than just that. The hub is able to configure the Amazon services that you're going to use, so they're automatically tailored to your app. The hub will then generate Android and iOS code for connecting to the services you just set up, so that you can quickly copy it into your app and use the services right away. Do note that most of what the hub does automatically can also be achieved by integrating the AWS SDK and configuring each service manually. Fortunately, you can easily add Amazon services that aren't included in the hub.
So, if you also want to use the Amazon DynamoDB service in your app, all you have to do is call the relevant DynamoDB functions from the AWS SDK and you're good to go. All right, enough talking. Let's give the AWS Mobile Hub a whirl! We're going to use the hub for the Android platform, so make sure you satisfy the following requirements:

- Android Studio v1.2 or above
- Android 4.4 (API 19) SDK
- Android SDK Build-tools 23.0.1

Let's start by opening the AWS Mobile Hub to create a new mobile project (you will be asked to log in to your Amazon account if you haven't done so). After entering the project name, you are presented with the hub dashboard, where you can choose the services you want to add to your app. Let's start by adding the User Sign-in functionality. There are a couple of steps that must be completed to configure the authentication service. First, you need to figure out whether your app can be used without logging in or not (for example, users can use Imgur without logging in, but they have to log in to use Facebook). If a user doesn't need to be logged in, choose "Sign-in is optional"; otherwise, choose that sign-in is required. The next step is to add the actual authentication method. You can use your own authentication method, but that requires setting up a server, so let's go with third-party authentication instead. When choosing a third-party authentication, create a corresponding app on the third party's website and then copy the required information to the Mobile Hub dashboard. When that's done, save the changes you made and return to the service picker. Except for the Cloud Logic service, the configurations for the other services are quite straightforward, so let's add the User Data Storage and App Content Delivery services. If you want to integrate Cloud Logic, you will be directed to the AWS Lambda dashboard, where you will need to write a JavaScript function that will be run on the server. So let's leave that for another time.
All right, you should be all set up now, so let's proceed with building the base Android app. Click on the build button on the menu to the left and then choose Android. The Hub will then configure all the services you chose earlier and provide you with an Android project that has been integrated with all of the necessary SDK, including the SDK for the 3rd party authentication. It's pretty nice, isn't it? Download the Android project and unzip it, and make note of the "MySampleApp" folder inside it. Fire up Android Studio and import (File > New > Import Project...) that folder. Wait for the project to finish syncing, and once it's done, try running it in your Android device to see if AWS was integrated successfully or not. And that's it. All of the code needed to connect to the Amazon services you have set up earlier can be found in the MySampleApp project. Now you can simply copy that to your actual project or use the project as a base to build the app you want. Check out the build section of the dashboard for a more detailed explanation of the generated codes.  About the author Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR (https://play.google.com/store/apps/details?id=com.rakamahesa.corridoom) being his latest released game. Raka also regularly tweets as @legacy99.
Packt
11 Aug 2016
8 min read
Application Logging

In this article by Travis Marlette, author of Splunk Best Practices, we will cover the following topics: (For more resources related to this topic, see here.)

- Log messengers
- Logging formats

Within the working world of technology, there are hundreds of thousands of different applications, all (usually) logging in different formats. As Splunk experts, our job is to make all those logs speak human, which is often an impossible task. With third-party applications that provide support, log formatting is sometimes out of our control. Take, for instance, Cisco or Juniper, or any other leading application manufacturer. We won't be discussing these kinds of logs in this article, but instead the logs that we do have some control over. The logs I am referencing belong to proprietary in-house (also known as "home grown") applications that are often part of the middleware, and usually they control some of the most mission-critical services an organization can provide. Proprietary applications can be written in any language; however, logging is usually left up to the developers for troubleshooting, and up until now the process of manually scraping log files to troubleshoot quality assurance issues and system outages has been very specialized. I mean that usually the developer(s) are the only people who truly understand what those log messages mean. That being said, developers often write their logs in a way that they can understand, because ultimately it will be them doing the troubleshooting and code fixing when something breaks severely. As an IT community, we haven't really started taking a look at the way we log things; instead, we have tried to limit the confusion to developers, and then have them help the other SMEs who provide operational support to understand what is actually happening. This method works, but it is slow, and the true value of any SME is reducing a system's MTTR (mean time to repair) and increasing uptime.
With any system, more transactions processed means a larger scale, which means that, after about 20 machines, troubleshooting begins to get more complex and time consuming with a manual process. This is where something like Splunk can be extremely valuable; however, Splunk is only as good as the information that is coming into it. I will say this phrase for the people who haven't heard it yet: "garbage in… garbage out". There are ways to turn proprietary logging into a powerful tool, and I have personally seen the value of these kinds of logs: after being formatted for Splunk, they turn into a huge asset in an organization's software life cycle. I'm not here to tell you this is easy, but I am here to give you some good practices for formatting proprietary logs. To do that, I'll start by helping you appreciate a very silent and critical piece of the application stack. To developers, a logging mechanism is a very important part of the stack, and the log itself is mission critical. What we haven't spent much time thinking about before log analyzers is how to make log events/messages/exceptions more machine friendly, so that we can socialize the information in a system like Splunk and start to bridge the knowledge gap between development and operations. The nicer we format the logs, the faster Splunk can reveal information about our systems, saving everyone time and headaches.

Loggers

Here I'm giving some very high-level information on loggers. My intention is not to recommend logging tools, but simply to raise awareness of their existence for those who are not in development, and to allow for independent research into what they do. With the right developer and the right Splunker, the logger turns into something immensely valuable to an organization. There is an array of different loggers in the IT universe, and I'm only going to touch on a couple of them here.
Keep in mind that I'm only referencing these due to the ease of development I've seen from personal experience, and experiences do vary. I'm only going to touch on three loggers and then move on to formatting, as there are tons of logging mechanisms and the preference truly depends on the developer.

Anatomy of a log

I'm going to be taking some very broad strokes with the following explanations in order to familiarize you, the Splunker, with the logger. If you would like to learn more, please either seek out a developer to help you understand the logic better or acquire some education on how to develop and log through independent study. There are some pretty basic components to logging that we need to understand in order to learn which type of data we are looking at. I'll start with the four most common ones:

- Log events: This is the entirety of the message we see within a log, often starting with a timestamp. The event itself contains all other aspects of application behavior, such as fields, exceptions, messages, and so on… think of this as the "container", if you will, for information.
- Messages: These are often written by the developer of the application and provide some human insight into what's actually happening within an application. The most common messages we see are things like unauthorized login attempt <user> or Connection Timed out to <ip address>.
- Message fields: These are the pieces of information that give us the who, where, and when types of information for the application's actions. They are handed to the logger by the application itself as it either attempts or completes an activity.
For instance, in the log event below, the highlighted pieces are what would be fields, and often those that people look for when troubleshooting:

"2/19/2011 6:17:46 AM Using 'xplog70.dll' version '2009.100.1600' to execute extended store procedure 'xp_common_1' operation failed to connect to 'DB_XCUTE_STOR'"

- Exceptions: These are the uncommon but very important pieces of the log. They are generally written only when something goes wrong, and they offer developer insight into the root cause at the application layer; they are used for debugging. Exceptions can print a huge amount of information into the log depending on the developer and the framework, and their format is not easy, and in some cases not even possible, for a developer to manage.

Log4*

This is an open source logger that is often used in middleware applications.

Pantheios

This is a logger popularly used on Linux, known for its performance and multi-threaded handling of logging. Commonly, Pantheios is used for C/C++ applications, but it works with a multitude of frameworks.

Logging – logging facility for Python

This is a logger specifically for Python, and since Python is becoming more and more popular, it is a very common package used to log Python scripts and applications.

Each one of these loggers has its own way of logging, and the value is determined by the application developer. If there is no standardized logging, one can imagine the confusion this can bring to troubleshooting.

Example of a structured log

This is an example of a Java exception in a structured log format:

When Java prints this exception, it will come in this format, and a developer doesn't control what that format is. They can control some aspects of what is included within an exception, though the arrangement of the characters and how it's written is done by the Java framework itself.
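Before continuing, here is a sketch of what field-friendly formatting can look like in practice, using Python's standard logging module. The logger name and the user/src fields in the message are invented for illustration; the point is that a consistent key=value style makes fields trivial for a log analyzer to extract:

```python
import logging

# A consistent, field-oriented line format: timestamp, level, logger, message
formatter = logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(message)s")

handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("payments")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# key=value pairs are easy for a log analyzer to turn into search fields
logger.warning("unauthorized login attempt user=jdoe src=10.1.2.3")
```

The same idea applies in any logger: keep the envelope (timestamp, level, source) uniform, and put the variable data in predictable field names.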
I mention this point about framework-controlled formatting to help operational people understand where a developer's control sometimes ends. My own personal experience has taught me that attempting to change a format that is handled within the framework itself is an exercise in futility. Pick your battles, right? As a Splunker, you can save yourself a headache on this kind of thing.

Summary

While I say that, I will add an addendum: Splunk, combined with a Splunk expert and the right development resources, can also make the data I just mentioned extremely valuable. It will likely not happen as fast as they make it out to be at a presentation, and it will take more resources than you may have thought; however, at the end of your Splunk journey, you will be happy. This article was meant to help you understand the importance of log formatting and how logs are written. We often don't think about our logs proactively, and I encourage you to do so.

Resources for Article:
Further resources on this subject:
Logging and Monitoring [Article]
Logging and Reports [Article]
Using Events, Interceptors, and Logging Services [Article]
Packt
10 Aug 2016
13 min read
Go Programming Control Flow

In this article, Vladimir Vivien, author of the book Learning Go Programming, explains some basic control flow in the Go programming language. Go borrows much of its control flow syntax from its C family of languages. It supports all of the expected control structures, including if-else, switch, for loop, and even goto. Conspicuously absent, though, are while and do-while statements. The following topics examine Go's control flow elements, some of which you may already be familiar with and others that bring a set of functionalities not found in other languages:

- The if statement
- The switch statement
- The type switch

(For more resources related to this topic, see here.)

The If Statement

The if statement in Go borrows its basic structural form from other C-like languages. The statement conditionally executes a code block when the Boolean expression that follows the if keyword evaluates to true, as illustrated in the following abbreviated program that displays information about world currencies.
import "fmt" type Currency struct { Name string Country string Number int } var CAD = Currency{ Name: "Canadian Dollar", Country: "Canada", Number: 124} var FJD = Currency{ Name: "Fiji Dollar", Country: "Fiji", Number: 242} var JMD = Currency{ Name: "Jamaican Dollar", Country: "Jamaica", Number: 388} var USD = Currency{ Name: "US Dollar", Country: "USA", Number: 840} func main() { num0 := 242 if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } else { fmt.Println("Currency unknown") } if num1 := 388; num1 > 100 || num1 < 900 { fmt.Println("Currency:", num1) printCurr(num1) } } func printCurr(number int) { if CAD.Number == number { fmt.Printf("Found: %+v\n", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+v\n", FJD) } else if JMD.Number == number { fmt.Printf("Found: %+v\n", JMD) } else if USD.Number == number { fmt.Printf("Found: %+v\n", USD) } else { fmt.Println("No currency found with number", number) } } The if statement in Go looks similar to other languages. However, it sheds a few syntactic rules while enforcing new ones. The parentheses around the test expression are not necessary. While the following if statement will compile, it is not idiomatic: if (num0 > 100 || num0 < 900) { fmt.Println("Currency: ", num0) printCurr(num0) } Use instead: if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } The curly braces for the code block are always required. The following snippet will not compile: if num0 > 100 || num0 < 900 printCurr(num0) However, this will compile: if num0 > 100 || num0 < 900 {printCurr(num0)} It is idiomatic, however, to write the if statement on multiple lines (no matter how simple the statement block may be). This encourages good style and clarity.
The following snippet will compile with no issues: if num0 > 100 || num0 < 900 {printCurr(num0)} However, the preferred idiomatic layout for the statement is to use multiple lines as follows: if num0 > 100 || num0 < 900 { printCurr(num0) } The if statement may include an optional else block which is executed when the expression in the if block evaluates to false. The code in the else block must be wrapped in curly braces using multiple lines as shown in the following. if num0 > 100 || num0 < 900 { fmt.Println("Currency: ", num0) printCurr(num0) } else { fmt.Println("Currency unknown") } The else keyword may be immediately followed by another if statement forming an if-else-if chain as used in function printCurr() from the source code listed earlier. if CAD.Number == number { fmt.Printf("Found: %+vn", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+vn", FJD) The if-else-if statement chain can grow as long as needed and may be terminated by an optional else statement to express all other untested conditions. Again, this is done in the printCurr() function which tests four conditions using the if-else-if blocks. Lastly, it includes an else statement block to catch any other untested conditions: func printCurr(number int) { if CAD.Number == number { fmt.Printf("Found: %+vn", CAD) } else if FJD.Number == number { fmt.Printf("Found: %+vn", FJD) } else if JMD.Number == number { fmt.Printf("Found: %+vn", JMD) } else if USD.Number == number { fmt.Printf("Found: %+vn", USD) } else { fmt.Println("No currency found with number", number) } } In Go, however, the idiomatic and cleaner way to write such a deep if-else-if code block is to use an expressionless switch statement. This is covered later in the section on SwitchStatement. If Statement Initialization The if statement supports a composite syntax where the tested expression is preceded by an initialization statement. 
At runtime, the initialization is executed before the test expression is evaluated as illustrated in this code snippet (from the program listed earlier). if num1 := 388; num1 > 100 || num1 < 900 { fmt.Println("Currency:", num1) printCurr(num1) } The initialization statement follows normal variable declaration and initialization rules. The scope of the initialized variables is bound to the if statement block beyond which they become unreachable. This is a commonly used idiom in Go and is supported in other flow control constructs covered in this article. Switch Statements Go also supports a switch statement similarly to that found in other languages such as C or Java. The switch statement in Go achieves multi-way branching by evaluating values or expressions from case clauses as shown in the following abbreviated source code: import "fmt" type Curr struct { Currency string Name string Country string Number int } var currencies = []Curr{ Curr{"DZD", "Algerian Dinar", "Algeria", 12}, Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"EUR", "Euro", "Belgium", 978}, Curr{"CLP", "Chilean Peso", "Chile", 152}, Curr{"EUR", "Euro", "Greece", 978}, Curr{"HTG", "Gourde", "Haiti", 332}, ... 
} func isDollar(curr Curr) bool { var result bool switch curr { default: result = false case Curr{"AUD", "Australian Dollar", "Australia", 36}: result = true case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: result = true case Curr{"USD", "US Dollar", "United States", 840}: result = true } return result } func isDollar2(curr Curr) bool { dollars := []Curr{currencies[2], currencies[6], currencies[9]} switch curr { default: return false case dollars[0]: fallthrough case dollars[1]: fallthrough case dollars[2]: return true } return false } func isEuro(curr Curr) bool { switch curr { case currencies[2], currencies[4], currencies[10]: return true default: return false } } func main() { curr := Curr{"EUR", "Euro", "Italy", 978} if isDollar(curr) { fmt.Printf("%+v is Dollar currency\n", curr) } else if isEuro(curr) { fmt.Printf("%+v is Euro currency\n", curr) } else { fmt.Println("Currency is not Dollar or Euro") } dol := Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344} if isDollar2(dol) { fmt.Println("Dollar currency found:", dol) } } The switch statement in Go has some interesting properties and rules that make it easy to use and reason about:

- Semantically, Go's switch statement can be used in two contexts: an expression switch statement and a type switch statement.
- The break statement can be used to escape out of a switch code block early.
- The switch statement can include a default case when no other case expression evaluates to a match. There can only be one default case, and it may be placed anywhere within the switch block.

Using Expression Switches

Expression switches are flexible and can be used in many contexts where the control flow of a program needs to follow multiple paths. An expression switch supports many attributes, as outlined in the following paragraphs. Expression switches can test values of any type. For instance, the following code snippet (from the previous program listing) tests values of struct type Curr.
func isDollar(curr Curr) bool { var result bool switch curr { default: result = false case Curr{"AUD", "Australian Dollar", "Australia", 36}: result = true case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: result = true case Curr{"USD", "US Dollar", "United States", 840}: result = true } return result } The expressions in case clauses are evaluated from left to right, top to bottom, until a value (or expression) is found that is equal to that of the switch expression. Upon encountering the first case that matches the switch expression, the program will execute the statements for the case block and then immediately exit the switch block. Unlike other languages, the Go case statement does not need a break to avoid falling through to the next case. For instance, calling isDollar(Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}) will match the second case statement in the function above. The code will set result to true and exit the switch code block immediately. Case clauses can have multiple values (or expressions) separated by commas, with a logical OR operator implied between them. For instance, in the following snippet, the switch expression curr is tested against the values currencies[2], currencies[4], or currencies[10] using one case clause until a match is found. func isEuro(curr Curr) bool { switch curr { case currencies[2], currencies[4], currencies[10]: return true default: return false } } The switch statement is the cleaner and preferred idiomatic approach to writing complex conditional statements in Go. This is evident when the snippet above is compared to the following, which does the same comparison using if statements. func isEuro(curr Curr) bool { if curr == currencies[2] || curr == currencies[4] || curr == currencies[10] { return true } else { return false } }

Fallthrough Cases

There is no automatic fall through in Go's case clauses as there is in the C or Java switch statements.
Recall that a switch block will exit after executing its first matching case. The code must explicitly place the fallthrough keyword, as the last statement in a case block, to force the execution flow to fall through to the successive case block. The following code snippet shows a switch statement with a fallthrough in each case block. func isDollar2(curr Curr) bool { switch curr { case Curr{"AUD", "Australian Dollar", "Australia", 36}: fallthrough case Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}: fallthrough case Curr{"USD", "US Dollar", "United States", 840}: return true default: return false } } When a case is matched, the fallthrough statements cascade down to the first statement of the successive case block. So if curr = Curr{"AUD", "Australian Dollar", "Australia", 36}, the first case will be matched. Then the flow cascades down to the first statement of the second case block, which is also a fallthrough statement. This causes the first statement, return true, of the third case block to execute. This is functionally equivalent to the following snippet. switch curr { case Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"HKD", "Hong Kong Dollar", "Hong Kong", 344}, Curr{"USD", "US Dollar", "United States", 840}: return true default: return false }

Expressionless Switches

Go supports a form of the switch statement that does not specify an expression. In this format, each case expression must evaluate to the Boolean value true. The following abbreviated source code illustrates the use of an expressionless switch statement, as listed in function find().
The function loops through the slice of Curr values to search for a match based on field values in the struct passed in: import ( "fmt" "strings" ) type Curr struct { Currency string Name string Country string Number int } var currencies = []Curr{ Curr{"DZD", "Algerian Dinar", "Algeria", 12}, Curr{"AUD", "Australian Dollar", "Australia", 36}, Curr{"EUR", "Euro", "Belgium", 978}, Curr{"CLP", "Chilean Peso", "Chile", 152}, ... } func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] switch { case strings.Contains(c.Currency, name), strings.Contains(c.Name, name), strings.Contains(c.Country, name): fmt.Println("Found", c) } } } Notice in the previous example, the switch statement in function find() does not include an expression. Each case expression is separated by a comma and must be evaluated to a Boolean value with an implied OR operator between each case. The previous switch statement is equivalent to the following use of if statement to achieve the same logic. func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] if strings.Contains(c.Currency, name) || strings.Contains(c.Name, name) || strings.Contains(c.Country, name){ fmt.Println("Found", c) } } } Switch Initializer The switch keyword may be immediately followed by a simple initialization statement where variables, local to the switch code block, may be declared and initialized. This convenient syntax uses a semicolon between the initializer statement and the switch expression to declare variables which may appear anywhere in the switch code block. The following code sample shows how this is done by initializing two variables name and curr as part of the switch declaration. func assertEuro(c Curr) bool { switch name, curr := "Euro", "EUR"; { case c.Name == name: return true case c.Currency == curr: return true } return false } The previous code snippet uses an expressionless switch statement with an initializer. 
Notice the trailing semicolon indicating the separation between the initialization statement and the expression area of the switch. In the example, however, the switch expression is empty.

Type Switches

Given Go's strong type support, it should be of little surprise that the language supports the ability to query type information. The type switch is a statement that uses the Go interface type to compare the underlying type information of values (or expressions). A full discussion of interface types and type assertion is beyond the scope of this section. For now, all you need to know is that Go offers the type interface{}, or empty interface, as a super type that is implemented by all other types in the type system. When a value is assigned type interface{}, it can be queried using the type switch, as shown in function findAny() in the following code snippet, to query information about its underlying type. func find(name string) { for i := 0; i < 10; i++ { c := currencies[i] switch { case strings.Contains(c.Currency, name), strings.Contains(c.Name, name), strings.Contains(c.Country, name): fmt.Println("Found", c) } } } func findNumber(num int) { for _, curr := range currencies { if curr.Number == num { fmt.Println("Found", curr) } } } func findAny(val interface{}) { switch i := val.(type) { case int: findNumber(i) case string: find(i) default: fmt.Printf("Unable to search with type %T\n", val) } } func main() { findAny("Peso") findAny(404) findAny(978) findAny(false) } The function findAny() takes an interface{} as its parameter. The type switch is used to determine the underlying type and value of the variable val using the type assertion expression: switch i := val.(type) Notice the use of the keyword type in the type assertion expression. Each case clause will be tested against the type information queried from val.(type). Variable i will be assigned the actual value of the underlying type and is used to invoke a function with the respective value.
The default block is invoked to guard against any unexpected type assigned to the parameter val. The function findAny may then be invoked with values of diverse types, as shown in the following code snippet:

findAny("Peso")
findAny(404)
findAny(978)
findAny(false)

Summary

This article gave a walkthrough of the mechanisms of control flow in Go, including the if and switch statements. While Go's flow control constructs appear simple and easy to use, they are powerful and implement all the branching primitives expected of a modern language.

Resources for Article:

Further resources on this subject:

Game Development Using C++ [Article]
Boost.Asio C++ Network Programming [Article]
Introducing the Boost C++ Libraries [Article]

The Third Dimension

Packt
10 Aug 2016
13 min read
In this article by Sebastián Di Giuseppe, author of the book Building a 3D Game with LibGDX, we describe how to work in three dimensions! This requires new camera techniques (the third dimension adds a new axis to the familiar x and y grid), a slightly different workflow, and, lastly, new render methods to draw our game. We'll learn the very basics of this workflow in this article so that you have a sense of what's coming (moving, scaling, materials, the environment, and more), and we will move through these topics systematically, one step at a time. (For more resources related to this topic, see here.) The following topics will be covered in this article:

Camera techniques
Workflow
LibGDX's 3D rendering API
Math

Camera techniques

The goal of this article is to successfully learn about working with 3D, as stated. In order to achieve this, we will start at the basics by making a simple first person camera. We will use the functions and math that LibGDX contains. Since you have probably used LibGDX more than once, you should be familiar with the concepts of the camera in 2D. The way 3D works is more or less the same, except that there is now a z axis for depth. However, instead of an OrthographicCamera class, a PerspectiveCamera class is used to set up the 3D environment. Creating a 3D camera is just as easy as creating a 2D camera. The constructor of the PerspectiveCamera class requires three arguments: the field of vision, the camera width, and the camera height. The camera width and height are known from 2D cameras; the field of vision is new. Initialization of a PerspectiveCamera class looks like this:

float FoV = 67;
PerspectiveCamera camera = new PerspectiveCamera(FoV, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

The first argument, the field of vision, describes the angle the first person camera can see. The image above gives a good idea of what the field of view is. For first person shooters, values up to 100 are used.
Values higher than 100 confuse the player, and with a lower field of vision the player is bound to see less.

Displaying a texture

We will start by doing something exciting: drawing a cube on the screen!

Drawing a cube

First things first! Let's create a camera. Earlier, we showed the difference between the 2D camera and the 3D camera, so let's put this to use. Start by creating a new class in your main package (ours is com.deeep.spaceglad) and name it as you like. The following imports are used in our test:

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.VertexAttributes;
import com.badlogic.gdx.graphics.g3d.*;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.environment.DirectionalLight;
import com.badlogic.gdx.graphics.g3d.utils.ModelBuilder;

Create a class member called cam of type PerspectiveCamera:

public PerspectiveCamera cam;

Now this camera needs to be initialized and configured. This is done in the create method, as shown below:

public void create() {
    cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    cam.position.set(10f, 10f, 10f);
    cam.lookAt(0, 0, 0);
    cam.near = 1f;
    cam.far = 300f;
    cam.update();
}

In the above code snippet, we set the position of the camera and look towards a point at 0, 0, 0. Next up is getting a cube ready to draw. In 2D it was possible to draw textures, but textures are flat; in 3D, models are used. Later on we will import such models, but we will start with generated ones. LibGDX offers a convenient class to build simple models such as spheres, cubes, cylinders, and many more. Let's add two more class members, a Model and a ModelInstance.
The Model class contains all the information on what to draw and the resources that go along with it. The ModelInstance class has information on the whereabouts of the model, such as its location, rotation, and scale.

public Model model;
public ModelInstance instance;

Add those class members. We use the overridden create function to initialize them:

public void create() {
    ...
    ModelBuilder modelBuilder = new ModelBuilder();
    Material mat = new Material(ColorAttribute.createDiffuse(Color.BLUE));
    model = modelBuilder.createBox(5, 5, 5, mat, VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
    instance = new ModelInstance(model);
}

We use a ModelBuilder class to create a box. The box will need a material: a color. A material is an object that holds different attributes; you can add as many as you would like. The attributes passed to the material change the way models are perceived and shown on the screen. We could, for example, add FloatAttribute.createShininess(8f) after the ColorAttribute class, which will make the box shine when there are lights around. More complex configurations are possible, but we will leave those out of scope for now. With the ModelBuilder class, we create a box of (5, 5, 5). We pass the material in the constructor, and the fifth argument holds the attributes for the specific box we are creating. We use a bitwise operator to combine a position attribute and a normal attribute. We tell the model that it has a position, because every cube needs a position, and the normal is to make sure the lighting works and the cube is drawn as we want it to be drawn. These attributes are passed down to OpenGL, on which LibGDX is built. Now we are almost ready to draw our first cube. Two things are missing. First of all, a batch to draw to: when designing 2D games in LibGDX, a SpriteBatch class is used; however, since we are not using sprites anymore, but rather models, we will use a ModelBatch class.
This is the equivalent for models. And lastly, we will have to create an environment and add lights to it. For that we will need two more class members:

public ModelBatch modelBatch;
public Environment environment;

They are to be initialized, just like the other class members:

public void create() {
    ...
    modelBatch = new ModelBatch();
    environment = new Environment();
    environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
    environment.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));
}

Here we add two lights: an ambient light, which lights up everything that is being drawn (a general light source for the whole environment), and a directional light, which has a direction (most similar to a "sun" type of source). In general, you can experiment with light directions, colors, and different types. Another type of light is PointLight, which can be compared to a flashlight. Both lights start with three arguments for the color, which won't make a difference yet, as we don't have any textures. The directional light's constructor is then followed by a direction, which can be seen as a vector. Now we are all set to draw our environment and the model in it:

@Override
public void render() {
    Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(cam);
    modelBatch.render(instance, environment);
    modelBatch.end();
}

This directly renders our cube. The ModelBatch class behaves just like a SpriteBatch, as can be seen if we run it: it has to be started (begin), then asked to render with the given parameters (models and the environment in our case), and then stopped. We should not forget to release any resources that our game allocates. The model we created allocates memory that should be disposed of:

@Override
public void dispose() {
    model.dispose();
}

Now we can look at our beautiful cube!
It's just very static and empty. We will add some movement to it in our next subsection!

Translation

Translating, rotating, and scaling are a bit different from their 2D counterparts; it's slightly more mathematical. The easier part is vectors: instead of a Vector2, we can now use a Vector3, which is essentially the same, except that it adds another dimension. Let's look at some basic operations on 3D models, using the cube that we previously created. With translation, we are able to move the model along all three axes. Let's create a function that moves our cube along the x axis. For now, we add a member variable of the Vector3 class to our class to store the position in:

Vector3 position = new Vector3();

private void movement() {
    instance.transform.getTranslation(position);
    position.x += Gdx.graphics.getDeltaTime();
    instance.transform.setTranslation(position);
}

The above code snippet retrieves the translation, adds the delta time to the x attribute of the translation, and then sets the translation of the ModelInstance. The 3D library returns the translation a little differently than usual: we pass a vector, and that vector gets adjusted to the current state of the object. We have to call this function every time the game updates, so we put it in our render loop before we start drawing:

@Override
public void render() {
    movement();
    ...
}

It might seem like the cube is moving diagonally, but that's because of the angle of our camera; in fact, it's moving towards one face of the cube. That was easy! It's only slightly annoying that it moves out of bounds after a short while. Therefore, we will change the movement function to contain some user input handling.
private void movement() {
    instance.transform.getTranslation(position);
    if (Gdx.input.isKeyPressed(Input.Keys.W)) {
        position.x += Gdx.graphics.getDeltaTime();
    }
    if (Gdx.input.isKeyPressed(Input.Keys.D)) {
        position.z += Gdx.graphics.getDeltaTime();
    }
    if (Gdx.input.isKeyPressed(Input.Keys.A)) {
        position.z -= Gdx.graphics.getDeltaTime();
    }
    if (Gdx.input.isKeyPressed(Input.Keys.S)) {
        position.x -= Gdx.graphics.getDeltaTime();
    }
    instance.transform.setTranslation(position);
}

The rewritten movement function retrieves our position, updates it based on the keys that are pressed, and sets the translation of our model instance.

Rotation

Rotation is slightly different from 2D, since there are now multiple axes on which we can rotate, namely the x, y, and z axes. We will now create a function to showcase the rotation of the model. First off, let us create a function in which we can rotate an object on all axes:

private void rotate() {
    if (Gdx.input.isKeyPressed(Input.Keys.NUM_1))
        instance.transform.rotate(Vector3.X, Gdx.graphics.getDeltaTime() * 100);
    if (Gdx.input.isKeyPressed(Input.Keys.NUM_2))
        instance.transform.rotate(Vector3.Y, Gdx.graphics.getDeltaTime() * 100);
    if (Gdx.input.isKeyPressed(Input.Keys.NUM_3))
        instance.transform.rotate(Vector3.Z, Gdx.graphics.getDeltaTime() * 100);
}

And let's not forget to call this function from the render loop, after we call the movement function:

@Override
public void render() {
    ...
    rotate();
}

If we press the number keys 1, 2, or 3, we can rotate our model. The first argument of the rotate function is the axis to rotate on, and the second argument is the amount to rotate. These calls add to the current rotation.
We can also set the value of an axis, instead of adding a rotation, with the following function:

instance.transform.setToRotation(Vector3.Z, Gdx.graphics.getDeltaTime() * 100);

However, say we want to set all three axis rotations at the same time; we can't simply call the setToRotation function three times in a row, once for each axis, as each call eliminates any rotation done before it. Luckily, LibGDX has us covered with a function that is able to take all three axes:

float rotation;

private void rotate() {
    rotation = (rotation + Gdx.graphics.getDeltaTime() * 100) % 360;
    instance.transform.setFromEulerAngles(0, 0, rotation);
}

The above function will continuously rotate our cube. We face one last problem: we can't seem to move the cube! The setFromEulerAngles function clears all the translation and rotation properties. Lucky for us, setFromEulerAngles returns a Matrix4 type, so we can chain another function call from it, for example, a function that translates the matrix. For that we use the trn(x, y, z) function, short for translate. Now we can update our rotation function so that it also translates:

instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z);

Now we can set our cube to a rotation and translate it! These are the most basic operations, which we will use a lot throughout the book. As you can see, this call does both the rotation and the translation, so we can remove the last line in our movement function:

instance.transform.setTranslation(position);

Our latest rotate function looks like the following:

private void rotate() {
    rotation = (rotation + Gdx.graphics.getDeltaTime() * 100) % 360;
    instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z);
}

The setFromEulerAngles call will be extracted to a function of its own, as it serves multiple purposes now and is not solely bound to our rotate function.
private void updateTransformation() {
    instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z);
}

This function should be called after we've calculated our rotation and translation:

public void render() {
    rotate();
    movement();
    updateTransformation();
    ...
}

Scaling

We've now seen almost all of the transformations we can apply to models. The last one described in this book is the scaling of a model. LibGDX luckily contains all the required functions and methods for this. Let's extend our previous example and make our box grow and shrink over time. We first create a function that adds to and subtracts from a scale variable:

boolean increment;
float scale = 1;

void scale() {
    if (increment) {
        scale = scale + Gdx.graphics.getDeltaTime() / 5;
        if (scale >= 1.5f) {
            increment = false;
        }
    } else {
        scale = scale - Gdx.graphics.getDeltaTime() / 5;
        if (scale <= 0.5f) {
            increment = true;
        }
    }
}

Now, to apply this scaling, we can adjust our updateTransformation function to include it:

private void updateTransformation() {
    instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z).scale(scale, scale, scale);
}

Our render method should now include the scaling function as well:

public void render() {
    rotate();
    movement();
    scale();
    updateTransformation();
    ...
}

And there you go: we can now successfully move, rotate, and scale our cube!

Summary

In this article we learned about the workflow of the LibGDX 3D API. We are now able to apply multiple kinds of transformations to a model and understand the differences between 2D and 3D. We also learned how to apply materials to models, which changes the appearance of a model and lets us create cool effects. Note that there's plenty more to learn about 3D, and a lot of practice to go with it to fully understand it. There are also subjects not covered here, such as how to create your own materials and how to make use of shaders.
There's plenty of room for learning and experimenting. In the next article we will start applying the theory learned in this article and work towards an actual game! We will also go more in depth on the environment and lights, as well as collision detection. So there is plenty to look forward to.

Resources for Article:

Further resources on this subject:

3D Websites [Article]
Your 3D World [Article]
Using 3D Objects [Article]

Consistency Conflicts

Packt
10 Aug 2016
11 min read
In this article by Robert Strickland, author of the book Cassandra 3.x High Availability - Second Edition, we will discuss how, for any given call, it is possible to achieve either strong consistency or eventual consistency. In the former case, we can know for certain that the copy of the data that Cassandra returns will be the latest. In the case of eventual consistency, the data returned may or may not be the latest, or there may be no data returned at all if the node is unaware of newly inserted data. Under eventual consistency, it is also possible to see deleted data if the node you're reading from has not yet received the delete request.

(For more resources related to this topic, see here.)

Depending on the read_repair_chance setting and the consistency level chosen for the read operation, Cassandra might block the client and resolve the conflict immediately, or this might occur asynchronously. If data in conflict is never requested, the system will resolve the conflict the next time nodetool repair is run. How does Cassandra know there is a conflict? Every column has three parts: key, value, and timestamp. Cassandra follows last-write-wins semantics, which means that the column with the latest timestamp always takes precedence. Now, let's discuss one of the most important knobs a developer can turn to determine the consistency characteristics of their reads and writes.

Consistency levels

On every read and write operation, the caller must specify a consistency level, which lets Cassandra know what level of consistency to guarantee for that one call. The following list details the various consistency levels and their effects on both read and write operations:

ANY
  Reads: Not supported for reads.
  Writes: Data must be written to at least one node, but writes via hinted handoff are permitted. This effectively allows a write to any node, even if all nodes containing the replica are down. A subsequent read might be impossible if all replica nodes are down.

ONE
  Reads: The replica from the closest node will be returned.
  Writes: Data must be written to at least one replica node (both commit log and memtable). Unlike ANY, hinted handoff writes are not sufficient.

TWO
  Reads: The replicas from the two closest nodes will be returned.
  Writes: The same as ONE, except two replicas must be written.

THREE
  Reads: The replicas from the three closest nodes will be returned.
  Writes: The same as ONE, except three replicas must be written.

QUORUM
  Reads: Replicas from a quorum of nodes will be compared, and the replica with the latest timestamp will be returned.
  Writes: Data must be written to a quorum of replica nodes (both commit log and memtable) in the entire cluster, including all data centers.

SERIAL
  Reads: Permits reading uncommitted data as long as it represents the current state. Any uncommitted transactions will be committed as part of the read.
  Writes: Similar to QUORUM, except that writes are conditional, based on support for lightweight transactions.

LOCAL_ONE
  Reads: Similar to ONE, except that the read will be returned by the closest replica in the local data center.
  Writes: Similar to ONE, except that the write must be acknowledged by at least one node in the local data center.

LOCAL_QUORUM
  Reads: Similar to QUORUM, except that only replicas in the local data center are compared.
  Writes: Similar to QUORUM, except that the quorum must only be met using the local data center.

LOCAL_SERIAL
  Reads: Similar to SERIAL, except only local replicas are used.
  Writes: Similar to SERIAL, except only writes to local replicas must be acknowledged.

EACH_QUORUM
  Reads: The opposite of LOCAL_QUORUM; requires each data center to produce a quorum of replicas, then returns the replica with the latest timestamp.
  Writes: The opposite of LOCAL_QUORUM; requires a quorum of replicas to be written in each data center.

ALL
  Reads: Replicas from all nodes in the entire cluster (including all data centers) will be compared, and the replica with the latest timestamp will be returned.
  Writes: Data must be written to all replica nodes (both commit log and memtable) in the entire cluster, including all data centers.

As you can see, there are numerous combinations of read and write consistency levels, all with different ultimate consistency guarantees. To illustrate this point, let's assume that you would like to guarantee absolute consistency for all read operations. On the surface, it might seem as if you would have to read with a consistency level of ALL, thus sacrificing availability in the case of node failure. But there are alternatives, depending on your use case. There are actually two additional ways to achieve strong read consistency:

Write with a consistency level of ALL: This has the advantage of allowing the read operation to be performed using ONE, which lowers the latency for that operation. On the other hand, it means the write operation will result in UnavailableException if one of the replica nodes goes offline.

Read and write with QUORUM or LOCAL_QUORUM: Since QUORUM and LOCAL_QUORUM both require a majority of nodes, using this level for both the write and the read will result in a full consistency guarantee (in the same data center when using LOCAL_QUORUM), while still maintaining availability during a node failure.

You should carefully consider each use case to determine what guarantees you actually require. For example, there might be cases where a lost write is acceptable, or occasions where a read need not be absolutely current. At times, it might be sufficient to write with a level of QUORUM, then read with ONE to achieve maximum read performance, knowing you might occasionally and temporarily return stale data. Cassandra gives you this flexibility, but it's up to you to determine how to best employ it for your specific data requirements. A good rule of thumb to attain strong consistency is that the read consistency level plus the write consistency level should be greater than the replication factor.
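This rule of thumb is easy to sanity-check mechanically. The following sketch is illustrative Go, not Cassandra code (the function names are ours); it encodes the standard quorum size, floor(RF/2) + 1, and the overlap condition R + W > RF that guarantees a read touches at least one replica the write reached:

```go
package main

import "fmt"

// quorum returns the number of replicas a QUORUM read or write
// must touch for a given replication factor: floor(rf/2) + 1.
func quorum(rf int) int {
	return rf/2 + 1
}

// stronglyConsistent reports whether a read touching r replicas
// combined with a write touching w replicas is guaranteed to
// overlap (and therefore be strongly consistent) at replication
// factor rf.
func stronglyConsistent(r, w, rf int) bool {
	return r+w > rf
}

func main() {
	rf := 3
	q := quorum(rf)
	fmt.Println("QUORUM at RF=3 touches", q, "replicas")
	fmt.Println("QUORUM reads + QUORUM writes strong?", stronglyConsistent(q, q, rf)) // 2+2 > 3
	fmt.Println("ONE reads + ONE writes strong?", stronglyConsistent(1, 1, rf))       // 1+1 <= 3
	fmt.Println("ONE reads + ALL writes strong?", stronglyConsistent(1, rf, rf))      // 1+3 > 3
}
```

The last case mirrors the "write with ALL, read with ONE" alternative described above: pushing the cost onto writes buys cheap, strongly consistent reads.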
If you are unsure about which consistency levels to use for your specific use case, it's typically safe to start with LOCAL_QUORUM (or QUORUM for a single data center) reads and writes. This configuration offers strong consistency guarantees and good performance while allowing for the inevitable replica failure. It is important to understand that even if you choose levels that provide less stringent consistency guarantees, Cassandra will still perform anti-entropy operations asynchronously in an attempt to keep replicas up to date. Repairing data Cassandra employs a multifaceted anti-entropy mechanism that keeps replicas in sync. Data repair operations generally fall into three categories: Synchronous read repair: When a read operation requires comparing multiple replicas, Cassandra will initially request a checksum from the other nodes. If the checksum doesn't match, the full replica is sent and compared with the local version. The replica with the latest timestamp will be returned and the old replica will be updated. This means that in normal operations, old data is repaired when it is requested. Asynchronous read repair: Each table in Cassandra has a setting called read_repair_chance (as well as its related setting, dclocal_read_repair_chance), which determines how the system treats replicas that are not compared during a read. The default setting of 0.1 means that 10 percent of the time, Cassandra will also repair the remaining replicas during read operations. Manually running repair: A full repair (using nodetool repair) should be run regularly to clean up any data that has been missed as part of the previous two operations. At a minimum, it should be run once every gc_grace_seconds, which is set in the table schema and defaults to 10 days. One might ask what the consequence would be of failing to run a repair operation within the window specified by gc_grace_seconds. The answer relates to Cassandra's mechanism to handle deletes. 
As you might be aware, all modifications (or mutations) are immutable, so a delete is really just a marker telling the system not to return that record to any clients. This marker is called a tombstone. Cassandra performs garbage collection on data marked by a tombstone each time a compaction occurs. If you don't run the repair, you risk deleted data reappearing unexpectedly. In general, deletes should be avoided when possible as the unfettered buildup of tombstones can cause significant issues. In the course of normal operations, Cassandra will repair old replicas when their records are requested. Thus, it can be said that read repair operations are lazy, such that they only occur when required. With all these options for replication and consistency, it can seem daunting to choose the right combination for a given use case. Let's take a closer look at this balance to help bring some additional clarity to the topic. Balancing the replication factor with consistency There are many considerations when choosing a replication factor, including availability, performance, and consistency. Since our topic is high availability, let's presume your desire is to maintain data availability in the case of node failure. It's important to understand exactly what your failure tolerance is, and this will likely be different depending on the nature of the data. The definition of failure is probably going to vary among use cases as well, as one case might consider data loss a failure, whereas another accepts data loss as long as all queries return. Achieving the desired availability, consistency, and performance targets requires coordinating your replication factor with your application's consistency level configurations. 
In order to assist you in your efforts to achieve this balance, let's consider a single data center cluster of 10 nodes and examine the impact of various configuration combinations (where RF corresponds to the replication factor):

RF 1, Write CL ONE/QUORUM/ALL, Read CL ONE/QUORUM/ALL
  Consistency: Consistent. Availability: Doesn't tolerate any replica loss.
  Use cases: Data can be lost and availability is not critical, such as analysis clusters.

RF 2, Write CL ONE, Read CL ONE
  Consistency: Eventual. Availability: Tolerates loss of one replica.
  Use cases: Maximum read performance and low write latencies are required, and sometimes returning stale data is acceptable.

RF 2, Write CL QUORUM/ALL, Read CL ONE
  Consistency: Consistent. Availability: Tolerates loss of one replica on reads, but none on writes.
  Use cases: Read-heavy workloads where some downtime for data ingest is acceptable (improves read latencies).

RF 2, Write CL ONE, Read CL QUORUM/ALL
  Consistency: Consistent. Availability: Tolerates loss of one replica on writes, but none on reads.
  Use cases: Write-heavy workloads where read consistency is more important than availability.

RF 3, Write CL ONE, Read CL ONE
  Consistency: Eventual. Availability: Tolerates loss of two replicas.
  Use cases: Maximum read and write performance are required, and sometimes returning stale data is acceptable.

RF 3, Write CL QUORUM, Read CL ONE
  Consistency: Eventual. Availability: Tolerates loss of one replica on writes and two on reads.
  Use cases: Read throughput and availability are paramount, while write performance is less important, and sometimes returning stale data is acceptable.

RF 3, Write CL ONE, Read CL QUORUM
  Consistency: Eventual. Availability: Tolerates loss of two replicas on writes and one on reads.
  Use cases: Low write latencies and availability are paramount, while read performance is less important, and sometimes returning stale data is acceptable.

RF 3, Write CL QUORUM, Read CL QUORUM
  Consistency: Consistent. Availability: Tolerates loss of one replica.
  Use cases: Consistency is paramount, while striking a balance between availability and read/write performance.

RF 3, Write CL ALL, Read CL ONE
  Consistency: Consistent. Availability: Tolerates loss of two replicas on reads, but none on writes.
  Use cases: Additional fault tolerance and consistency on reads are paramount, at the expense of write performance and availability.

RF 3, Write CL ONE, Read CL ALL
  Consistency: Consistent. Availability: Tolerates loss of two replicas on writes, but none on reads.
  Use cases: Low write latencies and availability are paramount, but read consistency must be guaranteed at the expense of performance and availability.

RF 3, Write CL ANY, Read CL ONE
  Consistency: Eventual. Availability: Tolerates loss of all replicas on writes and two on reads.
  Use cases: Maximum write and read performance and availability are paramount, and often returning stale data is acceptable (note that hinted writes are less reliable than the guarantees offered at CL ONE).

RF 3, Write CL ANY, Read CL QUORUM
  Consistency: Eventual. Availability: Tolerates loss of all replicas on writes and one on reads.
  Use cases: Maximum write performance and availability are paramount, and sometimes returning stale data is acceptable.

RF 3, Write CL ANY, Read CL ALL
  Consistency: Consistent. Availability: Tolerates loss of all replicas on writes, but none on reads.
  Use cases: Write throughput and availability are paramount, and clients must all see the same data, even though they might not see all writes immediately.

There are also two additional consistency levels, SERIAL and LOCAL_SERIAL, which can be used to read the latest value, even if it is part of an uncommitted transaction. Otherwise, they follow the semantics of QUORUM and LOCAL_QUORUM, respectively. As you can see, there are numerous possibilities to consider when choosing these values, especially in a scenario involving multiple data centers. This discussion should give you greater confidence as you design your applications to achieve the desired balance.

Summary

In this article, we introduced the foundational concept of consistency. In our discussion, we outlined the importance of the relationship between the replication factor and the consistency level, and their impact on performance, data consistency, and availability.

Resources for Article:

Further resources on this subject:

Cassandra Design Patterns [Article]
Cassandra Architecture [Article]
About Cassandra [Article]

Introduction to SOA Testing

Packt
09 Aug 2016
13 min read
In this article by Pranai Nandan, the author of Mastering SoapUI, we will see how the increasing adoption of service-oriented architecture (SOA) across applications brings various technological and business advantages to the organizations implementing it. But, as it's said, there are two sides to every coin. With SOA architecture came advantages like:

Reusability
Better scalability
Platform independence
Business agility
Enhanced security

But there are also disadvantages like:

Increased response time
High service management effort
High implementation cost

(For more resources related to this topic, see here.)

In this article we will study the following topics:

Introduction to SOA
SoapUI architecture
Test levels in SOA testing
SOA testing approach
Introduction to functional, performance, and security testing using SoapUI

Is SOA really advantageous?

Well, let's talk about a few of the advantages of SOA architecture:

Reusability: If we want to reuse the same piece of functionality exposed via a web service, we should be absolutely sure that the functionality of the service works as expected, that the security of the service is reliable, and that it has no performance bottlenecks.

Business agility: With more functional changes being easily adopted in a web service, we make the web service prone to functional bugs.

Enhanced security: Web services are usually wrapped around systems that are protected by several layers of security, such as SSL and the use of security tokens. These layers typically keep the technical services behind the business layer from being directly exposed. If the security of these layers is removed, the web service is highly vulnerable. Also, the use of XML as a communication protocol opens the service to XML-based attacks.

So, to mitigate these risks we have SOA testing, and to help you test SOA architecture there are multiple testing tools on the market, for example, SoapUI, SoapUI Pro, HP Service Test, ITKO LISA, and SOA Parasoft.
But the most widely used open source tool in the SOA testing arena is SoapUI. Following is a comparative analysis of the most popular tools in the web service testing and test automation arena:

| S.No | Factors | SoapUI | SoapUI Pro | ITKO LISA | SOA Parasoft |
| --- | --- | --- | --- | --- | --- |
| 1 | Cost | Open source | $400/license | Highly costly | Highly costly |
| 2 | Multilayer testing | Yes | Yes | Yes | Yes |
| 3 | Scripting support | Yes | Yes | Yes | Yes |
| 4 | Protocol support | Yes | Yes | Yes | Yes |
| 5 | CI support | Yes | Yes | Yes | Yes |
| 6 | Ease of use | 8/10 | 9/10 | 9/10 | 9/10 |
| 7 | Learning curve | 8/10 | 8/10 | 6/10 | 6/10 |

As the preceding comparison shows, ease of use, learning curve, and cost play a major role in the selection of a tool for any project. For ITKO LISA or SOA Parasoft there is very limited, or no, learning material available on the Internet; to get resources trained you need to go to the owners of these tools and pay extra, and then pay again if you need the training a second time. This gives SoapUI and SoapUI Pro an additional advantage as the first choice of test architects and test managers for their projects.

Now let's talk about the closely related siblings in this subset: SoapUI and SoapUI Pro both come from the same family, Eviware, which is now SmartBear. However, SoapUI Pro has enriched functionality and a GUI with additional features that help reduce testing time, justifying its cost compared to the SoapUI open source edition. Following is a quick comparison:

| Criteria | SoapUI | SoapUI Pro |
| --- | --- | --- |
| Reporting | Very limited, no rich reporting | Reports available in different formats |
| XPath Builder | Not available | Available |
| Data source | Not available | Multiple options for data sources available |
| Data sink | Not available | Available |
| XQuery Builder | Not available | Available |

The additional functionality available in SoapUI Pro can be achieved in SoapUI using Groovy script.
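To illustrate what the Pro-style data source feature amounts to, here is a minimal sketch of data-driven test input (in Python rather than Groovy, purely to show the idea); the CSV content and field names are invented for this example:

```python
import csv
import io

# Toy stand-in for SoapUI Pro's "data source" feature: drive one test
# case from rows of external test data. The rows below are invented.
TEST_DATA = """username,orgID,MSISDN
TEST_Agent1,COM01,447830735969
TEST_Agent2,COM01,447830735970
"""

def build_request(row):
    # In a real suite this would render a SOAP request body; here we
    # just return the parameter mapping that would be substituted in.
    return {
        "username": row["username"],
        "orgID": row["orgID"],
        "MSISDN": row["MSISDN"],
    }

def run_data_driven(reader_text):
    # One "test iteration" per data row, as a data source loop would do.
    return [build_request(row) for row in csv.DictReader(io.StringIO(reader_text))]

if __name__ == "__main__":
    for req in run_data_driven(TEST_DATA):
        print(req["MSISDN"])
```

In Groovy inside SoapUI, the equivalent loop would read the file and push each row's values into test request properties.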
To sum up: everything offered as UI functionality in SoapUI Pro is achievable with a little effort in SoapUI, which finally makes SoapUI open source the preferred choice for tool pickers.

SoapUI architecture

Before we move on to the architecture, let's take a look at the capabilities of SoapUI and how we can use them for the benefit of our projects. SoapUI provides the following testing features to the test team:

Functional testing (manual)
Functional test automation
Performance testing
Security testing
Web service interoperability testing

Apart from these, SoapUI is also capable of integration with the following:

LoadUI for advanced performance testing
Selenium for multilayer testing
Jenkins for continuous integration
HP QC for end-to-end test automation management and execution

SoapUI has a comparatively simple architecture compared to other tools in the SOA testing world. The following image shows the architecture of SoapUI at an overview level. Let's talk about the architecture in detail:

Test config files: The files that power the tests; this includes test data, expected results, database connection variables, and any other environmental or test-specific details.
Third-party APIs: These help create an optimized test automation framework, for example the JExcel API to integrate with Microsoft Excel and build a data-driven framework.
Selenium: Selenium JARs to be used for UI automation.
SoapUI Runner: This is used to run a SoapUI project and is a very useful utility for test automation, as it allows you to run tests from the command line and acts as a trigger for test automation.
Groovy: This library enables SoapUI to offer Groovy as a scripting language to its users.
Properties: Test request properties hold any dynamically generated data. We also have test properties to configure SSL and other security settings for test requests.
Test report: SoapUI provides a JUnit-style report and uses the Jasper reporting utility for reporting test results.

Test architecture in detail

The SoapUI architecture is composed of many key components that provide SoapUI users with advanced functionality such as virtualization, XPath, invoking services with JMS endpoints, logging, and debugging. Let's discuss these key components in detail:

Jetty (service virtualization/mock services): We can create replicas of services in cases where the real service is not yet ready, or is too buggy to test against, but we still want to create our test cases in the meantime; for that we can use service virtualization or mocking. Jetty, a Java-based web server provided by Eclipse, is used for hosting these virtual services, and it works for both SOAP and REST.
Jasper: An open source reporting tool used to generate reports.
Saxon XSLT and XQuery processor: The Saxon platform provides the option to process service results using XPath and XQuery.
Log4j: Used for logging; it provides the SoapUI, error, HTTP, Jetty, and memory logs.
JDBC drivers: To interact with different databases we need the respective drivers.
Hermes JMS: Used in applications where a high volume of transactions takes place; it sends messages to, and receives results from, the JMS queue. We can incorporate Java JMS using Hermes JMS.
Scripting language: We can choose between Groovy and JavaScript; the language can be selected for each project at the project property level.
Monitoring: To check what is sent to the service and what is received back from it.
Runners: Execution can be run from the command line without using the SoapUI GUI, via TestRunner, LoadTestRunner, SecurityTestRunner, and MockServiceRunner; these can also be executed from build tools such as Jenkins.

Test approaches in SOA testing

Approaches to testing SOA architecture are usually based on the scope of the project and the requirements to test.
Let's look at an example. Following is a diagram of a three-tier architecture based on SOA:

Validation 1 (V1): Validation of the integration between the presentation layer and the services layer
Validation 2 (V2): Validation of the integration between the services layer and the service catalogue layer
Validation 3 (V3): Validation of the integration between the product catalogue layer and the database or backend layer

So we have three integration points, which tells us that, alongside functional, performance, and security testing, we also need integration testing. Let's sum up the types of testing required to test end-to-end greenfield projects:

Functional testing
Integration testing
Performance testing
Security testing
Automation testing

Functional testing

A web service may expose single or multiple functionalities via operations. Sometimes we need to test a business flow that requires calling multiple services in sequence; this is known as orchestration testing, in which we validate that a particular business flow meets the requirements. Let's see how to configure a SOAP service in SoapUI for functional testing:

1. Open SoapUI by clicking on the launch icon.
2. Click on File in the upper-left corner of the top navigation bar.
3. Click on New SOAP Project in the File menu.
4. Verify that a popup opens asking for the WSDL or WADL details. There are two ways to provide them: you can pass a URL to the web location of the WSDL, or you can pass a link to a WSDL downloaded on your local system.
5. Enter the project name and the WSDL location, which can be on your local machine or called from a URL, then click on OK.
6. Verify that the WSDL is successfully loaded in SoapUI with all its operations; you can now see the service loaded in the SoapUI workspace.

Now, the first step toward an organized test suite is to create a test suite and the relevant test cases.
To achieve this, click on the operation request. When you click on Add to TestCase, you are asked for a test suite name and then a test case name, and finally you will be presented with the following popup. Here you can create a TestCase and add validations to it at runtime. After clicking OK, you are ready to start your functional and integration testing.

Let's take an example of how to test a simple web service functionally.

Test case: Validate that Search Customer searches for the customer in the system database using an MSISDN (telecom service). Please note that the MSISDN is a unique identifier for a user to be searched for in the database, and is a mandatory parameter.

API to be tested: Search Customer. Request body:

<v11:SearchCustomerRequest>
   <v11:username>TEST_Agent1</v11:username>
   <v11:orgID>COM01</v11:orgID>
   <v11:MSISDN>447830735969</v11:MSISDN>
</v11:SearchCustomerRequest>

To test it, we pass the mandatory parameters and verify that the response contains the parameters we expect to be fetched. By this we validate whether searching for the customer using some search criteria is successful. Similarly, to test this service from a business point of view, we need to validate it with multiple scenarios. Following is a list of a few of them, considering it's the search customer service of a telecom application:

Verify that a prepay customer is successfully searched for using Search Customer
Verify that a post-pay customer is successfully searched for using Search Customer
Verify that agents are successfully searched for using Search Customer
Verify that the results retrieved in the response contain the right data
Verify that all the mandatory parameters are present in the response of the service

Here is how the response looks:

Response Search Customer <TBD>

Performance testing

So, is it really possible to perform performance testing in SoapUI? The answer is yes, if you just want to do a very simple test on the service itself, not on the orchestration.
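Outside the SoapUI GUI, the idea behind such a simple service-level load check can be sketched in a few lines of Python; the call_service function below is a stand-in for a real request to the service under test, and the timings it produces are simulated:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(payload):
    # Stand-in for a real HTTP POST to the service under test; replace
    # the sleep with an actual client call. The sleep simulates latency.
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def simple_load(total_requests=20, threads=5):
    # Rough analogue of a fixed-rate load strategy: a set number of
    # requests spread across a small thread pool, with latency stats.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        latencies = list(pool.map(call_service, range(total_requests)))
    return {
        "count": len(latencies),
        "avg_s": statistics.mean(latencies),
        "max_s": max(latencies),
    }

if __name__ == "__main__":
    print(simple_load())
```

SoapUI's built-in strategies do this kind of scheduling for you; the sketch only shows the shape of the measurement.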
SoapUI does have limitations when it comes to performance testing, but it does provide functionality to generate load on your web service with different strategies. To start with, once you have created your SoapUI project for a service operation, you can convert it into a simple load test. Here is how:

1. Right-click on the Load Test option.
2. Select a name for the load test; a relevant one will help you in future runs.
3. You will now see the load test popup appear, and the load test is created.

There are several strategies to generate load in SoapUI:

Simple
Burst
Thread
Variance

Security testing

APIs and web services are highly vulnerable to security attacks, and depending on the architecture of the web service and the nature of its use, we need to be absolutely sure about the security of the exposed web service. Some of the common attack types include:

Boundary attack
Cross-site scripting
XPath injection
SQL injection
Malformed XML
XML bomb
Malicious attachment

SoapUI's security testing functionality provides scans for every attack type, and also lets you try a custom attack on the service by writing a custom script. The scans provided by SoapUI are:

Boundary scan
Cross-site scripting scan
XPath injection scan
SQL injection scan
Malformed XML scan
XML bomb scan
Malicious attachment scan
Fuzzing scan
Custom script

Following are the steps to configure a security test in SoapUI:

1. You can see an option for a security test just below the load test option in SoapUI. To add a test, right-click on Security Test and select New Security Test.
2. Verify that a popup asking for the name of the security test opens.
3. Enter the name of the security test and click on OK. After that, you should see the security test configuration window open on the screen.
In the case of multiple operations in the same test case, you can configure multiple operations in a single security test as well. From this pane you can select and configure scans on your service operations:

1. To add a scan, click on the icon selected in the following screenshot.
2. After selecting the icon, choose the scan you want to run against your operation.
3. Configure your scan for the relevant parameter by setting the XPath of the parameter in the request.
4. Select the Assertions and Strategy tabs from the options below.

You are now ready to run your security test with a boundary scan.

Summary

We have now been introduced to the key features of SoapUI. By the end of this article, readers will be familiar with SOA and SOA testing, and will have a basic understanding of functional, load, and security testing in SOA using SoapUI.

Resources for Article:

Further resources on this subject:

Methodology for Modeling Business Processes in SOA [article]
Additional SOA Patterns – Supporting Composition Controllers [article]
Web Services Testing and soapUI [article]
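As a footnote to the Search Customer example earlier in this article, the request body can also be assembled programmatically. This is a hedged sketch using only Python's standard library; the service namespace URI is an invented placeholder (a real test would take it from the WSDL loaded into SoapUI):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Invented placeholder namespace; the real one comes from the WSDL.
SVC_NS = "http://example.com/customer/v11"

def build_search_customer(username, org_id, msisdn):
    # Wrap the SearchCustomerRequest parameters in a SOAP 1.1 envelope.
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, f"{{{SVC_NS}}}SearchCustomerRequest")
    for tag, value in (("username", username),
                       ("orgID", org_id),
                       ("MSISDN", msisdn)):
        ET.SubElement(req, f"{{{SVC_NS}}}{tag}").text = value
    return ET.tostring(envelope, encoding="unicode")

if __name__ == "__main__":
    print(build_search_customer("TEST_Agent1", "COM01", "447830735969"))
```

A real client would POST this string to the service endpoint with the appropriate SOAPAction header; here we only build and inspect it.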
Packt
09 Aug 2016
15 min read

Expanding Your Data Mining Toolbox

In this article by Megan Squire, author of Mastering Data Mining with Python, when faced with sensory information, human beings naturally want to find patterns to explain, differentiate, categorize, and predict. This process of looking for patterns all around us is a fundamental human activity, and the human brain is quite good at it. With this skill, our ancient ancestors became better at hunting, gathering, cooking, and organizing. It is no wonder that pattern recognition and pattern prediction were some of the first tasks humans set out to computerize, and this desire continues in earnest today. Depending on the goals of a given project, finding patterns in data using computers nowadays involves database systems, artificial intelligence, statistics, information retrieval, computer vision, and any number of other various subfields of computer science, information systems, mathematics, or business, just to name a few. No matter what we call this activity – knowledge discovery in databases, data mining, data science – its primary mission is always to find interesting patterns.

(For more resources related to this topic, see here.)

Despite this humble-sounding mission, data mining has existed for long enough and has built up enough variation in how it is implemented that it has now become a large and complicated field to master. We can think of a cooking school, where every beginner chef is first taught how to boil water and how to use a knife before moving to more advanced skills, such as making puff pastry or deboning a raw chicken. In data mining, we also have common techniques that even the newest data miners will learn: how to build a classifier and how to find clusters in data. The aim is to teach you some of the techniques you may not have seen yet in earlier data mining projects. In this article, we will cover the following topics:

What is data mining?
We will situate data mining in the growing field of other similar concepts, and we will learn a bit about the history of how this discipline has grown and changed.

How do we do data mining? Here, we compare several processes or methodologies commonly used in data mining projects.

What are the techniques used in data mining? In this article, we will summarize each of the data analysis techniques that are typically included in a definition of data mining.

How do we set up a data mining work environment? Finally, we will walk through setting up a Python-based development environment.

What is data mining?

We explained earlier that the goal of data mining is to find patterns in data, but this oversimplification falls apart quickly under scrutiny. After all, could we not also say that finding patterns is the goal of classical statistics, or business analytics, or machine learning, or even the newer practices of data science or big data? What is the difference between data mining and all of these other fields, anyway? And while we are at it, why is it called data mining if what we are really doing is mining for patterns? Don't we already have the data? It was apparent from the beginning that the term data mining is indeed fraught with many problems. The term was originally used as something of a pejorative by statisticians who cautioned against going on fishing expeditions, where a data analyst is casting about for patterns in data without forming proper hypotheses first. Nonetheless, the term rose to prominence in the 1990s, as the popular press caught wind of exciting research that was marrying the mature field of database management systems with the best algorithms from machine learning and artificial intelligence. The inclusion of the word mining inspires visions of a modern-day Gold Rush, in which the persistent and intrepid miner will discover (and perhaps profit from) previously hidden gems.
The idea that data itself could be a rare and precious commodity was immediately appealing to the business and technology press, despite efforts by early pioneers to promote the more holistic term knowledge discovery in databases (KDD). The term data mining persisted, however, and ultimately some definitions of the field attempted to re-imagine the term data mining to refer to just one of the steps in a longer, more comprehensive knowledge discovery process. Today, data mining and KDD are considered very similar, closely related terms. What about other related terms, such as machine learning, predictive analytics, big data, and data science? Are these the same as data mining or KDD? Let's draw some comparisons between each of these terms:

Machine learning is a very specific subfield of computer science that focuses on developing algorithms that can learn from data in order to make predictions. Many data mining solutions will use techniques from machine learning, but not all data mining is trying to make predictions or learn from data. Sometimes we just want to find a pattern in the data. In fact, in this article we will be exploring a few data mining solutions that do use machine learning techniques, and many more that do not.

Predictive analytics, sometimes just called analytics, is a general term for computational solutions that attempt to make predictions from data in a variety of domains. We can think of the terms business analytics, media analytics, and so on. Some, but not all, predictive analytics solutions will use machine learning techniques to perform their predictions. But again, in data mining, we are not always interested in prediction.

Big data is a term that refers to the problems and solutions of dealing with very large sets of data, irrespective of whether we are searching for patterns in that data, or simply storing it.
In terms of comparing big data to data mining, many data mining problems are made more interesting when the data sets are large, so solutions discovered for dealing with big data might come in handy to solve a data mining problem. Nonetheless, these two terms are merely complementary, not interchangeable.

Data science is the closest of these terms to being interchangeable with the KDD process, of which data mining is one step. Because data science is an extremely popular buzzword at this time, its meaning will continue to evolve and change as the field continues to mature.

To show the relative search interest for these various terms over time, we can look at Google Trends. This tool shows how frequently people are searching for various keywords over time. In the following figure, the newcomer term data science is currently the hot buzzword, with data mining pulling into second place, followed by machine learning and predictive analytics. (I tried to include the search term knowledge discovery in databases as well, but the results were so close to zero that the line was invisible.) The y-axis shows the popularity of that particular search term as a 0-100 indexed value. In addition, I combined the weekly index values that Google Trends gives into a monthly average for each month in the period 2004-2015.

Google Trends search results for four common data-related terms

How do we do data mining?

Since data mining is traditionally seen as one of the steps in the overall KDD process, and increasingly in the data science process, in this article we get acquainted with the steps involved. There are several popular methodologies for doing the work of data mining. Here we highlight four methodologies: two that are taken from textbook introductions to the theory of data mining, one taken from a very practical process used in industry, and one designed for teaching beginners.

The Fayyad et al.
KDD process

One early version of the knowledge discovery and data mining process was defined by Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth in a 1996 article (The KDD Process for Extracting Useful Knowledge from Volumes of Data). This article was important at the time for refining the rapidly-changing KDD methodology into a concrete set of steps. The following steps lead from raw data at the beginning to knowledge at the end:

Data selection: The input to this step is raw data, and the output of this selection step is a smaller subset of the data, called the target data.

Data pre-processing: The target data is cleaned, oddities and outliers are removed, and missing data is accounted for. The output of this step is pre-processed data, or cleaned data.

Data transformation: The cleaned data is organized into a format appropriate for the mining step, and the number of features or variables is reduced if need be. The output of this step is transformed data.

Data mining: The transformed data is mined for patterns using one or more data mining algorithms appropriate to the problem at hand. The output of this step is the discovered patterns.

Data interpretation/evaluation: The discovered patterns are evaluated for their ability to solve the problem at hand. The output of this step is knowledge.

Since this process leads from raw data to knowledge, it is appropriate that these authors were the ones who were really committed to the term knowledge discovery in databases rather than simply data mining.

The Han et al. KDD process

Another version of the knowledge discovery process is described in the popular data mining textbook Data Mining: Concepts and Techniques by Jiawei Han, Micheline Kamber, and Jian Pei as the following steps, which also lead from raw data to knowledge at the end:

Data cleaning: The input to this step is raw data, and the output is cleaned data.

Data integration: In this step, the cleaned data is integrated (if it came from multiple sources).
The output of this step is integrated data.

Data selection: The data set is reduced to only the data needed for the problem at hand. The output of this step is a smaller data set.

Data transformation: The smaller data set is consolidated into a form that will work with the upcoming data mining step. This is called transformed data.

Data mining: The transformed data is processed by intelligent algorithms that are designed to discover patterns in that data. The output of this step is one or more patterns.

Pattern evaluation: The discovered patterns are evaluated for their interestingness and their ability to solve the problem at hand. The output of this step is an interestingness measure applied to each pattern, representing knowledge.

Knowledge representation: In this step, the knowledge is communicated to users through various means, including visualization.

In both the Fayyad and Han methodologies, it is expected that the process will iterate multiple times over steps, if such iteration is needed. For example, if during the transformation step the person doing the analysis realized that another data cleaning or pre-processing step is needed, both of these methodologies specify that the analyst should double back and complete a second iteration of the incomplete earlier step.

The CRISP-DM process

A third popular version of the KDD process that is used in many business and applied domains is called CRISP-DM, which stands for CRoss-Industry Standard Process for Data Mining. It consists of the following steps:

Business understanding: In this step, the analyst spends time understanding the reasons for the data mining project from a business perspective.

Data understanding: In this step, the analyst becomes familiar with the data and its potential promises and shortcomings, and begins to generate hypotheses. The analyst is tasked to reassess the business understanding (step 1) if needed.
Data preparation: This step includes all the data selection, integration, transformation, and pre-processing steps that are enumerated as separate steps in the other models. The CRISP-DM model has no expectation of what order these tasks will be done in.

Modeling: This is the step in which the algorithms are applied to the data to discover the patterns. This step is closest to the actual data mining steps in the other KDD models. The analyst is tasked to reassess the data preparation step (step 3) if the modeling and mining step requires it.

Evaluation: The model and discovered patterns are evaluated for their value in answering the business problem at hand. The analyst is tasked with revisiting the business understanding (step 1) if necessary.

Deployment: The discovered knowledge and models are presented and put into production to solve the original problem at hand.

One of the strengths of this methodology is that iteration is built in. Between specific steps, it is expected that the analyst will check that the current step is still in agreement with certain previous steps. Another strength of this method is that the analyst is explicitly reminded to keep the business problem front and center in the project, even down in the evaluation steps.

The Six Steps process

When I teach the introductory data science course at my university, I use a hybrid methodology of my own creation. This methodology is called the Six Steps, and I designed it to be especially friendly for teaching. My Six Steps methodology removes some of the ambiguity that inexperienced students may have with open-ended tasks from CRISP-DM, such as Business Understanding, or a corporate-focused task such as Deployment. In addition, the Six Steps method keeps the focus on developing students' critical thinking skills by requiring them to answer Why are we doing this? and What does it mean? at the beginning and end of the process.
My Six Steps method looks like this:

Problem statement: In this step, the students identify what the problem is that they are trying to solve. Ideally, they motivate the case for why they are doing all this work.

Data collection and storage: In this step, students locate data and plan their storage for the data needed for this problem. They also provide information about where the data that is helping them answer their motivating question came from, as well as what format it is in and what all the fields mean.

Data cleaning: In this phase, students carefully select only the data they really need, and pre-process the data into the format required for the mining step.

Data mining: In this step, students formalize their chosen data mining methodology. They describe what algorithms they used and why. The output of this step is a model and discovered patterns.

Representation and visualization: In this step, the students show the results of their work visually. The outputs of this step can be tables, drawings, graphs, charts, network diagrams, maps, and so on.

Problem resolution: This is an important step for beginner data miners. This step explicitly encourages the student to evaluate whether the patterns they showed in step 5 are really an answer to the question or problem they posed in step 1. Students are asked to state the limitations of their model or results, and to identify parts of the motivating question that they could not answer with this method.

Which data mining methodology is the best?

A 2014 survey of the subscribers of Gregory Piatetsky-Shapiro's very popular data mining email newsletter KDNuggets included the question What main methodology are you using for your analytics, data mining, or data science projects?
43% of the poll respondents indicated that they were using the CRISP-DM methodology
27% of the respondents were using their own methodology or a hybrid
7% were using the traditional KDD methodology

These results are generally similar to the 2007 results from the same newsletter asking the same question. My best advice is that it does not matter too much which methodology you use for a data mining project, as long as you just pick one. If you do not have any methodology at all, then you run the risk of forgetting important steps. Choose one of the methods that seems like it might work for your project and your needs, and then just do your best to follow the steps.

We will vary our data mining methodology depending on which technique we are looking at in a given article. For example, even though the focus of the article as a whole is on the data mining step, we still need to motivate our project with a healthy dose of Business Understanding (CRISP-DM) or Problem Statement (Six Steps) so that we understand why we are doing the tasks and what the results mean. In addition, in order to learn a particular data mining method, we may also have to do some pre-processing, whether we call that data cleaning, integration, or transformation. But in general, we will try to keep these tasks to a minimum so that our focus on data mining remains clear. Finally, even though data visualization is typically very important for representing the results of your data mining process to your audience, we will also keep these tasks to a minimum so that we can remain focused on the primary job at hand: data mining.

Summary

In this article, we learned what it would take to expand our data mining toolbox to the master level. First we took a long view of the field as a whole, starting with the history of data mining as a piece of the knowledge discovery in databases (KDD) process.
We also compared the field of data mining to other similar terms such as data science, machine learning, and big data. Next, we outlined the common tools and techniques that most experts consider to be most important to the KDD process, paying special attention to the techniques that are used most frequently in the mining and analysis steps. To really master data mining, it is important that we work on problems that are different than simple textbook examples. For this reason we will be working on more exotic data mining techniques such as generating summaries and finding outliers, and focusing on more unusual data types, such as text and networks.

Resources for Article:

Further resources on this subject:

Python Data Structures [article]
Mining Twitter with Python – Influence and Engagement [article]
Data mining [article]
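The staged methodologies described in this article all share the same pipeline shape, which can be sketched in plain Python; the data set and the "mining" rule below are toy stand-ins for illustration, not a real analysis:

```python
# Toy end-to-end sketch of the staged KDD/Six Steps shape. The records
# and the outlier rule are invented stand-ins.
RAW = [
    {"user": "a", "purchases": 3},
    {"user": "b", "purchases": None},   # missing value
    {"user": "c", "purchases": 41},
    {"user": "d", "purchases": 5},
    {"user": "e", "purchases": 4},
]

def select(records):
    # Data selection: keep only records carrying the field we study.
    return [r for r in records if "purchases" in r]

def preprocess(records):
    # Data cleaning: drop rows with missing values.
    return [r for r in records if r["purchases"] is not None]

def transform(records):
    # Data transformation: reduce to the single feature we will mine.
    return [r["purchases"] for r in records]

def mine(values):
    # "Mining": flag values far above the mean as simple outliers.
    mean = sum(values) / len(values)
    return [v for v in values if v > 3 * mean]

def evaluate(patterns):
    # Interpretation/evaluation: summarize what was found.
    return {"n_outliers": len(patterns), "outliers": patterns}

if __name__ == "__main__":
    print(evaluate(mine(transform(preprocess(select(RAW))))))
    # → {'n_outliers': 1, 'outliers': [41]}
```

Each function maps onto one stage of the Fayyad or Han process; in a real project each stage would, of course, be far richer.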

Packt
09 Aug 2016
26 min read

RDO Installation

In this article by Dan Radez, author of OpenStack Essentials - Second Edition, we will see how OpenStack has a very modular design, and because of this design, there are lots of moving parts. It is overwhelming to start walking through installing and using OpenStack without understanding the internal architecture of the components that make up OpenStack. In this article, we'll look at these components. Each component in OpenStack manages a different resource that can be virtualized for the end user. Separating the management of each of the types of resources that can be virtualized into separate components makes the OpenStack architecture very modular. If a particular service or resource provided by a component is not required, then the component is optional to an OpenStack deployment. Once the components that make up OpenStack have been covered, we will discuss the configuration of a community-supported distribution of OpenStack called RDO.

(For more resources related to this topic, see here.)

OpenStack architecture

Let's start by outlining some simple categories to group these services into. Logically, the components of OpenStack are divided into three groups:

Control
Network
Compute

The control tier runs the Application Programming Interface (API) services, web interface, database, and message bus. The network tier runs network service agents for networking, and the compute tier is the virtualization hypervisor. It has services and agents to handle virtual machines. All of the components use a database and/or a message bus. The database can be MySQL, MariaDB, or PostgreSQL. The most popular message buses are RabbitMQ, Qpid, and ActiveMQ. For smaller deployments, the database and messaging services usually run on the control node, but they could have their own nodes if required. In a simple multi-node deployment, the control and networking services are installed on one server and the compute services are installed onto another server.
OpenStack could be installed on one node or spread across many nodes, but a good baseline for being able to scale out later is to put control and network together and compute by itself.

Now that a base logical architecture of OpenStack has been defined, let's look at what components make up this basic architecture. To do that, we'll first touch on the web interface and then work toward collecting the resources necessary to launch an instance. Finally, we will look at what components are available to add resources to a launched instance.

Dashboard

The OpenStack dashboard is the web interface component provided with OpenStack. You'll sometimes hear the terms dashboard and Horizon used interchangeably. Technically, they are not the same thing. This article will refer to the web interface as the dashboard. The team that develops the web interface maintains both the dashboard interface and the Horizon framework that the dashboard uses. More important than getting these terms right is understanding the commitment that the team that maintains this code base has made to the OpenStack project. They have pledged to include support for all the officially accepted components that are included in OpenStack. Visit the OpenStack website (http://www.openstack.org/) to get an official list of OpenStack components. The dashboard cannot do anything that the API cannot do. All the actions that are taken through the dashboard result in calls to the API to complete the task requested by the end user. Throughout this article, we will examine how to use the web interface and the API clients to execute tasks in an OpenStack cluster. Next, we will discuss both the dashboard and the underlying components that the dashboard makes calls to when creating OpenStack resources.

Keystone

Keystone is the identity management component. The first thing that needs to happen while connecting to an OpenStack deployment is authentication.
In its most basic installation, Keystone will manage tenants, users, and roles and be a catalog of services and endpoints for all the components in the running cluster. Everything in OpenStack must exist in a tenant. A tenant is simply a grouping of objects. Users, instances, and networks are examples of objects. They cannot exist outside of a tenant. Another name for a tenant is a project. On the command line, the term tenant is used. In the web interface, the term project is used. Users must be granted a role in a tenant. It's important to understand this relationship between the user and a tenant via a role. For now, understand that a user cannot log in to the cluster unless they are a member of a tenant. Even the administrator has a tenant. Even the users the OpenStack components use to communicate with each other have to be members of a tenant to be able to authenticate. Keystone also keeps a catalog of services and endpoints of each of the OpenStack components in the cluster. This is advantageous because all of the components have different API endpoints. By registering them all with Keystone, an end user only needs to know the address of the Keystone server to interact with the cluster. When a call is made to connect to a component other than Keystone, the call will first have to be authenticated, so Keystone will be contacted regardless. Within the communication to Keystone, the client also asks Keystone for the address of the component the user intended to connect to. This makes managing the endpoints easier. If all the endpoints were distributed to the end users, then it would be a complex process to distribute a change in one of the endpoints to all of the end users. By keeping the catalog of services and endpoints in Keystone, a change is easily distributed to end users as new requests are made to connect to the components. By default, Keystone uses username/password authentication to request a token and uses the acquired token for subsequent requests.
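The flow just described, where a client authenticates once, receives a token plus the service catalog, and then reuses that token when talking to the other components, can be sketched with a small mock. To be clear, the class and method names below are invented for illustration; this is the shape of the idea, not Keystone's actual API:

```python
import secrets

# A toy stand-in for Keystone: it issues tokens and keeps a service catalog.
class MockKeystone:
    def __init__(self):
        self.users = {"admin": "secret"}           # username -> password
        self.catalog = {                           # service name -> endpoint
            "glance": "http://10.0.0.1:9292",
            "neutron": "http://10.0.0.1:9696",
            "nova": "http://10.0.0.1:8774",
        }
        self.tokens = {}                           # token -> username

    def authenticate(self, username, password):
        if self.users.get(username) != password:
            raise PermissionError("bad credentials")
        token = secrets.token_hex(8)
        self.tokens[token] = username
        # The client gets back a token AND the catalog, so it never needs
        # to know any endpoint other than Keystone's up front.
        return token, self.catalog

    def validate(self, token):
        return token in self.tokens

keystone = MockKeystone()
token, catalog = keystone.authenticate("admin", "secret")
print(keystone.validate(token))   # True
print(catalog["glance"])          # endpoint discovered via the catalog
```

The point the mock captures is that every endpoint other than Keystone's is discovered through the catalog returned at authentication time, which is exactly why an endpoint change is easy to distribute.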
All the components in the cluster can use the token to verify the user and the user's access. Keystone can also be integrated into other common authentication systems instead of relying on the username and password authentication provided by Keystone.

Glance

Glance is the image management component. Once we're authenticated, there are a few resources that need to be available for an instance to launch. The first resource we'll look at is the disk image to launch from. Before a server is useful, it needs to have an operating system installed on it. This is a boilerplate task that cloud computing has streamlined by creating a registry of pre-installed disk images to boot from. Glance serves as this registry within an OpenStack deployment. In preparation for an instance to launch, a copy of a selected Glance image is first cached to the compute node where the instance is being launched. Then, a copy is made to the ephemeral disk location of the new instance. Subsequent instances launched on the same compute node using the same disk image will use the cached copy of the Glance image. The images stored in Glance are sometimes called sealed-disk images. These images are disk images that have had the operating system installed but have had things such as the Secure Shell (SSH) host key and network device MAC addresses removed. This makes the disk images generic, so they can be reused and launched repeatedly without the running copies conflicting with each other. To do this, the host-specific information is provided or generated at boot. The provided information is passed in through a post-boot configuration facility called cloud-init. Usually, these images are downloaded from a distribution's download pages. If you search the internet for your favorite distribution's name and cloud image, you will probably get a link to where to download a generic pre-built copy of a Glance image, also known as a cloud image.
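The caching behavior described above, where the first boot on a compute node pulls the image from Glance and later boots reuse the local copy, can be mocked in a few lines. Again, the names here are invented for illustration and are not Glance's API:

```python
# A toy compute node that caches Glance images locally.
class MockComputeNode:
    def __init__(self, glance_store):
        self.glance = glance_store    # image id -> image bytes (the registry)
        self.cache = {}               # locally cached copy per image id
        self.downloads = 0            # how many times Glance was contacted

    def launch(self, image_id):
        if image_id not in self.cache:
            # First launch of this image on this node: fetch from Glance.
            self.cache[image_id] = self.glance[image_id]
            self.downloads += 1
        # Each instance gets its own ephemeral copy of the cached image.
        return bytearray(self.cache[image_id])

glance = {"fedora-cloud": b"sealed-disk-image"}
node = MockComputeNode(glance)
node.launch("fedora-cloud")
node.launch("fedora-cloud")
print(node.downloads)   # 1 -- the second launch reused the cached copy
```

Note that each launch still gets its own writable copy; only the transfer from the registry is skipped.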
The images can also be customized for special purposes beyond a base operating system installation. If there was a specific purpose for which an instance would be launched many times, then some of the repetitive configuration tasks could be performed ahead of time and built into the disk image. For example, if a disk image was intended to be used to build a cluster of web servers, it would make sense to install a web server package on the disk image before it was used to launch an instance. It would save time and bandwidth to do it once before it is registered with Glance instead of doing this package installation and configuration over and over each time a web server instance is booted. There are quite a few ways to build these disk images. The simplest way is to do a virtual machine installation manually, make sure that the host-specific information is removed, and include cloud-init in the built image. Cloud-init is packaged in most major distributions; you should be able to simply add it to a package list. There are also tools to make this happen in a more autonomous fashion. Some of the more popular tools are virt-install, Oz, and appliance-creator. The most important thing about building a cloud image for OpenStack is to make sure that cloud-init is installed. Cloud-init is a script that should run post boot to connect back to the metadata service.

Neutron

Neutron is the network management component. With Keystone, we're authenticated, and from Glance, a disk image will be provided. The next resource required for launch is a virtual network. Neutron is an API frontend (and a set of agents) that manages the Software Defined Networking (SDN) infrastructure for you. When an OpenStack deployment is using Neutron, it means that each of your tenants can create virtual isolated networks. Each of these isolated networks can be connected to virtual routers to create routes between the virtual networks.
A virtual router can have an external gateway connected to it, and external access can be given to each instance by associating a floating IP on an external network with an instance. Neutron then puts all the configuration in place to route the traffic sent to the floating IP address through these virtual network resources into a launched instance. This is also called Networking as a Service (NaaS). NaaS is the capability to provide networks and network resources on demand via software. By default, the OpenStack distribution we will install uses Open vSwitch to orchestrate the underlying virtualized networking infrastructure. Open vSwitch is a virtual managed switch. As long as the nodes in your cluster have simple connectivity to each other, Open vSwitch can be the infrastructure configured to isolate the virtual networks for the tenants in OpenStack. There are also many vendor plugins that would allow you to replace Open vSwitch with a physical managed switch to handle the virtual networks. Neutron even has the capability to use multiple plugins to manage multiple network appliances. As an example, Open vSwitch and a vendor's appliance could be used in parallel to manage virtual networks in an OpenStack deployment. This is a great example of how OpenStack is built to provide flexibility and choice to its users. Networking is the most complex component of OpenStack to configure and maintain. This is because Neutron is built around core networking concepts. To successfully deploy Neutron, you need to understand these core concepts and how they interact with one another.

Nova

Nova is the instance management component. An authenticated user who has access to a Glance image and has created a network for an instance to live on is almost ready to tie all of this together and launch an instance. The last resources that are required are a key pair and a security group. A key pair is simply an SSH key pair.
OpenStack will allow you to import your own key pair or generate one to use. When the instance is launched, the public key is placed in the authorized_keys file so that a password-less SSH connection can be made to the running instance. Before that SSH connection can be made, the security groups have to be opened to allow the connection to be made. A security group is a firewall at the cloud infrastructure layer. The OpenStack distribution we'll use will have a default security group with rules to allow instances to communicate with each other within the same security group, but rules will have to be added for Internet Control Message Protocol (ICMP), SSH, and other connections to be made from outside the security group. Once there's an image, network, key pair, and security group available, an instance can be launched. The resource's identifiers are provided to Nova, and Nova looks at what resources are being used on which hypervisors, and schedules the instance to spawn on a compute node. The compute node gets the Glance image, creates the virtual network devices, and boots the instance. During the boot, cloud-init should run and connect to the metadata service. The metadata service provides the SSH public key needed for SSH login to the instance and, if provided, any post-boot configuration that needs to happen. This could be anything from a simple shell script to an invocation of a configuration management engine.

Cinder

Cinder is the block storage management component. Volumes can be created and attached to instances. Then they are used on the instances as any other block device would be used. On the instance, the block device can be partitioned and a filesystem can be created and mounted. Cinder also handles snapshots. Snapshots can be taken of the block volumes or of instances. Instances can also use these snapshots as a boot source.
There is an extensive collection of storage backends that can be configured as the backing store for Cinder volumes and snapshots. By default, Logical Volume Manager (LVM) is configured. GlusterFS and Ceph are two popular software-based storage solutions. There are also many plugins for hardware appliances.

Swift

Swift is the object storage management component. Object storage is a simple content-only storage system. Files are stored without the metadata that a block filesystem has. These are simply containers and files. The files are simply content. Swift has two layers as part of its deployment: the proxy and the storage engine. The proxy is the API layer. It's the service that the end user communicates with. The proxy is configured to talk to the storage engine on the user's behalf. By default, the storage engine is the Swift storage engine. It's able to do software-based storage distribution and replication. GlusterFS and Ceph are also popular storage backends for Swift. They have similar distribution and replication capabilities to those of Swift storage.

Ceilometer

Ceilometer is the telemetry component. It collects resource measurements and is able to monitor the cluster. Ceilometer was originally designed as a metering system for billing users. As it was being built, there was a realization that it would be useful for more than just billing and turned into a general-purpose telemetry system. Ceilometer meters measure the resources being used in an OpenStack deployment. When Ceilometer reads a meter, it's called a sample. These samples get recorded on a regular basis. A collection of samples is called a statistic. Telemetry statistics will give insights into how the resources of an OpenStack deployment are being used. The samples can also be used for alarms. Alarms are nothing but monitors that watch for a certain criterion to be met.

Heat

Heat is the orchestration component.
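The meter, sample, statistic, and alarm vocabulary above can be made concrete with a toy example. This is only a sketch of the idea, not Ceilometer's API:

```python
# Samples: periodic readings of one meter (say, CPU utilisation in percent).
samples = [12.0, 30.5, 75.0, 91.2, 88.4]

# A statistic summarises a collection of samples.
def statistic(values):
    return {"min": min(values), "max": max(values),
            "avg": sum(values) / len(values)}

# An alarm is a monitor that watches for a criterion to be met.
def alarm(values, threshold, count):
    """Fire if `count` consecutive samples exceed `threshold`."""
    streak = 0
    for v in values:
        streak = streak + 1 if v > threshold else 0
        if streak >= count:
            return True
    return False

stats = statistic(samples)
print(round(stats["avg"], 2))    # 59.42
print(alarm(samples, 80.0, 2))   # True -- 91.2 and 88.4 both exceed 80
```

Here the statistic answers "how busy has this resource been", while the alarm answers "has a condition I care about occurred", which mirrors the two uses of samples described above.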
Orchestration is the process of launching multiple instances that are intended to work together. In orchestration, there is a file, known as a template, used to define what will be launched. In this template, there can also be ordering or dependencies set up between the instances. Data that needs to be passed between the instances for configuration can also be defined in these templates. Heat is also compatible with AWS CloudFormation templates and implements additional features beyond the AWS CloudFormation template language. To use Heat, one of these templates is written to define a set of instances that needs to be launched. When a template launches, it creates a collection of virtual resources (instances, networks, storage devices, and so on); this collection of resources is called a stack. When a stack is spawned, the ordering and dependencies, shared configuration data, and post-boot configuration are coordinated via Heat. Heat is not configuration management. It is orchestration. It is intended to coordinate launching the instances, passing configuration data, and executing simple post-boot configuration. A very common post-boot configuration task is invoking an actual configuration management engine to execute more complex post-boot configuration.

OpenStack installation

The list of components that have been covered is not the full list. This is just a small subset to get you started with using and understanding OpenStack. Further components that are defaults in an OpenStack installation provide many advanced capabilities that we will not be able to cover. Now that we have introduced the OpenStack components, we will illustrate how they work together as a running OpenStack installation. To illustrate an OpenStack installation, we first need to install one. Let's use the RDO Project's OpenStack distribution to do that. RDO has two installation methods; we will discuss both of them and focus on one of them throughout this article.
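Before moving on to installation, the ordering and dependency coordination that Heat performs when it spawns a stack can be illustrated with a tiny dependency-ordered launch. The resource names and the launch_order function are invented for the example; Heat's real engine is far more involved:

```python
# A toy "stack": resources and their dependencies, as a template might declare.
stack = {
    "network":  [],
    "database": ["network"],
    "web1":     ["network", "database"],
    "web2":     ["network", "database"],
}

def launch_order(resources):
    """Return an order in which every resource follows its dependencies.

    Note: this sketch assumes the dependency graph has no cycles;
    a cyclic template would make this loop forever.
    """
    launched, order = set(), []
    while len(order) < len(resources):
        for name, deps in sorted(resources.items()):
            if name not in launched and all(d in launched for d in deps):
                launched.add(name)
                order.append(name)
    return order

print(launch_order(stack))
# ['network', 'database', 'web1', 'web2']
```

The network comes up first, the database next, and only then the web servers, which is the kind of ordering a template author expresses between the resources of a stack.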
Manual installation and configuration of OpenStack involves installing, configuring, and registering each of the components we covered in the previous part, and also multiple databases and a messaging system. It's a very involved, repetitive, error-prone, and sometimes confusing process. Fortunately, there are a few distributions that include tools to automate this installation and configuration process. One such distribution is the RDO Project distribution. RDO, as a name, doesn't officially mean anything. It is just the name of a community-supported distribution of OpenStack. The RDO Project takes the upstream OpenStack code, packages it in RPMs, and provides documentation, forums, IRC channels, and other resources for the RDO community to use and support each other in running OpenStack on RPM-based systems. There are no modifications to the upstream OpenStack code in the RDO distribution. The RDO project packages the code that is in each of the upstream releases of OpenStack. This means that we'll use an open source, community-supported distribution of vanilla OpenStack for our example installation. RDO should be able to be run on any RPM-based system. We will now look at the two installation tools that are part of the RDO Project, Packstack and RDO Triple-O. We will focus on using RDO Triple-O in this article. The RDO Project recommends RDO Triple-O for installations that intend to deploy a more feature-rich environment. One example is High Availability. RDO Triple-O is able to do HA deployments and Packstack is not. There is still great value in doing an installation with Packstack. Packstack is intended to give you a very lightweight, quick way to stand up a basic OpenStack installation. Let's start by taking a quick look at Packstack so you are familiar with how quick and lightweight it is.

Installing RDO using Packstack

Packstack is an installation tool for OpenStack intended for demonstration and proof-of-concept deployments.
Packstack uses SSH to connect to each of the nodes and invokes a puppet run (specifically, a puppet apply) on each of the nodes to install and configure OpenStack.

RDO website: http://openstack.redhat.com
Packstack installation: http://openstack.redhat.com/install/quickstart

The RDO Project quick start gives instructions to install RDO using Packstack in three simple steps:

Update the system and install the RDO release rpm as follows:

sudo yum update -y
sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

Install Packstack as shown in the following command:

sudo yum install -y openstack-packstack

Run Packstack as shown in the following command:

sudo packstack --allinone

The all-in-one installation method works well to run on a virtual machine as your all-in-one OpenStack node. In reality, however, a cluster will usually use more than one node beyond a simple learning environment. Packstack is capable of doing multinode installations, though you will have to read the RDO Project documentation for Packstack on the RDO Project wiki. We will not go any deeper with Packstack than the all-in-one installation we have just walked through. Don't avoid doing an all-in-one installation; it really is as simple as the steps make it out to be, and there is value in getting an OpenStack installation up and running quickly.

Installing RDO using Triple-O

The Triple-O project is an OpenStack installation tool developed by the OpenStack community. A Triple-O deployment consists of two OpenStack deployments. One of the deployments is an all-in-one OpenStack installation that is used as a provisioning tool to deploy a multi-node target OpenStack deployment. This target deployment is the deployment intended for end users. Triple-O stands for OpenStack on OpenStack. OpenStack on OpenStack would be OOO, which lovingly became referred to as Triple-O.
It may sound like madness to use OpenStack to deploy OpenStack, but consider that OpenStack is really good at provisioning virtual instances. Triple-O applies this strength to bare-metal deployments to deploy a target OpenStack environment. In Triple-O, the two OpenStacks are called the undercloud and the overcloud. The undercloud is a baremetal-management-enabled all-in-one OpenStack installation that is built for you in a very prescriptive way. Baremetal management enabled means it is intended to manage physical machines instead of virtual machines. The overcloud is the target deployment of OpenStack that is intended to be exposed to end users. The undercloud will take a cluster of nodes provided to it and deploy the overcloud, a fully featured OpenStack deployment, to them. In real deployments, this is done with a collection of baremetal nodes. Fortunately, for learning purposes, we can mock having a bunch of baremetal nodes by using virtual machines. Mind blown yet? Let's get started with this RDO Triple-O based OpenStack installation to start unraveling what all this means. There is an RDO Triple-O quickstart project that we will use to get going. The RDO Triple-O wiki page will be the most up-to-date place to get started with RDO Triple-O. If you have trouble with the directions in this article, please refer to the wiki. Open source changes rapidly and RDO Triple-O is no exception. In particular, note that the directions refer to the Mitaka release of OpenStack. The name of the release will most likely be the first thing that changes on the wiki page that will impact your future deployments with RDO Triple-O. Start by downloading the pre-built undercloud image from the RDO Project's repositories. This is something you could build yourself, but it would take much more time and effort than it would take to download the pre-built one.
As mentioned earlier, the undercloud is a pretty prescriptive all-in-one deployment, which lends itself well to starting with a pre-built image. These instructions come from the readme of the tripleo-quickstart GitHub repository (https://github.com/redhat-openstack/tripleo-quickstart/):

myhost# mkdir -p /usr/share/quickstart_images/
myhost# cd /usr/share/quickstart_images/
myhost# wget https://ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2.md5 https://ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2

Make sure that your ssh key exists:

myhost# ls ~/.ssh

If you don't see the id_rsa and id_rsa.pub files in that directory list, run the command ssh-keygen. Then make sure that your public key is in the authorized keys file:

myhost# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Once you have the undercloud image and your ssh keys, pull a copy of the quickstart.sh file, install the dependencies, and execute the quickstart script:

myhost# cd ~
myhost# wget https://raw.githubusercontent.com/redhat-openstack/tripleo-quickstart/master/quickstart.sh
myhost# sh quickstart.sh -u file:///usr/share/quickstart_images/undercloud.qcow2 localhost

quickstart.sh will use Ansible to set up the undercloud virtual machine and will define a few extra virtual machines that will be used to mock a collection of baremetal nodes for an overcloud deployment. To see the list of virtual machines that quickstart.sh created, use virsh to list them:

myhost# virsh list --all
 Id    Name          State
----------------------------------------------------
 17    undercloud    running
 -     ceph_0        shut off
 -     compute_0     shut off
 -     control_0     shut off
 -     control_1     shut off
 -     control_2     shut off

Along with the undercloud virtual machine, there are ceph, compute, and control virtual machine definitions. These are the nodes that will be used to deploy the OpenStack overcloud.
Using virtual machines like this to deploy OpenStack is not suitable for anything but your own personal OpenStack enrichment. These virtual machines represent the physical machines that would be used in a real deployment that would be exposed to end users. To continue the undercloud installation, connect to the undercloud virtual machine and run the undercloud configuration:

myhost# ssh -F /root/.quickstart/ssh.config.ansible undercloud
undercloud# openstack undercloud install

The undercloud install command will set up the undercloud machine as an all-in-one OpenStack installation ready to be told how to deploy the overcloud. Once the undercloud installation is completed, the final steps are to seed the undercloud with configuration about the overcloud deployment and execute the overcloud deployment:

undercloud# source stackrc
undercloud# openstack overcloud image upload
undercloud# openstack baremetal import --json instackenv.json
undercloud# openstack baremetal configure boot
undercloud# neutron subnet-list
undercloud# neutron subnet-update <subnet-uuid> --dns-nameserver 8.8.8.8

There are also some scripts and other automated ways to make these steps happen: look at the output of the quickstart script or the Triple-O quickstart docs in the GitHub repository to get more information about how to automate some of these steps. The source command puts information into the shell environment to tell the subsequent commands how to communicate with the undercloud. The image upload command uploads disk images into Glance that will be used to provision the overcloud nodes. The first baremetal command imports information about the overcloud environment that will be deployed. This information was written to the instackenv.json file when the undercloud virtual machine was created by quickstart.sh. The second configures the images that were just uploaded in preparation for provisioning the overcloud nodes.
The two neutron commands configure a DNS server for the network that the overclouds will use, in this case Google's. Finally, execute the overcloud deploy:

undercloud# openstack overcloud deploy --control-scale 1 --compute-scale 1 --templates --libvirt-type qemu --ceph-storage-scale 1 -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml

Let's talk about what this command is doing. In OpenStack, there are two basic node types, control and compute. A control node runs the OpenStack API services, OpenStack scheduling service, database services, and messaging services. Pretty much everything except the hypervisors are part of the control tier and are segregated onto control nodes in a basic deployment. In an HA deployment, there are at least three control nodes. This is why you see three control nodes in the list of virtual machines quickstart.sh created. RDO Triple-O can do HA deployments, though we will focus on non-HA deployments in this article. Note that in the command you have just executed, control scale and compute scale are both set to one. This means that you are deploying one control node and one compute node. The other virtual machines will not be used. Take note of the libvirt-type parameter. It is only required if the compute node itself is virtualized, which is what we are doing with RDO Triple-O; it sets the configuration properly for the instances to be nested. Nested virtualization is when virtual machines are running inside of a virtual machine. In this case, the instances will be virtual machines running inside of the compute node, which is itself a virtual machine. Finally, the ceph storage scale and storage environment file will deploy Ceph as the storage backend for Glance and Cinder. If you leave off the Ceph and storage environment file parameters, one less virtual machine will be used for deployment.
The indication that the overcloud deploy has succeeded will give you a Keystone endpoint and a success message:

Overcloud Endpoint: http://192.0.2.6:5000/v2.0
Overcloud Deployed

Connecting to your Overcloud

Finally, before we dig into looking at the OpenStack components that have been installed and configured, let's identify three ways that you can connect to the freshly installed overcloud deployment:

From the undercloud: This is the quickest way to access the overcloud. When the overcloud deployment completed, a file named overcloudrc was created.

Install the client libraries: Both RDO Triple-O and Packstack were installed from the RDO release repository. By installing this release repository on another computer, in the same way that was demonstrated earlier for Packstack, the OpenStack client libraries can be installed on that computer. If these libraries are installed on a computer that can route to the network the overcloud was installed on, then the overcloud can be accessed from that computer the same as it can from the undercloud. This is helpful if you do not want to be tied to jumping through the undercloud node to access the overcloud:

laptop# sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
laptop# sudo yum install python-openstackclient

In addition to the client package, you will also need the overcloudrc file from the undercloud. As an example, you can install the packages on the host machine you have just run quickstart.sh on and make the overcloud routable by adding an IP address to the OVS bridge the virtual machines were attached to:

myhost# sudo ip addr add 192.0.2.222/24 dev bridget
myhost# sudo ip link set up dev bridget

Once this is done, the commands in the subsequent parts could be run from the host machine instead of the undercloud virtual machine.

The OpenStack dashboard: OpenStack's included web interface is called the dashboard.
In the installation you have just completed, you can access the overcloud's dashboard by first running the two ip commands used in the second option above, then connecting to the IP address indicated as the overcloud endpoint, but on port 80 instead of 5000: http://192.0.2.6/.

Summary

After looking at the components that make up an OpenStack installation, we used RDO Triple-O as a provisioning tool. Now that OpenStack is installed and running, let's walk through each of the components discussed to learn how to use each of them.

Resources for Article:

Further resources on this subject:

Keystone – OpenStack Identity Service [article]
Concepts for OpenStack [article]
Setting up VPNaaS in OpenStack [article]

Packt
09 Aug 2016
4 min read

Conference App

In this article by Indermohan Singh, the author of Ionic 2 Blueprints, we will create a conference app. We will create an app which will provide a list of speakers, a schedule, directions to the venue, ticket booking, and lots of other features. We will learn the following things:

Using the device's native features
Leveraging localStorage
Ionic menu and tabs
Using RxJS to build a perfect search filter

(For more resources related to this topic, see here.)

The conference app is a companion application for conference attendees. In this application, we are using a Lanyrd JSON export and a hardcoded JSON file as our backend. We will have a tabs and side menu interface, just like our e-commerce application. When a user opens our app, the app will show a tab interface with SpeakersPage open. It will have SchedulePage for the conference schedule and AboutPage for information about the conference. We will also make this app work offline, without any Internet connection. So, your user will still be able to view speakers, see the schedule, and do other stuff without using the Internet at all.

JSON data

In the application, we have used a hardcoded JSON file as our database. But in the truest sense, we are actually using a JSON export of a Lanyrd event. I was trying to make this article using Lanyrd as the backend, but unfortunately, Lanyrd is mostly in maintenance mode, so I was not able to use it. In this article, I am still using a JSON export from Lanyrd, from a previous event. So, if you are able to get a JSON export for your event, you can just swap the URL and you are good to go. Those who don't want to use Lanyrd and instead want to use their own backend must have a look at the next section. I have described the structure of the JSON which I have used to make this app. You can create your REST API accordingly.

Understanding JSON

Let's understand the structure of the JSON export.
The whole JSON database is an object with two keys, timezone and sessions, like the following:

{
  timezone: "Australia/Brisbane",
  sessions: [..]
}

Timezone is just a string, but the sessions key is an array of lists of all the sessions of our conference. Items in the sessions array are divided according to days of the conference. Each item represents a day of the conference and has the following structure:

{
  day: "Saturday 21st November",
  sessions: [..]
}

So, the sessions array of each day has actual sessions as items. Each item has the following structure:

{
  start_time: "2015-11-21 09:30:00",
  topics: [],
  web_url: "url of event",
  times: "9:30am - 10:00am",
  id: "sdtpgq",
  types: [ ],
  end_time_epoch: 1448064000,
  speakers: [],
  title: "Talk Title",
  event_id: "event_id",
  space: "Space",
  day: "Saturday 21st November",
  end_time: "2015-11-21 10:00:00",
  other_url: null,
  start_time_epoch: 1448062200,
  abstract: "<p>Abstract of Talk</p>"
},

Here, the speakers array has a list of all speakers. We will use that speakers array to create a list of all speakers in an array. You can see the whole structure here: That's all we need to understand for JSON.

Defining the app

In this section, we will define various functionalities of our application. We will also show the architecture of our app using an app flow diagram.

Functionalities

We will be including the following functionalities in our application:

List of speakers
Schedule detail
Search functionality using session title, abstract, and speaker's names
Hide/Show any day of the schedule
Favorite list for sessions
Adding favorite sessions to the device calendar
Ability to share sessions to other applications
Directions to venue
Offline working

App flow

This is how the control will flow inside our application: Let's understand the flow:

RootComponent: RootComponent is the root Ionic component. It is defined inside the /app/app.ts file.
TabsPage: TabsPage acts as a container for our SpeakersPage, SchedulePage, and AboutPage.
SpeakersPage: SpeakersPage shows a list of all the speakers of our conference.
SchedulePage: SchedulePage shows us the schedule of our conference and offers various filter features.
AboutPage: AboutPage provides us information about the conference.
SpeakersDetail: The SpeakerDetail page shows the details of a speaker and a list of his/her presentations at this conference.
SessionDetail: The SessionDetail page shows the details of a session, with the title and abstract of the session.
FavoritePage: FavoritePage shows a list of the user's favorite sessions.

Summary

In this article, we discussed the JSON file that will be used as the database in our app. We also defined the functionalities of our app and understood its flow.

Resources for Article:

Further resources on this subject:
First Look at Ionic [article]
Ionic JS Components [article]
Creating Our First App with Ionic [article]
Building a Grid System with Susy

Packt
09 Aug 2016
14 min read
In this article by Luke Watts, author of the book Mastering Sass, we will build a responsive grid system using the Susy library and a few custom mixins and functions. We will set a configuration map with our breakpoints, which we will then loop over to automatically create our entire grid, using interpolation to create our class names.

(For more resources related to this topic, see here.)

Detailing the project requirements

For this example, we will need bower to download Susy. After Susy has been downloaded we will only need two files. We'll place them all in the same directory for simplicity. These files will be style.scss and _helpers.scss. We'll place the majority of our SCSS code in style.scss. First, we'll import susy and our _helpers.scss at the beginning of this file. After that we will place our variables and finally our code which will create our grid system.

Bower and Susy

To check if you have bower installed, open your command line (Terminal on Unix or CMD on Windows) and run:

bower -v

If you see a number like "1.7.9" you have bower. If not, you will need to install bower using npm, a package manager for NodeJS. If you don't already have NodeJS installed, you can download it from https://nodejs.org/en/. To install bower from your command line using npm you will need to run:

npm install -g bower

Once bower is installed, cd into the root of your project and run:

bower install susy

This will create a directory called bower_components. Inside that you will find a folder called susy. The full path to the file we will be importing in style.scss is bower_components/susy/sass/_susy.scss. However, we can leave off the underscore (_) and also the extension (.scss). Sass will still import the file just fine. In style.scss add the following at the beginning of our file:

// style.scss
@import 'bower_components/susy/sass/susy';

Helpers (mixins and functions)

Next, we'll need to import our _helpers.scss file in style.scss.
Our _helpers.scss file will contain any custom mixins or functions we'll create to help us in building our grid. In style.scss, import _helpers.scss just below where we imported Susy:

// style.scss
@import 'bower_components/susy/sass/susy';
@import 'helpers';

Mixin: bp (breakpoint)

I don't know about you, but writing media queries always seems like a bit of a chore to me. I just don't like to write (min-width: 768px) all the time. So for that reason I'm going to include the bp mixin, which means instead of writing:

@media (min-width: 768px) {
    // ...
}

We can simply use:

@include bp(md) {
    // ...
}

First we are going to create a map of our breakpoints. Add the $breakpoints map to style.scss just below our imports:

// style.scss
@import 'bower_components/susy/sass/susy';
@import 'helpers';

$breakpoints: (
    sm: 480px,
    md: 768px,
    lg: 980px
);

Then, inside _helpers.scss we're going to create our bp mixin, which will handle creating our media queries from the $breakpoints map. Here's the breakpoint (bp) mixin:

@mixin bp($size: md) {
    @media (min-width: map-get($breakpoints, $size)) {
        @content;
    }
}

Here we are setting the default breakpoint to be md (768px). We then use the built-in Sass function map-get to get the relevant value using the key ($size). Inside our @media rule we use the @content directive, which allows us to pass any Sass or CSS through the bp mixin directly into our @media rule.

The container mixin

The container mixin sets the max-width of the containing element, which will be the .container element for now. However, it is best to use the container mixin to semantically restrict certain parts of the design to your max width instead of using presentational classes like container or row. The container mixin takes a width argument, which will be the max-width. It also automatically applies the micro-clearfix hack. This prevents the container's height from collapsing when the elements inside it are floated.
I prefer the overflow: hidden method myself, but they do the same thing essentially. By default, the container will be set to max-width: 100%. However, you can set it to be any valid unit of dimension, such as 60em, 1160px, 50%, 90vw, or whatever. As long as it's a valid CSS unit it will work. In style.scss let's create our .container element using the container mixin:

// style.scss
.container {
    @include container(1160px);
}

The preceding code will give the following CSS output:

.container {
    max-width: 1160px;
    margin-left: auto;
    margin-right: auto;
}
.container:after {
    content: " ";
    display: block;
    clear: both;
}

Because the container uses a max-width, we don't need to specify different dimensions for various screen sizes. It will be 100% until the screen is above 1160px, and then the max-width value will kick in. The .container:after rule is the micro-clearfix hack.

The span mixin

To create columns in Susy we use the span mixin. The span mixin sets the width of that element and applies a padding or margin depending on how Susy is set up. By default, Susy will apply a margin to the right of each column, but you can set it to be on the left, or to be padding on the left or right, or padding or margin on both sides. Susy will do the necessary work to make everything work behind the scenes. To create a half width column in a 12 column grid you would use:

.col-6 {
    @include span(6 of 12);
}

The of 12 lets Susy know this is a 12 column grid. When we define our $susy map later we can tell Susy how many columns we are using via the columns property. This means we can drop the of 12 part and simply use span(6) instead. Susy will then know we are using 12 columns unless we explicitly pass another value. The preceding SCSS will output:

.col-6 {
    width: 49.15254%;
    float: left;
    margin-right: 1.69492%;
}

Notice the width and margin together would actually be 50.84746%, not 50% as you might expect. Therefore, two of these columns would actually be 101.69492%.
That will cause the last column to wrap onto the next row. To prevent this, you would need to remove the margin from the last column.

The last keyword

To address this, Susy uses the last keyword. When you pass this to the span mixin it lets Susy know this is the last column in a row. This removes the right margin and also floats the element in question to the right to ensure it's at the very end of the row. Let's take the previous example where we would have two col-6 elements. We could create a class of col-6-last and apply the last keyword to that span mixin:

.col-6 {
    @include span(6 of 12);

    &-last {
        @include span(last 6 of 12);
    }
}

The preceding SCSS will output:

.col-6 {
    width: 49.15254%;
    float: left;
    margin-right: 1.69492%;
}
.col-6-last {
    width: 49.15254%;
    float: right;
    margin-right: 0;
}

You can also place the last keyword at the end. This will also work:

.col-6 {
    @include span(6 of 12);

    &-last {
        @include span(6 of 12 last);
    }
}

The $susy configuration map

Susy allows for a lot of configuration through its configuration map, which is defined as $susy. The settings in the $susy map allow us to set how wide the container should be, how many columns our grid should have, how wide the gutters are, whether those gutters should be margins or padding, and whether the gutters should be on the left, right, or both sides of each column. Actually, there are even more settings available depending on what type of grid you'd like to build. Let's define our $susy map with the container set to 1160px, just after our $breakpoints map:

// style.scss
$susy: (
    container: 1160px,
    columns: 12,
    gutters: 1/3
);

Here we've set our container's max-width to be 1160px. This is used when we use the container mixin without entering a value. We've also set our grid to be 12 columns, with the gutters (padding or margin) set to 1/3 the width of a column. That's about all we need to set for our purposes; however, Susy has a lot more to offer.
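To demystify where these percentages come from, here is a small sketch, in Python purely for illustration, of the float-grid arithmetic behind Susy's span() output. It is not Susy's actual implementation; it simply reproduces the numbers in this article. Note that the earlier col-6 example used Susy's default gutter ratio of 1/4, while our $susy map overrides it to 1/3.

```python
# Illustration only: reproduce the percentages Susy emits for a float grid.
# This is NOT Susy's implementation, just the underlying arithmetic.
from fractions import Fraction

def span_width(n, columns=12, gutter=Fraction(1, 4)):
    """Width of n columns plus the (n - 1) gutters between them, as a
    percentage of a row of `columns` columns with a gutter after each
    column except the last."""
    row = columns + (columns - 1) * gutter        # row width in column units
    return float((n + (n - 1) * gutter) / row) * 100

def gutter_width(columns=12, gutter=Fraction(1, 4)):
    """A single gutter as a percentage of the same row."""
    row = columns + (columns - 1) * gutter
    return float(gutter / row) * 100

# Susy's defaults (gutters: 1/4) give the span(6 of 12) output shown earlier:
print(round(span_width(6), 5))                         # 49.15254
print(round(gutter_width(), 5))                        # 1.69492
# A column plus its gutter overshoots half the row, hence the last keyword:
print(round(span_width(6) + gutter_width(), 5))        # 50.84746
# With this article's $susy settings (gutters: 1/3), a single column is:
print(round(span_width(1, gutter=Fraction(1, 3)), 5))  # 6.38298
```

If the numbers ever look off by a fraction of a percent, check which gutter ratio is in effect: Susy's default is 1/4, while this article's $susy map sets it to 1/3.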
In fact, to cover everything in Susy would need an entire book of its own. If you want to explore more of what Susy can do you should read the documentation at http://susydocs.oddbird.net/en/latest/.

Setting up a grid system

We've all used a 12 column grid which has various sizes (small, medium, large) or a set breakpoint (or breakpoints). These are the most popular methods for two reasons: they work, and they're easy to understand. Furthermore, with the help of Susy we can achieve this with less than 30 lines of Sass! Don't believe me? Let's begin.

The concept of our grid system

Our grid system will be similar to that of Foundation and Bootstrap. It will have 3 breakpoints and will be mobile-first. It will have a container, which will act as both .container and .row, therefore removing the need for a .row class.

The breakpoints

Earlier we defined three sizes in our $breakpoints map. These were:

$breakpoints: (
    sm: 480px,
    md: 768px,
    lg: 980px
);

So our grid will have small, medium and large breakpoints.

The columns naming convention

Our columns will use a similar naming convention to that of Bootstrap. There will be four available sets of columns:

The first will apply from 0px up to 479px (example: .col-12)
The next will apply from 480px up to 767px (example: .col-12-sm)
The medium will apply from 768px up to 979px (example: .col-12-md)
The large will apply from 980px (example: .col-12-lg)

Having four options will give us the most flexibility.

Building the grid

From here we can use an @for loop and our bp mixin to create our four sets of classes. Each will go from 1 through 12 (or whatever our Susy columns property is set to) and will use the breakpoints we defined for small (sm), medium (md) and large (lg). In style.scss add the following:

// style.scss
@for $i from 1 through map-get($susy, columns) {
    .col-#{$i} {
        @include span($i);

        &-last {
            @include span($i last);
        }
    }
}

These 9 lines of code are responsible for our mobile-first set of column classes.
This loops from one through 12 (which is currently the value of the $susy columns property) and creates a class for each. It also adds a class which removes the final column's right margin, so our last column doesn't wrap onto a new line. Handling this explicitly gives us the most control over when it happens. The preceding code would create:

.col-1 {
    width: 6.38298%;
    float: left;
    margin-right: 2.12766%;
}
.col-1-last {
    width: 6.38298%;
    float: right;
    margin-right: 0;
}
/* 2, 3, 4, and so on up to col-12 */

That means our loop, which is only 9 lines of Sass, will generate 144 lines of CSS! Now let's create our 3 breakpoints. We'll use an @each loop to get the sizes from our $breakpoints map. This means that if we add another breakpoint, such as extra-large (xl), it will automatically create the correct set of classes for that size:

@each $size, $value in $breakpoints {
    // Breakpoint will go here and will use $size
}

Here we're looping over the $breakpoints map and setting a $size variable and a $value variable. The $value variable will not be used; however, the $size variable will be set to small, medium and large for each respective loop. We can then use that to set our bp mixin accordingly:

@each $size, $value in $breakpoints {
    @include bp($size) {
        // The @for loop will go here, similar to the above @for loop...
    }
}

Now each loop will set a breakpoint for small, medium and large, and any additional sizes we might add in the future will be generated automatically. Now we can use the same @for loop inside the bp mixin with one small change: we'll add the size to the class name:

@each $size, $value in $breakpoints {
    @include bp($size) {
        @for $i from 1 through map-get($susy, columns) {
            .col-#{$i}-#{$size} {
                @include span($i);

                &-last {
                    @include span($i last);
                }
            }
        }
    }
}

That's everything we need for our grid system.
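As a sanity check on what these loops expand to, here is a quick mirror of them in Python, purely for illustration; the real work is done by the Sass above. It lists every selector the two loops generate and counts them.

```python
# Mirror the two Sass loops above to see which selectors they generate.
# Illustration only -- the actual CSS is produced by Sass, not this script.

breakpoints = {"sm": "480px", "md": "768px", "lg": "980px"}
columns = 12

selectors = []

# Mobile-first set: .col-1 .. .col-12 plus their -last variants.
for i in range(1, columns + 1):
    selectors.append(f".col-{i}")
    selectors.append(f".col-{i}-last")

# One extra set per breakpoint: .col-1-sm .. .col-12-lg-last.
for size in breakpoints:
    for i in range(1, columns + 1):
        selectors.append(f".col-{i}-{size}")
        selectors.append(f".col-{i}-{size}-last")

print(len(selectors))                # 96 selectors in total
print(selectors[0], selectors[-1])   # .col-1 .col-12-lg-last
```

With each ruleset being several lines of CSS, it is easy to see how 96 selectors balloon into the several hundred lines of output discussed below; adding an xl breakpoint to the map would add another 24 automatically.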
Here's the full style.scss file:

// style.scss
@import 'bower_components/susy/sass/susy';
@import 'helpers';

$breakpoints: (
    sm: 480px,
    md: 768px,
    lg: 980px
);

$susy: (
    container: 1160px,
    columns: 12,
    gutters: 1/3
);

.container {
    @include container;
}

@for $i from 1 through map-get($susy, columns) {
    .col-#{$i} {
        @include span($i);

        &-last {
            @include span($i last);
        }
    }
}

@each $size, $value in $breakpoints {
    @include bp($size) {
        @for $i from 1 through map-get($susy, columns) {
            .col-#{$i}-#{$size} {
                @include span($i);

                &-last {
                    @include span($i last);
                }
            }
        }
    }
}

With our bp mixin, that's 45 lines of SCSS. And how many lines of CSS does that generate? Nearly 600 lines of CSS! Also, like I've said, if we wanted to create another breakpoint it would only require a change to the $breakpoints map. Then, if we wanted to have 16 columns instead, we would only need to change the $susy columns property. The above code would then automatically loop over each and create the correct number of columns for each breakpoint.

Testing our grid

Next we need to check our grid works. We mainly want to check a few column sizes for each breakpoint, and we want to be sure our last keyword is doing what we expect. I've created a simple piece of HTML to do this. I've also added a small bit of CSS to the file to correct box-sizing issues which will happen because of the additional 1px border. I've also restricted the height so text which wraps to a second line won't affect the heights. This is simply so everything remains in line, making it easy to see that our widths are working. I don't recommend setting heights on elements. EVER. Instead use padding or line-height if you can to give an element more height, and let the content dictate the size of the element.
Create a file called index.html in the root of the project and inside add the following:

<!doctype html>
<html lang="en-GB">
<head>
    <meta charset="UTF-8">
    <title>Susy Grid Test</title>
    <link rel="stylesheet" type="text/css" href="style.css" />
    <style type="text/css">
        *, *::before, *::after {
            box-sizing: border-box;
        }
        [class^="col"] {
            height: 1.5em;
            background-color: grey;
            border: 1px solid black;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>Grid</h1>
        <div class="col-12 col-10-sm col-2-md col-10-lg">.col-10-sm.col-2-md.col-10-lg</div>
        <div class="col-12 col-2-sm-last col-10-md-last col-2-lg-last">.col-2-sm-last.col-10-md-last.col-2-lg-last</div>
        <div class="col-12 col-9-sm col-3-md col-9-lg">.col-9-sm.col-3-md.col-9-lg</div>
        <div class="col-12 col-3-sm-last col-9-md-last col-3-lg-last">.col-3-sm-last.col-9-md-last.col-3-lg-last</div>
        <div class="col-12 col-8-sm col-4-md col-8-lg">.col-8-sm.col-4-md.col-8-lg</div>
        <div class="col-12 col-4-sm-last col-8-md-last col-4-lg-last">.col-4-sm-last.col-8-md-last.col-4-lg-last</div>
        <div class="col-12 col-7-sm col-5-md col-7-lg">.col-7-sm.col-5-md.col-7-lg</div>
        <div class="col-12 col-5-sm-last col-7-md-last col-5-lg-last">.col-5-sm-last.col-7-md-last.col-5-lg-last</div>
        <div class="col-12 col-6-sm col-6-md col-6-lg">.col-6-sm.col-6-md.col-6-lg</div>
        <div class="col-12 col-6-sm-last col-6-md-last col-6-lg-last">.col-6-sm-last.col-6-md-last.col-6-lg-last</div>
    </div>
</body>
</html>

Use your dev tools' responsive mode, or simply resize the browser from full size down to around 320px, and you'll see our grid works as expected.

Summary

In this article we used Susy grids as well as a simple breakpoint mixin (bp) to create a solid, flexible grid system. With just under 50 lines of Sass we generated a grid system consisting of almost 600 lines of CSS.
Resources for Article:

Further resources on this subject:
Implementation of SASS [article]
Use of Stylesheets for Report Designing using BIRT [article]
CSS Grids for RWD [article]
Key Elements of Time Series Analysis

Packt
08 Aug 2016
7 min read
In this article by Jay Gendron, author of the book Introduction to R for Business Intelligence, we will look at time series analysis, often considered the most difficult analysis technique. It is true that this is a challenging topic. However, one may also argue that an introductory awareness of a difficult topic is better than perfect ignorance about it. Time series analysis is a technique designed to look at chronologically ordered data that may form cycles over time. Key topics covered in this article include the following:

Introducing key elements of time series analysis

(For more resources related to this topic, see here.)

Use Case: forecasting future ridership

The finance group approached the BI team and asked for help with forecasting future trends. They heard about your great work for the marketing team and wanted to get your perspective on their problem. Once a year they prepare an annual report that includes ridership details. They are hoping to include not only last year's ridership levels, but also a forecast of ridership levels in the coming year. These types of time-based predictions are forecasts. The Ch6_ridership_data_2011-2012.csv data file is available at the website: http://jgendron.github.io/com.packtpub.intro.r.bi/. This data is a subset of the bike sharing data. It contains two years of observations, including the date and a count of users by hour.

Introducing key elements of time series analysis

You just applied a linear regression model to time series data and saw it did not work.
The biggest problem was not a failure in fitting a linear model to the trend. For this well-behaved time series, the average formed a linear plot over time. Where was the problem? The problem was in the seasonal fluctuations. The seasonal fluctuations were one year in length and then repeated. Most of the data points existed above and below the fitted line, instead of on it or near it. As we saw, the ability to make a point estimate prediction was poor. There is an old adage that says even a broken clock is correct twice a day. This is a good analogy for analyzing seasonal time series data with linear regression: the fitted line would be a good predictor twice every cycle. You will need to do something about the seasonal fluctuations in order to make better forecasts; otherwise, they will simply be straight lines with no account of the seasonality.

With seasonality in mind, there are functions in R that can break apart the trend, seasonality, and random components of a time series. The decompose() function found in the forecast package shows how each of these three components influences the data. You can think of this technique as being similar to creating the correlogram plot during exploratory data analysis. It captures a greater understanding of the data in a single plot:

library(forecast)
plot(decompose(airpass))

This decomposition capability is nice as it gives you insights about approaches you may want to take with the data. With reference to the decomposition output, the panels are described as follows:

The top panel provides a view of the original data for context.
The next panel shows the trend. It smooths the data and removes the seasonal component. In this case, you will see that over time, air passenger volume has increased steadily and in the same direction.
The third plot shows the seasonal component. Removing the trend helps reveal any seasonal cycles.
This data shows a regular and repeated seasonal cycle through the years. The final plot is the randomness: everything else in the data. It is like the error term in linear regression. You will see less error in the middle of the series.

The stationary assumption

There is an assumption for creating time series models: the data must be stationary. Stationary data exists when its mean and variance do not change as a function of time. If you decompose a time series and witness a trend, seasonal component, or both, then you have non-stationary data. You can transform them into stationary data in order to meet the required assumption. Using a linear model for comparison, there is randomness around a mean, represented by data points scattered randomly around a fitted line. The data is independent of time and it does not follow other data in a cycle. This means that the data is stationary. Not all the data lies on the fitted line, but it is not moving. In order to analyze time series data, you need your data points to stay still. Imagine trying to count a class of primary school students while they are on the playground during recess. They are running about back and forth. In order to count them, you need them to stay still, that is, be stationary. Transforming non-stationary data into stationary data allows you to analyze it. You can transform non-stationary data into stationary data using a technique called differencing.

The differencing techniques

Differencing subtracts each data point from the data point that is immediately in front of it in the series. This is done with the diff() function. Mathematically, it works as follows:

diff(t) = y(t) - y(t - 1)

Seasonal differencing is similar, but it subtracts each data point from its related data point in the next cycle. This is done with the diff() function, along with a lag parameter set to the number of data points in a cycle. Mathematically, it works as follows:

diff(t) = y(t) - y(t - lag)

Look at the results of differencing in this toy example.
Build a small sample dataset of 36 data points that include an upward trend and a seasonal component, as shown here:

seq_down <- seq(.625, .125, -0.125)
seq_up <- seq(0, 1.5, 0.25)
y <- c(seq_down, seq_up,
       seq_down + .75, seq_up + .75,
       seq_down + 1.5, seq_up + 1.5)

Then, plot the original data and the results obtained after calling the diff() function. We also detach the TSA package to avoid conflicts with functions in the forecast library we will use:

par(mfrow = c(1, 3))
plot(y, type = "b", ylim = c(-.1, 3))
plot(diff(y), ylim = c(-.1, 3), xlim = c(0, 36))
plot(diff(diff(y), lag = 12), ylim = c(-.1, 3), xlim = c(0, 36))
par(mfrow = c(1, 1))
detach(package:TSA, unload = TRUE)

These three panes show the results of differencing and seasonal differencing. They are described as follows:

The left pane shows n = 36 data points, with 12 points in each of the three cycles. It also shows a steadily increasing trend. Either of these characteristics breaks the stationary data assumption.
The center pane shows the results of differencing. Plotting the difference between each point and its next neighbor removes the trend. Also, notice that you get one less data point. With differencing, you get (n - 1) results.
The right pane shows seasonal differencing with a lag of 12, applied on top of the trend differencing from the center pane. The data is stationary. Also note that you will lose a cycle of data, getting (n - lag) results.

Summary

Congratulations, you truly deserve recognition for getting through a very tough topic. You now have more awareness about time series analysis than some people with formal statistical training.

Resources for Article:

Further resources on this subject:
Managing Oracle Business Intelligence [article]
Self-service Business Intelligence, Creating Value from Data [article]
Business Intelligence and Data Warehouse Solution - Architecture and Design [article]
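For readers without R handy, the same differencing arithmetic can be sketched in plain Python. Note one simplification: the R code applies the lag-12 difference on top of the first difference, while this sketch applies the lag-12 difference directly to y, which is enough to show why the result is stationary.

```python
# The differencing arithmetic from the R example, in plain Python.
# Illustration only; in R this is simply diff(y) and diff(y, lag = 12).

def diff(series, lag=1):
    """Subtract each point from the point `lag` positions ahead of it."""
    return [series[i + lag] - series[i] for i in range(len(series) - lag)]

# Rebuild the toy dataset: three 12-point cycles with a +0.75 shift per cycle.
seq_down = [0.625 - 0.125 * i for i in range(5)]   # 0.625 .. 0.125
seq_up = [0.25 * i for i in range(7)]              # 0.0 .. 1.5
cycle = seq_down + seq_up                          # one 12-point cycle
y = cycle + [v + 0.75 for v in cycle] + [v + 1.5 for v in cycle]

print(len(y))                 # 36
print(len(diff(y)))           # 35 -- differencing loses one point: (n - 1)
print(len(diff(y, lag=12)))   # 24 -- seasonal differencing loses a cycle: (n - lag)

# Every point sits exactly 0.75 above its counterpart one cycle earlier,
# so the lag-12 difference collapses to a flat (stationary) series:
print(set(diff(y, lag=12)))   # {0.75}
```

Because both the trend and the seasonal pattern repeat exactly in this toy data, the seasonal difference is constant; on real data it would be a noisy but trend-free series.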