How to build a Remote-controlled TV with Node-Webkit

Roberto González
08 Jul 2015
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux using just HTML, CSS, and some JavaScript: the exact same languages you use to build any web app. You basically get your very own frameless WebKit window in which to build your app, supercharged with Node.js, which gives you access to powerful libraries that are not available in a typical browser.

As a demo, we are going to build a remote-controlled YouTube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch.

You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment, and then run run.sh (on Mac) or run.bat (on Windows) to start the app.

Getting started

First of all, you need to install Node.js (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with npm (the Node.js package manager), which lets you install everything you need for this project.

Since we are going to be building two apps (a desktop app and a mobile app), it's better to get the boring HTML+CSS part out of the way first, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project's folder youtube-tv or whatever you want. The folder should look like this:

```
- index.html    // This is the starting point for our desktop app
- css           // Our desktop app styles
- js            // This is where the magic happens
- remote        // This is where the magic happens (Part 2)
- libraries     // FFMPEG libraries, which give you H.264 video support in Node-Webkit
- player        // Our YouTube player
- Gruntfile.js  // Build scripts
- run.bat       // run.bat runs the app on Windows
- run.sh        // sh run.sh runs the app on Mac
```

Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install.

On Mac or Linux:

```
sudo npm install node-gyp -g
sudo npm install grunt-cli -g
```

On Windows:

```
npm install node-gyp -g
npm install grunt-cli -g
```

Leave the Terminal open. We'll be using it again in a bit.

All Node.js apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:

```json
{
  "//": "The // keys in package.json are comments.",

  "//": "Your project's name. Go ahead and change it!",
  "name": "Remote",

  "//": "A simple description of what the app does.",
  "description": "An example of node-webkit",

  "//": "This is the first html the app will load. Just leave it this way.",
  "main": "app://host/index.html",

  "//": "The version number. 0.0.1 is a good start :D",
  "version": "0.0.1",

  "//": "This is used by Node-Webkit to set up your app.",
  "window": {
    "//": "The window title for the app",
    "title": "Remote",
    "//": "The icon for the app",
    "icon": "css/images/icon.png",
    "//": "Do you want the File/Edit/Whatever toolbar?",
    "toolbar": false,
    "//": "Do you want a standard window around your app (a title bar and some borders)?",
    "frame": true,
    "//": "Can you resize the window?",
    "resizable": true
  },
  "webkit": {
    "plugin": false,
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
  },

  "//": "These are the libraries we'll be using:",
  "//": "Express is a web server, which will handle the files for the remote",
  "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
  "dependencies": {
    "express": "^4.9.5",
    "socket.io": "^1.1.0"
  },

  "//": "And these are just task handlers to make things easier",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-copy": "^0.6.0",
    "grunt-node-webkit-builder": "^0.1.21"
  }
}
```

You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it's mostly boilerplate code.

Once you've set everything up, go back to the Terminal and install everything you need by typing:

```
npm install
grunt nodewebkitbuild
```

You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild.

npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-node-webkit-builder. grunt nodewebkitbuild downloads the Windows and Mac versions of node-webkit, sets them up so they can play videos, and builds the app. Wait a bit for everything to install properly and we're ready to get started.

Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.

Building the desktop app

All web apps (or websites, for that matter) start with an index.html file. We are going to create just that to get our app to run:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"/>
  <title>Youtube TV</title>
  <link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css'/>
  <link href="css/normalize.css" rel="stylesheet" type="text/css"/>
  <link href="css/styles.css" rel="stylesheet" type="text/css"/>
</head>
<body>
  <div id="serverInfo">
    <h1>Youtube TV</h1>
  </div>
  <div id="videoPlayer">
  </div>
  <script src="js/jquery-1.11.1.min.js"></script>
  <script src="js/youtube.js"></script>
  <script src="js/app.js"></script>
</body>
</html>
```

As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a YouTube video player, and finally app.js, which contains our app's logic. Let's dive into that!

First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search YouTube, select a video, and offer some play/pause controls, so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:

```javascript
// Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
// Uncomment it to open it automatically
//require('nw.gui').Window.get().showDevTools();

// Express is a web server, which will allow us to create a small web app
// with which to control the player
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

// We'll be opening up our web server on port 8080 (which doesn't require root privileges)
// You can access this server at http://127.0.0.1:8080
var serverPort = 8080;
server.listen(serverPort);

// All the static files (css, js, html) for the remote will be served using Express.
// These assets are in the /remote folder
app.use('/', express.static('remote'));
```

With those seven lines of code (not counting comments) we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:

```javascript
// Socket.io handles the communication between the remote and our app in real time,
// so we can instantly send commands from a computer to our remote and back
io.on('connection', function (socket) {

  // When a remote connects to the app, let it know immediately
  // the current status of the video (play/pause)
  socket.emit('statusChange', Youtube.status);

  // This is what happens when we receive the watchVideo command
  // (picking a video from the list)
  socket.on('watchVideo', function (video) {
    // video contains a bit of info about our video (id, title, thumbnail)
    // Order our Youtube Player to watch that video
    Youtube.watchVideo(video);
  });

  // These are playback controls. They receive the "play" and "pause" events from the remote
  socket.on('play', function () {
    Youtube.playVideo();
  });

  socket.on('pause', function () {
    Youtube.pauseVideo();
  });

});

// Notify all the remotes when the playback status changes (play/pause)
// This is done with io.emit, which sends the same message to all the remotes
Youtube.onStatusChange = function (status) {
  io.emit('statusChange', status);
};
```

That's the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handle some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect, so they can update their UI with the correct buttons (if a video is playing, show the pause button, and vice versa).

Now we just need to build the remote.

Building the remote control

The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app.
In remote/index.html, add the following HTML:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"/>
  <title>TV Remote</title>
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"/>
  <link rel="stylesheet" href="/css/normalize.css"/>
  <link rel="stylesheet" href="/css/styles.css"/>
</head>
<body>
  <div class="controls">
    <div class="search">
      <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..."/>
    </div>
    <div class="playback">
      <button class="play">&gt;</button>
      <button class="pause">||</button>
    </div>
  </div>
  <div id="results" class="video-list">
  </div>
  <div class="__templates" style="display:none;">
    <article class="video">
      <figure><img src="" alt=""/></figure>
      <div class="info">
        <h2></h2>
      </div>
    </article>
  </div>
  <script src="/socket.io/socket.io.js"></script>
  <script src="/js/jquery-1.11.1.min.js"></script>
  <script src="/js/search.js"></script>
  <script src="/js/remote.js"></script>
</body>
</html>
```

Again, we have a few libraries: socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there. search.js manages the integration with the YouTube API (you can take a look if you want), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on YouTube, and when we click on a video it connects with the app using socket.emit, telling it to play that video. Let's dive into remote/js/remote.js to make this thing work:

```javascript
// First of all, connect to the server (our desktop app)
var socket = io.connect();

// Search YouTube when the user stops typing. This gives us an automatic search.
var searchTimeout = null;
$('#searchQuery').on('keyup', function (event) {
  clearTimeout(searchTimeout);
  searchTimeout = setTimeout(function () {
    searchYoutube($('#searchQuery').val());
  }, 500);
});

// When we click on a video, watch it on the app
$('#results').on('click', '.video', function (event) {
  // Send an event to notify the server we want to watch this video
  socket.emit('watchVideo', $(this).data());
});

// When the server tells us that the player changed status (play/pause),
// alter the playback controls
socket.on('statusChange', function (status) {
  if (status === 'play') {
    $('.playback .pause').show();
    $('.playback .play').hide();
  } else if (status === 'pause' || status === 'stop') {
    $('.playback .pause').hide();
    $('.playback .play').show();
  }
});

// Notify the app when we hit the play button
$('.playback .play').on('click', function (event) {
  socket.emit('play');
});

// Notify the app when we hit the pause button
$('.playback .pause').on('click', function (event) {
  socket.emit('pause');
});
```

This is very similar to our server, except that we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls.

The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac:

```
sh run.sh
```

If you are on Windows:

```
run.bat
```

If everything worked properly, you should see the app, and if you open a web browser to http://127.0.0.1:8080, the remote client will open up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.
Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone). In js/app.js, add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

```javascript
// Find the local IP
function getLocalIP(callback) {
  require('dns').lookup(
    require('os').hostname(),
    function (err, add, fam) {
      typeof callback == 'function' ? callback(add) : null;
    });
}

// To make things easier, find out the machine's IP and communicate it
getLocalIP(function (ip) {
  $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort +
    '</strong><br/>to open the remote');
});
```

The next time you run the app, the first thing you'll see is the IP for your computer, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as it is on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app in fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.

Integrating Google Play Services

Packt
08 Jul 2015
41 min read
In this article, Integrating Google Play Services, by Raul Portales, author of the book Mastering Android Game Development, we will cover the tools that Google Play Services offers to game developers. We'll see the integration of achievements and leaderboards in detail, take an overview of events and quests, and look at saved games and turn-based and real-time multiplayer.

Google provides Google Play Services as a way to use special features in apps; the game services subset is the one that interests us the most. Note that Google Play Services is updated as an app that is independent from the operating system. This allows us to assume that most players will have the latest version of Google Play Services installed, and more and more features are being moved from the Android SDK to Play Services because of this.

Play Services offers much more than just services for games, but there is a whole section dedicated exclusively to games: Google Play Game Services (GPGS). These features include achievements, leaderboards, quests, saved games, gifts, and even multiplayer support.

GPGS also comes with a standalone app called Play Games that shows users the games they have been playing, their latest achievements, and the games their friends play. It is a very interesting way to get exposure for your game.

Even as a standalone feature, achievements and leaderboards are two concepts that most games use nowadays, so why make your own custom ones when you can rely on the ones made by Google?

GPGS can be used on many platforms: Android, iOS, and the web, among others. It is most used on Android, since it is included as a part of Google apps.

There is extensive step-by-step documentation online, but the details are scattered over different places. We will put them together here and link you to the official documentation for more detailed information.

For this article, you are expected to have a developer account and access to the Google Play Developer Console. It is also advisable for you to know the process of signing and releasing an app. If you are not familiar with it, there is very detailed official documentation at http://developer.android.com/distribute/googleplay/start.html.

There are two sides to GPGS: the developer console and the code. We will alternate between one and the other while talking about the different features.

Setting up the developer console

Now that we are approaching the release state, we have to start working with the developer console. The first thing we need to do is get into the Game services section of the console to create and configure a new game. In the left menu, there is an option labeled Game services; this is where you have to click. Once in the Game services section, click on Add new game.

This brings us to the setup dialog. If you are using other Google services, such as Google Maps or Google Cloud Messaging (GCM), in your game, you should select the second option and move forward. Otherwise, you can just fill in the fields for "I don't use any Google APIs in my game yet" and continue. If you don't know whether you are already using them, you probably aren't.

Now, it is time to link a game to it. I recommend you publish your game beforehand as an alpha release. This will let you select it from the list when you start typing the package name. Publishing the game to the alpha channel before adding it to Game services makes it much easier to configure.
If you are not familiar with signing and releasing your app, check out the official documentation at http://developer.android.com/tools/publishing/app-signing.html.

Finally, there are only two steps we have to take when we link the first app: we need to authorize it and provide branding information. The authorization will generate an OAuth key (which we don't need to use here, since it is required for other platforms) and also a game ID. This ID is unique to all the linked apps and we will need it to log in. There is no need to write it down now; it can be found easily in the console at any time.

Note that the app we have added is configured with the release key. If you continue and try the login integration, you will get an error telling you that the app was signed with the wrong certificate. You have two ways to work around this limitation:

- Always make a release build to test GPGS integration
- Add your debug-signed game as a linked app

I recommend that you add the debug-signed app as a linked app. To do this, we just need to link another app and configure it with the SHA1 fingerprint of the debug key. To obtain it, we have to open a terminal and run the keytool utility:

```
keytool -exportcert -alias androiddebugkey -keystore <path-to-debug-keystore> -list -v
```

Note that on Windows, the debug keystore can be found at C:\Users\<USERNAME>\.android\debug.keystore. On Mac and Linux, the debug keystore is typically located at ~/.android/debug.keystore.

[Image: Dialog to link the debug application on the Game Services console]

Now we have the game configured. We could continue creating achievements and leaderboards in the console, but we will put that aside and make sure that we can sign in and connect with GPGS.

The only users who can sign in to GPGS while a game is not published are the testers. You can make the alpha and/or beta testers of a linked app become testers of the game services, and you can also add e-mail addresses by hand. You can modify this in the Testing tab. Only test accounts can access a game that is not published. The e-mail of the owner of the developer console is prefilled as a tester, so in case you have problems logging in, double-check the list of testers.

A game service that is not published will not appear in the feed of the Play Games app, but it can still be tested and modified. This is why it is a good idea to keep it in draft mode until the game itself is ready, and publish both the game and the game services at the same time.

Setting up the code

The first thing we need to do is add the Google Play Services library to our project. This should already have been done by the wizard when we created the project, but I recommend you double-check it now. The library needs to be added to the build.gradle file of the main module. Note that Android Studio projects contain a top-level build.gradle and a module-level build.gradle for each module. We will modify the one that is under the mobile module. Make sure that the play services library is listed under dependencies:

```groovy
apply plugin: 'com.android.application'

dependencies {
    compile 'com.android.support:appcompat-v7:22.1.1'
    compile 'com.google.android.gms:play-services:7.3.0'
}
```

At the time of writing, the latest version is 7.3.0. The basic features have not changed much and they are unlikely to change. You could force Gradle to use a specific version of the library, but in general I recommend you use the latest version.
Once you have it, save the changes and click on Sync Project with Gradle Files.

To be able to connect with GPGS, we need to let the game know what the game ID is. This is done through the <meta-data> tag in AndroidManifest.xml. You could hardcode the value here, but it is highly recommended that you set it as a resource in your Android project. We are going to create a new file for this under res/values, which we will name play_services.xml. In this file we will put the game ID, and later also the achievement and leaderboard IDs. Using a separate file for these values is recommended because they are constants that do not need to be translated:

```xml
<application>
  <meta-data android:name="com.google.android.gms.games.APP_ID"
      android:value="@string/app_id" />
  <meta-data android:name="com.google.android.gms.version"
      android:value="@integer/google_play_services_version"/>
  [...]
</application>
```

Adding this metadata is extremely important. If you forget to update AndroidManifest.xml, the app will crash when you try to sign in to Google Play services. Note that the integer for the gms version is defined in the library, and we do not need to add it to our file. If you forget to add the game ID to the strings, the app will crash.

Now, it is time to proceed to sign in. The process is quite tedious and requires many checks, so Google has released an open source project named BaseGameUtils, which makes it easier. Unfortunately, this project is not a part of the play services library, and it is not even available as a library, so we have to get it from GitHub (either check it out or download the source as a ZIP file).

BaseGameUtils abstracts us from the complexity of handling the connection with Play Services. Even more cumbersome, BaseGameUtils is not available as a standalone download; it has to be downloaded together with another project. The fact that this significant piece of code is not a part of the official library makes it quite tedious to set up. Why it has been done like this is something that I do not comprehend myself.

The project that contains BaseGameUtils is called android-basic-samples, and it can be downloaded from https://github.com/playgameservices/android-basic-samples.

Adding BaseGameUtils is not as straightforward as we would like it to be. Once android-basic-samples is downloaded, open your game project in Android Studio. Click on File > Import Module and navigate to the directory where you downloaded android-basic-samples. Select the BaseGameUtils module in the BasicSamples/libraries directory and click on OK.

Finally, update the dependencies in the build.gradle file for the mobile module and sync Gradle again:

```groovy
dependencies {
    compile project(':BaseGameUtils')
    [...]
}
```

After all these steps to set up the project, we are finally ready to begin the sign-in. We will make our main Activity extend from BaseGameActivity, which takes care of all the handling of the connections and signing in with Google Play Services.

One more detail: until now, we were using Activity and not FragmentActivity as the base class for YassActivity (BaseGameActivity extends from FragmentActivity), and this change will mess with the behavior of our dialogs while calling navigateBack. We can either change the base class of BaseGameActivity or modify navigateBack to perform a pop on the fragment navigation hierarchy.
I recommend the second approach:

```java
public void navigateBack() {
    // Do a pop on the navigation history
    getFragmentManager().popBackStack();
}
```

This util class has been designed to work with single-activity games. It can be used with multiple activities, but it is not straightforward. This is another good reason to keep the game in a single activity: BaseGameUtils is designed to be used in single-activity games.

The default behavior of BaseGameActivity is to try to log in each time the Activity is started. If the user agrees to sign in, the sign-in will happen automatically. But if the user declines, he or she will be asked again several times. I personally find this intrusive and annoying, and I recommend you only prompt the user to log in to Google Play services once (and again if the user logs out). We can always provide a login entry point in the app. This is very easy to change. The default number of attempts is set to 3 and is part of the code of GameHelper:

```java
// Should we start the flow to sign the user in automatically on startup?
// If so, up to how many times in the life of the application?
static final int DEFAULT_MAX_SIGN_IN_ATTEMPTS = 3;
int mMaxAutoSignInAttempts = DEFAULT_MAX_SIGN_IN_ATTEMPTS;
```

So, we just have to configure it for our activity, adding one line of code during onCreate to replace the default behavior with the one we want: just try once.

```java
getGameHelper().setMaxAutoSignInAttempts(1);
```

Finally, there are two methods that we can override to act when the user successfully logs in and when there is a problem: onSignInSucceeded and onSignInFailed. We will use them when we update the main menu at the end of the article.

Further use of GPGS is made via the GameHelper and/or the GoogleApiClient, which is a part of the GameHelper. We can obtain a reference to the GameHelper using the getGameHelper method of BaseGameActivity.

Now that the user can sign in to Google Play services, we can continue with achievements and leaderboards. Let's go back to the developer console.

Achievements

We will first define a few achievements in the developer console and then see how to unlock them in the game. Note that to publish any game with GPGS, you need to define at least five achievements. No other feature is mandatory, but achievements are. If you want to use GPGS with a game that has no achievements, I recommend you add five dummy secret achievements and let them be.

To add an achievement, we just need to navigate to the Achievements tab on the left and click on Add achievement. The menu to add a new achievement has a few fields that are mostly self-explanatory. They are as follows:

- Name: the name that will be shown (can be localized to different languages).
- Description: the description of the achievement to be shown (can also be localized to different languages).
- Icon: the icon of the achievement as a 512x512 px PNG image. This will be used to show the achievement in the list, and also to generate the locked image and the in-game popup shown when it is unlocked.
- Incremental achievements: if the achievement requires a set of steps to be completed, it is called an incremental achievement and can be shown with a progress bar. We will have an incremental achievement to illustrate this.
- Initial state: Revealed/Hidden, depending on whether we want the achievement to be shown or not.
  When an achievement is shown, the name and description are visible, so players know what they have to do to unlock it. A hidden achievement, on the other hand, is a secret and can be a fun surprise when unlocked. We will have two secret achievements.
- Points: GPGS allows each game 1,000 points to give out for unlocking achievements. These get converted to XP in the player profile on Google Play Games. This can be used to signal that some achievements are harder than others and therefore grant a bigger reward. You cannot change these once they are published, so if you plan to have more achievements in the future, plan ahead with the points.
- List order: the order in which the achievements are shown. It is not followed all the time, since in the Play Games app the unlocked achievements are shown before the locked ones, but it is still handy to be able to rearrange them.

[Image: Dialog to add an achievement on the developer console]

As we already decided, we will have five achievements in our game, and they will be as follows:

- Big Score: score over 100,000 points in one game. This is to be granted while playing.
- Asteroid killer: destroy 100 asteroids. This counts asteroids across different games and is an incremental achievement.
- Survivor: survive for 60 seconds.
- Target acquired: a hidden achievement. Hit 20 asteroids in a row without missing a shot. This is meant to reward players who only shoot when they should.
- Target lost: this is supposed to be a funny achievement, granted when you miss with 10 bullets in a row. It is also hidden, because otherwise it would be too easy to unlock.

So, we created some images for them and added them to the console.

[Image: The developer console with all the configured achievements]

Each achievement has a string ID. We will need these IDs to unlock the achievements in our game, but Google has made it easy for us: there is a link at the bottom named Get resources that pops up a dialog with the string resources we need. We can just copy them from there and paste them into the play_services.xml file we have already created in our project.

Architecture

For our game, given that we only have five achievements, we are going to add the code for achievements directly into ScoreGameObject. This means less code for you to read, so we can focus on how it is done. However, for real production code I recommend you define a dedicated architecture for achievements.

The recommended architecture is to have an AchievementsManager class that loads all the achievements when the game starts and stores them in three lists:

- All achievements
- Locked achievements
- Unlocked achievements

Then, we have an Achievement base class with an abstract check method that we implement for each one of them:

```java
public abstract boolean check(GameEngine gameEngine, GameEvent gameEvent);
```

This base class takes care of loading the achievement state from local storage (I recommend using SharedPreferences for this) and modifying it based on the result of check. The achievements check is done at the AchievementsManager level using a checkLockedAchievements method that iterates over the list of achievements that can still be unlocked. This method should be called as a part of onEventReceived of GameEngine. This architecture lets you check only the achievements that are yet to be unlocked, and keeps all the achievements included in the game in one specific, dedicated place. A sketch of this architecture is shown below.
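The article describes this architecture but does not spell the classes out, so here is a minimal sketch under the description above. Everything beyond the names the text mentions (AchievementsManager, checkLockedAchievements, the abstract check method) is an assumption for illustration: the "achievements" SharedPreferences file, the unlock helper, and the gameEngine.getContext() accessor are all hypothetical.

```java
import android.content.Context;
import android.content.SharedPreferences;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Achievement.java: base class that persists its own state locally.
abstract class Achievement {

    private final String mId;   // The achievement ID from play_services.xml
    private boolean mUnlocked;  // Cached state, loaded from SharedPreferences

    Achievement(Context context, String id) {
        mId = id;
        // "achievements" is an illustrative preferences file name
        SharedPreferences prefs =
                context.getSharedPreferences("achievements", Context.MODE_PRIVATE);
        mUnlocked = prefs.getBoolean(mId, false);
    }

    // Each concrete achievement decides whether this event unlocks it
    abstract boolean check(GameEngine gameEngine, GameEvent gameEvent);

    boolean isUnlocked() {
        return mUnlocked;
    }

    void unlock(Context context) {
        mUnlocked = true;
        // Persist the new state so we stop checking this achievement
        context.getSharedPreferences("achievements", Context.MODE_PRIVATE)
                .edit().putBoolean(mId, true).apply();
    }
}

// AchievementsManager.java: keeps the three lists and checks only locked ones.
class AchievementsManager {

    private final List<Achievement> mAll = new ArrayList<>();
    private final List<Achievement> mLocked = new ArrayList<>();
    private final List<Achievement> mUnlocked = new ArrayList<>();

    // Called from onEventReceived of the GameEngine
    void checkLockedAchievements(GameEngine gameEngine, GameEvent gameEvent) {
        for (Iterator<Achievement> it = mLocked.iterator(); it.hasNext(); ) {
            Achievement achievement = it.next();
            if (achievement.check(gameEngine, gameEvent)) {
                // Assumes the engine exposes a Context; adapt to your engine
                achievement.unlock(gameEngine.getContext());
                it.remove();
                mUnlocked.add(achievement);
            }
        }
    }
}
```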
In our case, since we are keeping the score inside ScoreGameObject, we are going to add all the achievements code there. Note that making the GameEngine take care of the score, exposing it as a variable that other objects can read, is also a recommended design pattern, but it was simpler to do this as a part of ScoreGameObject.

Unlocking achievements

To handle achievements, we need access to an object of the class GoogleApiClient. We can get a reference to it in the constructor of ScoreGameObject:

```java
private final GoogleApiClient mApiClient;

public ScoreGameObject(YassBaseFragment parent, View view, int viewResId) {
    [...]
    mApiClient = parent.getYassActivity().getGameHelper().getApiClient();
}
```

The parent Fragment has a reference to the Activity, which has a reference to the GameHelper, which has a reference to the GoogleApiClient.

Unlocking an achievement requires just a single line of code, but we also need to check whether the user is connected to Google Play services before trying to unlock it. This is necessary because if the user has not signed in, an exception is thrown and the game crashes.

But this check is not enough. In the edge case when the user logs out manually from Google Play services (which can be done in the achievements screen), the connection is not closed and there is no way to know that he or she has logged out. We are going to create a utility method to unlock achievements that does all the checks, wraps the unlock call in a try/catch block, and makes the API client disconnect if an exception is raised:

```java
private void unlockSafe(int resId) {
    if (mApiClient.isConnecting() || mApiClient.isConnected()) {
        try {
            Games.Achievements.unlock(mApiClient, getString(resId));
        } catch (Exception e) {
            mApiClient.disconnect();
        }
    }
}
```

Even with all the checks, the code is still very simple. Let's work on the particular achievements we have defined for the game. Even though they are very specific, the methodology of tracking game events and variables and then checking which achievements to unlock is itself generic, and serves as a real-life example of how to deal with achievements.

The achievements we have designed require us to count some game events and also the running time. For the last two achievements, we need a new GameEvent for the case when a bullet misses, which we have not created until now. The code in the Bullet object to trigger this new GameEvent is as follows:

```java
@Override
public void onUpdate(long elapsedMillis, GameEngine gameEngine) {
    mY += mSpeedFactor * elapsedMillis;
    if (mY < -mHeight) {
        removeFromGameEngine(gameEngine);
        gameEngine.onGameEvent(GameEvent.BulletMissed);
    }
}
```

Now, let's work inside ScoreGameObject. We are going to have a method that checks achievements each time an asteroid is hit.
There are three achievements that can be unlocked when that event happens:

- Big score, because hitting an asteroid gives us points
- Target acquired, because it requires consecutive asteroid hits
- Asteroid killer, because it counts the total number of asteroids that have been destroyed

The code is like this:

```java
private void checkAsteroidHitRelatedAchievements() {
    if (mPoints > 100000) {
        // Unlock achievement
        unlockSafe(R.string.achievement_big_score);
    }
    if (mConsecutiveHits >= 20) {
        unlockSafe(R.string.achievement_target_acquired);
    }
    // Increment achievement of asteroids hit
    if (mApiClient.isConnecting() || mApiClient.isConnected()) {
        try {
            Games.Achievements.increment(mApiClient,
                getString(R.string.achievement_asteroid_killer), 1);
        } catch (Exception e) {
            mApiClient.disconnect();
        }
    }
}
```

We check the total points and the number of consecutive hits to unlock the corresponding achievements. The "Asteroid killer" achievement is a bit of a different case, because it is an incremental achievement. This type of achievement does not have an unlock method, but rather an increment method. Each time we increment the value, progress on the achievement is updated; once the progress reaches 100 percent, it is unlocked automatically. We just have to increment its value. This makes incremental achievements much easier to use than tracking the progress locally, but we still need to do all the checks we did for unlockSafe.

We are using a variable named mConsecutiveHits, which we have not initialized yet. This is done inside onGameEvent, which is also where the other hidden achievement, "Target lost", is checked. Some initialization for the "Survivor" achievement is done here as well:

```java
public void onGameEvent(GameEvent gameEvent) {
    if (gameEvent == GameEvent.AsteroidHit) {
        mPoints += POINTS_GAINED_PER_ASTEROID_HIT;
        mPointsHaveChanged = true;
        mConsecutiveMisses = 0;
        mConsecutiveHits++;
        checkAsteroidHitRelatedAchievements();
    } else if (gameEvent == GameEvent.BulletMissed) {
        mConsecutiveMisses++;
        mConsecutiveHits = 0;
        if (mConsecutiveMisses >= 20) {
            unlockSafe(R.string.achievement_target_lost);
        }
    } else if (gameEvent == GameEvent.SpaceshipHit) {
        mTimeWithoutDie = 0;
    }
    [...]
}
```

Each time we hit an asteroid, we increment the number of consecutive asteroid hits and reset the number of consecutive misses. Likewise, each time a bullet misses, we increment the number of consecutive misses and reset the number of consecutive hits.

As a side note, each time the spaceship is destroyed we reset the time without dying, which is used for "Survivor". But this is not the only place where the time without dying should be updated: we also have to reset it when the game starts, and update it inside onUpdate by adding the elapsed milliseconds:

```java
@Override
public void startGame(GameEngine gameEngine) {
    mTimeWithoutDie = 0;
    [...]
}

@Override
public void onUpdate(long elapsedMillis, GameEngine gameEngine) {
    mTimeWithoutDie += elapsedMillis;
    if (mTimeWithoutDie > 60000) {
        unlockSafe(R.string.achievement_survivor);
    }
}
```

So, once the game has been running for 60,000 milliseconds since it started or since the spaceship was last destroyed, we unlock the "Survivor" achievement. With this, we have all the code we need to unlock the achievements we have created for the game.
Let's finish this section with some comments on the system and the developer console:

- As a rule of thumb, you can edit most of the details of an achievement until you publish it to production.
- Once an achievement has been published, it cannot be deleted. You can only delete an achievement in its prepublished state; there is a button labeled Delete at the bottom of the achievement screen for this.
- You can also reset the progress of achievements while they are in draft. The reset happens for all players at once; there is a button labeled Reset achievement progress at the bottom of the achievement screen for this.

Also note that GameBaseActivity does a lot of logging, so if your device is connected to your computer and you run a debug build, you may see that it lags sometimes. This does not happen in a release build, from which the logging is removed.

Leaderboards

Since YASS has only one game mode and one score, it makes sense to have only one leaderboard on Google Play Game Services. Leaderboards are managed from their own tab inside the Game services area of the developer console. Unlike achievements, no leaderboard is mandatory for publishing your game. If your game has different levels of difficulty, you can have a leaderboard for each of them. The same applies if the game has several values that measure player progress: you can have a leaderboard for each of them.

[Image: Managing leaderboards on the Play Games console]

Leaderboards can be created and managed in the Leaderboards tab. When we click on Add leaderboard, we are presented with a form with several fields to fill in. They are as follows:

- Name: the display name of the leaderboard, which can be localized. We will simply call it High Scores.
- Score formatting: this can be Numeric, Currency, or Time. We will use Numeric for YASS.
- Icon: a 512x512 px icon to identify the leaderboard.
- Ordering: Larger is better / Smaller is better. We are going to use Larger is better, but other score types may be Smaller is better, as in a racing game.
- Enable tamper protection: this automatically filters out suspicious scores. You should keep this on.
- Limits: if you want to limit the score range that is shown on the leaderboard, you can do it here. We are not going to use this.
- List order: the order of the leaderboards. Since we only have one, it is not really important for us.

[Image: Setting up a leaderboard on the Play Games console]

Now that we have defined the leaderboard, it is time to use it in the game. As with achievements, there is a link where we can get all the resources for the game in XML, so we proceed to get the ID of the leaderboard and add it to the strings defined in the play_services.xml file.

We have to submit the score at the end of the game (that is, on a GameOver event), but also when the user exits a game via the pause button. To unify this, we will create a new GameEvent called GameFinished that is triggered after a GameOver event and after the user exits the game. We will update the stopGame method of GameEngine, which is called in both cases, to trigger the event:

```java
public void stopGame() {
    if (mUpdateThread != null) {
        synchronized (mLayers) {
            onGameEvent(GameEvent.GameFinished);
        }
        mUpdateThread.stopGame();
        mUpdateThread = null;
    }
    [...]
}
```

We have to set the update thread to null after sending the event, to prevent this code from running twice; otherwise, we could send each score more than once. As with achievements, submitting a score is very simple: just a single line of code.
But we also need to check that the GoogleApiClient is connected, and we still have the same edge case in which an Exception is thrown, so we need to wrap the call in a try/catch block. To keep everything in the same place, we will put this code inside ScoreGameObject:

```java
@Override
public void onGameEvent(GameEvent gameEvent) {
    [...]
    else if (gameEvent == GameEvent.GameFinished) {
        // Submit the score
        if (mApiClient.isConnecting() || mApiClient.isConnected()) {
            try {
                Games.Leaderboards.submitScore(mApiClient,
                    getLeaderboardId(), mPoints);
            } catch (Exception e) {
                mApiClient.disconnect();
            }
        }
    }
}

private String getLeaderboardId() {
    return mParent.getString(R.string.leaderboard_high_scores);
}
```

This is really straightforward. GPGS is now receiving our scores, and it uses the timestamp of each score to create daily, weekly, and all-time leaderboards. It also uses your Google+ circles to show the social scores of your friends. All this is done automatically for you.

The final missing piece is to let the player open the leaderboards and achievements UI from the main menu, and to trigger a sign-in if they are signed out.

Opening the Play Games UI

To complete the integration of achievements and leaderboards, we are going to add buttons to our main menu that open the native UI provided by GPGS. For this, we are going to place two buttons in the bottom-left corner of the screen, opposite the music and sound buttons. We will also check whether we are connected; if not, we will show a single sign-in button instead.

For these buttons we will use the official images of GPGS, which are available for developers to use. Note that you must follow the brand guidelines while using the icons: they must be displayed as they are and not be modified. This also provides a consistent look and feel across all the games that support Play Games. Since we have seen a lot of layouts already, we are not going to include another one that is almost the same as something we already have.

[Image: The main menu with the buttons to view achievements and leaderboards]

To handle these new buttons we will, as usual, set the MainMenuFragment as OnClickListener for the views. We do this in the same place as for the other buttons, inside onViewCreated:

```java
@Override
public void onViewCreated(View view, Bundle savedInstanceState) {
    super.onViewCreated(view, savedInstanceState);
    [...]
    view.findViewById(R.id.btn_achievements).setOnClickListener(this);
    view.findViewById(R.id.btn_leaderboards).setOnClickListener(this);
    view.findViewById(R.id.btn_sign_in).setOnClickListener(this);
}
```

As with achievements and leaderboards, the work is done using static methods that receive a GoogleApiClient object. We can get this object from the GameHelper that is part of BaseGameActivity, like this:

```java
GoogleApiClient apiClient = getYassActivity().getGameHelper().getApiClient();
```

To open the native UI, we have to obtain an Intent and then start an Activity with it. It is important that you use startActivityForResult, since some data is passed back and forth. To open the achievements UI, the code is like this:

```java
Intent achievementsIntent = Games.Achievements.getAchievementsIntent(apiClient);
startActivityForResult(achievementsIntent, REQUEST_ACHIEVEMENTS);
```

This works out of the box. It automatically grays out the icons of the locked achievements, adds a counter and progress bar to the one that is in progress, and a padlock to the hidden ones.
Similarly, to open the leaderboards UI we obtain an intent from the Games.Leaderboards class instead:

```java
Intent leaderboardsIntent = Games.Leaderboards.getLeaderboardIntent(
    apiClient, getString(R.string.leaderboard_high_scores));
startActivityForResult(leaderboardsIntent, REQUEST_LEADERBOARDS);
```

In this case, we are asking for a specific leaderboard, since we only have one. We could use getLeaderboardsIntent instead, which opens the Play Games UI with the list of all the leaderboards. So we can have an intent to open either the list of leaderboards or a specific one.

What remains to be done is to replace the two buttons with the login button when the user is not connected. For this, we will create a method that reads the state and shows and hides the views accordingly:

```java
private void updatePlayButtons() {
    GameHelper gameHelper = getYassActivity().getGameHelper();
    if (gameHelper.isConnecting() || gameHelper.isSignedIn()) {
        getView().findViewById(R.id.btn_achievements).setVisibility(View.VISIBLE);
        getView().findViewById(R.id.btn_leaderboards).setVisibility(View.VISIBLE);
        getView().findViewById(R.id.btn_sign_in).setVisibility(View.GONE);
    } else {
        getView().findViewById(R.id.btn_achievements).setVisibility(View.GONE);
        getView().findViewById(R.id.btn_leaderboards).setVisibility(View.GONE);
        getView().findViewById(R.id.btn_sign_in).setVisibility(View.VISIBLE);
    }
}
```

This method decides whether to show or hide the views based on the state. We will call it from the important state-changing methods:

- onLayoutCompleted: the first time we open the game, to initialize the UI.
- onSignInSucceeded: when the user successfully signs in to GPGS.
- onSignInFailed: this can be triggered when we auto sign in and there is no connection. It is important to handle it.
- onActivityResult: when we come back from the Play Games UI, in case the user has logged out.

But nothing is as easy as it looks. In fact, when the user signs out and does not exit the game, GoogleApiClient keeps the connection open, and therefore isSignedIn from GameHelper still returns true. This is the edge case we have been talking about all through the article.

As a result of this edge case, there is an inconsistency in the UI: it shows the achievements and leaderboards buttons when it should show the login one. When the user logs out from Play Games, GoogleApiClient keeps the connection open. This can lead to confusion.

Unfortunately, this has been marked as "works as expected" by Google. The reason is that the connection is still active, and it is our responsibility to parse the result in the onActivityResult method to determine the new state. But this is not very convenient. Since it is a rare case, we will just go for the easiest solution, which is to wrap the code in a try/catch block and make the user sign in if he or she taps on leaderboards or achievements while not logged in.
This is the code that handles a click on the achievements button; the code for leaderboards is equivalent:

```java
else if (v.getId() == R.id.btn_achievements) {
    try {
        GoogleApiClient apiClient =
            getYassActivity().getGameHelper().getApiClient();
        Intent achievementsIntent =
            Games.Achievements.getAchievementsIntent(apiClient);
        startActivityForResult(achievementsIntent, REQUEST_ACHIEVEMENTS);
    } catch (Exception e) {
        GameHelper gameHelper = getYassActivity().getGameHelper();
        gameHelper.disconnect();
        gameHelper.beginUserInitiatedSignIn();
    }
}
```

Basically, we have the old code to open the achievements activity, but we wrap it in a try/catch block. If an exception is raised, we disconnect the game helper and begin a new login using the beginUserInitiatedSignIn method.

It is very important to disconnect the gameHelper before we try to log in again; otherwise, the login will not work. We must disconnect from GPGS before we can log in using the method from the GameHelper.

Finally, there is the case when the user clicks on the login button, which just triggers the login using the beginUserInitiatedSignIn method from the GameHelper:

```java
if (v.getId() == R.id.btn_sign_in) {
    getYassActivity().getGameHelper().beginUserInitiatedSignIn();
}
```

Note that once you have published your game and the game services, achievements and leaderboards will not appear in the game description on Google Play straight away. They only show up after "a fair amount of users" have used them. You have done nothing wrong; you just have to wait.

Other features of Google Play services

Google Play Game Services provides more features for game developers than achievements and leaderboards. None of them really fits the game we are building, but it is useful to know they exist in case your game needs them. You can save yourself a lot of time and effort by using them instead of reinventing the wheel. The other features of Google Play Game Services are:

- Events and quests: these allow you to monitor game usage and progression, and add the possibility of creating time-limited events with rewards for the players.
- Gifts: as simple as it sounds, you can send a gift to other players or request one to be sent to you. Yes, this is the mechanic seen in the Facebook games popularized a while ago.
- Saved games: the standard concept of a saved game. If your game has progression or can unlock content based on user actions, you may want to use this feature. Since they are saved in the cloud, saved games can be accessed across multiple devices.
- Turn-based and real-time multiplayer: Google Play Game Services provides an API to implement turn-based and real-time multiplayer features without you needing to write any server code.

If your game is multiplayer and has an online economy, it may be worth making your own server and granting virtual currency only on the server to prevent cheating. Otherwise, it is fairly easy to crack the gifts/reward system, and a single person can ruin the complete game economy. However, if there is no online game economy, the benefits of gifts and quests may be more important than the fact that someone can hack them.

Let's take a look at each of these features.

Events

The events API provides us with a way to define and collect gameplay metrics and upload them to Google Play Game Services. This is very similar to the GameEvents we are already using in our game. Events should be a subset of the game events of our game.
Many of the game events we have are used internally as signals between objects or as a synchronization mechanism. These events are not really relevant outside the engine, but others could be. Those are the events we should send to GPGS.

To be able to send an event from the game to GPGS, we have to create it in the developer console first. To create an event, we go to the Events tab in the developer console, click on Add new event, and fill in the following fields:

- Name: a short name for the event, up to 100 characters. This value can be localized.
- Description: a longer description of the event, up to 500 characters. This value can also be localized.
- Icon: the icon for the event, in the standard 512x512 px size.
- Visibility: as for achievements, this can be revealed or hidden.
- Format: as for leaderboards, this can be Numeric, Currency, or Time.
- Event type: this is used to mark events that create or spend premium currency. It can be Premium currency sink, Premium currency source, or None.

In the game, events work pretty much like incremental achievements. You can increment an event counter using the following line of code:

```java
Games.Events.increment(mGoogleApiClient, myEventId, 1);
```

You can delete events that are in the draft state or that have been published, as long as the event is not in use by a quest. You can also reset the player progress data of your events for testers, as you can for achievements.

While events can be used as an analytics system, their real usefulness appears when they are combined with quests.

Quests

A quest is a challenge that asks players to complete an event a number of times during a specific time frame in order to receive a reward. Because a quest is linked to an event, to use quests you need to have created at least one event.

You can create a quest from the Quests tab in the developer console. A quest has the following fields to be filled in:

- Name: the short name of the quest, up to 100 characters. This can be localized.
- Description: a longer description of the quest, up to 500 characters. It should let players know what they need to do to complete the quest. The first 150 characters will be visible to players on cards such as those shown in the Google Play Games app.
- Icon: a square icon that will be associated with the quest.
- Banner: a rectangular image that will be used to promote the quest.
- Completion Criteria: the configuration of the quest itself. It consists of an event and the number of times the event must occur.
- Schedule: the start and end date and time of the quest. GPGS uses your local time zone, but stores the values as UTC; players will see these values in their local time zone. You can mark a checkbox to notify users when the quest is about to end.
- Reward Data: this is specific to each game. It can be a JSON object specifying the reward, and it is sent to the client when the quest is completed.
Once the quest is configured in the developer console, you can do two things with it in the game: display the list of quests, and process quest completions.

To get the list of quests, we start an activity with an intent that is provided to us via a static method, as usual:

```java
Intent questsIntent = Games.Quests.getQuestsIntent(mGoogleApiClient,
    Quests.SELECT_ALL_QUESTS);
startActivityForResult(questsIntent, QUESTS_INTENT);
```

To be notified when a quest is completed, all we have to do is register a listener:

```java
Games.Quests.registerQuestUpdateListener(mGoogleApiClient, this);
```

Once we have set the listener, the onQuestCompleted method will be called when a quest is completed. After processing the reward, the game should call claim to inform Play Game services that the player has claimed it. The following code snippet shows how you might override the onQuestCompleted callback:

```java
@Override
public void onQuestCompleted(Quest quest) {
    // Claim the quest reward.
    Games.Quests.claim(mGoogleApiClient, quest.getQuestId(),
        quest.getCurrentMilestone().getMilestoneId());
    // Process the RewardData to provision a specific reward.
    String reward = new String(
        quest.getCurrentMilestone().getCompletionRewardData(),
        Charset.forName("UTF-8"));
}
```

The rewards themselves are defined by the client. As mentioned before, this makes the game quite easy to crack to obtain rewards, but usually avoiding the hassle of writing your own server is worth it.

Gifts

The gifts feature of GPGS allows us to send gifts to other players and to request them to send us one as well. This is intended to make gameplay more collaborative and to improve the social aspect of the game. As with other GPGS features, there is a built-in UI provided by the library that can be used, in this case to send and request gifts of in-game items and resources to and from friends in the players' Google+ circles. The request system can make use of notifications.

There are two types of requests that players can send using the game gifts feature of Google Play Game Services:

- A wish request, to ask for in-game items or some other form of assistance from their friends
- A gift request, to send in-game items or some other form of assistance to their friends

A player can specify one or more target recipients from the default request-sending UI. A gift or wish can be consumed (accepted) or dismissed by a recipient. To see the gifts API in detail, you can visit https://developers.google.com/games/services/android/giftRequests. Again, as with quest rewards, this is handled entirely by the client, which makes the game susceptible to piracy.

Saved games

The saved games service offers cloud game-saving slots. Your game can retrieve saved game data so that returning players can continue a game at their last save point from any device. This service makes it possible to synchronize a player's game data across multiple devices. For example, if you have a game that runs on Android, you can use the saved games service to allow a player to start a game on their Android phone and then continue playing on a tablet without losing any progress. This service can also be used to ensure that a player's game continues from where it was left off, even if their device is lost, destroyed, or traded in for a newer model, or if the game was reinstalled.

The saved games service does not know about the game internals, so it provides a field that is an unstructured binary blob where you can read and write the game data.
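The article does not include code for this service, but a minimal sketch of writing a save slot with the Snapshots API of the play-services 7.x library might look like the following. This is an assumption-laden illustration, not the book's code: the slot name "save_slot", the helper class name, and the description string are made up, it assumes saved games are enabled for your game in the developer console, and it must run off the UI thread because await() blocks. Conflict handling is deliberately skipped.

```java
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.games.Games;
import com.google.android.gms.games.snapshot.Snapshot;
import com.google.android.gms.games.snapshot.SnapshotMetadataChange;
import com.google.android.gms.games.snapshot.Snapshots;

// Hypothetical helper: persists an unstructured binary blob to a cloud slot.
public class SavedGamesHelper {

    // Call from a background thread; await() blocks until the slot is open
    public static void saveGame(GoogleApiClient apiClient, byte[] data) {
        // Open (or create) the save slot; "save_slot" is an illustrative name
        Snapshots.OpenSnapshotResult result =
                Games.Snapshots.open(apiClient, "save_slot", true).await();
        if (!result.getStatus().isSuccess()) {
            // Errors and save conflicts need real handling in production code
            return;
        }
        Snapshot snapshot = result.getSnapshot();
        // Write our binary blob into the snapshot contents
        snapshot.getSnapshotContents().writeBytes(data);
        // Metadata is what Play Games uses to populate its UI
        SnapshotMetadataChange change = new SnapshotMetadataChange.Builder()
                .setDescription("Latest progress")
                .build();
        // Commit the data and metadata and close the snapshot
        Games.Snapshots.commitAndClose(apiClient, snapshot, change);
    }
}
```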
A game can write an arbitrary number of saved games for a single player, subject to a user quota, so there is no hard requirement to restrict players to a single save file.

Saved games are stored as an unstructured binary blob. The saved games API also receives some metadata that is used by Google Play Games to populate the UI and to present useful information in the Google Play Games app (for example, the last-updated timestamp).

Saved games involve several entry points and actions, including how to deal with conflicts between saved games. To know more about these, check out the official documentation at https://developers.google.com/games/services/android/savedgames.

Multiplayer games

If you are going to implement multiplayer, GPGS can save you a lot of work. You may or may not use it for the final product, but it will remove the need to think about the server side until the game concept is validated.

You can use GPGS for turn-based and real-time multiplayer games. Although the two modes are completely different and use different APIs, there is always an initial step where the game is set up and the opponents are selected or invited.

In a turn-based multiplayer game, a single shared state is passed among the players, and only the player who owns the turn has permission to modify it. Players take turns asynchronously according to an order of play determined by the game. A turn is finished explicitly by the player using an API call. Then the game state is passed to the other players, together with the turn.

There are many cases to handle: selecting opponents, creating a match, leaving a match, canceling one, and so on. The official documentation at https://developers.google.com/games/services/android/turnbasedMultiplayer is quite exhaustive, and you should read through it if you plan to use this feature.

In a real-time multiplayer game there is no concept of a turn. Instead, the service is based on the concept of a room: a virtual construct that enables network communication between multiple players in the same game session and lets players send data directly to one another, a common concept for game servers. The real-time multiplayer API allows us to easily:

- Manage network connections to create and maintain a real-time multiplayer room
- Provide a player-selection user interface to invite players to join a room, look for random players for auto-matching, or a combination of both
- Store participant and room-state information on the Play Game services' servers while the game is running
- Send room invitations and updates to players

To check the complete documentation for real-time games, please visit the official web page at https://developers.google.com/games/services/android/realtimeMultiplayer.

Summary

We have added Google Play services to YASS, including setting up the game in the developer console and adding the required libraries to the project.

Then, we defined a set of achievements and added the code to unlock them. We used normal, incremental, and hidden achievement types to showcase the different options available. We also configured a leaderboard and submitted scores, both when the game is finished and when it is exited via the pause dialog. Finally, we added links to the native UI for leaderboards and achievements to the main menu.

We also introduced the concepts of events, quests, and gifts, and the saved games and multiplayer features that Google Play Game services offers. The game is ready to publish now.
Resources for Article:

Further resources on this subject:

- SceneKit [article]
- Creating Games with Cocos2d-x is Easy and 100 percent Free [article]
- SpriteKit Framework and Physics Simulation [article]

To Be or Not to Be – Optionals

Packt
08 Jul 2015
21 min read
In this article by Andrew J Wagner, author of the book Learning Swift, we will cover:

- What is an optional?
- How to unwrap an optional
- Optional chaining
- Implicitly unwrapped optionals
- How to debug optionals
- The underlying implementation of an optional

(For more resources related to this topic, see here.)

Introducing optionals

So, we know that the purpose of optionals in Swift is to allow the representation of an absent value, but what does that look like and how does it work? An optional is a special type that can wrap any other type. This means that you can make an optional String, an optional Array, and so on. You can do this by adding a question mark (?) to the type name:

var possibleString: String?
var possibleArray: [Int]?

Note that this code does not specify any initial values. This is because all optionals, by default, are set to no value at all. If we want to provide an initial value, we can do so like any other variable:

var possibleInt: Int? = 10

Also note that, if we left out the type specification (: Int?), possibleInt would be inferred to be of the Int type instead of an optional Int.

It is pretty verbose to say that a variable lacks a value. Instead, if an optional lacks a value, we say that it is nil. So, both possibleString and possibleArray are nil, while possibleInt is 10. However, possibleInt is not truly 10. It is still wrapped in an optional. You can see all the forms a variable can take by putting the following code into a playground:

var actualInt = 10
var possibleInt: Int? = 10
var nilInt: Int?
println(actualInt)   // "10"
println(possibleInt) // "Optional(10)"
println(nilInt)      // "nil"

As you can see, actualInt prints out as we expect it to, but possibleInt prints out as an optional that contains the value 10 instead of just 10. This is a very important distinction because an optional cannot be used as if it were the value it wraps. The nilInt optional just reports that it is nil.

At any point, you can update the value within an optional, including giving it a value for the first time, using the assignment operator (=):

nilInt = 2
println(nilInt) // "Optional(2)"

You can even remove the value within an optional by assigning nil to it:

nilInt = nil
println(nilInt) // "nil"

So, we have this wrapped form of a variable that may or may not contain a value. What do we do if we need to access the value within an optional? The answer is that we must unwrap it.

Unwrapping an optional

There are multiple ways to unwrap an optional. All of them essentially assert that there is truly a value within the optional. This is a wonderful safety feature of Swift. The compiler forces you to consider the possibility that an optional lacks any value at all. In other languages, this is a very commonly overlooked scenario that can cause obscure bugs.

Optional binding

The safest way to unwrap an optional is by using something called optional binding. With this technique, you can assign a temporary constant or variable to the value contained within the optional. This process is contained within an if statement, so that you can use an else statement for when there is no value. An optional binding looks like this:

if let string = possibleString {
    println("possibleString has a value: \(string)")
} else {
    println("possibleString has no value")
}

An optional binding is distinguished from a plain if statement primarily by the if let syntax.
Semantically, this code says: "if you can let the constant string be equal to the value within possibleString, print out its value; otherwise, print that it has no value."

The primary purpose of an optional binding is to create a temporary constant that is the normal (nonoptional) version of the optional. It is also possible to use a temporary variable in an optional binding:

possibleInt = 10
if var int = possibleInt {
    int *= 2
}
println(possibleInt) // Optional(10)

Note that an asterisk (*) is used for multiplication in Swift. You should also note something important about this code: if you put it into a playground, even though we multiplied int by 2, the value does not change. When we print out possibleInt later, the value still remains Optional(10). This is because even though we made int a variable (otherwise known as mutable), it is simply a temporary copy of the value within possibleInt. No matter what we do with int, nothing will be changed about the value within possibleInt. If we need to update the actual value stored within possibleInt, we simply need to assign int back to possibleInt after we are done modifying it:

possibleInt = 10
if var int = possibleInt {
    int *= 2
    possibleInt = int
}
println(possibleInt) // Optional(20)

Now the value wrapped inside possibleInt has actually been updated.

A common scenario that you will probably come across is the need to unwrap multiple optional values. One way of doing this is by simply nesting the optional bindings:

if let actualString = possibleString {
    if let actualArray = possibleArray {
        if let actualInt = possibleInt {
            println(actualString)
            println(actualArray)
            println(actualInt)
        }
    }
}

However, this can be a pain, as it increases the indentation level each time. To keep the code organized, you can instead list multiple optional bindings in a single statement separated by commas:

if let actualString = possibleString,
   let actualArray = possibleArray,
   let actualInt = possibleInt {
    println(actualString)
    println(actualArray)
    println(actualInt)
}

This generally produces more readable code.

This way of unwrapping is great, but saying that optional binding is the safe way to access the value within an optional implies that there is an unsafe way to unwrap an optional. That way is called forced unwrapping.

Forced unwrapping

The shortest way to unwrap an optional is by forced unwrapping. This is done using an exclamation mark (!) after the variable name when it is used:

possibleInt = 10
possibleInt! *= 2
println(possibleInt) // "Optional(20)"

However, the reason it is considered unsafe is that your entire program crashes if you try to forcefully unwrap an optional that is currently nil:

nilInt! *= 2 // fatal error

The full error you get is "unexpectedly found nil while unwrapping an Optional value". This is because forced unwrapping is essentially your personal guarantee that the optional truly holds a value. This is why it is called forced.

Therefore, forced unwrapping should be used only in limited circumstances. It should never be used just to shorten up the code. Instead, it should only be used when you can guarantee, from the structure of the code, that the optional cannot be nil, even though it is defined as an optional. Even in that case, you should check whether it is possible to use a nonoptional variable instead. The only other place you may use it is when your program truly cannot recover if the optional is nil.
In these circumstances, you should at least consider presenting an error to the user, which is always better than simply having your program crash.

An example of a scenario where forced unwrapping may be used effectively is with lazily calculated values. A lazily calculated value is a value that is not created until the first time it is accessed. To illustrate this, let's consider a hypothetical class that represents a filesystem directory. It has a property that lists its contents, which is lazily calculated. The code would look something like this:

class FileSystemItem {}
class File: FileSystemItem {}
class Directory: FileSystemItem {
    private var realContents: [FileSystemItem]?
    var contents: [FileSystemItem] {
        if self.realContents == nil {
            self.realContents = self.loadContents()
        }
        return self.realContents!
    }

    private func loadContents() -> [FileSystemItem] {
        // Do some loading
        return []
    }
}

Here, we defined a superclass called FileSystemItem that both File and Directory inherit from. The contents of a directory is a list of any kind of FileSystemItem. We define contents as a computed property and store the real value within the realContents property. The computed property checks whether a value has been loaded into realContents yet; if it hasn't, it loads the contents and puts them into the realContents property. Based on this logic, we know with 100 percent certainty that there will be a value within realContents by the time we get to the return statement, so it is perfectly safe to use forced unwrapping.

Nil coalescing

In addition to optional binding and forced unwrapping, Swift also provides an operator called the nil coalescing operator to unwrap an optional. It is represented by a double question mark (??). Basically, this operator lets us provide a default value for a variable or operation result in case it is nil. This is a safe way to turn an optional value into a nonoptional value, and it looks something like this:

var possibleString: String? = "An actual string"
println(possibleString ?? "Default String") // "An actual string"

Here, we ask the program to print out possibleString unless it is nil, in which case it will just print Default String. Since we did give it a value, it printed out that value, and it is important to note that it printed out as a regular value, not as an optional. This is because one way or another, an actual value will be printed. This is a great tool for concisely and safely unwrapping an optional when a default value makes sense.

Optional chaining

A common scenario in Swift is to have an optional that you must calculate something from. If the optional has a value, you want to store the result of the calculation, but if it is nil, the result should just be set to nil:

var invitee: String? = "Sarah"
var uppercaseInvitee: String?
if let actualInvitee = invitee {
    uppercaseInvitee = actualInvitee.uppercaseString
}

This is pretty verbose. To shorten it in an unsafe way, we could use forced unwrapping:

uppercaseInvitee = invitee!.uppercaseString

However, optional chaining allows us to do this safely. Essentially, it allows optional operations on an optional. When the operation is called, if the optional is nil, it immediately returns nil; otherwise, it returns the result of performing the operation on the value within the optional:

uppercaseInvitee = invitee?.uppercaseString

So in this call, invitee is an optional.
Instead of unwrapping it, we use optional chaining by placing a question mark (?) after it, followed by the operation. In this case, we asked for the uppercaseString property. If invitee is nil, uppercaseInvitee is immediately set to nil without the chain even trying to access uppercaseString. If invitee actually does contain a value, uppercaseInvitee gets set to the uppercaseString property of the contained value. Note that all optional chains return an optional result.

You can chain as many calls, both optional and nonoptional, as you want in this way:

var myNumber: String? = "27"
myNumber?.toInt()?.advancedBy(10).description

This code attempts to add 10 to myNumber, which is represented by a String. First, the code uses an optional chain in case myNumber is nil. Then, the call to toInt uses an additional optional chain because that method returns an optional Int. We then call advancedBy, which does not return an optional, allowing us to access the description of the result without another optional chain. If at any point any of the optionals is nil, the result will be nil. This can happen for two different reasons:

- myNumber is nil
- toInt returns nil because it cannot convert the String to an Int

If the chain makes it all the way to advancedBy, there is no longer a failure path, and it will definitely return an actual value. You will notice that there are exactly two question marks used in this chain and there are two possible failure reasons.

At first, it can be hard to understand when you should and should not use a question mark to create a chain of calls. The rule is that you should always use a question mark if the previous element in the chain returns an optional. However, since you are prepared, let's look at what happens if you use an optional chain improperly:

myNumber.toInt() // Value of optional type 'String?' not unwrapped

In this case, we try to call a method directly on an optional without a chain, so we get an error. There is also the case where we try to inappropriately use an optional chain:

var otherNumber = "10"
otherNumber?.toInt() // Operand of postfix '?' should have optional type

Here, we get an error saying that a question mark can only be used on an optional type. It is great to develop a good sense for catching these errors when you make mistakes, so that you can quickly correct them, because we all make silly mistakes from time to time.

Another great feature of optional chaining is that it can be used for method calls on an optional where the method does not actually return a value:

var invitees: [String]? = []
invitees?.removeAll(keepCapacity: false)

In this case, we only want to call removeAll if there is truly a value within the optional array. So, with this code, if there is a value, all the elements are removed from it; otherwise, it remains nil.

In the end, optional chaining is a great choice for writing concise code that still remains expressive and understandable.

Implicitly unwrapped optionals

There is a second type of optional called an implicitly unwrapped optional. There are two ways to look at what an implicitly unwrapped optional is. One way is to say that it is a normal variable that can also be nil. The other way is to say that it is an optional that you don't have to unwrap to use. The important thing to understand about them is that, like optionals, they can be nil, but like a normal variable, you do not have to unwrap them.
You can define an implicitly unwrapped optional with an exclamation mark (!) instead of a question mark (?) after the type name:

var name: String!

Just like regular optionals, implicitly unwrapped optionals do not need to be given an initial value because they are nil by default.

At first, this may sound like the best of both worlds, but in reality, it is more like the worst of both worlds. Even though an implicitly unwrapped optional does not have to be unwrapped, it will crash your entire program if it is nil when used:

name.uppercaseString // Crash

A great way to think about them is that every time an implicitly unwrapped optional is used, an implicit forced unwrapping is performed. The exclamation mark is placed in its type declaration instead of being used every time. This is particularly bad because an implicitly unwrapped optional appears the same as any other variable except for how it is declared. This means that it is very unsafe to use, unlike a normal optional.

So, if implicitly unwrapped optionals are the worst of both worlds and are so unsafe, why do they even exist? The reality is that in rare circumstances, they are necessary. They are used in circumstances where a variable is not truly optional, but you also cannot give it an initial value. This is almost always true in the case of custom types that have a member variable that is nonoptional but cannot be set during initialization.

A rare example of this is a view in iOS. UIKit, as we discussed earlier, is the framework that Apple provides for iOS development. In it, Apple has a class called UIView that is used for displaying content on the screen. Apple also provides a tool in Xcode called Interface Builder that lets you design these views in a visual editor instead of in code. Many views designed in this way need references to other views that can be accessed programmatically later. When one of these views is loaded, it is initialized without anything connected, and then all the connections are made. Once all the connections are made, a method called awakeFromNib is called on the view. This means that these connections are not available for use during initialization, but are available once awakeFromNib is called. This order of operations also ensures that awakeFromNib is always called before anything actually uses the view.

This is a circumstance where it is necessary to use an implicitly unwrapped optional: a member variable cannot be defined when the view is initialized, but only once it is completely loaded:

import UIKit
class MyView: UIView {
    @IBOutlet var button: UIButton!
    var buttonOriginalWidth: CGFloat!

    override func awakeFromNib() {
        self.buttonOriginalWidth = self.button.frame.size.width
    }
}

Note that we have actually declared two implicitly unwrapped optionals. The first is a connection to button. We know this is a connection because it is preceded by @IBOutlet. It is declared as an implicitly unwrapped optional because connections are not set up until after initialization, but they are still guaranteed to be set up before any other methods are called on the view. This also leads us to make our second variable, buttonOriginalWidth, implicitly unwrapped, because we need to wait until the connection is made before we can determine the width of button. After awakeFromNib is called, it is safe to treat both button and buttonOriginalWidth as nonoptional.
You may have noticed that we had to dive pretty deep into app development in order to find a valid use case for implicitly unwrapped optionals, and this is arguably only because UIKit is implemented in Objective-C.

Debugging optionals

We already saw a couple of compiler errors that commonly appear because of optionals. If we try to call a method on an optional that we intended to call on the wrapped value, we will get an error. If we try to unwrap a value that is not actually optional, we will get an error saying that the variable or constant is not optional.

We also need to be prepared for the runtime errors that optionals can cause. As discussed, optionals cause runtime errors if you try to forcefully unwrap an optional that is nil. This can happen with both explicit and implicit forced unwrapping. If you have followed my advice so far in this article, this should be a rare occurrence. However, we all end up working with third-party code, and maybe its authors were lazy or used forced unwrapping to enforce their expectations about how their code should be used.

We all suffer from laziness from time to time, too. It can be exhausting or discouraging to worry about all the edge cases when you are excited about programming the main functionality of your app. We may use forced unwrapping temporarily while we work on that main functionality and plan to come back to handle the edge cases later. After all, during development, it is better for a forced unwrapping to crash the development version of your app than for it to fail silently when you have not yet handled an edge case. We may even decide that an edge case is not worth the development effort of handling, because everything about developing an app is a trade-off. Either way, we need to recognize a crash from forced unwrapping quickly, so that we don't waste extra time trying to figure out what went wrong.

When an app tries to unwrap a nil value while you are debugging it, Xcode shows you the line that attempts the unwrapping. The line is reported with EXC_BAD_INSTRUCTION, and you will also get a message in the console saying fatal error: unexpectedly found nil while unwrapping an Optional value.

You will also sometimes have to look at which code calls the code that failed. To do that, you can use the call stack in Xcode. When your program crashes, Xcode automatically displays the call stack, but you can also show it manually by going to View | Navigators | Show Debug Navigator. Here, you can click on different levels of the stack to see the state of things at each level. This becomes even more important if the program crashes within one of Apple's frameworks, where you do not have access to the code. In that case, you should move up the call stack to the point where your code calls into the framework. You may also be able to look at the names of the functions to help you figure out what may have gone wrong. Anywhere on the call stack, you can look at the state of the variables in the debugger; if you do not see the variables view, you can display it using the second button from the right at the bottom-left corner of the editor. In the crash we are discussing, the debugger would show that invitee is indeed nil, which is what caused the crash.

As powerful as the debugger is, if you find that it isn't helping you find the problem, you can always put println statements in important parts of the code.
It is always safe to print out an optional, as long as you don't forcefully unwrap it in the same statement like in the preceding example. As we saw earlier, when an optional is printed, it will print nil if it doesn't have a value, or Optional(<value>) if it does.

Debugging is an extremely important part of becoming a productive developer, because we all make mistakes and create bugs. Being a great developer means that you can identify problems quickly and understand how to fix them soon after. This will largely come from practice, but it will also come when you have a firm grasp of what really happens with your code, instead of simply adapting code you find online to fit your needs through trial and error.

The underlying implementation

At this point, you should have a pretty strong grasp of what an optional is and how to use and debug it, but it is valuable to look deeper at optionals and see how they actually work.

In reality, the question mark syntax for optionals is just special shorthand. Writing String? is equivalent to writing Optional<String>. Writing String! is equivalent to writing ImplicitlyUnwrappedOptional<String>. The Swift compiler has shorthand versions because they are so commonly used; this allows the code to be more concise and readable.

If you declare an optional using the long form, you can see Swift's implementation by holding command and clicking on the word Optional. Here, you can see that Optional is implemented as an enumeration. If we simplify the code a little, we have:

enum Optional<T> {
    case None
    case Some(T)
}

So, we can see that Optional really has two cases: None and Some. None stands for the nil case, while the Some case has an associated value, which is the value wrapped inside the Optional. Unwrapping is then the process of retrieving the associated value out of the Some case.

One part of this that you have not seen yet is the angle bracket syntax (<T>). This is a generic, and it essentially allows the enumeration to have an associated value of any type.

Realizing that optionals are simply enumerations will help you understand how to use them. It also gives you some insight into how concepts are built on top of other concepts. Optionals seem really complex until you realize that they are just two-case enumerations. Once you understand enumerations, you can pretty easily understand optionals as well.

Summary

We only covered a single concept, optionals, in this article, but we saw that it is a pretty dense topic. We saw that at the surface level, optionals are pretty straightforward. They offer a way to represent a variable that has no value. However, there are multiple ways to access the value wrapped within an optional, each with very specific use cases. Optional binding is always preferred, as it is the safest method, but we can also use forced unwrapping if we are confident that an optional is not nil. We also have a type called the implicitly unwrapped optional to delay the assigning of a variable that is not intended to be optional, but we should use it sparingly because there is almost always a better alternative.

Resources for Article:

Further resources on this subject:

- Network Development with Swift [article]
- Flappy Swift [article]
- Playing with Swift [article]

Editors and IDEs

Packt
08 Jul 2015
10 min read
In this article by Daniel Blair, the author of the book Learning Banana Pi, you are going to learn about some editors and the programming languages that are available on the Pi and Linux. These tools will help you write the code that will interact with the hardware through GPIO and run on the Pi as a server.

(For more resources related to this topic, see here.)

Choosing your editor

There are many different integrated development environments (generally abbreviated as IDEs) to choose from on Linux. When working on the Banana Pi, you're limited to the software that will run on an ARM-based CPU. Hence, options such as Sublime Text are not available. Some options that you may be familiar with are available for general-purpose code editing. Some tools are available for the command line, while others are GUI tools. So, depending on whether you have a monitor or not, you will want to choose an appropriate tool. The following screenshot shows some JavaScript being edited via nano on the command line:

Command-line editors

The command line is a powerful tool. If you master it, you will rarely need to leave it. There are several editors available for the command line. There has been an ongoing war between the users of two editors: GNU Emacs and Vim. There are many editors like nano (which is my preference), but the war tends to be between the two aforementioned editors.

The Emacs editor

Emacs is a GNU-flavored editor for the command line (and my least favorite command-line editor, though that is just a preference). It is often installed by default, but you can easily install it if it is missing by running a quick command:

sudo apt-get install emacs

Now, you can edit a file via the CLI:

emacs <command-line arguments> <your file>

The preceding command will open the file in Emacs for you to edit. You can also use this to create new files. You can save and close the editor with a couple of key combinations:

- Ctrl + X, Ctrl + S (save)
- Ctrl + X, Ctrl + C (close)

Thus, your document will be saved and closed.

The Vim editor

Vim is actually an extension of Vi, and it is functionally the same thing. Vim is a fine editor; many won't personally go out of their way to avoid it, although people do find it a bit difficult to remember all the commands. If you do get good at it, though, you can code very quickly. You can install Vim from the command line:

sudo apt-get install vim

Also, there is a GUI version available that allows interaction with the mouse; this is functionally the same program as the Vim command line. You can install it with a similar command:

sudo apt-get install vim-gnome

You can edit files easily with Vim via the command line, for both Vim and Vim-Gnome:

vim <your file>
gvim <your file>

The gnome version will open the file in a window.

There is a handy tutorial that you can use to learn the commands of Vim. You can run the tutorial with the following command:

vimtutor

This tutorial will teach you how to run this editor, which is awesome because the commands can be a bit complicated at first. The following screenshot shows Vim editing the file that we used earlier:

The nano editor

The nano editor is my favorite editor for the command line. This is probably because it was the first editor that I was exposed to when I started to learn Linux and experiment with servers and, eventually, the Raspberry Pi and Banana Pi.
The nano editor is generally considered the easiest to use and is installed by default on the Banana Pi images. If, for some reason, you need to install it, you can get it quickly with the following command:

sudo apt-get install nano

The editor is easy to use, and it comes with several commands that you will use frequently. To save and close the editor, use the following key combinations:

- Ctrl + O (save)
- Ctrl + X (exit)

You can get help at any time by pressing Ctrl + G.

Graphic editors

With the exception of gVim, all the editors we just talked about live on the command line. If you are more accustomed to graphical tools, you may be more comfortable with a full-featured IDE. There are a couple of choices in this regard that you may be familiar with. These tools are heavier than the command-line tools because you not only need to run the software, but also render the window. This is not as much of an issue on the Banana Pi as it is on the Raspberry Pi, because we have more RAM to play with. However, if you already have a lot of programs running, it might cause some performance issues.

Eclipse

Eclipse is a very popular IDE that is available for everything. You can use it to develop all kinds of systems and use all kinds of programming languages. This is a tool that can be used for professional development, and there are a lot of plugins available for it. It is also used to develop apps for Android (although Android Studio is also available now).

Eclipse is written in Java, so in order to make it work, you will require a Java Runtime Environment. The Banana Pi should come equipped with the Java development and runtime environments. If this is not the case, they are not difficult to install.

In order to grab the proper version of Eclipse and avoid browsing all the specific versions on the website, you can just install it via the command line:

sudo apt-get install eclipse

Once Eclipse is installed, you will find it in the application menu under programming tools. The following screenshot shows the Eclipse IDE running on the Banana Pi:

The Geany IDE

Geany is a lighter-weight IDE than Eclipse, although it is not quite as fully featured. It has a clean UI that can be customized and used to write a lot of different programming languages. Geany was one of the first IDEs I ever used when first exploring Linux as a kid.

Geany does not come preinstalled on the Banana Pi images, but it is easy to get via the command line:

sudo apt-get install geany

Depending on what you plan to do code-wise on the Banana Pi, Geany may be your best bet. It is GUI-based and offers quite a bit of functionality, yet it is a lot faster to load than Eclipse. It may seem familiar to Windows users, who might find it easier to operate since it resembles Windows software. The following screenshot shows Geany on Linux:

Both of these editors, Geany and Eclipse, are not specific to a particular programming language, but each is slightly better suited to certain languages. Geany tends to be better for web languages such as HTML, PHP, JavaScript, and CSS, while Eclipse tends to be better for compiled languages such as C++, Go, and Java, as well as PHP and Ruby with plugins. If you plan to write scripts or languages that are intended to be run from the command line, such as Bash, Ruby, or Python, you may want to stick to the command line and use an editor such as Vim or nano. It is worth your time to play around with the editors and find your preferences; once you have settled on one, you can also make it the system-wide default, as shown in the sketch below.
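On a Debian-based image such as the ones shipped for the Banana Pi (an assumption; other distributions manage defaults differently), you can pick the default command-line editor with update-alternatives, or set it just for your own shell:

# List the available editors and interactively choose the system default.
sudo update-alternatives --config editor

# Alternatively, set nano as the default editor for your own shell sessions.
echo 'export EDITOR=nano' >> ~/.bashrc
source ~/.bashrc

This matters more than it seems: tools such as crontab -e and git commit open whatever the default editor is, so setting it to the editor you know avoids being dropped into an unfamiliar one.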
Web IDEs

In addition to the command-line and GUI editors, there are a couple of web-based IDEs. These essentially turn your Pi into a code server, which allows you to run and even execute certain types of code in an IDE written in web languages. These IDEs are great for learning code, but they are not really replacements for the solutions listed previously.

Google Coder

Google Coder is an educational web IDE that was released as an open source project by Google for the Raspberry Pi. Although there is a readily available image for the Raspberry Pi, we can manually install it on the Banana Pi. The following screenshot shows Google Coder's interface:

The setup is fairly straightforward. We will clone the Git repo and install it with Node.js. If you don't have Git and Node.js installed, you can install them with a quick command in the terminal:

sudo apt-get install nodejs npm git

Once they are installed, we can clone the Coder repo:

git clone https://github.com/googlecreativelab/coder

After it is cloned, we will move into the directory and install it:

cd ~/coder/coder-base/
npm install

It may take several minutes to install, even on the Banana Pi. Next, we will edit the config.js file, which is used to configure the ports and IP addresses:

nano config.js

The preceding command will reveal the contents of the file. Change the top values to match the following:

exports.listenIP = '127.0.0.1';
exports.listenPort = '8081';
exports.httpListenPort = '8080';
exports.cacheApps = true;
exports.httpVisiblePort = '8080';
exports.httpsVisiblePort = '8081';

After you change the settings you need, run the server using Node.js:

nodejs server.js

You should now be able to connect to the Pi in a browser, either on the Pi itself or from another computer, and use Coder. Coder is an educational tool with a lot of different built-in tutorials. You can use Coder to learn JavaScript, CSS, HTML, and jQuery.

Adafruit WebIDE

Adafruit has developed its own web IDE, which is designed to run on the Raspberry Pi and BeagleBone. Since we are using the Banana Pi, it will only run better. This IDE is designed to work with Ruby, Python, and JavaScript, to name a few. It includes a terminal via which you can send commands to the Pi from the browser. It is an interesting tool if you wish to learn how to code. The following screenshot shows the interface of the WebIDE:

The installation of the WebIDE is very simple compared to that of Google Coder, which took several steps. We will just run one command:

curl https://raw.githubusercontent.com/adafruit/Adafruit-WebIDE/alpha/scripts/install.sh | sudo sh

After a few minutes, you will see output indicating that the server is starting. You will be able to access the IDE just like Google Coder: through a browser, from another computer or the Pi itself. It should be noted that you will be required to create a free Bitbucket account to use this software.

Summary

In this article, we explored several different programming languages, command-line tools, graphical editors, and even some web IDEs. These tools are valuable for all kinds of projects that you may be working on.

Resources for Article:

Further resources on this subject:

- Prototyping Arduino Projects using Python [article]
- Raspberry Pi and 1-Wire [article]
- The Raspberry Pi and Raspbian [article]

Understanding Mesos Internals

Packt
08 Jul 2015
26 min read
In this article by Dharmesh Kakadia, author of the book Apache Mesos Essentials, we explain how Mesos works internally in detail. We will start off with cluster scheduling and fairness concepts and an overview of the Mesos architecture, and then move on to the resource isolation and fault tolerance implementation in Mesos. In this article, we will cover the following topics:

- The Mesos architecture
- Resource allocation

(For more resources related to this topic, see here.)

The Mesos architecture

Modern organizations have a lot of different kinds of applications for different business needs. Modern applications are distributed, and they are deployed across commodity hardware. Organizations today run different applications in siloed environments, where separate clusters are created for different applications. This static partitioning of clusters leads to low utilization, and all the applications duplicate the effort of dealing with distributed infrastructure. Not only is this wasted effort, but it also ignores the fact that distributed systems are hard to build and maintain.

This is challenging for both developers and operators. For developers, it is a challenge to build applications that scale elastically and can handle the faults that are inevitable in a large-scale environment. Operators, on the other hand, have to manage and scale all of these applications individually in siloed environments.

The preceding situation is like trying to develop applications without an operating system managing all the devices in a computer. Mesos solves the problems mentioned earlier by providing a data center kernel. Mesos provides a higher-level abstraction for developing applications that treats the distributed infrastructure just like a large computer, abstracting the physical hardware away from the applications.

Mesos makes developers more productive by providing an SDK to easily write data center scale applications. Developers can focus on their application logic and do not have to worry about the infrastructure that runs it. The Mesos SDK provides primitives for building large-scale distributed systems, such as resource allocation, deployment, monitoring, and isolation. Applications only need to know and declare what resources they need, not how to get those resources. Mesos allows you to treat the data center just as a computer.

Mesos makes infrastructure operations easier by providing an elastic infrastructure. Mesos aggregates all the resources in a single shared pool and avoids static partitioning. This makes management easier and increases utilization.

A data center kernel has to provide resource allocation, isolation, and fault tolerance in a scalable, robust, and extensible way. We will discuss how Mesos fulfills these requirements, along with some other important considerations for a modern data center kernel:

- Scalability: The kernel should be scalable in terms of the number of machines and the number of applications. As the number of machines and applications increases, the response time of the scheduler should remain acceptable.
- Flexibility: The kernel should support a wide range of applications. It should support the diverse frameworks currently running on the cluster, as well as future frameworks. The kernel should also be able to cope with heterogeneity in the hardware, as most clusters are built over time and have a variety of hardware running.
- Maintainability: The kernel will be one of the most important pieces of modern infrastructure. As requirements evolve, the scheduler should be able to accommodate new requirements.
- Utilization and dynamism: The kernel should adapt to changes in resource requirements and available hardware resources, and utilize resources in an optimal manner.
- Fairness: The kernel should be fair in allocating resources to different users and/or frameworks. We will see what it means to be fair in detail in the next section.

The design philosophy behind Mesos was to define a minimal interface to enable efficient resource sharing across frameworks, and to defer task scheduling and execution to the frameworks. This allows the frameworks to implement diverse approaches to scheduling and fault tolerance. It also keeps the Mesos core simple, letting the frameworks and the core evolve independently.

The preceding figure shows the overall architecture (http://mesos.apache.org/documentation/latest/mesos-architecture) of a Mesos cluster. It has the following entities:

- The Mesos masters
- The Mesos slaves
- Frameworks
- Communication
- Auxiliary services

We will describe each of these entities and their roles, followed by how Mesos implements the different requirements of a data center kernel.

The Mesos slave

The Mesos slaves are responsible for executing tasks from frameworks using the resources they have. The slave has to provide proper isolation while running multiple tasks. The isolation mechanism should also make sure that each task gets the resources it was promised, no more and no less.

The resources on slaves that are managed by Mesos are described using slave resources and slave attributes. Resources are elements of a slave that can be consumed by a task, while attributes tag a slave with some information. Slave resources are managed by the Mesos master and are allocated to different frameworks. Attributes identify something about the node, such as the slave having a specific OS or software version, being part of a particular network, or having particular hardware. Attributes are simple key-value pairs of strings that are passed along with the offers to frameworks. Since attributes cannot be consumed by a running task, they will always be offered for that slave. Mesos does not interpret slave attributes; interpretation of the attributes is left to the frameworks. More information about resources and attributes in Mesos can be found at https://mesos.apache.org/documentation/attributes-resources.

A Mesos resource or attribute can be described as one of the following types:

- Scalar values are floating-point numbers
- Range values are a range of scalar values, represented as [minValue-maxValue]
- Set values are arbitrary strings
- Text values are arbitrary strings; they are applicable only to attributes

Names of resources can be arbitrary strings consisting of alphabetic characters, numbers, "-", "/", and ".". The Mesos master handles the cpus, mem, disk, and ports resources in a special way. A slave without the cpus and mem resources will not be advertised to the frameworks. The mem and disk scalars are interpreted in MB. The ports resource is represented as ranges.

The list of resources a slave has to offer to various frameworks can be specified with the --resources flag. Resources and attributes are separated by semicolons.
For example:

--resources='cpus:30;mem:122880;disk:921600;ports:[21000-29000];bugs:{a,b,c}'
--attributes='rack:rack-2;datacenter:europe;os:ubuntuv14.4'

This slave offers 30 cpus, 120 GB mem, 900 GB disk, ports from 21000 to 29000, and a set resource named bugs with the elements a, b, and c. The slave has three attributes: rack with value rack-2, datacenter with value europe, and os with value ubuntuv14.4.

Mesos does not yet provide direct support for GPUs, but it does support custom resource types. This means that if we specify gpu(*):8 as part of --resources, it will be part of the resource offers to frameworks, and frameworks can use it just like any other resource. Once some of the GPU resources are in use by a task, only the remaining resources will be offered. Mesos does not yet have support for GPU isolation, but it can be extended by implementing a custom isolator. Alternatively, we can also specify which slaves have GPUs using attributes, such as --attributes="hasGpu:true".

The Mesos master

The Mesos master is primarily responsible for allocating resources to different frameworks and managing the task life cycle for them. The Mesos master implements fine-grained resource sharing using resource offers. The master acts as a resource broker for the frameworks, using pluggable policies to decide which resources to offer to which framework. A resource offer represents a unit of allocation in the Mesos world: a vector of resources available on a node, offered to a particular framework.

Frameworks

Distributed applications that run on top of Mesos are called frameworks. Frameworks implement their domain requirements using the general resource allocation API of Mesos. A typical framework wants to run a number of tasks. Tasks are the consumers of resources, and they do not have to be identical. A framework in Mesos consists of two components: a framework scheduler and executors. The framework scheduler is responsible for coordinating the execution. An executor provides the ability to control task execution. Executors can realize task execution in many ways: an executor can choose to run multiple tasks by spawning multiple threads, or it can run one task per executor. Apart from the life cycle and task management functions, the Mesos framework API also provides functions for communicating with framework schedulers and executors.

Communication

Mesos currently uses an HTTP-like wire protocol to communicate with the Mesos components. Mesos implements this communication with the libprocess library, which is located in 3rdparty/libprocess. The libprocess library provides asynchronous communication between processes. The communication primitives have actor-style message-passing semantics. libprocess messages are immutable, which makes parallelizing the libprocess internals easier. Mesos communication happens along the following APIs:

- Scheduler API: This is used for communication between the framework scheduler and the master. The internal communication is intended to be used only by the SchedulerDriver API.
- Executor API: This is used for communication between an executor and the Mesos slave.
- Internal API: This is used for communication between the Mesos master and slaves.
- Operator API: This is the API exposed by Mesos for operators and is used by the web UI, among other things. Unlike most of the Mesos API, the operator API is synchronous.

To send a message, an actor makes an HTTP POST request.
The path is composed of the name of the actor followed by the name of the message. The User-Agent field is set to "libprocess/…" to distinguish these from normal HTTP requests. The message data is passed as the body of the HTTP request. Mesos uses protocol buffers to serialize all the messages (defined in src/messages/messages.proto). The parsing and interpretation of a message is left to the receiving actor. Here is an example header of a message sent to the master to register the framework, sent by scheduler(1) running at the 10.0.1.7:53523 address:

POST /master/mesos.internal.RegisterFrameworkMessage HTTP/1.1
User-Agent: libprocess/scheduler(1)@10.0.1.7:53523

The reply message header from the master that acknowledges the framework registration might look like this:

POST /scheduler(1)/mesos.internal.FrameworkRegisteredMessage HTTP/1.1
User-Agent: libprocess/master@10.0.1.7:5050

At the time of writing, there is a very early discussion about rewiring the Mesos Scheduler API and Executor API as a pure HTTP API (https://issues.apache.org/jira/browse/MESOS-2288). This will make the API standard and make integration with Mesos much easier for various tools, without the need to depend on the native libmesos. Also, there is an ongoing effort to convert all the internal messages into a standardized JSON or protocol buffer format (https://issues.apache.org/jira/browse/MESOS-1127).

Auxiliary services

Apart from the preceding main components, a Mesos cluster also needs some auxiliary services. These services are not part of Mesos itself and are not strictly required, but they form a basis for operating a Mesos cluster in production environments. These services include, but are not limited to, the following:

- Shared filesystem: Mesos provides a view of the data center as a single computer and allows developers to build applications at data center scale. With this unified view of resources, clusters need a shared filesystem to truly make the data center a computer. HDFS, NFS (Network File System), and cloud-based storage options, such as S3, are popular among various Mesos deployments.
- Consensus service: Mesos uses a consensus service to be resilient in the face of failures. Consensus services, such as ZooKeeper or etcd, provide reliable leader election in a distributed environment (see the sketch after this list for how ZooKeeper is typically wired in).
- Service fabric: Mesos enables users to run a number of frameworks on unified computing resources. With a large number of applications and services running, it's important for users to be able to connect to them in a seamless manner. For example, how do users connect to Hive running on Mesos? How does a Ruby on Rails application discover and connect to its MongoDB database instances when one or both of them are running on Mesos? How is website traffic routed to the web servers running on Mesos? Answering these questions mainly requires service discovery and load balancing mechanisms, but also things such as IP/port management and security infrastructure. We refer collectively to these services that connect frameworks to other frameworks and to users as the service fabric.
- Operational services: Operational services are essential for managing the operational aspects of Mesos. Deployments and upgrades of Mesos, monitoring cluster health and alerting when human intervention is required, logging, and security are all parts of the operational services that play a very important role in a Mesos cluster.
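As an illustration of the consensus service in practice, a highly available Mesos setup is usually started by pointing the masters and slaves at the same ZooKeeper ensemble. The hostnames below are placeholders; the --zk, --quorum, --master, and --work_dir flags are the standard ones from the Mesos distribution:

# Start each of the three masters, all registering under the same ZooKeeper path.
mesos-master --zk=zk://zk1:2181,zk2:2181,zk3:2181/mesos \
             --quorum=2 \
             --work_dir=/var/lib/mesos

# Slaves discover the currently elected leading master through ZooKeeper.
mesos-slave --master=zk://zk1:2181,zk2:2181,zk3:2181/mesos \
            --work_dir=/var/lib/mesos

With this wiring, if the leading master fails, ZooKeeper elects a new leader from the remaining masters, and the slaves and frameworks reconnect to it automatically.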
Resource allocation

As a data center kernel, Mesos serves a large variety of workloads, and no single scheduler would be able to satisfy the needs of all the different frameworks. For example, the way a real-time processing framework schedules its tasks will be very different from how a long-running service schedules its tasks, which, in turn, will be very different from how a batch processing framework would like to use its resources. This observation leads to a very important design decision in Mesos: the separation of resource allocation and task scheduling. Resource allocation is about deciding who gets which resources, and it is the responsibility of the Mesos master. Task scheduling, on the other hand, is about how to use the resources, and this is decided by the various framework schedulers according to their own needs. Another way to understand this is that Mesos handles coarse-grained resource allocation across frameworks, and then each framework does fine-grained job scheduling, via appropriate job ordering, to achieve its needs.

The Mesos master gets information on the available resources from the Mesos slaves, and based on resource policies, the master offers these resources to the different frameworks. A framework can choose to accept or reject an offer. If a framework accepts a resource offer, the master allocates the corresponding resources to the framework, and the framework is then free to use them to launch tasks. The following image shows the high-level flow of Mesos resource allocation:

Mesos two-level scheduler

Here is the typical flow of events for one framework in Mesos:

1. The framework scheduler registers itself with the Mesos master.
2. The Mesos master receives resource offers from the slaves. It invokes the allocation module and decides which frameworks should receive the resource offers.
3. The framework scheduler receives the resource offers from the Mesos master.
4. On receiving a resource offer, the framework scheduler inspects it to decide whether it is suitable. If it finds the offer satisfactory, the framework scheduler accepts it and replies to the master with the list of executors that should be run on the slave, utilizing the accepted resources. Alternatively, the framework can reject the offer and wait for a better one.
5. The slave allocates the requested resources and launches the task executors.
6. The executors are launched on the slave nodes and run the framework's tasks.

It is up to the framework scheduler to accept or reject resource offers, and this flow is just one example of the events that can happen while allocating resources. The framework scheduler gets notified about each task's completion or failure, and it will continue receiving resource offers and task reports, launching tasks as it sees fit. Finally, the framework can unregister with the Mesos master, after which it will not receive any further resource offers. Note that unregistering is optional: a long-running service or a meta-framework will not unregister during normal operation.

Because of this design, Mesos is also known as a two-level scheduler. The two-level design makes Mesos simpler and more scalable, as the resource allocation process does not need to know how scheduling happens. This makes the Mesos core more stable and scalable. Frameworks and Mesos are not tied to each other, and each can iterate independently. It also makes porting frameworks easier.
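To make the offer cycle above concrete, here is a rough sketch of a framework scheduler written against the Mesos Java binding. It assumes the org.apache.mesos API (Scheduler, SchedulerDriver, and the Protos classes); the task name, command, and resource sizes are invented for illustration, and all error handling is omitted:

import java.util.Collections;
import java.util.List;

import org.apache.mesos.Protos.*;
import org.apache.mesos.Scheduler;
import org.apache.mesos.SchedulerDriver;

// A toy scheduler: accepts the first offer it sees, runs one shell task on it,
// and declines every other offer.
public class ExampleScheduler implements Scheduler {
  private boolean launched = false;

  @Override
  public void resourceOffers(SchedulerDriver driver, List<Offer> offers) {
    for (Offer offer : offers) {
      if (launched) {
        driver.declineOffer(offer.getId()); // decline offers we do not need
        continue;
      }
      TaskInfo task = TaskInfo.newBuilder()
          .setName("example-task")
          .setTaskId(TaskID.newBuilder().setValue("task-1"))
          .setSlaveId(offer.getSlaveId())
          .addResources(scalar("cpus", 1))
          .addResources(scalar("mem", 128))
          .setCommand(CommandInfo.newBuilder().setValue("echo hello from Mesos"))
          .build();
      // Accept the offer by launching a task with it (step 4 in the flow above).
      driver.launchTasks(Collections.singletonList(offer.getId()),
                         Collections.singletonList(task));
      launched = true;
    }
  }

  private Resource scalar(String name, double value) {
    return Resource.newBuilder()
        .setName(name)
        .setType(Value.Type.SCALAR)
        .setScalar(Value.Scalar.newBuilder().setValue(value))
        .build();
  }

  @Override
  public void statusUpdate(SchedulerDriver driver, TaskStatus status) {
    // Task completion or failure notifications arrive here.
    System.out.println("Task " + status.getTaskId().getValue()
        + " is in state " + status.getState());
  }

  // The remaining callbacks are left empty for brevity.
  @Override public void registered(SchedulerDriver d, FrameworkID id, MasterInfo m) {}
  @Override public void reregistered(SchedulerDriver d, MasterInfo m) {}
  @Override public void offerRescinded(SchedulerDriver d, OfferID id) {}
  @Override public void frameworkMessage(SchedulerDriver d, ExecutorID e, SlaveID s, byte[] b) {}
  @Override public void disconnected(SchedulerDriver d) {}
  @Override public void slaveLost(SchedulerDriver d, SlaveID id) {}
  @Override public void executorLost(SchedulerDriver d, ExecutorID e, SlaveID s, int status) {}
  @Override public void error(SchedulerDriver d, String message) {}
}

Declined offers go back to the master's allocator and may be offered to other frameworks; a production scheduler would also reconcile task state after failover and handle failures in each callback.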
The choice of a two-level scheduler means that the scheduler does not have global knowledge about resource utilization, and the resource allocation decisions can be nonoptimal. One potential concern could be the preferences a framework has about the kind of resources it needs for execution. Data locality, special hardware, and security constraints are a few of the constraints on where tasks can run. In the Mesos realm, these preferences are not explicitly specified by a framework to the Mesos master; instead, the framework simply rejects all the offers that do not meet its constraints.

The Mesos scheduler

Mesos was the first cluster scheduler to allow the sharing of resources among multiple frameworks. Mesos resource allocation is based on online Dominant Resource Fairness (DRF), called HierarchicalDRF. In a world of single-resource static partitioning, fairness is easy to define. DRF extends this concept of fairness to multi-resource settings without the need for static partitioning.

Resource utilization and fairness are equally important, and often conflicting, goals for a cluster scheduler. Fairness of resource allocation is important in a shared environment, such as a data center, to ensure that all the users/processes of the cluster get a nearly equal amount of resources. Min-max fairness provides a well-known mechanism to share a single resource among multiple users. The min-max fairness algorithm maximizes the minimum resources allocated to a user. In its simplest form, it allocates 1/Nth of the resource to each of the N users. The weighted min-max fairness algorithm can also support priorities and reservations.

Min-max resource fairness has been the basis for many well-known schedulers in operating systems and distributed frameworks, such as Hadoop's fair scheduler (http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html), the capacity scheduler (https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html), the Quincy scheduler (http://dl.acm.org/citation.cfm?id=1629601), and so on. However, it falls short when the cluster has multiple types of resources, such as CPU, memory, disk, and network. When jobs in a distributed environment use different combinations of these resources to achieve their outcomes, fairness has to be redefined. For example, suppose the two requests <1 CPU, 3 GB> and <3 CPU, 1 GB> come to the scheduler. How do they compare, and what is a fair allocation?

DRF generalizes the min-max algorithm to multiple resources. A user's dominant resource is the resource for which the user has the biggest share. For example, if the total resources are <8 CPU, 5 GB>, then for a user allocation of <2 CPU, 1 GB>, the user's dominant resource is CPU, since max(2/8, 1/5) = 2/8. A user's dominant share is the fraction of the dominant resource that is allocated to the user; in our example, it would be 25 percent (2/8). DRF applies the min-max algorithm to the dominant share of each user. It has many provable properties:

- Strategy proofness: A user cannot gain any advantage by lying about their demands.
- Sharing incentive: DRF has a minimum allocation guarantee for each user, and no user will prefer an exclusively partitioned cluster of size 1/N over their DRF allocation.
- Single resource fairness: In the case of only one resource, DRF is equivalent to the min-max algorithm.
- Envy freeness: Every user prefers their own allocation over the allocation of any other user. This also means that users with the same requests get equivalent allocations.
- Bottleneck fairness: When one resource becomes a bottleneck and each user has a dominant demand for it, DRF is equivalent to max-min.
- Monotonicity: Adding resources or removing users can only increase the allocation of the remaining users.
- Pareto efficiency: The allocation achieved by DRF is Pareto efficient: it is impossible to improve the allocation of any user without making the allocation of some other user worse.

We will not discuss DRF further, but we encourage you to refer to the DRF paper for more details at http://static.usenix.org/event/nsdi11/tech/full_papers/Ghodsi.pdf.

Mesos uses the role specified in FrameworkInfo for its resource allocation decisions. A role can be per user or per framework, or it can be shared by multiple users and frameworks. If it is not set, Mesos sets it to the user that runs the framework scheduler. As an optimization, a framework can decline resource offers from particular slaves for a specified time period, so that it is not repeatedly offered resources it cannot use.

Mesos can revoke a resource allocation by killing the corresponding tasks. Before killing a task, Mesos gives the framework a grace period to clean up: it asks the executor to kill the task but, if the executor does not oblige, it kills the executor and all of its tasks.

Weighted DRF

DRF calculates each role's dominant share and allocates the available resources to the role with the smallest dominant share. In practice, an organization rarely wants to assign resources in a completely fair manner. Most organizations want to allocate resources in a weighted manner, for example, 50 percent of the resources to the ads team, 30 percent to QA, and 20 percent to R&D. To satisfy this common requirement, Mesos implements weighted DRF, where masters can be configured with weights for different roles. When weights are specified, a role's DRF share is divided by its weight: for example, a role with a weight of two will be offered twice as many resources as a role with a weight of one. Mesos can be configured to use weighted DRF using the --weights and --roles flags on master startup; an example follows below. The --weights flag expects a list of role/weight pairs in the form role1=weight1,role2=weight2, and weights do not need to be integers. We must provide a weight for each role that appears in --roles on master startup.
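For example (the role names and weights here are our own illustration, matching the 50/30/20 split mentioned above), a master honoring weighted roles could be started as follows:

# Hypothetical startup: the ads role is offered five times the share of a
# weight-1 role, qa three times, and rnd twice.
mesos-master --roles="ads,qa,rnd" --weights="ads=5,qa=3,rnd=2" ...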
Reservation

Another frequently requested feature is the ability to reserve resources. For example, persistent or stateful services, such as memcache or a database running on Mesos, need a reservation mechanism to avoid being negatively affected by a restart. Without reservation, memcache is not guaranteed to get a resource offer from the slave that holds all its data, and it would incur significant initialization time and downtime for the service. Reservation can also be used to limit the resources available to a role. Reservation provides guaranteed resources for roles, but improper usage might lead to resource fragmentation and lower overall utilization. Note that all reservation requests go through the Mesos authorization mechanism to ensure that the operator or framework requesting the operation has the proper privileges. Reservation privileges are specified to the Mesos master through ACLs, along with the rest of the ACL configuration. Mesos supports the following two kinds of reservation:

- Static reservation
- Dynamic reservation

Static reservation

In static reservation, resources are reserved for a particular role. Changing a static reservation requires restarting the slave after removing its checkpointed state. Static reservation is thus typically managed by operators using the --resources flag on the slave. The flag expects a list of name(role):value pairs for the different resources. If a resource is assigned to role A, then only frameworks with role A are eligible to get an offer for that resource. Any resource that does not include a role, or that is not included in the --resources flag, falls into the default role (*). For example, --resources="cpus:4;mem:2048;cpus(ads):8;mem(ads):4096" specifies that the slave has 8 CPUs and 4096 MB of memory reserved for the "ads" role, and 4 CPUs and 2048 MB of memory unreserved. Nonuniform static reservation across slaves can quickly become difficult to manage.

Dynamic reservation

Dynamic reservation allows operators and frameworks to manage reservations more dynamically. Frameworks can use dynamic reservation to reserve offered resources, so that those resources are only reoffered to the same framework. At the time of writing, dynamic reservation is still being actively developed and is targeted at the next release of Mesos (https://issues.apache.org/jira/browse/MESOS-2018).

When asked for a reservation, Mesos tries to convert the unreserved resources to reserved resources. Conversely, during an unreserve operation, the previously reserved resources are returned to the unreserved pool. To support dynamic reservation, Mesos allows a sequence of Offer::Operations to be performed in response to accepting resource offers. A framework manages reservations by sending Offer::Operations::Reserve and Offer::Operations::Unreserve as part of these operations when accepting resource offers. For example, consider a framework that receives the following resource offer with 32 CPUs and 65536 MB of memory:

   {
     "id" : <offer_id>,
     "framework_id" : <framework_id>,
     "slave_id" : <slave_id>,
     "hostname" : <hostname>,
     "resources" : [
       {
         "name" : "cpus",
         "type" : "SCALAR",
         "scalar" : { "value" : 32 },
         "role" : "*"
       },
       {
         "name" : "mem",
         "type" : "SCALAR",
         "scalar" : { "value" : 65536 },
         "role" : "*"
       }
     ]
   }

The framework can decide to reserve 8 CPUs and 4096 MB of memory by sending an Operation::Reserve message whose resources field describes the desired resource state:

   [
     {
       "type" : Offer::Operation::RESERVE,
       "resources" : [
         {
           "name" : "cpus",
           "type" : "SCALAR",
           "scalar" : { "value" : 8 },
           "role" : <framework_role>,
           "reservation" : {
             "framework_id" : <framework_id>,
             "principal" : <framework_principal>
           }
         },
         {
           "name" : "mem",
           "type" : "SCALAR",
           "scalar" : { "value" : 4096 },
           "role" : <framework_role>,
           "reservation" : {
             "framework_id" : <framework_id>,
             "principal" : <framework_principal>
           }
         }
       ]
     }
   ]

After a successful execution, the framework will receive resource offers with the reservation in place.
The next offer from the slave might look as follows:

   {
     "id" : <offer_id>,
     "framework_id" : <framework_id>,
     "slave_id" : <slave_id>,
     "hostname" : <hostname>,
     "resources" : [
       {
         "name" : "cpus",
         "type" : "SCALAR",
         "scalar" : { "value" : 8 },
         "role" : <framework_role>,
         "reservation" : {
           "framework_id" : <framework_id>,
           "principal" : <framework_principal>
         }
       },
       {
         "name" : "mem",
         "type" : "SCALAR",
         "scalar" : { "value" : 4096 },
         "role" : <framework_role>,
         "reservation" : {
           "framework_id" : <framework_id>,
           "principal" : <framework_principal>
         }
       },
       {
         "name" : "cpus",
         "type" : "SCALAR",
         "scalar" : { "value" : 24 },
         "role" : "*"
       },
       {
         "name" : "mem",
         "type" : "SCALAR",
         "scalar" : { "value" : 61440 },
         "role" : "*"
       }
     ]
   }

As shown, the resource offer now contains 8 CPUs and 4096 MB of memory as reserved resources, and 24 CPUs and 61440 MB of memory unreserved. The unreserve operation is similar: on receiving a resource offer, the framework can send an unreserve operation message, and subsequent offers will not contain the reserved resources.

Operators can use the /reserve and /unreserve HTTP endpoints of the operator API to manage reservations. The operator API allows operators to change the reservations specified when the slave starts. For example, the following command reserves 4 CPUs and 4096 MB of memory on slave1 for role1, with the operator authentication principal ops:

   ubuntu@master:~ $ curl -d slaveId=slave1 -d resources='[
     {
       "name" : "cpus",
       "type" : "SCALAR",
       "scalar" : { "value" : 4 },
       "role" : "role1",
       "reservation" : {
         "principal" : "ops"
       }
     },
     {
       "name" : "mem",
       "type" : "SCALAR",
       "scalar" : { "value" : 4096 },
       "role" : "role1",
       "reservation" : {
         "principal" : "ops"
       }
     }
   ]' -X POST http://master:5050/master/reserve

Before we end this discussion on resource allocation, it is important to note that the Mesos community continues to innovate on the resource allocation front by incorporating interesting ideas, such as oversubscription (https://issues.apache.org/jira/browse/MESOS-354), from academic literature and other systems.

Summary

In this article, we looked at the Mesos architecture in detail and learned how Mesos deals with resource allocation, resource isolation, and fault tolerance. We also saw the various ways in which we can extend Mesos.

Resources for Article:

Further resources on this subject:
- Recommender systems dissected
- Tuning Solr JVM and Container [article]
- Transformation [article]
- Getting Started [article]
Deployment Preparations

Packt
08 Jul 2015
23 min read
In this article by Jurie-Jan Botha, author of the book Grunt Cookbook, we cover the following recipes:

- Minifying HTML
- Minifying CSS
- Optimizing images
- Linting JavaScript code
- Uglifying JavaScript code
- Setting up RequireJS

(For more resources related to this topic, see here.)

Once our web application is built and its stability ensured, we can start preparing it for deployment to its intended market. This mainly involves the optimization of the assets that make up the application. Optimization in this context mostly refers to compression of one kind or another, some of which might lead to performance increases too. The focus on compression is primarily due to the fact that the smaller the asset, the faster it can be transferred from where it is hosted to a user's web browser. This leads to a much better user experience and can sometimes be essential to the functioning of an application. (The recipes below assume that each plugin is loaded in our Gruntfile; a minimal sketch follows this first recipe.)

Minifying HTML

In this recipe, we make use of the contrib-htmlmin (0.3.0) plugin to decrease the size of some HTML documents by minifying them.

Getting ready

In this example, we'll work with a basic project structure.

How to do it...

The following steps take us through creating a sample HTML document and configuring a task that minifies it:

1. We'll start by installing the package that contains the contrib-htmlmin plugin.
2. Next, we'll create a simple HTML document called index.html in the src directory, which we'd like to minify, and add the following content to it:

   <html>
   <head>
     <title>Test Page</title>
   </head>
   <body>
     <!-- This is a comment! -->
     <h1>This is a test page.</h1>
   </body>
   </html>

3. Now, we'll add the following htmlmin task to our configuration, which indicates that we'd like to have the white space and comments removed from the src/index.html file, and that we'd like the result to be saved in the dist/index.html file:

   htmlmin: {
     dist: {
       src: 'src/index.html',
       dest: 'dist/index.html',
       options: {
         removeComments: true,
         collapseWhitespace: true
       }
     }
   }

   The removeComments and collapseWhitespace options are used as examples here, as using the default htmlmin task will have no effect. Other minification options can be found at the following URL:
   https://github.com/kangax/html-minifier#options-quick-reference

4. We can now run the task using the grunt htmlmin command, which should produce output similar to the following:

   Running "htmlmin:dist" (htmlmin) task
   Minified dist/index.html 147 B → 92 B

If we now take a look at the dist/index.html file, we will see that all white space and comments have been removed:

<html><head><title>Test Page</title></head><body><h1>This is a test page.</h1></body></html>
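The recipes in this article configure tasks inside a standard Gruntfile. As a point of reference (this sketch is ours, not part of the original recipe), here is roughly where a task configuration such as htmlmin would live and how its plugin is loaded:

// A minimal Gruntfile.js sketch, assuming the relevant plugin packages
// (for example grunt-contrib-htmlmin) are already installed via npm.
module.exports = function (grunt) {
  grunt.initConfig({
    htmlmin: {
      dist: {
        src: 'src/index.html',
        dest: 'dist/index.html',
        options: { removeComments: true, collapseWhitespace: true }
      }
    }
  });

  // Each recipe's plugin must be loaded before its task can run.
  grunt.loadNpmTasks('grunt-contrib-htmlmin');
};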
Minifying CSS

In this recipe, we'll make use of the contrib-cssmin (0.10.0) plugin to decrease the size of some CSS documents by minifying them.

Getting ready

In this example, we'll work with a basic project structure.

How to do it...

The following steps take us through creating a sample CSS document and configuring a task that minifies it:

1. We'll start by installing the package that contains the contrib-cssmin plugin.
2. Then, we'll create a simple CSS document called style.css in the src directory, which we'd like to minify, and provide it with the following contents:

   body { /* Average body style */
     background-color: #ffffff;
     color: #000000; /*! Black (Special) */
   }

3. Now, we'll add the following cssmin task to our configuration, which indicates that we'd like to have the src/style.css file compressed and the result saved to the dist/style.min.css file:

   cssmin: {
     dist: {
       src: 'src/style.css',
       dest: 'dist/style.min.css'
     }
   }

4. We can now run the task using the grunt cssmin command, which should produce the following output:

   Running "cssmin:dist" (cssmin) task
   File dist/style.min.css created: 55 B → 38 B

If we take a look at the dist/style.min.css file that was produced, we will see that it has the compressed contents of the original src/style.css file:

body{background-color:#fff;color:#000;/*! Black (Special) */}

There's more...

The cssmin task provides us with several useful options that can be used in conjunction with its basic compression feature. We'll look at prefixing a banner, removing special comments, and reporting on gzipped results. (A note on minifying several files at once follows this recipe.)

Prefixing a banner

In case we'd like to automatically include some information about the compressed result in the resulting CSS file, we can do so in a banner. A banner can be prepended to the result by supplying the desired banner content to the banner option, as shown in the following example:

   cssmin: {
     dist: {
       src: 'src/style.css',
       dest: 'dist/style.min.css',
       options: {
         banner: '/* Minified version of style.css */'
       }
     }
   }

Removing special comments

Comments that should not be removed by the minification process are called special comments and can be indicated using the "/*! comment */" markers. By default, the cssmin task will leave all special comments untouched, but we can alter this behavior by making use of the keepSpecialComments option. The keepSpecialComments option can be set to either the *, 1, or 0 value. The * value is the default and indicates that all special comments should be kept, 1 indicates that only the first comment found should be kept, and 0 indicates that none of them should be kept. The following configuration will ensure that all comments are removed from our minified result:

   cssmin: {
     dist: {
       src: 'src/style.css',
       dest: 'dist/style.min.css',
       options: {
         keepSpecialComments: 0
       }
     }
   }

Reporting on gzipped results

Reporting is useful to see exactly how well the cssmin task has compressed our CSS files. By default, the size of the targeted file and the minified result will be displayed, but if we'd also like to see the gzipped size of the result, we can set the report option to gzip, as shown in the following example:

   cssmin: {
     dist: {
       src: 'src/main.css',
       dest: 'dist/main.css',
       options: {
         report: 'gzip'
       }
     }
   }
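The recipe above minifies a single file. As a brief aside (this is our own sketch, using Grunt's standard files mapping rather than anything from the recipe, and the file names are made up), several stylesheets can be handled in one target:

// A sketch of minifying several CSS files in one cssmin target, using
// Grunt's standard files mapping. File names are illustrative.
cssmin: {
  dist: {
    files: {
      'dist/style.min.css': ['src/style.css'],
      'dist/vendor.min.css': ['src/reset.css', 'src/grid.css']
    }
  }
}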
Optimizing images

In this recipe, we'll make use of the contrib-imagemin (0.9.4) plugin to decrease the size of images by compressing them as much as possible without compromising their quality. This plugin also provides a plugin framework of its own, which is discussed at the end of this recipe.

Getting ready

In this example, we'll work with the basic project structure.

How to do it...

The following steps take us through configuring a task that will compress an image for our project:

1. We'll start by installing the package that contains the contrib-imagemin plugin.
2. Next, we can ensure that we have an image called image.jpg in the src directory on which we'd like to perform optimizations.
3. Now, we'll add the following imagemin task to our configuration and indicate that we'd like to have the src/image.jpg file optimized and the result saved to the dist/image.jpg file:

   imagemin: {
     dist: {
       src: 'src/image.jpg',
       dest: 'dist/image.jpg'
     }
   }

4. We can then run the task using the grunt imagemin command, which should produce the following output:

   Running "imagemin:dist" (imagemin) task
   Minified 1 image (saved 13.36 kB)

If we now take a look at the dist/image.jpg file, we will see that its size has decreased without any impact on the quality.

There's more...

The imagemin task provides us with several options that allow us to tweak its optimization features. We'll look at how to adjust the PNG compression level, disable progressive JPEG generation, disable interlaced GIF generation, specify the SVGO plugins to be used, and use the imagemin plugin framework. (A sketch of optimizing a whole directory of images also follows below.)

Adjusting the PNG compression level

The compression of a PNG image can be increased by running the compression algorithm on it multiple times. By default, the compression algorithm is run 16 times. This number can be changed by providing a number from 0 to 7 to the optimizationLevel option. The 0 value means that compression is effectively disabled, and 7 indicates that the algorithm should run 240 times. In the following configuration, we set the compression level to its maximum:

   imagemin: {
     dist: {
       src: 'src/image.png',
       dest: 'dist/image.png',
       options: {
         optimizationLevel: 7
       }
     }
   }

Disabling the progressive JPEG generation

Progressive JPEGs are compressed in multiple passes, which allows a low-quality version of them to quickly become visible and increase in quality as the rest of the image is received. This is especially helpful when displaying images over a slower connection. By default, the imagemin plugin will generate JPEG images in the progressive format, but this behavior can be disabled by setting the progressive option to false, as shown in the following example:

   imagemin: {
     dist: {
       src: 'src/image.jpg',
       dest: 'dist/image.jpg',
       options: {
         progressive: false
       }
     }
   }

Disabling the interlaced GIF generation

An interlaced GIF is the equivalent of a progressive JPEG in that it allows the contained image to be displayed at a lower resolution before it has been fully downloaded, increasing in quality as the rest of the image is received. By default, the imagemin plugin will generate GIF images in the interlaced format, but this behavior can be disabled by setting the interlaced option to false, as shown in the following example:

   imagemin: {
     dist: {
       src: 'src/image.gif',
       dest: 'dist/image.gif',
       options: {
         interlaced: false
       }
     }
   }
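When a project has more than a handful of images, listing them one by one becomes tedious. As a side note (our own sketch, using Grunt's standard dynamic file mapping; the paths are made up), an entire directory can be targeted instead:

// A sketch of optimizing every image under src/images using Grunt's
// expand/cwd dynamic file mapping. Paths are illustrative.
imagemin: {
  dist: {
    files: [{
      expand: true,                      // enable dynamic expansion
      cwd: 'src/images',                 // source folder to scan
      src: ['**/*.{png,jpg,gif}'],       // match all supported images
      dest: 'dist/images'                // mirrored output folder
    }]
  }
}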
Specifying SVGO plugins to be used

When optimizing SVG images, the SVGO library is used by default. This allows us to specify the use of various plugins provided by the SVGO library that each perform a specific function on the targeted files. Refer to the following URL for more detailed instructions on how to use the svgoPlugins option and the SVGO library:
https://github.com/sindresorhus/grunt-svgmin#available-optionsplugins

Most of the plugins in the library are enabled by default, but if we'd like to indicate specifically which of these should be used, we can do so using the svgoPlugins option. Here, we can provide an array of objects, each containing a property with the name of the plugin to be affected, followed by a true or false value to indicate whether it should be activated. The following configuration disables three of the default plugins:

   imagemin: {
     dist: {
       src: 'src/image.svg',
       dest: 'dist/image.svg',
       options: {
         svgoPlugins: [
           {removeViewBox: false},
           {removeUselessStrokeAndFill: false},
           {removeEmptyAttrs: false}
         ]
       }
     }
   }

Using the imagemin plugin framework

In order to provide support for the various image optimization projects, the imagemin plugin has a plugin framework of its own that allows developers to easily create an extension that makes use of the tool they require. You can get a list of the available plugin modules for the imagemin plugin's framework at the following URL:
https://www.npmjs.com/browse/keyword/imageminplugin

The following steps will take us through installing and making use of the mozjpeg plugin to compress an image in our project. These steps start where the main recipe left off:

1. We'll start by installing the imagemin-mozjpeg package using the npm install imagemin-mozjpeg command, which should produce the following output:

   imagemin-mozjpeg@4.0.0 node_modules/imagemin-mozjpeg

2. With the package installed, we need to import it into our configuration file so that we can make use of it in our task configuration. We do this by adding the following line at the top of our Gruntfile.js file:

   var mozjpeg = require('imagemin-mozjpeg');

3. With the plugin installed and imported, we can now change the configuration of our imagemin task by adding the use option and providing it with the initialized plugin:

   imagemin: {
     dist: {
       src: 'src/image.jpg',
       dest: 'dist/image.jpg',
       options: {
         use: [mozjpeg()]
       }
     }
   }

4. Finally, we can test our setup by running the task using the grunt imagemin command. This should produce output similar to the following:

   Running "imagemin:dist" (imagemin) task
   Minified 1 image (saved 9.88 kB)

Linting JavaScript code

In this recipe, we'll make use of the contrib-jshint (0.11.1) plugin to detect errors and potential problems in our JavaScript code. It is also commonly used to enforce code conventions within a team or project. As can be derived from its name, it's basically a Grunt adaptation of the JSHint tool.

Getting ready

In this example, we'll work with the basic project structure.

How to do it...

The following steps take us through creating a sample JavaScript file and configuring a task that will scan and analyze it using the JSHint tool:

1. We'll start by installing the package that contains the contrib-jshint plugin.
2. Next, we'll create a sample JavaScript file called main.js in the src directory and add the following content to it:

   sample = 'abc';
   console.log(sample);

3. With our sample file ready, we can now add the following jshint task to our configuration. We'll configure this task to target the sample file and also add a basic option that we require for this example:

   jshint: {
     main: {
       options: {
         undef: true
       },
       src: ['src/main.js']
     }
   }

   The undef option is a standard JSHint option used specifically for this example and is not required for this plugin to function. Specifying this option indicates that we'd like to have errors raised for variables that are used without being explicitly defined.

4. We can now run the task using the grunt jshint command, which should produce output informing us of the problems found in our sample file:

   Running "jshint:main" (jshint) task

   src/main.js
     1 |sample = 'abc';
        ^ 'sample' is not defined.
     2 |console.log(sample);
        ^ 'console' is not defined.
     2 |console.log(sample);
                    ^ 'sample' is not defined.

   >> 3 errors in 1 file
There's more...

The jshint task provides us with several options that allow us to change its general behavior, in addition to how it analyzes the targeted code. We'll look at how to specify standard JSHint options, specify globally defined variables, send reported output to a file, and prevent task failure on JSHint errors. (A note on keeping these options in a separate .jshintrc file follows this recipe.)

Specifying standard JSHint options

The contrib-jshint plugin provides a simple way to pass all the standard JSHint options from the task's options object to the underlying JSHint tool. A list of all the options provided by the JSHint tool can be found at the following URL:
http://jshint.com/docs/options/

The following example adds the curly option to the task we created in the main recipe to enforce the use of curly braces wherever they are appropriate:

   jshint: {
     main: {
       options: {
         undef: true,
         curly: true
       },
       src: ['src/main.js']
     }
   }

Specifying globally defined variables

Making use of globally defined variables is quite common when working with JavaScript, which is where the globals option comes in handy. Using this option, we can define a set of global values that we'll use in the targeted code, so that errors aren't raised when JSHint encounters them. In the following example, we indicate that the console variable should be treated as a global and not raise errors when encountered:

   jshint: {
     main: {
       options: {
         undef: true,
         globals: {
           console: true
         }
       },
       src: ['src/main.js']
     }
   }

Sending reported output to a file

If we'd like to store the resulting output from our JSHint analysis, we can do so by specifying the path of a file that should receive it using the reporterOutput option, as shown in the following example:

   jshint: {
     main: {
       options: {
         undef: true,
         reporterOutput: 'report.dat'
       },
       src: ['src/main.js']
     }
   }

Preventing task failure on JSHint errors

The default behavior of the jshint task is to exit the running Grunt process once a JSHint error is encountered in any of the targeted files. This behavior becomes especially undesirable if you'd like to keep watching files for changes even when an error has been raised. In the following example, we indicate that we'd like to keep the process running when errors are encountered by giving the force option a true value:

   jshint: {
     main: {
       options: {
         undef: true,
         force: true
       },
       src: ['src/main.js']
     }
   }
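As an aside (our own suggestion rather than part of the recipe), teams often keep JSHint options in a shared .jshintrc file so that editors and Grunt agree on the rules; the plugin can be pointed at such a file like this:

// A sketch of externalizing JSHint options into a .jshintrc file.
// The .jshintrc contents shown in the comment are illustrative:
//   { "undef": true, "curly": true, "globals": { "console": true } }
jshint: {
  main: {
    options: {
      jshintrc: '.jshintrc'   // read options from this file instead
    },
    src: ['src/main.js']
  }
}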
Uglifying JavaScript code

In this recipe, we'll make use of the contrib-uglify (0.8.0) plugin to compress and mangle some files containing JavaScript code. For the most part, the process of uglifying just removes all the unnecessary characters and shortens variable names in a source code file. This has the potential to dramatically reduce the size of the file, slightly increase performance, and make the inner workings of your publicly available code a little more obscure.

Getting ready

In this example, we'll work with the basic project structure.

How to do it...

The following steps take us through creating a sample JavaScript file and configuring a task that will uglify it:

1. We'll start by installing the package that contains the contrib-uglify plugin.
2. Then, we can create a sample JavaScript file called main.js in the src directory, which we'd like to uglify, and provide it with the following contents:

   var main = function () {
     var one = 'Hello' + ' ';
     var two = 'World';
     var result = one + two;
     console.log(result);
   };

3. With our sample file ready, we can now add the following uglify task to our configuration, indicating the sample file as the target and providing a destination output file:

   uglify: {
     main: {
       src: 'src/main.js',
       dest: 'dist/main.js'
     }
   }

4. We can now run the task using the grunt uglify command, which should produce output similar to the following:

   Running "uglify:main" (uglify) task
   >> 1 file created.

If we now take a look at the resulting dist/main.js file, we should see that it contains the uglified contents of the original src/main.js file.

There's more...

The uglify task provides us with several options that allow us to change its general behavior and how it uglifies the targeted code. We'll look at specifying standard UglifyJS options, generating source maps, and wrapping generated code in an enclosure.

Specifying standard UglifyJS options

The underlying UglifyJS tool provides a set of options for each of its separate functional parts. These parts are the mangler, compressor, and beautifier. The contrib-uglify plugin allows passing options to each of these parts using the mangle, compress, and beautify options. The available options for the mangler, compressor, and beautifier parts can be found at the following URLs (listed in the order mentioned):

https://github.com/mishoo/UglifyJS2#mangler-options
https://github.com/mishoo/UglifyJS2#compressor-options
https://github.com/mishoo/UglifyJS2#beautifier-options

The following example alters the configuration of the main recipe to provide a single option to each of these parts:

   uglify: {
     main: {
       src: 'src/main.js',
       dest: 'dist/main.js',
       options: {
         mangle: {
           toplevel: true
         },
         compress: {
           evaluate: false
         },
         beautify: {
           semicolons: false
         }
       }
     }
   }

Generating source maps

As code gets mangled and compressed, it becomes effectively unreadable to humans, and therefore nearly impossible to debug. For this reason, we are provided with the option of generating a source map when uglifying our code. The following example makes use of the sourceMap option to indicate that we'd like to have a source map generated along with our uglified code:

   uglify: {
     main: {
       src: 'src/main.js',
       dest: 'dist/main.js',
       options: {
         sourceMap: true
       }
     }
   }

Running the altered task will now, in addition to the dist/main.js file with our uglified source, generate a source map file called main.js.map in the same directory as the uglified file.

Wrapping generated code in an enclosure

When building your own JavaScript code modules, it's usually a good idea to have them wrapped in a wrapper function to ensure that you don't pollute the global scope with variables that you won't be using outside of the module itself. For this purpose, we can use the wrap option to indicate that we'd like to have the resulting uglified code wrapped in a wrapper function, as shown in the following example:

   uglify: {
     main: {
       src: 'src/main.js',
       dest: 'dist/main.js',
       options: {
         wrap: true
       }
     }
   }

If we now take a look at the resulting dist/main.js file, we should see that all the uglified contents of the original file are now contained within a wrapper function.
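Running each of these tasks one by one quickly becomes repetitive. As a side note (our own sketch, using Grunt's standard task aliasing, which is not part of the original recipes), the tasks configured in this article can be chained into a single build command:

// A sketch of aliasing this article's tasks into one build task.
// This call belongs inside the Gruntfile's module.exports function.
// Run with: grunt build
grunt.registerTask('build', [
  'jshint',     // lint first, so a broken build fails fast
  'htmlmin',
  'cssmin',
  'imagemin',
  'uglify'
]);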
Setting up RequireJS

In this recipe, we'll make use of the contrib-requirejs (0.4.4) plugin to package the modularized source code of our web application into a single file. For the most part, this plugin just provides a wrapper for the RequireJS tool. RequireJS provides a framework for modularizing JavaScript source code and consuming those modules in an orderly fashion. It also allows packaging an entire application into one file and importing only the modules that are required, while keeping the module structure intact.

Getting ready

In this example, we'll work with the basic project structure.

How to do it...

The following steps take us through creating some files for a sample application and setting up a task that bundles them into one file:

1. We'll start by installing the package that contains the contrib-requirejs plugin.
2. First, we'll need a file that will contain our RequireJS configuration. Let's create a file called config.js in the src directory and add the following content to it:

   require.config({
     baseUrl: 'app'
   });

3. Secondly, we'll create a sample module that we'd like to use in our application. Let's create a file called sample.js in the src/app directory and add the following content to it:

   define(function (require) {
     return function () {
       console.log('Sample Module');
     }
   });

4. Lastly, we'll need a file that contains the main entry point for our application and that makes use of our sample module. Let's create a file called main.js in the src/app directory and add the following content to it:

   require(['sample'], function (sample) {
     sample();
   });

5. Now that we've got all the necessary files for our sample application, we can set up a requirejs task that will bundle them all into one file:

   requirejs: {
     app: {
       options: {
         mainConfigFile: 'src/config.js',
         name: 'main',
         out: 'www/js/app.js'
       }
     }
   }

   The mainConfigFile option points out the configuration file that will determine the behavior of RequireJS. The name option indicates the name of the module that contains the application entry point. In this example, our application entry point is contained in the app/main.js file, and app is set as the base directory in the src/config.js file; this translates the app/main.js filename into the module name main. The out option is used to indicate the file that should receive the result of the bundled application.

6. We can now run the task using the grunt requirejs command, which should produce output similar to the following:

   Running "requirejs:app" (requirejs) task

We should now have a file named app.js in the www/js directory that contains our entire sample application.

There's more...

The requirejs task provides us with all the underlying options of the RequireJS tool. We'll look at how to use these exposed options and how to generate a source map.

Using RequireJS optimizer options

The RequireJS optimizer is quite an intricate tool, and it therefore provides a large number of options to tweak its behavior. The contrib-requirejs plugin allows us to easily set any of these options by specifying them as options of the plugin itself.
A list of all the available configuration options for the RequireJS build system can be found in the example configuration file at the following URL:
https://github.com/jrburke/r.js/blob/master/build/example.build.js

The following example indicates that the UglifyJS2 optimizer should be used instead of the default UglifyJS optimizer by using the optimize option:

   requirejs: {
     app: {
       options: {
         mainConfigFile: 'src/config.js',
         name: 'main',
         out: 'www/js/app.js',
         optimize: 'uglify2'
       }
     }
   }

Generating a source map

When the source code is bundled into one file, it becomes somewhat harder to debug, as you now have to trawl through miles of code to get to the point you're actually interested in. A source map can help us with this issue by relating the resulting bundled file to the modularized structure it is derived from. Simply put, with a source map, our debugger will display the separate files we had before, even though we're actually using the bundled file. The following example makes use of the generateSourceMaps option to indicate that we'd like to generate a source map along with the resulting file:

   requirejs: {
     app: {
       options: {
         mainConfigFile: 'src/config.js',
         name: 'main',
         out: 'www/js/app.js',
         optimize: 'uglify2',
         preserveLicenseComments: false,
         generateSourceMaps: true
       }
     }
   }

In order to use the generateSourceMaps option, we have to indicate that UglifyJS2 is to be used for optimization by setting the optimize option to uglify2, and that license comments should not be preserved by setting the preserveLicenseComments option to false.

Summary

This article covered the optimization of images, the minifying of HTML and CSS, ensuring the quality of our JavaScript code, compressing it, and packaging it all together into one source file.

Resources for Article:

Further resources on this subject:
- Grunt in Action [article]
- So, what is Node.js? [article]
- Exploring streams [article]
Project Setup and Modeling a Residential Project

Packt
08 Jul 2015
20 min read
In this article by Scott H. MacKenzie and Adam Rendek, authors of the book ArchiCAD 19 – The Definitive Guide, our journey into ArchiCAD 19 begins with an introduction to the graphic user interface, also known as the GUI. As with any software program, there is a menu bar along the top that gives access to all the tools and features. There are also toolbars and tool palettes that can be docked anywhere you like, in addition to some special palettes that pop up only when you need them. After your introduction to ArchiCAD's user interface, you can jump right in and start creating the walls and floors for your new house. Then you will learn how to create ceilings and the stairs. Before too long you will have a 3D model to orbit around. It is really fun and probably easier than you would expect.

(For more resources related to this topic, see here.)

The ArchiCAD GUI

The first time you open ArchiCAD, you will find the toolbars along the top, just under the menu bar, and palettes docked to the left and right of the drawing area. We will focus on the following three palettes to get started:

- The Toolbox palette: This contains all of your selection, modeling, and drafting tools. It will be located on the left-hand side by default.
- The Info Box palette: This is your context menu, which changes according to whatever tool is currently in use. By default, it will be located directly under the toolbars at the top. It has a scrolling function; hover your cursor over the palette and spin the scroll wheel on your mouse to reveal everything on the palette.
- The Navigator palette: This is your project navigation window. This palette gives you access to all your views, sheets, and lists. It will be located on the right-hand side by default.

These three palettes can be seen in the following screenshot:

All of the mentioned palettes are dockable and can be arranged however you like on your screen. They can also be dragged away from the main ArchiCAD interface; for instance, you could keep palettes on a second monitor.

Panning and Zooming

ArchiCAD has the same panning and zooming interface as most other CAD (computer-aided design) and BIM (building information modeling) programs. Rolling the scroll wheel on your mouse will zoom in and out. Pressing down on the scroll wheel (or middle button) and moving your cursor will execute a pan. Each drawing view window has a row of zoom commands along the bottom; you should try each one to get familiar with its function.

View toggling

When you have multiple views open, you can toggle through them by pressing the Ctrl key and tapping the Tab key, or you can pick any of the open views from the bottom of the Window pull-down menu. Pressing the F2 key will open a 2D floor plan view, and pressing the F3 key will open the default 3D view. Pressing the F5 key will open a 3D view of selected items; in other words, if you want to isolate specific items in a 3D view, select those items and press F5. The function keys are second nature to those who have been using ArchiCAD for a long time; if a feature has a function key shortcut, you should use it.

Project setup

ArchiCAD is available in multiple language versions. The exercises in this book use the USA version of ArchiCAD, which is in English. There is another version in English, referred to as the International (INT) version.
You can use the International version to do the exercises in this book; just be aware that there may be some subtle differences in the way that something is named or designed.

When you create a new project in ArchiCAD, you start by opening a project template. The template will have all the basic stuff you need to get started, including layers, line types, wall types, doors, windows, and more. The following lesson will take you through the first steps in creating a new ArchiCAD project:

1. Open ArchiCAD. The Start ArchiCAD dialog box will appear.
2. Select the Create a New Project radio button at the top.
3. Select the Use a Template radio button under Set up Project Settings.
4. Select ArchiCAD 19 Residential Template.tpl from the drop-down list. If you have the International version of ArchiCAD, the residential template may not be available; in that case, you can use ArchiCAD 19 Template.tpl.
5. Click on New. This will open a blank project file.

Project Settings

Now that you have opened your new project, we are going to create a house with 4 stories (which includes a story for the roof). We create a story for the roof in order to provide a workspace to model the elements on that level. The template we just opened only has 2 stories, so we will need to add 2 more. Then we need to look at some other settings.

Stories

The settings for the stories are as follows:

1. On the Navigator palette, select the Project Map icon.
2. Double click on 1st FLOOR.
3. Right click on Stories and select Create New Story. You will be prompted to give the new story a name. Enter the name BASEMENT.
4. Click on the button next to Below, enter 9' into the Height box, and click on the Create button.
5. Then double click on 2. 2nd FLOOR.
6. Right click on Stories and then select Create New Story. You will be prompted to give the new story a name. Enter the name ROOF.
7. Click on the button next to Above, enter 9' into the Height box, and click on the Create button.

Your list of stories should now look like this:

3. ROOF
2. 2nd Floor
1. 1st Floor
-1. BASEMENT

The International version of ArchiCAD (INT) will give the first floor the index number 0, the second floor the index number 1, and the roof the index number 2.

Now we need to adjust the heights of the other stories:

1. Right click on Stories (on the Navigator palette) and select Story Settings.
2. Change the number in the Height to Next box for 1st FLOOR to 9'.
3. Do the same for 2nd FLOOR.

Units

On the menu bar, go to Options | Project Preferences | Working Units and perform the following steps:

1. Ensure that Model Units is set to feet & fractional inches.
2. Ensure that Fractions is set to 1/64.
3. Ensure that Layout Units is set to feet & fractional inches.
4. Ensure that Angle Unit is set to Decimal degrees.
5. Ensure that Decimals is set to 2.

You are now ready to begin modeling your house, but first let's save the project. To save the project, perform the following steps:

1. Navigate to the File menu and click on Save. If by chance you have saved it already, then click on Save As.
2. Name your file Colonial House.
3. Click on Save.

Renovation filters

The Renovation Filter feature allows you to differentiate how your drawing elements will appear in different construction phases. For renovation projects that have demolition and new work phases, you need to show the items to be demolished differently from the existing items that are to remain, or from items that are new. The projects we will work on in this book do not require this feature to manage phases, because we will only be creating new construction.
However, it is essential that your renovation filter is set to New Construction. We will do this in the first modeling exercise.

Selection methods

Before you can do much in ArchiCAD, you need to be familiar with selecting elements. There are several ways to select something in ArchiCAD, which are as follows:

Single cursor click

Pick the Arrow tool from the toolbox, or hold the Shift key down on the keyboard, and click on what you want to select. As you click on elements, hold the Shift key down to add them to your selection set. To remove elements from the selection set, just click on them again with the Shift key pressed.

There is a mode within this mode called Quick Selection. It is toggled on and off from the Info Box palette, and its icon looks like a magnet. When it is on, it works like a magnet because it will stick to faces or surfaces, such as slabs or fill patterns. If this mode is not on, then you are required to find an edge, endpoint, or hotspot node to select an element with a single click. Hold the Space key down to temporarily change the mode while selecting elements.

Window

Pick the Arrow tool from the toolbox, or hold the Shift key down, and draw your selection window: click once for the window's starting corner and click a second time for the end corner. This works just as windowing does in AutoCAD, not as in Revit, where you need to hold the mouse button down while you draw your window. There are 3 different windowing methods, each one set from the Info Box palette:

- Partial Elements: Anything that is inside of or touching the window will be selected. AutoCAD users will know this as a crossing window.
- Entire Elements: Anything completely encapsulated by the window will be selected. If something is not completely inside the window, then it will not be selected.
- Direction Dependent: Click and window to the left, and the Partial Elements window will be used; click and window to the right, and the Entire Elements window will be used.

Marquee

A marquee is a selection window that stays on the screen after you create it. If you are a MicroStation user, this will be similar to a selection window. It can be used for printing a specific area in a drawing view and for performing what AutoCAD users would refer to as a Stretch command. There are 2 types of marquees: single story (skinny) and multi story (fat). The single story marquee is used when you want to select elements on your current story view only. The multi-story marquee will select everything on your current story as well as the stories above and below your selections.

The Find & Select tool

This lets ArchiCAD select elements for you, based on the attribute criteria that you define, such as element type, layer, and pen number. When you have the criteria defined, click on the plus sign button on the palette, and all the elements matching that criteria inside your current view or marquee will be selected. The quickest way to open the Find & Select tool is with the Ctrl + F key combination.

Modification commands

As you draw, you will inevitably need to move, copy, stretch, or trim something. Select your items first, and then execute the modification command.
Here are the basic commands you will need to get things moving:

- Adjust (Extend): Press Ctrl + - or navigate to Edit | Reshape | Adjust
- Drag (Move): Press Ctrl + D or navigate to Edit | Move | Drag
- Drag a Copy (Copy): Press Ctrl + Shift + D or navigate to Edit | Move | Drag a Copy
- Intersect (Fillet): Click on the Intersect button on the Standard toolbar or navigate to Edit | Reshape | Intersect
- Resize (Scale): Press Ctrl + K or navigate to Edit | Reshape | Resize
- Rotate: Press Ctrl + E or navigate to Edit | Move | Rotate
- Stretch: Press Ctrl + H or navigate to Edit | Reshape | Stretch
- Trim: Press Ctrl or click on the Trim button on the Standard toolbar, or navigate to Edit | Reshape | Trim. Hold the Ctrl key down and click on the portion of wall or line that you want trimmed off. This is the fastest way to trim anything!

Memorizing the keyboard combinations above is a sure way to increase your productivity.

Modeling – part I

We will start with the Wall tool to create the main exterior walls on the 1st floor of our house, and then create the floor with the Slab tool. However, before we begin, let's make sure your Renovation Filter is set to New Construction.

Setting the Renovation Filter

The Renovation Filter is an active setting that controls how the elements you create are displayed. Everything we create in this project is new construction, so we need the New Construction filter to be active. To do so, go to the Document menu, click on Renovation, and then click on 04 New Construction.

Using the Wall tool

The Wall tool has settings for height, width, composite, layer, pen weight, and more. We will learn about these things as we go along, a little bit more each time we progress in the project.

1. Double click on 1. 1st Story in the Navigator palette to ensure we are working on story 1.
2. Select the Wall tool from the Toolbox palette, or from the menu bar under Design | Design Tools | Wall. Notice that this will automatically change the contents of the Info Box palette.
3. Click on the wall icon inside Info Box. This will bring up the active properties of the Wall tool in the form of the Wall Default Settings window. (This can also be achieved by double clicking on the Wall tool button in the Toolbox.)
4. Change the composite type to Siding 2x6 Wd. Stud. Click on the wall composite button to do this.

Creating the exterior walls of the 1st Story
If your Tracker palette does not appear, it may be toggled off. Go up to the Standard tool bar and click on the Tracker button to turn it on. Select this again and make your first click on the upper left end corner of your first wall. Move your cursor down, so that it snaps to the guideline, enter the number 28, and press the Enter key. Draw your third wall by clicking on the bottom left endpoint of your second wall, move your cursor to the right, snapped over the guide line, type in the number 24 and press Enter. Draw your fourth wall by clicking on the bottom right end point of your third wall and the starting point of your first wall. You should now have four walls that measure 24'-0" x 28"-0, outside edge to outside edge. Move your four walls to the center of the drawing view and perform the following steps: Click on the Arrow tool at the top of the Toolbox. Click outside one of the corners of the walls, and then click on the opposite side. All four walls should be selected now. Use the Drag command to move the walls. The quickest way to activate the Drag command is by pressing Ctrl + D. The long way is from the menu bar by navigating to Edit | Move | Drag. Drag (move) the walls to the center of your drawing window. Press the Esc key or click on a blank space in your drawing window to deselect the walls. You can select all the walls in a view by activating the Wall tool and pressing Ctrl + A. You are now ready to create a floor with the slab tool. But first, let's have a little fun and see how it looks in 3D (press the F3 key): From the Navigator palette, double click on Generic Axonometry under the 3D folder icon. This will open a 3D view window. Hold your Shift key down, press down on your scroll wheel button, and slowly move your mouse around. You are now orbiting! Play around with it a little, then get back to work and go to the next step to create your first floor slab. Press the F2 key to get back to a 2D view. You can also perform a 3D orbit via the Orbit button at the bottom of any 3D view window. Creating the first story's floor with the Slab tool The slab tool is used to create floors. It is also used to create ceilings. We will begin using it now to create the first floor for our house. Similar to the Wall tool, it also has settings for layer, pen weight and composite. To create the first story's floor using the Slab tool, perform the following steps: Select the Slab tool from the Toolbox palette or from the menu bar under Design | Design Tools | Slab. This will change the contents of the Info Box palette. Click on the Slab icon in Info Box. This will bring up the Slab Default Settings (active properties) window for the Slab tool. As with the Wall tool, you have a composite setting for the slab tool. Set the composite type for the slab tool to FLR Wd Flr + 2x10. The layer should be set to A-FLOR. Click OK. You could draw the shape of the slab by tracing over the outside lines of your walls but we are going to use the Magic Wand feature. Hover your cursor over the space inside your four walls and press the space bar on your keyboard. This will automatically create the slab using the boundary created by the walls. Then, open a 3D view and look at your floor. Instead of using the tool icon inside the Info Box palette, double click on any tool icon inside the Toolbox palette to bring up the default settings window for that tool. 
Creating the exterior walls and floor slabs for the basement and the second story

We could repeat all of the previous steps to create the floor and walls for the second story and the basement, but in this case it will be quicker to copy what we have already drawn on the first story with the Edit Elements by Stories tool. Perform the following steps to create the exterior walls and floor slabs for the basement and second story:

1. Go to the Navigator palette, right click over Stories, and select Edit Elements by Stories. The Edit Elements by Stories window will open.
2. Under Select Action, set it to Copy.
3. Under From Story, set it to 1. 1st FLOOR.
4. In the To Story section, check the boxes for 2nd FLOOR and -1. BASEMENT.
5. Click on OK.
6. You should see a dialog box appear, stating that as a result of the last operation, elements have been created and/or have changed their position on currently unseen stories. Whenever you get this message, you should confirm that you have not created any unwanted elements. Click on the Continue button.

Now you should have walls and a floor on three stories: BASEMENT, 1st FLOOR, and 2nd FLOOR.

The quickest way to jump to the next story up or down is with the Ctrl + Arrow Up or Ctrl + Arrow Down key combination.

Basement element modification

The floor and the walls on the BASEMENT story need to be changed to a different composite type. Do this by performing the following steps:

1. Open the BASEMENT view and select the four walls by clicking on them one at a time while holding down the Shift key.
2. Right click over your selection and click on Wall Selection Settings.
3. Change the walls to the EIFS on 8" CMU composite type. Then, click on OK.
4. Move your cursor over the floor slab. The quick selection cursor should appear. This selection mode allows you to click on an object without needing to find an edge or endpoint. Click on the slab.
5. Open the Slab Selection Settings window, but this time do it by pressing the Ctrl + T key combination.
6. Change the floor slab composite to Conc. Slab: 4" on gravel. Click on OK.

The Ctrl + T key combination is the quickest way to bring up an element's selection settings window when an element is selected.

Open a 3D view (by pressing the F3 key) and orbit around your house. It should look similar to the following screenshot:

Adding the garage

We need to add the garage and the laundry room, which connects the garage to the house. Do this by performing the following steps:

1. Open the 1st FLOOR story from the project map.
2. Start the Wall tool. From the Info Box palette, set the wall composite setting to Siding 2x6 Wd. Stud.
3. Click on the upper-left corner of your house for your wall starting point. Move your cursor to the left, snap to the guide line, type 6'-10", and press Enter.
4. Change the Geometry Method setting on Info Box to Chained. Refer to the following screenshot:
5. Start your next wall by clicking on the endpoint of your last wall, move your cursor up, snap to the guide line, type 5', and press Enter.
6. Move your cursor to the left, snap to the guide line, type in 12'-6", and press Enter.
7. Move your cursor down, snap to the guide line, type in 22'-4", and press Enter.
8. Move your cursor to the right, snap to the guide line, and double click on the perpendicular west wall (double pressing your Enter key will work the same as a double click).

Now we want to create the floor for this new set of walls. To do that, perform the following steps:

1. Start the Slab tool.
2. Change the composite to Conc. Slab: 4" on gravel.
3. Hover your cursor inside the new set of walls and press the Space key to use the Magic Wand. This will create the floor slab for the garage and laundry room.

There is still one more wall to create, but this time we will use the Adjust command to, in effect, create a new wall:

1. Select the 5'-0" wall drawn in the previous exercise.
2. Go to the Edit menu, click on Reshape, and then click on Adjust.
3. Click on the bottom edge of the perpendicular wall down below. The wall should extend down. Refer to the following screenshot:
4. Then change to a 3D view (by pressing F3) and examine your work.

The 3D view

If you switch to a 3D view and your new modeling does not show, zoom in or out to refresh the view, or double click your scroll wheel (middle button). Your new work will appear.

Summary

In this article you were introduced to the ArchiCAD graphical user interface (GUI) and project settings, and you learned how to select elements. You created all the major modeling for your house and got a primer on layers. You should now have a good understanding of the ArchiCAD way of creating architectural elements and how to control their parameters.

Resources for Article:

Further resources on this subject:
- Let There be Light! [article]
- Creating an AutoCAD command [article]
- Setting Up for Photoreal Rendering [article]
Zabbix and I – Almost Heroes

Packt
08 Jul 2015
8 min read
In this article written by Luciano Alves, author of the book Zabbix Performance Tuning, the author explains that ever since he started working with IT infrastructure, he has noticed that almost every company, when it starts thinking about a monitoring tool, wants some way of knowing that a system or service is about to go down before it actually happens. Yet what they typically ask for is a tool that raises an alert when something is already broken. With that approach, the system administrator learns about an error or system outage only after it occurs (and possibly while users are trying to use those systems). We need a monitoring solution that helps us predict system outages and any other situation that can affect our services. Our approach with monitoring tools should cover not only system monitoring but also business monitoring.

Nowadays, any company (small, medium, or large) depends on technology to some degree, from servers and network assets to IP equipment with a lower environmental impact. Maybe you need security cameras, thermometers, UPS units, access control devices, or any other IP device from which you can gather useful data. What about applications and services? What about data integration or transactions? What about user experience? What about a supplier website or system that you depend on? We should realize that monitoring is not restricted to IT infrastructure; it can be extended to other areas and business levels as well.

After starting Zabbix – the initial steps

Suppose you already have your Zabbix server up and running. In a few weeks, Zabbix has helped you save a lot of time while restoring systems. It has also helped you notice some hidden things in your environment: maybe a flapping port on a network switch, or a router short on CPU. In a few months, Zabbix and you (of course) are like superstars. During lunch, people are talking about you. Some are happy because you've dealt with a recurring error. Maybe a manager asks you to find a way to monitor a printer because it's very important to their team, another manager asks you to monitor an application, and so on. The other teams and areas also need some kind of monitoring, and not only of IT things. But are these people familiar with technical matters? Technical words, expressions, flows, and lines of thought are not easy to follow for people with nontechnical backgrounds. Of course, in small and medium enterprises (SMEs), things move faster and paths are shorter, but the scenario is not too different in most cases. You may work alone or in a huge team, but now you have another important partner: Zabbix. An immutable fact is that monitoring things comes with ever more responsibility and higher expectations of reliability. At this point, we have some new issues to solve.

How do we create and authenticate users? When Zabbix's visibility starts growing in your environment, you will need to think about how to manage and handle these users. Do you have an LDAP directory or Microsoft Active Directory that you can use for centralized authentication? Naturally, the more users you have, the more requests you will get. Will you permit any user to access the Zabbix interface? Only a few? And which ones?

Is it necessary to create a custom monitor? We know that Zabbix has a lot of built-in keys for gathering data, and these keys are available for a good number of operating systems.
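To make this concrete, here is a small, hypothetical sketch of both sides of that question: querying one of the agent's built-in keys from the Zabbix server with the zabbix_get utility, and extending the agent with a custom key through a UserParameter entry in zabbix_agentd.conf. The host address, the mysql.ping key name, and the mysqladmin command are illustrative assumptions, not taken from the original text:

    # Query a built-in agent key from the Zabbix server or proxy
    zabbix_get -s 192.168.1.50 -k system.cpu.load[all,avg1]

    # /etc/zabbix/zabbix_agentd.conf -- a simple custom monitor as a user parameter;
    # the command must print a single value to standard output
    UserParameter=mysql.ping,mysqladmin -uzabbix ping 2>/dev/null | grep -c alive

After adding a UserParameter entry, the agent has to be restarted, and the new key can then be attached to an item in the frontend just like any built-in key.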
We also have built-in functions to gather data using the Intelligent Platform Management Interface (IPMI), Simple Network Management Protocol (SNMP), Open Database Connectivity (ODBC), Java Management Extensions (JMX), user parameters in the Zabbix agent, and so on. However, we need to think about the wider scenario, where we must gather data from somewhere Zabbix hasn't reached yet. Our experience shows that most of the time it is necessary to create custom monitors (not one, but a lot of them). Zabbix is a very flexible and easy-to-customize platform, and it is possible to make Zabbix do anything you want. However, for every new function or monitor, you'll need to think about what kind of extension you'll use. More functions mean more data, more load, and more TCP connections! This means that when other teams or areas start putting the spotlight on Zabbix, you will need to think about the number of new functions or monitors you will have to build. Then, which language will you choose to develop these new things? Maybe you know the C language and are thinking of using Zabbix modules. Will you use bulk operations to avoid network traffic?

The natural growth

In most scenarios, natural growth occurs without control; people are not used to planning for it, and it is very important to keep it under control. When people start their Zabbix deployment, they probably do not intend to cater to all company teams, areas, or businesses. They think about their own needs and their own team only, so they don't think much about user rights, mainly because they are technicians and know mostly about hosts, items, triggers, maps, graphs, screens, and so on. What about users who are not technicians? Will they understand the Zabbix interface easily?

Did you know that in Zabbix there are many paths that reach the same point? The Zabbix interface isn't object-based, which means that users need a lot of clicks to reach (read or write) the information related to an object (hosts, items, graphs, triggers, events, and so on):

- If you need to see the most recent data gathered from a specific item, you'll need to use the Monitoring menu, then the Latest data menu, choose the group that the host belongs to, choose your host, and finally search for your item in the table.
- If you need to see a specific custom graph, use the Graphs menu under Monitoring, choose the group the host belongs to, choose your host, and then search for your graph in a combobox.
- If you need to know about an active trigger on your host, use the Triggers menu under Monitoring, choose the group your host belongs to, and choose your host. Then you can see the triggers from that specific host.
- If you want to include a new item in an existing custom graph, access the Hosts menu under Configuration, choose the group the hosts belong to, search for your host, and click on the Graphs link. Then you can choose which graph you want to change.

That is a lot of clicks for simple things. Of course, the steps you just saw are familiar to the people who deployed Zabbix, but is the same true for other teams? You may be thinking right now that it doesn't matter to those users. It does matter, though, and it's directly related to Zabbix's growth in your environment. Okay, the next two questions will probably be: are you sure it matters? And why? Let's agree that the current Zabbix interface isn't very user friendly for nontechnical people. But along the path of natural growth, you started gathering data from a lot of things that are not just IT related. You can also develop custom charts from any Zabbix data via its API functions. Now you'll have a lot of nontechnical people trying to use Zabbix data. It will certainly be necessary to create some maps and screens to help these users get the required information quickly and smoothly. The following screenshots show how we can transform the viewing layer of Zabbix into something more attractive:

Tactical dashboard

Here is what a strategic dashboard may look like:

Strategic dashboard

The point here is whether your Zabbix deployment is prepared to cater to these types of requirements.

Summary

We've seen how Zabbix has evolved in terms of performance with each version, and why it's important to be aware of its new features. Another significant point is that the importance of Zabbix grows as other teams and areas of the company become aware of the tool's potential. This movement will take Zabbix to all the corners of a company, which often requires a more open approach as far as monitoring tasks are concerned. Monitoring only servers and network assets will not suffice.

Resources for Article:

Further resources on this subject:
Going beyond Zabbix agents [article]
Understanding Self-tuning Thresholds [article]
Query Performance Tuning [article]
The Blueprint Class

Packt
08 Jul 2015
26 min read
In this article by Nitish Misra, author of the book Learning Unreal Engine Android Game Development, we look at the Blueprint class. With it, you need to do all the scripting and everything else only once. A Blueprint class is an entity that contains actors (static meshes, volumes, camera classes, trigger boxes, and so on) and functionalities scripted into it. Looking once again at our example of the lamp turning on/off, say you want to place 10 such lamps. With a Blueprint class, you would just have to create and script one, save it, and duplicate it. This is really an amazing feature offered by UE4.

Creating a Blueprint class

To create a Blueprint class, click on the Blueprints button in the Viewport toolbar and, in the dropdown menu, select New Empty Blueprint Class. A window will then open, asking you to pick your parent class, indicating the kind of Blueprint class you wish to create. At the top, you will see the most common classes. These are as follows:

- Actor: An Actor, as already discussed, is an object that can be placed in the world (static meshes, triggers, cameras, volumes, and so on all count as actors)
- Pawn: A Pawn is an actor that can be controlled by the player or the computer
- Character: This is similar to a Pawn, but has the ability to walk around
- Player Controller: This is responsible for giving the Pawn or Character inputs in the game, or controlling it
- Game Mode: This is responsible for all of the rules of gameplay
- Actor Component: You can create a component using this and add it to any actor
- Scene Component: You can create components that you can attach to other scene components

Apart from these, there are other classes that you can choose from. To see them, click on All Classes, which opens a menu listing all the classes you can base a Blueprint on. For our key cube, we will need to create an Actor Blueprint class. Select Actor, which will open another window asking you where you wish to save it and what to name it. Name it Key_Cube and save it in the Blueprint folder. When you are satisfied, click on OK and the Actor Blueprint class window will open.

The Blueprint class user interface is similar to that of the Level Blueprint, but with a few differences. It has some extra windows and panels, described as follows:

Components panel: The Components panel is where you can view and add components to the Blueprint class. The default component in an empty Blueprint class is DefaultSceneRoot. It cannot be renamed, copied, or removed; however, as soon as you add a component, it will be replaced, and if you delete all of the components, it will come back. To add a component, click on the Add Component button, which opens a menu from which you can choose the component to add. Alternatively, you can drag an asset from the Content Browser and drop it into either the Graph Editor or the Components panel, and it will be added to the Blueprint class as a component. Components include actors such as static or skeletal meshes, light actors, cameras, audio actors, trigger boxes, volumes, and particle systems, to name a few. When you place a component, it can be seen in the Graph Editor, where you can set its properties, such as size, position, mobility, and material (if it is a static or skeletal mesh), in the Details panel.

Graph Editor: The Graph Editor is also slightly different from that of the Level Blueprint, in that there are additional windows and editors in a Blueprint class. The first window is the Viewport, which is the same as that in the Editor. It is mainly used to place actors and set their positions, properties, and so on. Most of the tools you will find in the main Viewport (the editor's Viewport) toolbar are present here as well.

Event Graph: The next window is the Event Graph window, which works like a Level Blueprint window. Here, you can script the components you added in the Viewport and their functionalities (for example, toggling the lamp on/off when the player is in proximity and moves away, respectively). Keep in mind that you can only script the functionalities of components present within the Blueprint class; you cannot use it directly to script the functionality of any actor that is not a component of the class.

Construction Script: Lastly, there is the Construction Script window. This is similar to the Event Graph, in that you can set up and connect nodes, just as in the Event Graph. The difference is that these nodes are activated when the Blueprint class is constructed; they do not work during runtime, which is when the Event Graph scripts run. You can use the Construction Script to set properties, or to create and add your own property for any of the components you wish to alter during construction, and so on.

Let's begin creating the Blueprint class for our key cubes.

Viewport

The first thing we need is the components. We require three: a cube, a trigger box, and a PostProcessVolume. Perform the following steps:

1. In the Viewport, click on the Add Component button and, under Rendering, select Static Mesh. This adds a Static Mesh component to the class.
2. You now need to specify which static mesh you want to add. With the Static Mesh actor selected in the Components panel, go to the actor's Details panel and, under the Static Mesh section, click on the None button and select TemplateCube_Rounded. As soon as you set the mesh, it will appear in the Viewport.
3. With the cube selected, decrease its scale (located in the Details panel) from 1 to 0.2 along all three axes.
4. The next thing we need is a trigger box. Click on the Add Component button and select Box Collision in the Collision section. Once added, increase its scale from 1 to 9 along all three axes, and place it in such a way that its bottom is in line with the bottom of the cube.

The Construction Script

You could set the cube's material in the Details panel itself by clicking on the Override Materials button in the Rendering section and selecting the key cube material. However, we are going to assign its material using the Construction Script:

1. Switch to the Construction Script tab. You will see a node called Construction Script, which is present by default. You cannot delete this node; this is where the script starts.
2. Before we can script anything, we need to create a variable of the type Material. In the My Blueprint section, click on Add New and select Variable in the dropdown menu. Name this variable Key Cube Material, and change its type from Bool (the default variable type) to Material in the Details panel. Also, be sure to check the Editable box so that we can edit it from outside the Blueprint class.
3. Next, drag the Key Cube Material variable from the My Blueprint panel, drop it in the Graph Editor, and select Set when the window opens up. Connect this to the output pin of the Construction Script node.
4. Repeat this process, only this time select Get, and connect it to the input pin of the Set Key Cube Material node.
5. Right-click in the Graph Editor window and type Set Material into the search bar. You should see Set Material (Static Mesh). Click on it to add it to the scene. This node already holds a reference to the Static Mesh actor (TemplateCube_Rounded), so we will not have to create a reference node. Connect it to the Set node.
6. Finally, drag Key Cube Material from My Blueprint, drop it in the Graph Editor, select Get, and connect it to the Material input pin. When you are done, hit Compile.

We will now be able to set the cube's material from outside the Blueprint class. Let's test it out. Add the Blueprint class to the level. You will see a TemplateCube_Rounded actor added to the scene. In its Details panel, under the Default section, you will see a Key Cube Material option; this is the variable we created inside our Construction Script. Any material we assign here will be applied to the cube. So, click on None and select KeyCube_Material. As soon as you select it, you will see the material on the cube. This is one of the many things you can do using the Construction Script. For now, this will do.

The Event Graph

We now need to script the key cube's functionalities. This is more or less the same as what we did in the Level Blueprint with our first key cube, with some small differences. In the Event Graph panel, the first thing we are going to script is enabling and disabling input when the player overlaps the trigger box and stops overlapping it, respectively:

1. In the Components section, right-click on Box. This opens a menu. Mouse over Add Event and select Add OnComponentBeginOverlap. This adds a Begin Overlap node to the Graph Editor.
2. Next, we need a Cast node. A Cast node is used to specify which actor you want to use. Right-click in the Graph Editor and add a Cast to Character node. Connect this to the OnComponentBeginOverlap node, and connect the Other Actor pin to the Object pin of the Cast to Character node.
3. Finally, add an Enable Input node and a Get Player Controller node and connect them, as we did in the Level Blueprint.
4. Next, we are going to add an event for when the player stops overlapping the box. Again, right-click on Box and add an OnComponentEndOverlap node. Do the exact same thing you did with the OnComponentBeginOverlap node; only here, instead of adding an Enable Input node, add a Disable Input node.

The setup should look something like this:

You can move the key cube we had placed earlier on top of the pedestal, set it to hidden, and put the key cube Blueprint class in its place. Also, make sure that you set the collision response of the trigger actor to Ignore.

The next step is scripting the destruction of the key cube when the player touches the screen. This, too, is similar to what we did in the Level Blueprint, with a few differences:

1. Firstly, add a Touch node and a Sequence node, and connect them to each other.
2. Next, we need a Destroy Component node, which you can find under Components | Destroy Component (Static Mesh). This node already has a reference to the key cube (Static Mesh) inside it, so you do not have to create an external reference and connect it to the node. Connect this to the Then 0 pin.

We also need to activate the trigger after the player has picked up the key cube. Since we cannot call functions on actors outside the Blueprint class directly (as we could in the Level Blueprint), we need to create a variable. This variable will be of the type Trigger Box. The way this works is that once you have created a Trigger Box variable, you can assign it to any trigger in the level, and the function will be called on that particular trigger. With that in mind:

1. In the My Blueprint panel, click on Add New and create a variable. Name this variable Activated Trigger Box, and set its type to Trigger Box. Make sure you tick the Editable box; otherwise, you will not be able to assign any trigger to it.
2. After doing that, create a Set Collision Response to All Channels node (uncheck the Context Sensitive box to find it), and set the New Response option to Overlap. For the target, drag the Activated Trigger Box variable, drop it in the Graph Editor, select Get, and connect it to the Target input.
3. Finally, for the post process volume, we need to create another variable, of the type PostProcessVolume. Name this variable Visual Indicator, again ensuring that the Editable box is checked. Add this variable to the Graph Editor as well. Next, click on its pin, drag it out, and release it, which opens the actions menu. Here, type in Enabled, select Set Enabled, and check Enabled.
4. Finally, add a Delay node and a Destroy Actor node, and connect them to the Set Enabled node, in that order.

Your setup should look something like this:

Back in the Viewport, you will find that two more options have appeared under the Default section of the Blueprint class actor: Activated Trigger Box and Visual Indicator (the variables we created). Using these, you can assign which particular trigger box's collision response you want to change, and which post process volume you want to activate and destroy. In front of both variables, you will see a small icon shaped like an eye dropper. You can use this to choose which external actor to assign to the corresponding variable. Anything you scripted using those variables will take effect on the actor you assigned in the scene. This is one of the many amazing features offered by the Blueprint class.

All we need to do now for the remaining key cubes is:

1. Place them in the level.
2. Using the eye dropper icon located next to the name of each variable, pick the trigger to activate once the player has picked up the key cube, and the post process volume to activate and destroy.

In the second room, we have two key cubes: one to activate the large door and the other to activate the door leading to the third room. The first key cube will be placed on the pedestal near the big door. So, with the first key cube selected, use the eye dropper to select the trigger box on the pedestal near the big door for the Activated Trigger Box variable. Then, pick the post process volume inside which the key cube is placed for the Visual Indicator variable.

The next thing we need to do is open the Level Blueprint and script what happens when the player places the key cube on the pedestal near the big door. Repeating what we did in the previous room, we set up nodes that unhide the hidden key cube on the pedestal and change the collision response of the trigger box around the big door to Overlap, ensuring that it was set to Ignore initially. Test it out! You will find that everything works as expected.

Now, do the same with the remaining key cubes: pick which trigger box and which post process volume to activate when you touch the screen. Then, in the Level Blueprint, script which key cube to unhide, and so on (place the key cubes we had placed earlier on the pedestals and set them to Hidden), and place the Blueprint class key cube in each one's place. This is one of the many ways you can use a Blueprint class; as you can see, it spares you a lot of work and hassle. Let us now move on to artificial intelligence.

Scripting basic AI

Coming back to the third room, we are now going to implement AI in our game. We have an AI character in the third room which, when activated, moves. The main objective is to make a path for it with the help of switches and prevent it from falling. When the AI character reaches its destination, it will unlock the key cube, which the player can then pick up and place on the pedestal.

We first need to create another Blueprint class, of the type Character, named AI_Character. Once it is created, double-click on it to open it. You will see a few components already set up in the Viewport: CapsuleComponent (mainly used for collision), ArrowComponent (to specify which side is the front of the character and which is the back), Mesh (used for character animation), and CharacterMovement. All four are there by default and cannot be removed. The only thing we need to do here is add a static mesh for our character, which will be TemplateCube_Rounded. Click on Add Component, add a Static Mesh, and assign it TemplateCube_Rounded (in its Details panel). Next, scale this cube to 0.2 along all three axes and move it towards the bottom of the CapsuleComponent, so that it does not float in midair. This is all we require for our AI character; the rest we will handle in the Level Blueprint.

Next, place AI_Character into the scene on the player's side of the pit, with all of the switches. Place it directly over the Target Point actor. Then open the Level Blueprint and let's begin scripting. The left-most switch will be used to activate the AI character, and the remaining three will be used to draw the parts of the path on which it will walk to reach the other side. To move the AI character, we will need an AI Move To node:

1. The first thing we need is an overlapping event for the trigger over the first switch, which will enable the input; otherwise, the AI character would start moving whenever the player touches the screen, which we do not want. Set up an Overlap event, an Enable Input node, and a Gate node. Connect the Overlap event to the Enable Input node, and then to the Gate node's Open input.
2. Next, create a Touch node. To this, we will attach an AI Move To node; you can either type it in or find it under the AI section. Once created, attach it to the Gate node's Exit pin.
3. We now need to tell the node which character we want to move, and where to. To specify the character, select the AI character in the Viewport and, in the Level Blueprint's Graph Editor, right-click and create a reference for it. Connect it to the Pawn input pin.
4. For the location, we want the AI character to move towards the second Target Point actor, located on the other side of the pit. But first, we need to get its location in the world. With it selected, right-click in the Graph Editor and type in Get Actor Location; this node returns the location (coordinates) in the world of the actor connected to it. This creates a Get Actor Location node with the Target Point actor connected to its input pin.
5. Finally, connect its Return Value to the Destination input of the AI Move To node.

If you were to test it now, you would find that it works fine, except for one thing: the AI character stops when it reaches the edge of the pit. We want it to fall off the pit if there is no path. For that, we will need a Nav Proxy Link actor. A Nav Proxy Link actor is used when an AI character has to step outside the nav mesh temporarily (for example, jumping between ledges); we need it so that our AI character can fall off the ledge. You can find it in the All Classes section of the Modes panel. Place it in the level. The actor is depicted as two cylinders with a curved arrow connecting them. We want one cylinder on each side of the pit. Using the Scale tool, increase the size of the Nav Proxy Link actor. When placing it, keep two things in mind:

- Make sure that both cylinders intersect the green area; otherwise, the actor will not work
- Ensure that both cylinders are in line with the AI character; otherwise, it will not move in a straight line but will instead head to wherever the cylinder is located

Once it is placed, you will see that the AI character falls off when it reaches the edge of the pit. We are not done yet, though: we need to bring the AI character back to its starting position so that the player can start over (or else the player will not be able to progress). For that, first place a trigger at the bottom of the pit, making sure that the AI character overlaps it if it falls in. This trigger will perform two actions: first, it will teleport the AI character to its initial location (with the help of the first Target Point); second, it will stop the AI Move To node, or the character would keep moving even after being teleported. Perform the following steps:

1. After placing the trigger, open the Level Blueprint and create an Overlap event for the trigger box. To this, add a Sequence node, since we are calling two separate functions when the AI character overlaps the trigger.
2. The first node we are going to create is a Teleport node. Here, we can specify which actor to teleport, and where. The actor we want to teleport is the AI character, so create a reference to it and connect it to the Target input pin. As for the destination, use the Get Actor Location function to get the location of the first Target Point actor (upon which the AI character was initially placed), and connect it to the Dest Location input.
3. To stop the AI character's movement, right-click anywhere in the Graph Editor and first uncheck the Context Sensitive box, since we cannot use this function directly on our AI character. What we need is a Stop Active Movement node. Type it into the search bar and create it. Connect it to the Then 1 output pin, and attach a reference to the AI character to it. The reference will automatically be converted from a Character reference into a CharacterMovement component reference.

This is all we need to script for our AI in the third room. There is one more thing left: how to unlock the key cube. In the fourth room, we are going to use the same principle. There, we will make a chain of AI Move To nodes, each connected to the previous one's On Success output pin. This means that when the AI character has successfully reached one destination (Target Point actor), it moves on to the next, and so on. Using this, and what we have just discussed about AI, script the path that the AI will follow.

Packaging the project

Another way of packaging the game and testing it on your device is to first package the game, import it to the device, install it, and then play it. But first, we should discuss some settings related to packaging in general and to packaging for Android.

The Maps & Modes settings

These settings deal with the maps (scenes) and the game mode of the final game. In the Editor, click on Edit and select Project Settings. In the Project Settings window, under the Project category, select Maps & Modes. Let's go over the various sections:

- Default Maps: Here, you can set which map the Editor should open when you open the project. You can also set which map the game should open when it is run. The first thing you need to change is the Game Default Map to the main menu map we created: click on the downward arrow next to Game Default Map and select Main_Menu.
- Local Multiplayer: If your game has local multiplayer, you can alter a few settings regarding whether the game should have a split screen and, if so, what the layout should be for two and three players.
- Default Modes: In this section, you can set the default game mode the game should run with. The game mode includes things such as the Default Pawn class, HUD class, Controller class, and Game State class. For our game, we will stick to MyGame.
- Game Instance: Here, you can set the default Game Instance class.

The Packaging settings

There are settings you can tweak when packaging your game. To access them, go to Edit and open the Project Settings window. Once it is open, under the Project section, click on Packaging. Here, you can view and tweak the general settings related to packaging the project. There are two sections: Project and Packaging. Under the Project section, you can set options such as the directory of the packaged project; the build configuration (debug, development, or shipping); and whether you want UE4 to build the whole project from scratch every time you build, or only the modified files and assets. Under the Packaging section, you can set things such as whether you want all files under one .pak file instead of many individual files, and whether you want those .pak files split into chunks. Clicking on the downward arrow opens the advanced settings. Here, since we are packaging our game for distribution, check the For Distribution checkbox.

The Android app settings

The preceding section covered the general packaging settings; we will now look at settings specific to Android apps. These can be found in Project Settings, under the Platforms section. Click on Android to open the Android app settings, where you will find all the settings and properties you need to package your game. At the top, the first thing you should do is configure your project for Android; if your project is not configured, it will prompt you to do so (since version 4.7, UE4 automatically creates the AndroidManifest.xml file for you). Do this before anything else. The sections here are as follows:

- APKPackaging: In this section, you can find options such as opening the folder where all of the build files are located, setting the package's name, setting the version number, setting the default orientation of the game, and so on.
- Advanced APKPackaging: This section contains more advanced packaging options, such as adding extra settings to the .apk files.
- Build: To tweak settings in the Build section, you first need the source code, which is available from GitHub. Here, you can set things such as whether you want the build to support x86, OpenGL ES2, and so on.
- Distribution Signing: This section deals with signing your app. It is a requirement on Android that all apps have a digital signature, so that Android can identify the developers of the app. You can learn more about digital signatures by clicking on the hyperlink at the top of the section. When you generate the key for your app, be sure to keep it in a safe and secure place; if you lose it, you will not be able to modify or update your app on Google Play.
- Google Play Services: Android apps are downloaded via the Google Play store. This section deals with things such as enabling/disabling Google Play support, setting your app's ID, the Google Play license key, and so on.
- Icons: In this section, you can set your game's icons. You can set various sizes of icons depending on the screen density of the devices you are targeting. You can get more information about icons by clicking on the hyperlink at the top of the section.
- Data Cooker: Finally, in this section, you can set how the audio in the game should be encoded.

For our game, the first thing to set is the Android Package Name, found in the APKPackaging section. The format of the name is com.YourCompany.[PROJECT]; replace YourCompany with the name of your company and [PROJECT] with the name of your project.

Building a package

To package your project, go to File | Package Project | Android in the Editor. You will see different formats in which to package the project. These are as follows:

- ATC: Use this format if you have a device with a Qualcomm Snapdragon processor
- DXT: Use this format if your device has a Tegra graphical processing unit (GPU)
- ETC1: You can use this format for any device; however, it does not accept textures with alpha channels, so those textures will be left uncompressed, making your game require more space
- ETC2: Use this format if you have a MALI-based device
- PVRTC: Use this format if you have a device with a PowerVR GPU

Once you have decided which format to use, click on it to begin the packaging process. A window will open, asking you to specify which folder to store the package in. Once you have decided where to store the package files, click on OK and the build process will commence. As with launching the project, a small window will pop up in the bottom-right corner of the screen notifying you that the build process has begun; from there, you can open the output log or cancel the build.

Once the build process is complete, go to the folder you chose. You will find a .bat file of the game. Provided you have checked the "Package game data inside .apk?" option (located in the Project Settings, in the Android category, under the APKPackaging section), you will also find an .apk file of the game.

The .bat file installs the game from the system directly onto your device. To use it, first connect your device to the system, then double-click on the .bat file. This will open a command prompt window. Once it has opened, you do not need to do anything; just wait for the installation process to finish. When the installation is done, the game will be on your device, ready to be run.

Using the .apk file works a bit differently: an .apk file installs the game when it is on the device. For that, you need to perform the following steps:

1. Connect the device.
2. Create a copy of the .apk file.
3. Paste it in the device's storage.
4. Execute the .apk file from the device. The installation process will begin.

Once it has completed, you can play the game.

Summary

In this article, we worked with Blueprints and discussed how they work. We also discussed Level Blueprints and the Blueprint class, and covered how to script AI. We discussed how to package the final product and upload the game to the Google Play Store for people to download.

Resources for Article:

Further resources on this subject:
Flash Game Development: Creation of a Complete Tetris Game [article]
Adding Finesse to Your Game [article]
Saying Hello to Unity and Android [article]
What is Apache Camel?

Packt
08 Jul 2015
9 min read
In this article by Jean-Baptiste Onofré, author of the book Mastering Apache Camel, we will see how Apache Camel originated in Apache ServiceMix. Apache ServiceMix 3 was powered by the Spring framework and implemented the JBI specification. The Java Business Integration (JBI) specification proposed a plug-and-play approach to integration problems. JBI was based on WebService concepts and standards; for instance, it directly reuses the Message Exchange Patterns (MEP) concept that comes from the Web Services Description Language (WSDL). Camel reuses some of these concepts; for instance, you will see that we have the concept of MEP in Camel. However, JBI suffered from two main issues:

- In JBI, all messages between endpoints are transported in the Normalized Message Router (NMR), where a message has a standard XML format. As all messages in the NMR have the same format, it's easy to audit messages and the format is predictable. However, the JBI XML format has an important drawback for performance: it needs to marshall and unmarshall the messages. Some protocols (such as REST or RMI) are not easy to describe in XML. For instance, REST can work in stream mode, and it doesn't make sense to marshall streams in XML. Camel, by contrast, is payload-agnostic. This means that you can transport any kind of message with Camel (not necessarily XML formatted).
- JBI describes a packaging. We distinguish the binding components (responsible for the interaction with systems outside of the NMR and the handling of the messages in the NMR) and the service engines (responsible for transforming the messages inside the NMR). However, it's not possible to directly deploy endpoints based on these components. JBI requires a service unit (a ZIP file) per endpoint, each packaged in a service assembly (another ZIP file). JBI also splits the description of the endpoint from its configuration. This does not result in a very flexible packaging: with definitions and configurations scattered across different files, it is not easy to maintain. In Camel, the configuration and definition of an endpoint are gathered in a single URI, which is easier to read. Moreover, Camel doesn't force any packaging; the same definition can be packaged as a simple XML file, an OSGi bundle, or a regular JAR file.

In addition to JBI, another foundation of Camel is the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. It describes design patterns answering classical problems in enterprise application integration and message-oriented middleware: the book describes the problems and the patterns that solve them. Camel strives to implement the patterns described in the book, to make them easy to use and to let developers concentrate on the task at hand. This is what Camel is: an open source framework that allows you to integrate systems, and that comes with a lot of connectors and Enterprise Integration Patterns (EIP) components out of the box. And if that is not enough, you can extend it and implement custom components.

Components and bean support

Apache Camel ships with a wide variety of components out of the box; currently, there are more than 100 components available. Among them are the connectivity components, which expose endpoints for external systems or communicate with external systems. For instance, the FTP, HTTP, JMX, WebServices, and JMS components, and many more, are connectivity components. Creating an endpoint and its associated configuration with these components is easy, because both are expressed directly in a URI, as the short sketch that follows illustrates.
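This sketch is a minimal, self-contained example in Camel's Java DSL, not taken from the book; the inbox directory and the log category are made-up assumptions, and only camel-core is needed to run it:

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class UriRouteExample {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // The endpoint definition and its configuration share one URI:
                    // poll the inbox directory, delete each file once consumed,
                    // and print each exchange through the core log component.
                    from("file:inbox?delete=true")
                        .to("log:orders");
                }
            });
            context.start();
            Thread.sleep(10000); // let the route poll for a while in this demo
            context.stop();
        }
    }

Swapping file:inbox for, say, an FTP or JMS URI changes the transport without touching the rest of the route.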
Alongside these connectivity components are the internal components, which apply rules to messages inside Camel. These components apply validation or transformation rules to the in-flight message; for instance, validation and XSLT are internal components. Camel thus brings a very powerful connectivity and mediation framework. Moreover, it's pretty easy to create new custom components, allowing you to extend Camel if the default component set doesn't match your requirements. It's also very easy to implement complex integration logic by creating your own processors and reusing your beans. Camel supports bean frameworks (IoC) such as Spring and Blueprint.

Predicates and expressions

As we will see later, most of the EIPs need a rule definition to apply routing logic to a message. The rule is described using an expression, which means that we have to define expressions or predicates in the Enterprise Integration Patterns. An expression returns any kind of value, whereas a predicate returns true or false only. Camel supports a lot of different languages for declaring expressions or predicates. It doesn't force you to use one; it allows you to use the most appropriate one. For instance, Camel supports XPath, MVEL, OGNL, Python, Ruby, PHP, JavaScript, SpEL (Spring Expression Language), Groovy, and so on as expression languages. It also provides native, prebuilt Camel functions and languages that are easy to use, such as the header, constant, and simple languages.

Data format and type conversion

Camel is payload-agnostic, which means that it can support any kind of message. Depending on the endpoints, it may be necessary to convert from one format to another, which is why Camel supports different data formats in a pluggable way. This means that Camel can marshall or unmarshall a message in a given format. For instance, in addition to the standard JVM serialization, Camel natively supports Avro, JSON, protobuf, JAXB, XmlBeans, XStream, JiBX, SOAP, and so on. Depending on the endpoints and your needs, you can explicitly define the data format during the processing of the message. On the other hand, Camel knows the expected format and type of each endpoint. Thanks to this, Camel looks for a type converter that can implicitly transform a message from one format to another. You can also explicitly apply the type converter of your choice at certain points during the processing of the message. Camel provides a set of ready-to-use type converters but, as Camel uses a pluggable model, you can extend it by providing your own type converters; a type converter is a simple POJO to implement.

Easy configuration and URI

Camel uses an approach based on URIs: the endpoint itself and its configuration are in the URI. The URI is human readable and provides the details of the endpoint, namely the endpoint component and the endpoint configuration. As this URI is part of the complete configuration (which defines what we call a route, as we will see later), it's possible to get a complete overview of the integration logic and connectivity in one view.

Lightweight and different deployment topologies

Camel itself is very light. The Camel core is only around 2 MB and contains everything required to run Camel. As it's based on a pluggable architecture, all Camel components are provided as external modules, allowing you to install only what you need, without installing superfluous and needlessly heavy modules. Camel is based on simple POJOs, which means that the Camel core doesn't depend on other frameworks: it's an atomic framework and is ready to use. All other modules (components, DSLs, and so on) are built on top of this Camel core. Moreover, Camel is not tied to one container for deployment; it supports a wide range of containers to run in. They are as follows:

- A J2EE application server such as WebSphere, WebLogic, JBoss, and so on
- A web container such as Apache Tomcat
- An OSGi container such as Apache Karaf
- A standalone application using a framework such as Spring

Camel gives you a lot of flexibility, allowing you to embed it into your application or to use an enterprise-ready container.

Quick prototyping and testing support

In any integration project, it's typical to have some part of the integration logic not yet available. For instance:

- The application to integrate with has not yet been purchased or is not yet ready
- The remote system to integrate with carries a heavy cost that is not acceptable during the development phase
- Multiple teams work in parallel, so there may be deadlocks between the teams

As a complete integration framework, Camel provides a very easy way to prototype part of the integration logic. Even if you don't have the actual system to integrate, you can simulate that system (mock it), which allows you to implement your integration logic without waiting for the dependency. The mocking support is directly part of the Camel core and doesn't require any additional dependency. Along the same lines, testing is also crucial in an integration project: a lot of errors can happen, and most are unforeseen. Moreover, a small change in an integration process might impact a lot of other processes. Camel provides the tools to easily test your design and integration logic, allowing you to integrate this into a continuous integration platform.

Management and monitoring using JMX

Apache Camel uses the Java Management Extensions (JMX) standard and provides a lot of insight into the system using MBeans (Management Beans), giving a detailed view of the current system:

- The different integration processes with their associated metrics
- The different components and endpoints with their associated metrics

Moreover, these MBeans provide more than metrics; they also provide operations to manage Camel. For instance, the operations allow you to stop an integration process, suspend an endpoint, and so on. Using a combination of metrics and operations, you can build a very agile integration solution.

Active community

The Apache Camel community is very active. This means that potential issues are identified very quickly, with a fix available soon after. It also means that a lot of ideas and contributions are proposed, adding more and more features to Camel. Another big advantage of an active community is that you will never be alone; a lot of people are active on the mailing lists and ready to answer your questions and provide advice.

Summary

Apache Camel is an enterprise integration solution used in many large organizations, with enterprise support available through RedHat or Talend.

Resources for Article:

Further resources on this subject:
Getting Started [article]
A Quick Start Guide to Flume [article]
Best Practices [article]
Developing a JavaFX Application for iOS

Packt
08 Jul 2015
10 min read
In this article by Mohamed Taman, author of the book JavaFX Essentials, we will learn how to develop a JavaFX application for iOS. Apple has a great market share in the mobile and PC/laptop world, with many different devices, from mobile phones such as the iPhone to music devices such as the iPod and tablets such as the iPad.

It has a rapidly growing application market, called the App Store, serving its community, where the number of available apps increases daily. Mobile application developers should be ready for such a market.

Mobile application developers targeting both iOS and Android face many challenges. Just comparing the native development environments of these two platforms shows that they differ substantially. iOS development, according to Apple, is based on the Xcode IDE (https://developer.apple.com/xcode/) and its programming languages: traditionally Objective-C and, since June 2014, Swift (https://developer.apple.com/swift/). Android development, as defined by Google, is based on the IntelliJ IDEA IDE and the Java programming language. Not many developers are proficient in both environments. In addition, these differences rule out any code reuse between the platforms.

JavaFX 8 fills the gap for reusable code between the platforms, as we will see in this article, by sharing the same application on both platforms. Here are some skills that you will have gained by the end of this article:

- Installing and configuring iOS environment tools and software
- Creating iOS JavaFX 8 applications
- Simulating and debugging JavaFX mobile applications
- Packaging and deploying applications on iOS mobile devices

Using RoboVM to run JavaFX on iOS

RoboVM is the bridge from Java to Objective-C. Using it, it becomes easy to develop JavaFX 8 applications that are to be run on iOS-based devices, as the ultimate goal of the RoboVM project is to solve this problem without compromising on developer experience or app user experience.

As we saw in the article about Android, using JavaFXPorts to generate APKs was a relatively easy task because Android is based on Java and the Dalvik VM. On the contrary, iOS doesn't have a VM for Java, and it doesn't allow dynamic loading of native libraries; another approach is required. The RoboVM open source project tries to close the gap for Java developers by creating a bridge between Java and Objective-C, using an ahead-of-time compiler that translates Java bytecode into native ARM or x86 machine code.

Features

Let's go through the RoboVM features:

- Brings Java and other JVM languages, such as Scala, Clojure, and Groovy, to iOS-based devices
- Translates Java bytecode into machine code ahead of time, for fast execution directly on the CPU without any overhead
- Mainly targets iOS and the ARM processor (32- and 64-bit), but also supports Mac OS X and Linux running on x86 CPUs (both 32- and 64-bit)
- Does not impose any restrictions on the Java platform features accessible to the developer, such as reflection or file I/O
- Supports standard JAR files, letting the developer reuse the vast ecosystem of third-party Java libraries
- Provides access to the full native iOS APIs through a Java-to-Objective-C bridge, enabling the development of apps with truly native UIs and full hardware access
- Integrates with the most popular tools, such as NetBeans, Eclipse, IntelliJ IDEA, Maven, and Gradle
- App Store ready, with hundreds of apps already in the store

Limitations

Mainly due to the restrictions of the iOS platform, there are a few limitations when using RoboVM:

- Loading custom bytecode at runtime is not supported. All the class files comprising the app have to be available at compile time on the developer machine.
- The Java Native Interface technology, as used on the desktop or on servers, usually loads native code from dynamic libraries, but Apple does not permit custom dynamic libraries to be shipped with an iOS app. RoboVM supports a variant of JNI based on static libraries.
- Another big limitation is that RoboVM is an alpha-state project under development and is not yet recommended for production usage.

RoboVM does, however, have full support for reflection.

How it works

Since February 2015, there has been an agreement between the companies behind RoboVM and JavaFXPorts, and now a single plugin called jfxmobile-plugin allows us to build applications for three platforms (desktop, Android, and iOS) from the same codebase. The JavaFXMobile plugin adds a number of tasks to your Java application that allow you to create .ipa packages that can be submitted to the App Store.

Android mostly uses Java as its main development language, so it is easy to merge your JavaFX 8 code with it. On iOS, the situation is internally totally different, but the Gradle commands are similar. The plugin will download and install the RoboVM compiler, and it will use RoboVM compiler commands to create an iOS application in build/javafxports/ios.

Getting started

In this section, you will learn how to install the RoboVM compiler using the JavaFXMobile plugin, and you will make sure the tool chain works correctly by reusing the same application, Phone Dial version 1.0.

Prerequisites

In order to use the RoboVM compiler to build iOS apps, the following tools are required:

- Gradle 2.4 or higher, which is required to build applications with the jfxmobile plugin
- A Mac running Mac OS X 10.9 or later
- Xcode 6.x from the Mac App Store (https://itunes.apple.com/us/app/xcode/id497799835?mt=12)

The first time you install Xcode, and every time you update to a new version, you have to open it once to agree to the Xcode terms.

Preparing a project for iOS

We will reuse the project we developed before for the Android platform, since there is no difference in code, project structure, or Gradle build script when targeting iOS. They share the same properties and features, but with different Gradle commands that serve iOS development, and with a minor change in the Gradle build script for the RoboVM compiler.
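As a quick reminder of the shared build script's shape, here is a minimal sketch of its plugin section. The plugin version is an assumption (check for the latest jfxmobile-plugin release), and the main class name is made up to match this article's package:

    buildscript {
        repositories {
            jcenter()
        }
        dependencies {
            // the version here is an assumption; use the latest release
            classpath 'org.javafxports:jfxmobile-plugin:1.0.+'
        }
    }

    apply plugin: 'org.javafxports.jfxmobile'

    // illustrative main class; point this at your application's entry point
    mainClassName = 'packt.taman.jfx8.ch4.DialPad'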
Therefore, we will see the power of WORA Write Once, Run Everywhere with the same application. Project structure Based on the same project structure from the Android, the project structure for our iOS app should be as shown in the following figure: The application We are going to reuse the same application from the Phone DialPad version 2.0 JavaFX 8 application: As you can see, reusing the same codebase is a very powerful and useful feature, especially when you are developing to target many mobile platforms such as iOS and Android at the same time. Interoperability with low-level iOS APIs To have the same functionality of natively calling the default iOS phone dialer from our application as we did with Android, we have to provide the native solution for iOS as the following IosPlatform implementation: import org.robovm.apple.foundation.NSURL; import org.robovm.apple.uikit.UIApplication; import packt.taman.jfx8.ch4.Platform;   public class IosPlatform implements Platform {   @Override public void callNumber(String number) {    if (!number.equals("")) {      NSURL nsURL = new NSURL("telprompt://" + number);      UIApplication.getSharedApplication().openURL(nsURL);    } } } Gradle build files We will use the Gradle build script file, but with a minor change by adding the following lines to the end of the script: jfxmobile { ios {    forceLinkClasses = [ 'packt.taman.jfx8.ch4.**.*' ] } android {    manifest = 'lib/android/AndroidManifest.xml' } } All the work involved in installing and using robovm compilers is done by the jfxmobile plugin. The purpose of those lines is to give the RoboVM compiler the location of the main application class that has to be loaded at runtime is, as it is not visible by default to the compiler. The forceLinkClasses property ensures that those classes are linked in during RoboVM compilation. Building the application After we have added the necessary configuration set to build the script for iOS, its time to build the application in order to deploy it to different iOS target devices. To do so, we have to run the following command: $ gradle build We should have the following output: BUILD SUCCESSFUL   Total time: 44.74 secs We have built our application successfully; next, we need to generate the .ipa and, in the case of production, you have to test it by deploying it to as many iOS versions as you can. Generating the iOS .ipa package file In order to generate the final .ipa iOS package for our JavaFX 8 application, which is necessary for the final distribution to any device or the AppStore, you have to run the following gradle command: gradle ios This will generate the .ipa file in the directory build/javafxports/ios. Deploying the application During development, we need to check our application GUI and final application prototype on iOS simulators and measure the application performance and functionality on different devices. These procedures are very useful, especially for testers. Let's see how it is a very easy task to run our application on either simulators or on real devices. 
Deploying to a simulator

On a simulator, you can simply run the following command to check if your application is running:

$ gradle launchIPhoneSimulator

This command will package and launch the application in an iPhone simulator, as shown in the following screenshot:

DialPad2 JavaFX 8 application running on the iOS 8.3/iPhone 4s simulator

The following command will launch the application in an iPad simulator:

$ gradle launchIPadSimulator

Deploying to an Apple device

In order to package a JavaFX 8 application and deploy it to an Apple device, simply run the following command:

$ gradle launchIOSDevice

This command will launch the JavaFX 8 application on the device that is connected to your desktop/laptop.

Then, once the application is launched on your device, type in any number and tap Call. The iPhone will ask for permission to dial using the default mobile dialer; tap OK. The default mobile dialer will be launched and will dial the number, as shown in the following figure:

To be able to test and deploy your apps on your devices, you will need an active subscription with the Apple Developer Program. Visit the Apple Developer Portal, https://developer.apple.com/register/index.action, to sign up. You will also need to provision your device for development. You can find information on device provisioning in the Apple Developer Portal, or follow this guide: http://www.bignerdranch.com/we-teach/how-to-prepare/ios-device-provisioning/.

Summary

This article gave us a very good understanding of how JavaFX-based applications can be developed and customized using RoboVM for iOS, making it possible to run your applications on Apple platforms. You learned about RoboVM features and limitations and how it works, and you gained skills that you can use for development.

You then learned how to install the required software and tools for iOS development and how to enable Xcode, along with the RoboVM compiler, to package and install the Phone Dial JavaFX-8-based application on iOS simulators. Finally, we provided tips on how to run and deploy your application on real devices.

Resources for Article:

Further resources on this subject:
Function passing [article]
Creating Java EE Applications [article]
Contexts and Dependency Injection in NetBeans [article]
File Sharing

Packt
08 Jul 2015
14 min read
In this article by Dan Ristic, author of the book Learning WebRTC, we will cover the following topics:

Getting a file with the File API
Setting up our page
Getting a reference to a file

The real power of a data channel comes when it is combined with other powerful browser technologies. By combining the ability to send data peer-to-peer with the File API, we can open up whole new possibilities in the browser. This means you could add file sharing functionality that is available to any user with an Internet connection.

The application that we will build will be a simple one, with the ability to share files between two peers. The application will be real-time, meaning that the two users have to be on the page at the same time to share a file. There is a finite number of steps that both users will go through to transfer an entire file between them:

User A will open the page and type a unique ID.
User B will open the same page and type the same unique ID.
The two users can then connect to each other using RTCPeerConnection.
Once the connection is established, one user can select a file to share.
The other user will be notified of the file that is being shared; it will be transferred to their computer over the connection, and they will download the file.

The main thing we will focus on throughout the article is how to work with the data channel in new and exciting ways. We will take the file data from the browser, break it down into pieces, and send it to the other user using only the RTCPeerConnection API. The interactivity that the API promotes will stand out in this article and can be put to use in a simple project.

Getting a file with the File API

One of the first things that we will cover is how to use the File API to get a file from the user's computer. There is a good chance you have interacted with the File API on a web page and have not even realized it! The API is usually denoted by the Browse or Choose File text located on an input field in an HTML page, and often looks something similar to this:

Although the API has been around for quite a while, the one you are probably familiar with is the original specification, dating back as far as 1995. This was the Form-based File Upload in HTML specification, which focused on allowing a user to upload a file to a server using an HTML form. Before the days of the file input, application developers had to rely on third-party tools to request file data from the user. This specification was proposed in order to provide a standard way to upload files for a server to download, save, and interact with. The original standard focused entirely on interacting with a file via an HTML form, however, and did not detail any way to interact with a file via JavaScript. This was the origin of the File API.

Fast-forward to the groundbreaking days of HTML5, and we now have a fully-fledged File API. The goal of the new specification was to open the doors to file manipulation for web applications, allowing them to interact with files much as a natively installed application would. This means providing not only a way for the user to upload a file, but also ways to read the file in different formats, manipulate the data of the file, and then ultimately do something with this data. Although there are many great features in the API, we are going to focus on only one small aspect of it: the ability to get binary file data from the user by asking them to upload a file.
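To make this concrete before we build the page, here is a minimal standalone sketch (not part of the application we are building) of how the File API hands a file's binary data to JavaScript; the element ID is a placeholder:

// <input type="file" id="picker" /> is assumed to exist on the page
var picker = document.querySelector('#picker');

picker.addEventListener('change', function () {
  var file = picker.files[0]; // a File object with name, size, and type

  var reader = new FileReader();
  reader.onload = function (event) {
    // event.target.result is an ArrayBuffer with the raw file bytes
    console.log('Read ' + event.target.result.byteLength + ' bytes of ' + file.name);
  };

  // Read the file as binary data rather than as text
  reader.readAsArrayBuffer(file);
});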
A typical application that works with files, such as Notepad on Windows, works with file data in pretty much the same way. It asks the user to open a file, from which it reads the binary data and displays the characters on the screen. The File API gives us access in the browser to the same binary data that any other application would use. This is the great thing about working with the File API: it works in most browsers from an HTML page, similar to the ones we have been building for our WebRTC demos.

To start building our application, we will put together another simple web page. This will look similar to the last ones, and should be hosted with a static file server, as done in the previous examples. By the end of the article, you will be a professional single page application builder! Now let's take a look at the following HTML code that demonstrates file sharing:

<!DOCTYPE html>
<html lang="en">
<head>
   <meta charset="utf-8" />

   <title>Learning WebRTC - Article: File Sharing</title>

   <style>
     body {
       background-color: #404040;
       margin-top: 15px;
       font-family: sans-serif;
       color: white;
     }

     .thumb {
       height: 75px;
       border: 1px solid #000;
       margin: 10px 5px 0 0;
     }

     .page {
       position: relative;
       display: block;
       margin: 0 auto;
       width: 500px;
       height: 500px;
     }

     #byte_content {
       margin: 5px 0;
       max-height: 100px;
       overflow-y: auto;
       overflow-x: hidden;
     }

     #byte_range {
       margin-top: 5px;
     }
   </style>
</head>
<body>
   <div id="login-page" class="page">
     <h2>Login As</h2>
     <input type="text" id="username" />
     <button id="login">Login</button>
   </div>

   <div id="share-page" class="page">
     <h2>File Sharing</h2>

     <input type="text" id="their-username" />
     <button id="connect">Connect</button>
     <div id="ready">Ready!</div>

     <br />
     <br />

     <input type="file" id="files" name="file" /> Read bytes:
     <button id="send">Send</button>
   </div>

   <script src="client.js"></script>
</body>
</html>

The page should be fairly recognizable at this point. We use the same page showing and hiding via CSS as done earlier. One of the main differences is the appearance of the file input, which we will utilize to have the user upload a file to the page. I even picked a different background color this time to spice things up.

Setting up our page

Create a new folder for our file sharing application and add the HTML code shown in the preceding section. You will also need all the steps from our JavaScript file to log in two users, create a WebRTC peer connection, and create a data channel between them.
Copy the following code into your JavaScript file to get the page set up:

var name, connectedUser;

var connection = new WebSocket('ws://localhost:8888');

connection.onopen = function () {
  console.log("Connected");
};

// Handle all messages through this callback
connection.onmessage = function (message) {
  console.log("Got message", message.data);

  var data = JSON.parse(message.data);

  switch (data.type) {
    case "login":
      onLogin(data.success);
      break;
    case "offer":
      onOffer(data.offer, data.name);
      break;
    case "answer":
      onAnswer(data.answer);
      break;
    case "candidate":
      onCandidate(data.candidate);
      break;
    case "leave":
      onLeave();
      break;
    default:
      break;
  }
};

connection.onerror = function (err) {
  console.log("Got error", err);
};

// Alias for sending messages in JSON format
function send(message) {
  if (connectedUser) {
    message.name = connectedUser;
  }

  connection.send(JSON.stringify(message));
};

var loginPage = document.querySelector('#login-page'),
    usernameInput = document.querySelector('#username'),
    loginButton = document.querySelector('#login'),
    theirUsernameInput = document.querySelector('#their-username'),
    connectButton = document.querySelector('#connect'),
    sharePage = document.querySelector('#share-page'),
    sendButton = document.querySelector('#send'),
    readyText = document.querySelector('#ready'),
    statusText = document.querySelector('#status');

sharePage.style.display = "none";
readyText.style.display = "none";

// Login when the user clicks the button
loginButton.addEventListener("click", function (event) {
  name = usernameInput.value;

  if (name.length > 0) {
    send({
      type: "login",
      name: name
    });
  }
});

function onLogin(success) {
  if (success === false) {
    alert("Login unsuccessful, please try a different name.");
  } else {
    loginPage.style.display = "none";
    sharePage.style.display = "block";

    // Get the plumbing ready for a call
    startConnection();
  }
};

var yourConnection, dataChannel, currentFile, currentFileSize, currentFileMeta;

function startConnection() {
  if (hasRTCPeerConnection()) {
    setupPeerConnection();
  } else {
    alert("Sorry, your browser does not support WebRTC.");
  }
}

function setupPeerConnection() {
  // Note: the hostname is stun.l.google.com (with a lowercase L)
  var configuration = {
    "iceServers": [{ "url": "stun:stun.l.google.com:19302" }]
  };
  yourConnection = new RTCPeerConnection(configuration, {optional: []});

  // Setup ice handling
  yourConnection.onicecandidate = function (event) {
    if (event.candidate) {
      send({
        type: "candidate",
        candidate: event.candidate
      });
    }
  };

  openDataChannel();
}

function openDataChannel() {
  // With negotiated set to true, both peers create the channel themselves,
  // and the id must be the same numeric value on both sides
  var dataChannelOptions = {
    ordered: true,
    reliable: true,
    negotiated: true,
    id: 1
  };
  dataChannel = yourConnection.createDataChannel("myLabel", dataChannelOptions);

  dataChannel.onerror = function (error) {
    console.log("Data Channel Error:", error);
  };

  dataChannel.onmessage = function (event) {
    // File receive code will go here
  };

  dataChannel.onopen = function () {
    readyText.style.display = "inline-block";
  };

  dataChannel.onclose = function () {
    readyText.style.display = "none";
  };
}

function hasUserMedia() {
  navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia;
  return !!navigator.getUserMedia;
}

function hasRTCPeerConnection() {
  window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
  window.RTCSessionDescription = window.RTCSessionDescription || window.webkitRTCSessionDescription || window.mozRTCSessionDescription;
  window.RTCIceCandidate = window.RTCIceCandidate || window.webkitRTCIceCandidate || window.mozRTCIceCandidate;
  return !!window.RTCPeerConnection;
}

function hasFileApi() {
  return window.File && window.FileReader && window.FileList && window.Blob;
}

connectButton.addEventListener("click", function () {
  var theirUsername = theirUsernameInput.value;

  if (theirUsername.length > 0) {
    startPeerConnection(theirUsername);
  }
});

function startPeerConnection(user) {
  connectedUser = user;

  // Begin the offer
  yourConnection.createOffer(function (offer) {
    send({
      type: "offer",
      offer: offer
    });
    yourConnection.setLocalDescription(offer);
  }, function (error) {
    alert("An error has occurred.");
  });
};

function onOffer(offer, name) {
  connectedUser = name;
  yourConnection.setRemoteDescription(new RTCSessionDescription(offer));

  yourConnection.createAnswer(function (answer) {
    yourConnection.setLocalDescription(answer);

    send({
      type: "answer",
      answer: answer
    });
  }, function (error) {
    alert("An error has occurred");
  });
};

function onAnswer(answer) {
  yourConnection.setRemoteDescription(new RTCSessionDescription(answer));
};

function onCandidate(candidate) {
  yourConnection.addIceCandidate(new RTCIceCandidate(candidate));
};

function onLeave() {
  connectedUser = null;
  yourConnection.close();
  yourConnection.onicecandidate = null;
  setupPeerConnection();
};

We set up references to our elements on the screen, as well as getting the peer connection ready. When the user decides to log in, we send a login message to the server. The server responds with a success message telling the user that they are logged in. From here, we allow the user to connect to another WebRTC user by entering that user's username. This sends an offer and an answer, connecting the two users together through the peer connection. Once the peer connection is created, we connect the users through a data channel so that we can send arbitrary data across.

Hopefully, this is pretty straightforward and you are able to get this code up and running in no time. It should all be familiar to you by now. This is the last time we are going to refer to this code, so get comfortable with it before moving on!

Getting a reference to a file

Now that we have a simple page up and running, we can start working on the file sharing part of the application. The first thing the user needs to do is select a file from their computer's filesystem. This is already easily taken care of by the input element on the page. The browser will allow the user to select a file from their computer and then save a reference to that file for later use. When the user presses the Send button, we want to get a reference to the file that the user has selected. To do this, you need to add an event listener, as shown in the following code:

sendButton.addEventListener("click", function (event) {
  var files = document.querySelector('#files').files;

  if (files.length > 0) {
    // dataChannelSend is a small helper, not shown in this excerpt, that
    // serializes an object and sends it over the data channel
    dataChannelSend({
      type: "start",
      data: files[0]
    });

    sendFile(files[0]);
  }
});

You might be surprised at how simple the code is to get this far! This is the amazing thing about working within a browser: much of the hard work has already been done for you. Here, we get a reference to our input element and the files that it has selected.
The input element supports both multiple and single selection of files, but in this example we will only work with one file at a time. We then make sure we have a file to work with, tell the other user that we want to start sending data, and then call our sendFile function, which we will implement later in this article.

Now, you might think that the object we get back contains the entire data of our file. What we actually get back from the input element is an object representing metadata about the file itself. Let's take a look at this metadata:

{
  lastModified: 1364868324000,
  lastModifiedDate: "2013-04-02T02:05:24.000Z",
  name: "example.gif",
  size: 1745559,
  type: "image/gif"
}

This gives us the information we need to tell the other user that we want to start sending a file named example.gif. It also gives a few other important details, such as the type of file we are sending and when it was last modified. The next step is to read the file's data and send it through the data channel. This is no easy task, however, and we will require some special logic to do so.

Summary

In this article, we covered the basics of using the File API and retrieving a file from a user's computer. The article also discussed the page setup for the application using JavaScript and getting a reference to a file.

Resources for Article:

Further resources on this subject:
WebRTC with SIP and IMS [article]
Using the WebRTC Data API [article]
Applications of WebRTC [article]
What's BitBake All About?

Packt
08 Jul 2015
7 min read
In this article by H M Irfan Sadiq, the author of the book Using Yocto Project with BeagleBone Black, we will move one step ahead by detailing different aspects of the basic engine behind the Yocto Project and other similar projects. This engine is BitBake. Covering all the various aspects of BitBake in one article is not possible; it would require a complete book. We will familiarize you as much as possible with this tool. We will cover the following topics in this article:

Legacy tools and BitBake
Execution of BitBake

(For more resources related to this topic, see here.)

Legacy tools and BitBake

This discussion does not intend to provoke any religious row between other alternatives and BitBake. Every step in the evolution has its own importance, which cannot be denied, and so do other available tools. BitBake was developed with the Embedded Linux Development domain in mind, so it tried to solve the problems faced in this core area and, in my opinion, it addresses them in the best way to date. You might get the same output using other tools, such as Buildroot, but the flexibility and ease provided by BitBake in this domain is second to none. The major difference lies in how the problem is addressed. Legacy tools were developed with packages in mind, but BitBake evolved to solve the problems faced during the creation of BSPs or embedded distributions. Let's go through the challenges faced in this specific domain and understand how BitBake helps us face them.

Cross-compilation

BitBake takes care of cross-compilation. You do not have to worry about it for each package you are building. You can use the same set of packages and build for different platforms seamlessly.

Resolving inter-package dependencies

Resolving the dependencies of packages on each other, and fulfilling them, is a real pain. Here, we only need to specify the different dependency types available, and BitBake takes care of them for us. We can handle both build-time and runtime dependencies.

Variety of target distributions

BitBake supports the creation of a variety of target distributions. We can define a full new distribution of our own by choosing the package management, image types, and other artifacts that fulfill our requirements.

Coupling to the build system

BitBake is not very dependent on the build system we use to build our target images. We don't use the libraries and tools installed on the system; we build their native versions and use them instead. This way, we are not dependent on the build system's root filesystem.

Variety of build system distros

Since BitBake is very loosely coupled to the build system's distribution type, it is very easy to use on various distributions.

Variety of architectures

We have to support different architectures, but we don't have to modify our recipes for each package. We can write our recipes so that features, parameters, and flags are picked up conditionally.

Exploiting parallelism

Even for the simplest projects, we have to build images and run more than a thousand tasks. These tasks require us to use the full power available to us, both computational and memory. BitBake's architecture supports us in this regard, using its scheduler to run as many tasks in parallel as it can, or as many as we configure. Also, when we say task, it should not be confused with package; a task is a part of a package. A package can contain many tasks (fetch, compile, configure, package, populate_sysroot, and so on), and all of these can run in parallel.

Easy to use, extend, and collaborate

Keeping and relying on metadata keeps things simple and configurable. Almost nothing is hard coded, so we can configure things according to our requirements. Also, BitBake provides us with a mechanism to reuse things that are already developed. We can keep our metadata structured so that it gets applied or extended conditionally. You will learn these tricks when we explore layers.
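To make the metadata idea concrete, here is a minimal, illustrative recipe (a .bb file). The package name, source location, checksum, and task bodies are placeholders rather than a real recipe from the book, but the variables and the task syntax are the standard BitBake ones:

SUMMARY = "Example hello application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

# Placeholder source location and checksum
SRC_URI = "http://example.com/downloads/hello-1.0.tar.gz"
SRC_URI[md5sum] = "00000000000000000000000000000000"

# Build-time dependencies: built before this recipe's own tasks run
DEPENDS = "zlib"

# Runtime dependencies: installed into the target root filesystem
RDEPENDS_${PN} = "bash"

# Reuse build logic shared between recipes
inherit autotools

# A task step written in shell syntax
do_compile_append() {
    echo "extra compile step"
}

# A task written in Python, wired into the default task chain
python do_report() {
    bb.plain("Built %s" % d.getVar("PN", True))
}
addtask report after do_install before do_build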
BitBake execution

To get us to a successful package or image, BitBake performs some steps that we need to go through to get an understanding of the workflow. In certain cases, some of these steps can be avoided, but we are not discussing such cases, considering them corner cases. For details on these, refer to the BitBake user manual.

Parsing metadata

When we invoke the BitBake command to build our image, the first thing it does is parse our base configuration metadata. This metadata consists of build_bb/conf/bblayers.conf, multiple layer/conf/layer.conf files, and poky/meta/conf/bitbake.conf. This data can be of the following types:

Configuration data
Class data
Recipes

The key variables BBFILES and BBPATH are constructed from the layer.conf files. The constructed BBPATH variable is used to locate configuration files under conf/ and class files under classes/ directories, while the BBFILES variable is used to find recipe files (.bb and .bbappend). bblayers.conf is used to set these variables. If there is no bblayers.conf file, it is assumed that the user has set BBFILES and BBPATH directly in the environment.

Next, the bitbake.conf file is parsed. After having dealt with the configuration files, the inclusion and parsing of class files is taken care of. These class files are specified using the INHERIT variable. Next, BitBake uses the BBFILES variable to construct a list of recipes to parse, along with any append files. Thus, after parsing, each recipe's values for the various variables are stored in the datastore. After the completion of recipe parsing, BitBake has:

A list of tasks that the recipe has defined
A set of data consisting of keys and values
Dependency information for the tasks

Preparing the tasklist

BitBake starts by looking through the PROVIDES set in recipe files. The PROVIDES set defaults to the recipe name, and we can assign multiple values to it. We can thus have multiple recipes providing a similar package, which is accomplished by setting PROVIDES in those recipes. When actually making such recipes part of the build, we have to define PREFERRED_PROVIDER_foo to select which recipe provides foo. We can do this in multiple locations; in the case of kernels, we use it in the machine configuration (.conf) file. BitBake iterates through the list of targets it has to build and resolves them, along with their dependencies. If PREFERRED_PROVIDER is not set and multiple versions of a package exist, BitBake will choose the highest version.

Each target/recipe has multiple tasks, such as fetch, unpack, configure, and compile. BitBake considers each of these tasks an independent unit in order to exploit parallelism in a multicore environment. Although these tasks are executed sequentially for a single package/recipe, for multiple packages they run in parallel. We may be compiling one recipe, configuring the second, and unpacking the third in parallel; or, at the start, eight packages may all be fetching their sources. For now, we should know that the dependencies between tasks are defined using DEPENDS and RDEPENDS. In DEPENDS, we provide the dependencies that our package needs to build successfully, so BitBake takes care of building these dependencies before our package is built. RDEPENDS are the dependencies required for our package to execute/run successfully on the target system, so BitBake takes care of providing these dependencies in the target's root filesystem.
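Both of these mechanisms are plain configuration metadata. The following snippets illustrate what they typically look like; the paths and layer names are examples, not taken from this article:

# conf/bblayers.conf: the layers from which BBPATH and BBFILES are built
BBLAYERS ?= " \
  /home/build/poky/meta \
  /home/build/poky/meta-yocto \
  /home/build/poky/meta-custom \
  "

# conf/machine/mymachine.conf: choose which recipe provides the kernel
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"

# Optionally pin a version instead of letting BitBake pick the highest
PREFERRED_VERSION_linux-yocto = "3.14%"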
Executing tasks

Tasks can be defined using shell syntax or Python. In the case of shell tasks, a shell script is created under a temporary directory, as run.do_taskname.pid, and is then executed. The generated shell script contains all the exported variables and the shell functions, with all the variables expanded. Output from the task is saved in the same directory, as log.do_taskname.pid. In the case of errors, BitBake shows the full path to this logfile, which is helpful for debugging.

Summary

In this article, you learned the goals and problem areas that BitBake has addressed, which make it a unique option for Embedded Linux Development. You also learned how BitBake actually works.

Resources for Article:

Further resources on this subject:
Learning BeagleBone [article]
Baking Bits with Yocto Project [article]
The BSP Layer [article]
Transactions in Redis

Packt
07 Jul 2015
9 min read
In this article by Vinoo Das, author of the book Learning Redis, we will see how Redis, as a NoSQL datastore, provides a loose sense of transaction. In a traditional RDBMS, a transaction starts with a BEGIN and ends with either a COMMIT or a ROLLBACK. All these RDBMS servers are multithreaded, so when a thread locks a resource, it cannot be manipulated by another thread unless and until the lock is released. Redis by default has MULTI to start a transaction and EXEC to execute the queued commands. In a transaction, the first command is always MULTI; after that, all the commands are stored, and when the EXEC command is received, all the stored commands are executed in sequence. So, under the hood, once Redis receives the EXEC command, all the commands are executed as a single isolated operation. The following are the commands that can be used in Redis for transactions:

MULTI: This marks the start of a transaction block
EXEC: This executes all the commands in the pipeline after MULTI
WATCH: This watches the keys for conditional execution of a transaction
UNWATCH: This removes the WATCH keys of a transaction
DISCARD: This flushes all the previously queued commands in the pipeline

(For more resources related to this topic, see here.)

The following figure represents how a transaction in Redis works:

Transaction in Redis

Pipeline versus transaction

As we have seen, in a pipeline the commands are grouped and executed, and the responses are queued in a block and sent back. In a transaction, on the other hand, all the commands received after MULTI are queued until the EXEC command is received, and only then are they executed. To understand this, it is important to take a case where we have a multithreaded environment and see the outcome.
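Before moving on to the Java samples, it may help to see these commands at work in the redis-cli shell. The following is a minimal illustration (the key name is arbitrary); notice how every command after MULTI merely answers QUEUED until EXEC runs the whole block, and how DISCARD throws the queued commands away:

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET balance 100
QUEUED
127.0.0.1:6379> INCRBY balance 50
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) (integer) 150
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET balance 0
QUEUED
127.0.0.1:6379> DISCARD
OK
127.0.0.1:6379> GET balance
"150"

With that picture in mind, let's get back to the multithreaded samples.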
In the first case, we take two threads firing pipelined commands at Redis. The first thread fires a pipelined command that is going to change the value of a key multiple times, and the second thread tries to read the value of that key. Following is the class that is going to fire the two threads at Redis:

MultiThreadedPipelineCommandTest.java:

package org.learningRedis.chapter.four.pipelineandtx;

public class MultiThreadedPipelineCommandTest {
  public static void main(String[] args) throws InterruptedException {
    Thread pipelineClient = new Thread(new PipelineCommand());
    Thread singleCommandClient = new Thread(new SingleCommand());
    pipelineClient.start();
    Thread.sleep(50); // Thread.sleep is static; no need to call it on currentThread()
    singleCommandClient.start();
  }
}

The code for the client that is going to fire the pipelined commands is as follows (note that the Jedis packages are lowercase, redis.clients.jedis):

package org.learningRedis.chapter.four.pipelineandtx;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();
  @Override
  public void run() {
    long start = System.currentTimeMillis();
    Pipeline commandpipe = jedis.pipelined();
    for (int nv = 0; nv < 300000; nv++) {
      commandpipe.sadd("keys-1", "name" + nv);
    }
    commandpipe.sync();
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value of nv1 after pipeline [ " + data.size() + " ]");
    System.out.println("The time taken for executing client(Thread-1) " + (System.currentTimeMillis() - start));
    ConnectionManager.set(jedis);
  }
}

The code for the client that is going to read the value of the key while the pipeline executes is as follows:

package org.learningRedis.chapter.four.pipelineandtx;
import java.util.Set;
import redis.clients.jedis.Jedis;

public class SingleCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();
  @Override
  public void run() {
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value of nv1 is [ " + data.size() + " ]");
    ConnectionManager.set(jedis);
  }
}

The result will vary as per the machine configuration, but by changing the thread sleep time and running the program a couple of times, the result will be similar to the one shown as follows:

The return value of nv1 is [ 3508 ]
The return value of nv1 after pipeline [ 300000 ]
The time taken for executing client(Thread-1) 3718

Fire the FLUSHDB command every time you run the test; otherwise, you will end up seeing the value of the previous test run, that is, 300,000.

Now we will run the sample in transaction mode, where the command pipeline is preceded by the MULTI keyword and succeeded by the EXEC command. This client is similar to the previous sample, where two clients in separate threads fire commands at a single key on Redis.
The following program is a test client that starts two threads: the first fires commands in transaction mode, and the second tries to read and modify the same resource:

package org.learningRedis.chapter.four.pipelineandtx;

public class MultiThreadedTransactionCommandTest {
  public static void main(String[] args) throws InterruptedException {
    Thread transactionClient = new Thread(new TransactionCommand());
    Thread singleCommandClient = new Thread(new SingleCommand());
    transactionClient.start();
    Thread.sleep(30);
    singleCommandClient.start();
  }
}

This program will try to modify the resource and read the resource while the transaction is going on:

package org.learningRedis.chapter.four.pipelineandtx;
import java.util.Set;
import redis.clients.jedis.Jedis;

public class SingleCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();
  @Override
  public void run() {
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value of nv1 is [ " + data.size() + " ]");
    ConnectionManager.set(jedis);
  }
}

This program will start with the MULTI command, try to modify the resource, end with the EXEC command, and later read the value of the resource:

package org.learningRedis.chapter.four.pipelineandtx;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TransactionCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();
  @Override
  public void run() {
    long start = System.currentTimeMillis();
    Transaction transactionableCommands = jedis.multi();
    for (int nv = 0; nv < 300000; nv++) {
      transactionableCommands.sadd("keys-1", "name" + nv);
    }
    transactionableCommands.exec();
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value nv1 after tx [ " + data.size() + " ]");
    System.out.println("The time taken for executing client(Thread-1) " + (System.currentTimeMillis() - start));
    ConnectionManager.set(jedis);
  }
}

The result of the preceding program will vary as per the machine configuration, but by changing the thread sleep time and running the program a couple of times, the result will be similar to the one shown as follows:

The return code is [ 1 ]
The return value of nv1 is [ null ]
The return value nv1 after tx [ 300000 ]
The time taken for executing client(Thread-1) 7078

Fire the FLUSHDB command every time you run the test. The idea is that the program should not pick up a value left over from a previous run. The proof that the single-command program is able to write to the key is that we see the following line: The return code is [ 1 ].

Let's analyze the result. In the case of the pipeline, the single command reads a value while the pipelined commands are still setting new values for that key, as is evident in the following result:

The return value of nv1 is [ 3508 ]

Now compare this with what happened in the case of the transaction, where the single command tried to read the value but was blocked because of the transaction. Hence, the value will be null or 300,000:

The return value of nv1 after tx [ 0 ]

or

The return value of nv1 after tx [ 300000 ]

So, the difference in output can be attributed to the fact that in a transaction, if we have started a MULTI command and are still in the process of queuing commands (that is, we haven't yet given the server the EXEC request), then any other client can still come in and make a request, and the response will be sent to that other client. Once the client gives the EXEC command, however, all other clients are blocked while all of the queued transaction commands are executed.
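The WATCH command listed at the beginning of this article does not appear in these samples; it adds optimistic locking on top of MULTI/EXEC. The following is a minimal sketch of how it could be used with Jedis (the key name is arbitrary, and the key is assumed to already hold an integer value):

package org.learningRedis.chapter.four.pipelineandtx;

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class WatchCommandExample {
  public static void main(String[] args) {
    Jedis jedis = new Jedis("localhost");

    // Watch the key: if another client modifies it before EXEC,
    // the transaction is aborted and exec() returns null
    jedis.watch("balance");

    int balance = Integer.parseInt(jedis.get("balance"));

    Transaction tx = jedis.multi();
    tx.set("balance", String.valueOf(balance + 50));
    List<Object> result = tx.exec();

    if (result == null) {
      System.out.println("Conflict detected, retry the transaction");
    } else {
      System.out.println("Transaction committed");
    }
    jedis.close();
  }
}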
Pipeline and transaction

To get a better understanding, let's analyze what happened in the case of the pipeline. When two different connections made requests to Redis for the same resource, we saw a result where client-2 picked up a value while client-1 was still executing:

Pipeline in Redis in a multi-connection environment

What this tells us is that the requests from the first connection, that is, the pipelined commands, are stacked as one command in its own execution stack, and the command from the other connection is kept in a stack specific to that connection. The Redis execution thread time-slices between these two execution stacks, and that is why client-2 was able to print a value while client-1 was still executing.

Now let's analyze what happened in the case of the transaction. Again, the two sets of commands (the transaction commands and the GET command) were kept in their own execution stacks, but when the Redis execution thread gave time to the GET command and it went to read the value, seeing the lock, it was not allowed to read the value and was blocked. The Redis execution thread then went back to executing the transaction commands, and again it came back to the GET command, where it was again blocked. This process kept happening until the transaction commands released the lock on the resource, and then the GET command was able to get the value. If, by any chance, the GET command was able to reach the resource before the transaction lock, it got a null value.

Please bear in mind that Redis does not block the execution of other clients while queuing transaction commands, but only while executing them.

Transaction in Redis in a multi-connection environment

This exercise gave us an insight into what happens in the case of pipelines and transactions.

Summary

In this article, we saw in brief how to use Redis not simply as a datastore, but also to pipeline commands, which is much more like bulk processing. Apart from that, we covered areas such as transactions, messaging, and scripting. We also saw how to combine messaging and scripting, and how to create reliable messaging in Redis. This capability makes Redis different from some of the other datastore solutions.

Resources for Article:

Further resources on this subject:
Implementing persistence in Redis (Intermediate) [article]
Using Socket.IO and Express together [article]
Exploring streams [article]
Processing Next-generation Sequencing Datasets Using Python

Packt
07 Jul 2015
25 min read
In this article by Tiago Antao, author of Bioinformatics with Python Cookbook, you will process next-generation sequencing datasets using Python.

If you work in the life sciences, you are probably aware of the increasing importance of computational methods for analyzing increasingly larger datasets. There is a massive need for bioinformaticians to process this data, and one of the main tools is, of course, Python. Python is probably the fastest-growing language in the field of data science. It includes a rich ecosystem of software libraries to perform complex data analysis. Another major point in Python's favor is its great community, which is always ready to help and produces great documentation and high-quality, reliable software.

In this article, we will use Python to process next-generation sequencing datasets. This is one of many examples of Python's usability in bioinformatics; chances are that if you have a biological dataset to analyze, Python can help you. This is surely the case with population genetics, genomics, phylogenetics, proteomics, and many other fields.

Next-generation Sequencing (NGS) is one of the fundamental technological developments of the decade in the field of life sciences. Whole Genome Sequencing (WGS), RAD-Seq, RNA-Seq, ChIP-Seq, and several other technologies are routinely used to investigate important biological problems. These are also called high-throughput sequencing technologies, and with good reason: they generate vast amounts of data that need to be processed. NGS is the main reason why computational biology is becoming a "big data" discipline. More than anything else, this is a field that requires strong bioinformatics techniques, and there is very strong demand for professionals with these skillsets.

Here, we will not discuss each individual NGS technique per se (that would be a massive undertaking). We will use two existing WGS datasets: the Human 1000 genomes project (http://www.1000genomes.org/) and the Anopheles 1000 genomes dataset (http://www.malariagen.net/projects/vector/ag1000g). The code presented will be easily applicable to other genomic sequencing approaches; some of it can also be used for transcriptomic analysis (for example, RNA-Seq). Most of the code is also species-independent, that is, you will be able to apply it to any species for which you have sequenced data.

As this is not an introductory text, you are expected to at least know what FASTA, FASTQ, BAM, and VCF files are. We will also make use of basic genomic terminology without introducing it (things such as exomes, nonsynonymous mutations, and so on). You are required to be familiar with basic Python, and we will leverage that knowledge to introduce the fundamental Python libraries for performing NGS analysis. Here, we will concentrate on analyzing VCF files.

Preparing the environment

You will need Python 2.7 or 3.4. You can use many of the available distributions, including the standard one at http://www.python.org, but we recommend Anaconda Python from http://continuum.io/downloads. We also recommend the IPython Notebook (Project Jupyter) from http://ipython.org/. If you use Anaconda, this and many other packages are available with a simple conda install. There are some amazing libraries for performing data analysis in Python; here, we will use NumPy (http://www.numpy.org/) and matplotlib (http://matplotlib.org/), which you may already be using in your projects. We will also make use of the less widely used seaborn library (http://stanford.edu/~mwaskom/software/seaborn/). For bioinformatics, we will use Biopython (http://biopython.org) and PyVCF (https://pyvcf.readthedocs.org). The code used here is available on GitHub at https://github.com/tiagoantao/bioinf-python. In your realistic pipeline, you will probably also be using other tools, such as bwa, samtools, or GATK, to perform your alignment and SNP calling. In our case, tabix and bgzip (http://www.htslib.org/) are needed.
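For reference, one possible way to set all of this up with Anaconda is shown below. The exact package and channel names may change over time, and PyVCF is installed through pip here because it may not be available in the default conda channels:

# Create and activate a clean environment (optional)
conda create -n bioinf python=3.4
source activate bioinf

# Data analysis and plotting libraries
conda install numpy matplotlib seaborn

# Bioinformatics libraries
conda install biopython
pip install pyvcf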
Analyzing variant calls

After running a genotype caller (for example, GATK or samtools mpileup), you will have a Variant Call Format (VCF) file reporting on genomic variations, such as SNPs (Single-Nucleotide Polymorphisms), InDels (Insertions/Deletions), and CNVs (Copy Number Variations), among others. In this recipe, we will discuss VCF processing with the PyVCF module over the human 1000 genomes project to analyze SNP data.

Getting ready

I believe that 2 to 20 GB of data for a tutorial is asking too much. Although the 1000 genomes VCF files with realistic annotations are in that order of magnitude, we will want to work with much less data here. Fortunately, the bioinformatics community has developed tools that allow the partial download of data. As part of the samtools/htslib package (http://www.htslib.org/), you can download tabix and bgzip, which will take care of data management. For example:

tabix -fh ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release/20130502/supporting/vcf_with_sample_level_annotation/ALL.chr22.phase3_shapeit2_mvncall_integrated_v5_extra_anno.20130502.genotypes.vcf.gz 22:1-17000000 | bgzip -c > genotypes.vcf.gz
tabix -p vcf genotypes.vcf.gz

The first line performs a partial download of the VCF file for chromosome 22 (up to 17 Mbp) of the 1000 genomes project. Then, bgzip compresses it. The second line creates an index, which we will need for direct access to a section of the genome. The preceding code is available at https://github.com/tiagoantao/bioinf-python/blob/master/notebooks/01_NGS/Working_with_VCF.ipynb.

How to do it…

Take a look at the following steps:

Let's start by inspecting the information that we can get per record, as shown in the following code:

import vcf
v = vcf.Reader(filename='genotypes.vcf.gz')

print('Variant Level information')
infos = v.infos
for info in infos:
    print(info)

print('Sample Level information')
fmts = v.formats
for fmt in fmts:
    print(fmt)

We start by inspecting the annotations that are available for each record (remember that each record encodes a variant, such as an SNP, CNV, or InDel, and the state of that variant per sample). At the variant (record) level, we will find AC (the total number of ALT alleles in called genotypes), AF (the estimated allele frequency), NS (the number of samples with data), AN (the total number of alleles in called genotypes), and DP (the total read depth). There are others, but they are mostly specific to the 1000 genomes project (here, we are trying to be as general as possible). Your own dataset might have many more annotations, or none of these.

At the sample level, there are only two annotations in this file: GT (genotype) and DP (the per-sample read depth). Yes, you have the per-variant (total) read depth and the per-sample read depth; be sure not to confuse the two.
Now that we know which information is available, let's inspect a single VCF record with the following code:

v = vcf.Reader(filename='genotypes.vcf.gz')
rec = next(v)
print(rec.CHROM, rec.POS, rec.ID, rec.REF, rec.ALT, rec.QUAL, rec.FILTER)
print(rec.INFO)
print(rec.FORMAT)
samples = rec.samples
print(len(samples))
sample = samples[0]
print(sample.called, sample.gt_alleles, sample.is_het, sample.phased)
print(int(sample['DP']))

We start by retrieving the standard information: the chromosome, position, identifier, reference base (typically, just one), alternative bases (there can be more than one, but it is not uncommon, as a first filtering approach, to only accept a single ALT, that is, only bi-allelic SNPs), quality (PHRED-scaled, as you may expect), and the FILTER status. Regarding the filter, remember that whatever the VCF file says, you may still want to apply extra filters (as in the next recipe).

Then, we print the additional variant-level information (AC, NS, AF, AN, DP, and so on), followed by the sample format (in this case, DP and GT). Finally, we count the number of samples and inspect a single sample, checking whether it was called for this variant and, if available, the reported alleles, heterozygosity, and phasing status (this dataset happens to be phased, which is not that common).

Let's check the type of variant and the number of non-biallelic SNPs in a single pass with the following code:

from collections import defaultdict
f = vcf.Reader(filename='genotypes.vcf.gz')

my_type = defaultdict(int)
num_alts = defaultdict(int)

for rec in f:
    my_type[rec.var_type, rec.var_subtype] += 1
    if rec.is_snp:
        num_alts[len(rec.ALT)] += 1
print(num_alts)
print(my_type)

We use the Python defaultdict collection type. We find that this dataset has InDels (both insertions and deletions), CNVs, and, of course, SNPs (roughly two-thirds being transitions and one-third transversions). There is a residual number (79) of triallelic SNPs.

There's more…

The purpose of this recipe is to get you up to speed on the PyVCF module. At this stage, you should be comfortable with the API. We do not delve much into usage details here because that will be the main purpose of the next recipe: using the VCF module to study the quality of your variant calls. It will probably not be a shocking revelation that PyVCF is not the fastest module on earth; this file format (highly text-based) makes processing a time-consuming task. There are two main strategies for dealing with this problem: parallel processing, or converting to a more efficient format. Note that the VCF developers are working on a binary (BCF) version to deal with part of these problems; see http://www.1000genomes.org/wiki/analysis/variant-call-format/bcf-binary-vcf-version-2.

See also

The specification for VCF is available at http://samtools.github.io/hts-specs/VCFv4.2.pdf
GATK is one of the most widely used variant callers; check https://www.broadinstitute.org/gatk/
samtools and htslib are both used for variant calling and SAM/BAM management; check http://htslib.org

Studying genome accessibility and filtering SNP data

If you are using NGS data, the quality of your VCF calls may need to be assessed and filtered. Here, we will put in place a framework to filter SNP data. More than giving filtering rules (an impossible task to perform in a general way), we give you procedures to assess the quality of your data. With this, you can then devise your own filters.
Getting ready

In the best-case scenario, you have a VCF file with proper filters applied; if this is the case, you can just go ahead and use your file. Note that all VCF files will have a FILTER column, but this does not mean that all the proper filters were applied. You have to be sure that your data is properly filtered.

In the second case, which is one of the most common, your file will have unfiltered data, but you have enough annotations, and you can apply hard filters (that is, there is no need for programmatic filtering). If you have a GATK-annotated file, refer, for instance, to http://gatkforums.broadinstitute.org/discussion/2806/howto-apply-hard-filters-to-a-call-set.

In the third case, you have a VCF file that has all the annotations that you need, but you may want to apply more flexible filters (for example, "if read depth > 20, then accept if mapping quality > 30; otherwise, accept if mapping quality > 40").

In the fourth case, your VCF file does not have all the necessary annotations, and you have to revisit your BAM files (or even other sources of information). In this case, the best solution is to find whatever extra information you have and create a new VCF file with the needed annotations. Some genotype callers, like GATK, allow you to specify which annotations you want; you may also want to use extra programs to provide more annotations. For example, SnpEff (http://snpeff.sourceforge.net/) will annotate your SNPs with predictions of their effect (for example, if they are in exons, are they coding or noncoding?).

It is impossible to provide a clear-cut recipe; it will vary with the type of your sequencing data, your species of study, and your tolerance to errors, among other variables. What we can do is provide a set of typical analyses that are done for high-quality filtering.

In this recipe, we will not use data from the Human 1000 genomes project; we want "dirty" unfiltered data that has a lot of common annotations that can be used to filter it. We will use data from the Anopheles 1000 genomes project (Anopheles is the mosquito vector involved in the transmission of the parasite causing malaria), which makes available filtered and unfiltered data. You can find more information about this project at http://www.malariagen.net/projects/vector/ag1000g.

We will get a part of the centromere of chromosome 3L for around 100 mosquitoes, followed by a part somewhere in the middle of that chromosome (and index both), as shown in the following code:

tabix -fh ftp://ngs.sanger.ac.uk/production/ag1000g/phase1/preview/ag1000g.AC.phase1.AR1.vcf.gz 3L:1-200000 | bgzip -c > centro.vcf.gz
tabix -fh ftp://ngs.sanger.ac.uk/production/ag1000g/phase1/preview/ag1000g.AC.phase1.AR1.vcf.gz 3L:21000001-21200000 | bgzip -c > standard.vcf.gz
tabix -p vcf centro.vcf.gz
tabix -p vcf standard.vcf.gz

As usual, the code to download this data is available in the https://github.com/tiagoantao/bioinf-python/blob/master/notebooks/01_NGS/Filtering_SNPs.ipynb notebook.

Finally, a word of warning about this recipe: the level of Python here will be slightly more complicated than before. The more general code that we write may be easier to reuse in your specific case. We will make extensive use of functional programming techniques (lambda functions) and partial function application.
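If partial function application is new to you, this tiny standalone example (not part of the recipe) shows the idea that the windowed code below relies on:

import functools

def power(base, exponent):
    return base ** exponent

# Fix exponent=2, leaving only the base to be supplied later
square = functools.partial(power, exponent=2)

print(square(5))   # 25
print(square(12))  # 144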
How to do it…

Take a look at the following steps:

Let's start by plotting the distribution of variants across the genome in both files as follows:

%matplotlib inline
from collections import defaultdict

import seaborn as sns
import matplotlib.pyplot as plt

import vcf

def do_window(recs, size, fun):
    start = None
    win_res = []
    for rec in recs:
        if not rec.is_snp or len(rec.ALT) > 1:
            continue
        if start is None:
            start = rec.POS
        my_win = 1 + (rec.POS - start) // size
        while len(win_res) < my_win:
            win_res.append([])
        win_res[my_win - 1].extend(fun(rec))
    return win_res

wins = {}
size = 2000
vcf_names = ['centro.vcf.gz', 'standard.vcf.gz']
for vcf_name in vcf_names:
    recs = vcf.Reader(filename=vcf_name)
    wins[vcf_name] = do_window(recs, size, lambda x: [1])

We start by performing the required imports (as usual, remember to remove the first line if you are not on the IPython Notebook). Before I explain the function, note what we will do. For both files, we will compute windowed statistics: we will divide our file, which includes 200,000 bp of data, into windows of size 2,000 (100 windows). Every time we find a bi-allelic SNP, we add a one to the list related to that window. The window function takes a VCF record (an SNP, rec.is_snp, that is bi-allelic, that is, len(rec.ALT) == 1), determines the window where that record belongs (by an integer division of the position offset, rec.POS - start, by size), and extends the list of results of that window by the function that is passed to it as the fun parameter (which, in our case, is just a one). Note that the dictionary is keyed on vcf_name, the loop variable.

So, now we have a list of 100 elements (each representing 2,000 base pairs). Each element will be another list, which will have a 1 for each bi-allelic SNP found. So, if you have 200 SNPs in the first 2,000 base pairs, the first element of the list will have 200 ones. Let's continue:

def apply_win_funs(wins, funs):
    fun_results = []
    for win in wins:
        my_funs = {}
        for name, fun in funs.items():
            try:
                my_funs[name] = fun(win)
            except:
                my_funs[name] = None
        fun_results.append(my_funs)
    return fun_results

stats = {}
fig, ax = plt.subplots(figsize=(16, 9))
for name, nwins in wins.items():
    stats[name] = apply_win_funs(nwins, {'sum': sum})
    x_lim = [i * size for i in range(len(stats[name]))]
    ax.plot(x_lim, [x['sum'] for x in stats[name]], label=name)
ax.legend()
ax.set_xlabel('Genomic location in the downloaded segment')
ax.set_ylabel('Number of variant sites (bi-allelic SNPs)')
# The title now matches what is actually plotted (SNP counts, not MQ0)
fig.suptitle('Distribution of bi-allelic SNPs along the genome', fontsize='xx-large')

Here, we produce a plot that contains statistical information for each of our 100 windows. The apply_win_funs function calculates a set of statistics for every window. In this case, it sums all the numbers in the window. Remember that every time we find an SNP, we add a one to the window list. This means that if we have 200 SNPs, we will have 200 ones; hence, summing them will return 200.

So, we are able to compute the number of SNPs per window in an apparently convoluted way. Why we are doing things with this strategy will become apparent soon but, for now, let's check the result of this computation for both files (refer to the following figure):

Figure 1: The number of bi-allelic SNPs distributed over windows of 2,000 bp for an area of 200 Kbp near the centromere (blue) and in the middle of the chromosome (green).
Both areas come from chromosome 3L for circa 100 Ugandan mosquitoes from the Anopheles 1000 genomes project.

Note that the amount of SNPs in the centromere is smaller than in the middle of the chromosome. This is expected, because calling variants in centromeres is more difficult than calling them in the middle of the chromosome, and also because there is probably less genomic diversity in centromeres. If you are used to humans or other mammals, you may find the density of variants obnoxiously high; that is mosquitoes for you!

Let's take a look at the sample-level annotation. We will inspect Mapping Quality Zero (refer to https://www.broadinstitute.org/gatk/guide/tooldocs/org_broadinstitute_gatk_tools_walkers_annotator_MappingQualityZeroBySample.php for details), which is a measure of how well the sequences involved in calling this variant map clearly to this position. Note that there is also an MQ0 annotation at the variant level:

import functools

import numpy as np

mq0_wins = {}
vcf_names = ['centro.vcf.gz', 'standard.vcf.gz']
size = 5000

def get_sample(rec, annot, my_type):
    res = []
    samples = rec.samples
    for sample in samples:
        if sample[annot] is None:  # ignoring Nones
            continue
        res.append(my_type(sample[annot]))
    return res

for vcf_name in vcf_names:
    recs = vcf.Reader(filename=vcf_name)
    mq0_wins[vcf_name] = do_window(recs, size, functools.partial(get_sample, annot='MQ0', my_type=int))

Start by looking at the last for loop: we perform a windowed analysis by getting the MQ0 annotation from each record. We do this by calling the get_sample function, which returns our preferred annotation (in this case, MQ0) cast to a certain type (my_type=int). We use partial function application here: Python allows you to specify some parameters of a function and wait for the remaining parameters to be specified later. Note that the most complicated thing here is the functional programming style. Also, note that it makes it very easy to compute other sample-level annotations: just replace MQ0 with AB, AD, GQ, and so on, and you will immediately have a computation for that annotation. If the annotation is not of integer type, no problem; just adapt my_type. This is a difficult programming style if you are not used to it, but you will reap its benefits very soon.

Let's now print the median and the 75th percentile for each window (in this case, with a size of 5,000) as follows:

stats = {}
colors = ['b', 'g']
i = 0
fig, ax = plt.subplots(figsize=(16, 9))
for name, nwins in mq0_wins.items():
    stats[name] = apply_win_funs(nwins, {'median': np.median, '75': functools.partial(np.percentile, q=75)})
    x_lim = [j * size for j in range(len(stats[name]))]
    ax.plot(x_lim, [x['median'] for x in stats[name]], label=name, color=colors[i])
    ax.plot(x_lim, [x['75'] for x in stats[name]], '--', color=colors[i])
    i += 1
ax.legend()
ax.set_xlabel('Genomic location in the downloaded segment')
ax.set_ylabel('MQ0')
fig.suptitle('Distribution of MQ0 along the genome', fontsize='xx-large')

Note that we now have two different statistics on apply_win_funs: percentile and median. Again, we pass function names as parameters (np.median) and perform partial function application (np.percentile).
Let's now print the median and the 75th percentile for each window (in this case, with a size of 5,000) as follows:

stats = {}
colors = ['b', 'g']
i = 0
fig, ax = plt.subplots(figsize=(16, 9))
for name, nwins in mq0_wins.items():
    stats[name] = apply_win_funs(nwins, {'median': np.median,
                                         '75': functools.partial(np.percentile, q=75)})
    x_lim = [j * size for j in range(len(stats[name]))]
    ax.plot(x_lim, [x['median'] for x in stats[name]], label=name, color=colors[i])
    ax.plot(x_lim, [x['75'] for x in stats[name]], '--', color=colors[i])
    i += 1
ax.legend()
ax.set_xlabel('Genomic location in the downloaded segment')
ax.set_ylabel('MQ0')
fig.suptitle('Distribution of MQ0 along the genome', fontsize='xx-large')

Note that we now compute two different statistics with apply_win_funs: the median and the 75th percentile. Again, we pass function names as parameters (np.median) and perform partial function application (np.percentile with q=75). The result can be seen in the following figure:

Figure 2: Median (continuous line) and 75th percentile (dashed) of MQ0 of sample SNPs distributed in windows of 5,000 bp for an area of 200 Kbp near the centromere (blue) and in the middle of the chromosome (green); both areas come from chromosome 3L for circa 100 Ugandan mosquitoes from the Anopheles 1000 Genomes project

For the "standard" file, the median MQ0 is 0 (it is plotted at the very bottom and is almost invisible); this is good, as it suggests that most sequences involved in the calling of variants map clearly to this area of the genome. For the centromere, MQ0 is of poor quality. Furthermore, there are areas where the genotype caller could not find any variants at all; hence, the incomplete chart.

Let's compare heterozygosity with the DP sample-level annotation. Here, we will plot the fraction of heterozygous calls as a function of the sample read depth (DP) for every SNP. We will first explain the result and only then the code that generates it. The following figure shows the fraction of calls that are heterozygous at a certain depth:

Figure 3: The continuous line represents the fraction of heterozygote calls computed at a certain depth; in blue is the centromeric area, in green is the "standard" area; the dashed lines represent the number of sample calls per depth; both areas come from chromosome 3L for circa 100 Ugandan mosquitoes from the Anopheles 1000 Genomes project

In the preceding figure, there are two considerations to be taken into account. First, at a very low depth, the fraction of heterozygote calls is biased low; this makes sense because the number of reads per position does not allow you to make a correct estimate of the presence of both alleles in a sample. So, you should not trust calls at a very low depth. Second, as expected, the number of calls in the centromere is way lower than outside it; the distribution of SNPs outside the centromere follows a common pattern that you can expect in many datasets.

Here is the code:

def get_sample_relation(recs, f1, f2):
    rel = defaultdict(int)
    for rec in recs:
        if not rec.is_snp:
            continue
        for sample in rec.samples:
            try:
                v1 = f1(sample)
                v2 = f2(sample)
                if v1 is None or v2 is None:
                    continue  # We ignore Nones
                rel[(v1, v2)] += 1
            except:
                pass  # This is outside the domain (typically None)
    return rel

rels = {}
for vcf_name in vcf_names:
    recs = vcf.Reader(filename=vcf_name)
    rels[vcf_name] = get_sample_relation(recs,
                                         lambda s: 1 if s.is_het else 0,
                                         lambda s: int(s['DP']))

Let's start by looking at the for loop. Again, we use functional programming: the get_sample_relation function traverses all the SNP records and applies two functional parameters; the first determines heterozygosity, whereas the second gets the sample DP (remember that there is also a variant DP).

Now, as the code is complex enough as it is, I opted for a naive data structure to be returned by get_sample_relation: a dictionary where the key is a pair of results (in this case, heterozygosity and DP) and the value is the count of SNPs that share both values. There are more elegant data structures with different trade-offs for this: scipy sparse matrices, pandas DataFrames, or maybe you want to consider PyTables. The fundamental point here is to have a framework that is general enough to compute relationships among a couple of sample annotations.
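If you prefer a tabular view of the same information, here is a minimal sketch that converts this dictionary into a pandas DataFrame; pandas is an assumption of this sketch, as it is not used anywhere else in this recipe:

import pandas as pd

# Each key in rel is a (heterozygosity, DP) pair; the value is the SNP count
rel = rels['standard.vcf.gz']
df = pd.DataFrame([(het, dp, cnt) for (het, dp), cnt in rel.items()],
                  columns=['is_het', 'DP', 'count'])
# Pivoting gives a DP-by-heterozygosity contingency table
table = df.pivot_table(index='DP', columns='is_het',
                       values='count', fill_value=0)
print(table.head())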
Also, be careful with the dimension space of several annotations; for example, if your annotation is of float type, you might have to round it (if you don't, the size of your data structure might become too big).
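Here is a hedged sketch of such rounding, reusing get_sample_relation with a float annotation; AB (allele balance) is used purely as an illustration, so check that your VCF files actually carry it:

# Round the float annotation to one decimal place to keep the key space small
rels_ab = {}
for vcf_name in vcf_names:
    recs = vcf.Reader(filename=vcf_name)
    rels_ab[vcf_name] = get_sample_relation(recs,
                                            lambda s: 1 if s.is_het else 0,
                                            lambda s: round(float(s['AB']), 1))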
Now, let's take a look at all the plotting code. We will do it in two parts; here is part 1:

def plot_hz_rel(dps, ax, ax2, name, rel):
    frac_hz = []
    cnt_dp = []
    for dp in dps:
        hz = 0.0
        cnt = 0
        for khz, kdp in rel.keys():
            if kdp != dp:
                continue
            cnt += rel[(khz, dp)]
            if khz == 1:
                hz += rel[(khz, dp)]
        frac_hz.append(hz / cnt)
        cnt_dp.append(cnt)
    ax.plot(dps, frac_hz, label=name)
    ax2.plot(dps, cnt_dp, '--', label=name)

This function takes a data structure (as generated by get_sample_relation), expecting that the first element of the key tuple is the heterozygosity state (0 = homozygote, 1 = heterozygote) and the second is the DP. With this, it generates two lines: one with the fraction of samples that are heterozygous at a certain depth and another with the SNP count per depth. Let's now call this function, as shown in the following code:

fig, ax = plt.subplots(figsize=(16, 9))
ax2 = ax.twinx()
for name, rel in rels.items():
    dps = list(set([x[1] for x in rel.keys()]))
    dps.sort()
    plot_hz_rel(dps, ax, ax2, name, rel)
ax.set_xlim(0, 75)
ax.set_ylim(0, 0.2)
ax2.set_ylabel('Quantity of calls')
ax.set_ylabel('Fraction of Heterozygote calls')
ax.set_xlabel('Sample Read Depth (DP)')
ax.legend()
fig.suptitle('Number of calls per depth and fraction of calls which are Hz',
             fontsize='xx-large')

Here, we use two axes: on the left-hand side, we have the fraction of heterozygote SNPs, whereas on the right-hand side, we have the number of SNPs. We then call plot_hz_rel for both data files. The rest is standard matplotlib code.

Finally, let's compare variant DP with the categorical variant-level annotation: EFF. EFF is provided by SnpEff and tells us (among many other things) the type of SNP (for example, intergenic, intronic, coding synonymous, and coding nonsynonymous). The Anopheles dataset provides this useful annotation. Let's start by extracting variant-level annotations in the same functional programming style, as shown in the following code:

def get_variant_relation(recs, f1, f2):
    rel = defaultdict(int)
    for rec in recs:
        if not rec.is_snp:
            continue
        try:
            v1 = f1(rec)
            v2 = f2(rec)
            if v1 is None or v2 is None:
                continue  # We ignore Nones
            rel[(v1, v2)] += 1
        except:
            pass
    return rel

The programming style here is similar to get_sample_relation, but we do not delve into the samples. Now, we will define the types of effects that we will work with and convert each effect to an integer, as this allows us to use it as an index, for example, in matrices. Think about coding a categorical variable:

accepted_eff = ['INTERGENIC', 'INTRON', 'NON_SYNONYMOUS_CODING', 'SYNONYMOUS_CODING']

def eff_to_int(rec):
    try:
        for annot in rec.INFO['EFF']:
            # We use the first annotation
            master_type = annot.split('(')[0]
            return accepted_eff.index(master_type)
    except ValueError:
        return len(accepted_eff)

We will now traverse the file; the style should be clear to you by now:

eff_mq0s = {}
for vcf_name in vcf_names:
    recs = vcf.Reader(filename=vcf_name)
    eff_mq0s[vcf_name] = get_variant_relation(recs,
                                              lambda r: eff_to_int(r),
                                              lambda r: int(r.INFO['DP']))

Finally, we will plot the distribution of DP per SNP effect, as shown in the following code:

fig, ax = plt.subplots(figsize=(16, 9))
vcf_name = 'standard.vcf.gz'
bp_vals = [[] for x in range(len(accepted_eff) + 1)]
for k, cnt in eff_mq0s[vcf_name].items():
    my_eff, mq0 = k
    bp_vals[my_eff].extend([mq0] * cnt)
sns.boxplot(bp_vals, sym='', ax=ax)
ax.set_xticklabels(accepted_eff + ['OTHER'])
ax.set_ylabel('DP (variant)')
fig.suptitle('Distribution of variant DP per SNP type',
             fontsize='xx-large')

Here, we just print a box plot for the noncentromeric file (refer to the following figure). The results are as expected: SNPs in coding areas will probably have more depth if they are in more complex regions (that is, regions that are easier to call) than intergenic SNPs:

Figure 4: Box plot for the distribution of variant read depth across different SNP effects

There's more…

The right approach depends on the type of sequencing data that you have, the number of samples, and potential extra information (for example, pedigree among samples). This recipe is very complex as it is, but parts of it are profoundly naive (there is a limit to the complexity that I could force on you in a simple recipe). For example, the window code does not support overlapping windows; also, the data structures are simplistic. However, I hope that they give you an idea of the general strategy to process genomic high-throughput sequencing data.
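To give you an idea of what lifting one of these limitations could look like, here is a minimal sketch of an overlapping-window variant of do_window; the step parameter (how far consecutive windows are shifted) is an assumption of this sketch and not part of the recipe's code:

def do_overlapping_window(recs, size, step, fun):
    # Window i spans [start + i * step, start + i * step + size - 1],
    # so a record can fall into several windows when step < size
    start = None
    win_res = []
    for rec in recs:
        if not rec.is_snp or len(rec.ALT) > 1:
            continue
        if start is None:
            start = rec.POS
        offset = rec.POS - start
        first_win = max(0, (offset - size + step) // step)  # ceil((offset - size + 1) / step)
        last_win = offset // step
        while len(win_res) <= last_win:
            win_res.append([])
        for w in range(first_win, last_win + 1):
            win_res[w].extend(fun(rec))
    return win_res

# With step == size, this behaves like the original do_window; for example,
# 2,000 bp windows shifted by 500 bp:
# wins = do_overlapping_window(vcf.Reader(filename='standard.vcf.gz'),
#                              2000, 500, lambda x: [1])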
See also

There are many filtering rules, but I would like to draw your attention to the need for reasonably good coverage (clearly more than 10x); for example, refer to Meynert et al., "Variant detection sensitivity and biases in whole genome and exome sequencing", at http://www.biomedcentral.com/1471-2105/15/247/

Brad Chapman is one of the best-known specialists in sequencing analysis and data quality with Python and the main author of Blue Collar Bioinformatics, a blog that you may want to check out at https://bcbio.wordpress.com/

Brad is also the main author of bcbio-nextgen, a Python-based pipeline for high-throughput sequencing analysis. Refer to https://bcbio-nextgen.readthedocs.org

Peter Cock is the main author of Biopython and is heavily involved in NGS analysis; be sure to check his blog, "Blasted Bioinformatics!?", at http://blastedbio.blogspot.co.uk/

Summary

In this article, we prepared the environment, analyzed variant calls, and learned about genome accessibility and filtering SNP data.