How-To Tutorials

Bootstrap in a Box

Packt
07 Aug 2015
6 min read
In this article written by Snig Bhaumik, author of the book Bootstrap Essentials, we explain the concept of Bootstrap, responsive design patterns, navigation patterns, and the different components that are included in Bootstrap.

Responsive design patterns

Here are a few established and well-adopted patterns in responsive web design:

- Fluid design: This is the most popular and easiest option for responsive design. In this pattern, a multi-column layout designed for larger screens renders as a single column on smaller screens, in exactly the same sequence.
- Column drop: In this pattern, the page is also rendered in a single column; however, the order of the blocks is altered. That means a content block that appears first on a larger screen might be rendered second or third on a smaller screen.
- Layout shifter: This is a complex but powerful pattern in which the whole layout of the screen contents is altered for smaller screens. This means that you need to develop different page layouts for large, medium, and small screens.

Navigation patterns

You should take care of the following things while designing a responsive web page. These are essentially the major navigational elements you would concentrate on while developing a mobile-friendly, responsive website:

- Menu bar
- Navigation/app bar
- Footer
- Main container shell
- Images
- Tabs
- HTML forms and elements
- Alerts and popups
- Embedded audio and video, and so on

You can see that there are lots of elements and aspects you need to take care of to create a fully responsive design. While all of these can be achieved using various features and technologies in CSS3, it is by no means an easy problem to solve without a framework to help. Precisely, you need a frontend framework that takes care of all the pain of the technical responsive design implementation and frees you to concentrate on your brand and application design. Now we introduce Bootstrap, which will help you design and develop a responsive web design in a much more optimized and efficient way.

Introducing Bootstrap

Simply put, Bootstrap is a frontend framework for faster and easier web development in the new standard of the mobile-first philosophy. It uses HTML, CSS, and JavaScript. In August 2011, Twitter released Bootstrap as open source. There are quite a few similar frontend frameworks in the industry, but Bootstrap is arguably the most popular of the lot; this is evident from the fact that it has been the most-starred project on GitHub since 2012. By now, you should be in a position to see why and where to use Bootstrap for web development; however, just to recap, here are the points in short:

- The mobile-first approach
- A responsive design
- Automatic browser support and handling
- Easy to adapt and get going

What Bootstrap includes

The following diagram demonstrates the overall structure of Bootstrap.

CSS

Bootstrap comes with fundamental HTML elements styled, global CSS classes, classes for advanced grid patterns, and lots of enhanced and extended CSS classes.
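To make the grid classes concrete, here is a minimal sketch (our own illustration, not taken from the book) of a fluid, column-drop layout built with Bootstrap 3 grid classes:

<div class="container">
  <div class="row">
    <!-- Three side-by-side columns on medium screens and up;
         each drops to a full-width row on smaller screens -->
    <div class="col-md-4">Main story</div>
    <div class="col-md-4">Sidebar</div>
    <div class="col-md-4">Latest offers</div>
  </div>
</div>

Because col-md-4 applies only from the medium breakpoint upwards, the same markup renders as three columns on a desktop and as a single stacked column on a phone, with no extra CSS required.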
For example, this is how the html global element is configured in Bootstrap CSS:

html {
  font-family: sans-serif;
  -webkit-text-size-adjust: 100%;
  -ms-text-size-adjust: 100%;
}

This is how a standard hr HTML element is styled:

hr {
  height: 0;
  -webkit-box-sizing: content-box;
  -moz-box-sizing: content-box;
  box-sizing: content-box;
}

Here is an example of the new classes introduced in Bootstrap:

.glyphicon {
  position: relative;
  top: 1px;
  display: inline-block;
  font-family: 'Glyphicons Halflings';
  font-style: normal;
  font-weight: normal;
  line-height: 1;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

Components

Bootstrap offers a rich set of reusable, built-in components, such as breadcrumbs, progress bars, alerts, and navigation bars. The components are technically custom CSS classes specially crafted for a specific purpose. For example, if you want to create a breadcrumb in your page, you simply add an ordered list to your HTML using Bootstrap's breadcrumb class:

<ol class="breadcrumb">
  <li><a href="#">Home</a></li>
  <li><a href="#">The Store</a></li>
  <li class="active">Offer Zone</li>
</ol>

In the background (stylesheet), these Bootstrap classes are used to create your breadcrumb:

.breadcrumb {
  padding: 8px 15px;
  margin-bottom: 20px;
  list-style: none;
  background-color: #f5f5f5;
  border-radius: 4px;
}
.breadcrumb > li {
  display: inline-block;
}
.breadcrumb > li + li:before {
  padding: 0 5px;
  color: #ccc;
  content: "/\00a0";
}
.breadcrumb > .active {
  color: #777;
}

Please note that these code blocks are simply snippets.

JavaScript

The Bootstrap framework comes with a number of ready-to-use JavaScript plugins. Thus, when you need to create popup windows, tabs, carousels, tooltips, and so on, you just use one of the prepackaged JavaScript plugins. For example, if you need to create a tab control in your page, you use this:

<div role="tabpanel">
  <ul class="nav nav-tabs" role="tablist">
    <li role="presentation" class="active"><a href="#recent" aria-controls="recent" role="tab" data-toggle="tab">Recent Orders</a></li>
    <li role="presentation"><a href="#all" aria-controls="all" role="tab" data-toggle="tab">All Orders</a></li>
    <li role="presentation"><a href="#redeem" aria-controls="redeem" role="tab" data-toggle="tab">Redemptions</a></li>
  </ul>
  <div class="tab-content">
    <div role="tabpanel" class="tab-pane active" id="recent">Recent Orders</div>
    <div role="tabpanel" class="tab-pane" id="all">All Orders</div>
    <div role="tabpanel" class="tab-pane" id="redeem">Redemption History</div>
  </div>
</div>

To activate (open) a tab, you write this JavaScript code:

$('#profileTab li:eq(1) a').tab('show');

As you can guess from the syntax of this line, the Bootstrap JS plugins are built on top of jQuery; thus, the JS code you write for Bootstrap is also all based on jQuery.

Customization

Even though Bootstrap offers most (if not all) standard features and functionalities for responsive web design, there may be several cases where you want to customize and extend the framework. One of the most basic requirements for customization is to deploy your own branding and color combinations (themes) instead of the Bootstrap defaults. There can be several such use cases where you want to change the default behavior of the framework. Bootstrap offers easy and stable ways to customize the platform.
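As a taste of what such customization can look like, here is a minimal sketch (the class names are Bootstrap's, but the color values are placeholders for your own brand palette) of a stylesheet that overrides the defaults; it simply needs to be loaded after bootstrap.css:

/* custom-theme.css: loaded after bootstrap.css so these rules win */
.btn-primary {
  background-color: #4a7c59; /* placeholder brand color */
  border-color: #3d684a;
}
.navbar-default {
  background-color: #2b2b2b;
  border-color: #1f1f1f;
}

Alternatively, Bootstrap 3's LESS source exposes variables such as @brand-primary that can be changed before compiling, which keeps all the derived colors consistent.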
When you use the Bootstrap CSS, all the global and fundamental HTML elements automatically become responsive and behave properly on whatever client device the web page is viewed. The built-in components are also designed to be responsive, so as the developer, you shouldn't have to worry about how these advanced components behave on different devices and client agents.

Summary

In this article, we discussed the basics of Bootstrap along with a brief explanation of the design patterns and the navigation patterns.

Resources for Article:

Further resources on this subject:
- Deep Customization of Bootstrap [article]
- The Bootstrap grid system [article]
- Creating a Responsive Magento Theme with Bootstrap 3 [article]

NLTK for hackers

Packt
07 Aug 2015
9 min read
In this article written by Nitin Hardeniya, author of the book NLTK Essentials, we will learn that "Life is short, we need Python"; that's the mantra I follow and truly believe in. As fresh graduates, we learned and worked mostly with C/C++/Java. While these languages have amazing features, Python has a charm of its own. The day I started using Python, I loved it. I really did. The big coincidence here is that I finally ended up working with Python during my initial projects on the job. I started to love the data structures, libraries, and ecosystem Python has for beginners as well as for expert programmers.

Python as a language has advanced very fast and spread widely. If you are a machine learning or natural language processing enthusiast, then Python is 'the' go-to language these days. Python has some amazing ways of dealing with strings, a very easy and elegant coding style, and, most importantly, a long list of open libraries. I could go on and on about Python and my love for it. But here I want to talk very specifically about NLTK (Natural Language Toolkit), one of the most popular Python libraries for natural language processing.

NLTK is simply awesome and, in my opinion, it's the best way to learn and implement some of the most complex NLP concepts. NLTK has a variety of generic text preprocessing tools, such as tokenization, stop word removal, and stemming, and at the same time has some very NLP-specific tools, such as part-of-speech tagging, chunking, named entity recognition, and dependency parsing. NLTK provides some of the easiest solutions to all of the above stages of NLP, and that's why it is the most preferred library for any text processing or text mining application. NLTK not only provides some pretrained models that can be applied directly to your dataset, it also provides ways to customize and build your own taggers, tokenizers, and so on. NLTK is a big library with many tools available to an NLP developer. I have provided a cheat sheet of some of the most common steps and their solutions using NLTK. In our book, NLTK Essentials, I have tried to give you enough information to deal with all of these processing steps using NLTK.

To show you the power of NLTK, let's try to develop a very easy application that finds the topics in unstructured text and displays them as a word cloud.

Instead of going further into the theoretical aspects of natural language processing, let's start with a quick dive into NLTK. I am going to start with some basic example use cases of NLTK. There is a good chance that you have already done something similar. First, I will show a typical Python programmer's approach and then move on to NLTK for a much more efficient, robust, and clean solution.

We will start by analyzing some example text content:

>>> import urllib2
>>> # urllib2 is used to download the html content of the web link
>>> response = urllib2.urlopen('http://python.org/')
>>> # You can read the entire content of a file using the read() method
>>> html = response.read()
>>> print len(html)
47020

For the current example, I have taken the content from Python's home page: https://www.python.org/. We don't have any clue about the kind of topics that are discussed in this URL, so let's say that we want to start an exploratory data analysis (EDA). Typically in a text domain, EDA can have many meanings, but we will go with a simple case of which kinds of terms dominate the document. What are the topics? How frequent are they?
The process will involve some level of preprocessing. We will try to do this in a pure Python way first, and then we will do it using NLTK. Let's start with cleaning the HTML tags. One way to do this is to select just the tokens, including numbers and characters. Anybody who has worked with regular expressions should be able to convert the HTML string into a list of tokens:

>>> # split the string on whitespace
>>> tokens = [tok for tok in html.split()]
>>> print "Total no of tokens :" + str(len(tokens))
>>> # first 100 tokens
>>> print tokens[0:100]
Total no of tokens :2860
['<!doctype', 'html>', '<!--[if', 'lt', 'IE', '7]>', '<html', 'class="no-js', 'ie6', 'lt-ie7', 'lt-ie8', 'lt-ie9">', '<![endif]-->', '<!--[if', 'IE', '7]>', '<html', 'class="no-js', 'ie7', 'lt-ie8', 'lt-ie9">', '<![endif]-->', ''type="text/css"', 'media="not', 'print,', 'braille,' ...]

As you can see, there is an excess of HTML tags and other unwanted characters when we use the preceding method. A cleaner version of the same task will look something like this:

>>> import re
>>> # using re.split; see https://docs.python.org/2/library/re.html
>>> tokens = re.split('\W+', html)
>>> print len(tokens)
>>> print tokens[0:100]
5787
['', 'doctype', 'html', 'if', 'lt', 'IE', '7', 'html', 'class', 'no', 'js', 'ie6', 'lt', 'ie7', 'lt', 'ie8', 'lt', 'ie9', 'endif', 'if', 'IE', '7', 'html', 'class', 'no', 'js', 'ie7', 'lt', 'ie8', 'lt', 'ie9', 'endif', 'if', 'IE', '8', 'msapplication', 'tooltip', 'content', 'The', 'official', 'home', 'of', 'the', 'Python', 'Programming', 'Language', 'meta', 'name', 'apple' ...]

This looks much cleaner now, but you can still do more; I leave it to you to remove as much noise as you can. You can also use word length as a criterion and remove words of length one; this will remove elements such as 7, 8, and so on, which are just noise in this case.

Now let's turn to NLTK for the same task. There is a function called clean_html() that can do all the work we were looking for:

>>> import nltk
>>> # http://www.nltk.org/api/nltk.html#nltk.util.clean_html
>>> clean = nltk.clean_html(html)
>>> # clean holds the entire string with all the html noise removed
>>> tokens = [tok for tok in clean.split()]
>>> print tokens[:100]
['Welcome', 'to', 'Python.org', 'Skip', 'to', 'content', '&#9660;', 'Close', 'Python', 'PSF', 'Docs', 'PyPI', 'Jobs', 'Community', '&#9650;', 'The', 'Python', 'Network', '&equiv;', 'Menu', 'Arts', 'Business' ...]

Cool, right? This is definitely much cleaner and easier to do.

No EDA can start without a frequency distribution. Let's try to get one. First, let's do it the Python way, then I will show you the NLTK recipe:

>>> import operator
>>> freq_dis = {}
>>> for tok in tokens:
>>>     if tok in freq_dis:
>>>         freq_dis[tok] += 1
>>>     else:
>>>         freq_dis[tok] = 1
>>> # We want to sort this dictionary on values (freq in this case)
>>> sorted_freq_dist = sorted(freq_dis.items(), key=operator.itemgetter(1), reverse=True)
>>> print sorted_freq_dist[:25]
[('Python', 55), ('>>>', 23), ('and', 21), ('to', 18), (',', 18), ('the', 14), ('of', 13), ('for', 12), ('a', 11), ('Events', 11), ('News', 11), ('is', 10), ('2014-', 10), ('More', 9), ('#', 9), ('3', 9), ('=', 8), ('in', 8), ('with', 8), ('Community', 7), ('The', 7), ('Docs', 6), ('Software', 6), (':', 6), ('3:', 5), ('that', 5), ('sum', 5)]

Naturally, as this is Python's home page, Python and the >>> interpreter prompt are the most common terms, which also gives a sense of the website. A better and more efficient approach is to use NLTK's FreqDist() function.
For this, we will take a look at the same code we developed before:

>>> import nltk
>>> Freq_dist_nltk = nltk.FreqDist(tokens)
>>> print Freq_dist_nltk
>>> for k, v in Freq_dist_nltk.items():
>>>     print str(k) + ':' + str(v)
<FreqDist: 'Python': 55, '>>>': 23, 'and': 21, ',': 18, 'to': 18, 'the': 14, 'of': 13, 'for': 12, 'Events': 11, 'News': 11, ...>
Python:55
>>>:23
and:21
,:18
to:18
the:14
of:13
for:12
Events:11
News:11

Let's now do some more funky things. Let's plot this:

>>> Freq_dist_nltk.plot(50, cumulative=False)

In the resulting frequency distribution plot, the curve drops off into a long tail at around a frequency of 400. Still, there is some noise: there are words such as the, of, for, and =. These useless words have a terminology of their own; they are stop words, such as the, a, and an. Articles and pronouns are generally present in most documents; hence, they are not discriminative enough to be informative. In most NLP and information retrieval tasks, people generally remove stop words. Let's go back again to our running example:

>>> stopwords = [word.strip().lower() for word in open("PATH/english.stop.txt")]
>>> clean_tokens = [tok for tok in tokens if len(tok.lower()) > 1 and (tok.lower() not in stopwords)]
>>> Freq_dist_nltk = nltk.FreqDist(clean_tokens)
>>> Freq_dist_nltk.plot(50, cumulative=False)

This looks much cleaner now! After getting this far, you should be able to generate a word cloud from the cleaned tokens. Please go to http://www.wordle.net/advanced for more word clouds.

Summary

To summarize, this article was intended to give you a brief introduction to natural language processing. The book does assume some background in NLP and programming in Python, but we have tried to give a very quick head start to Python and NLP.

Resources for Article:

Further resources on this subject:
- Hadoop Monitoring and its aspects [Article]
- Big Data Analysis (R and Hadoop) [Article]
- SciPy for Signal Processing [Article]

The Camera API

Packt
07 Aug 2015
4 min read
In this article by Purusothaman Ramanujam, the author of PhoneGap Beginner's Guide Third Edition, we will look at the Camera API. The Camera API provides access to the device's camera application using the Camera plugin identified by the cordova-plugin-camera key. With this plugin installed, an app can take a picture or gain access to a media file stored in the photo library and albums that the user created on the device. The Camera API exposes the following two methods defined in the navigator.camera object:

- getPicture: This opens the default camera application or allows the user to browse the media library, depending on the options specified in the configuration object that the method accepts as an argument.
- cleanup: This cleans up any intermediate photo files left in temporary storage (supported only on iOS).

As arguments, the getPicture method accepts a success handler, a failure handler, and, optionally, an object used to specify several camera options through its properties, as follows:

- quality: This is a number between 0 and 100 used to specify the quality of the saved image.
- destinationType: This is a number used to define the format of the value returned in the success handler. The possible values are stored in the following Camera.DestinationType pseudo constants:
  - DATA_URL (0): This indicates that the getPicture method will return the image as a Base64-encoded string.
  - FILE_URI (1): This indicates that the method will return the file URI.
  - NATIVE_URI (2): This indicates that the method will return a platform-dependent file URI (for example, assets-library:// on iOS or content:// on Android).
- sourceType: This is a number used to specify where the getPicture method can access an image. The possible values are stored in the Camera.PictureSourceType pseudo constants: PHOTOLIBRARY (0), CAMERA (1), and SAVEDPHOTOALBUM (2):
  - PHOTOLIBRARY: This indicates that the method will get an image from the device's library.
  - CAMERA: This indicates that the method will grab a picture from the camera.
  - SAVEDPHOTOALBUM: This indicates that the user will be prompted to select an album before picking an image.
- allowEdit: This is a Boolean value (the value is true by default) used to indicate that the user can make small edits to the image before confirming the selection; it works only on iOS.
- encodingType: This is a number used to specify the encoding of the returned file. The possible values are stored in the Camera.EncodingType pseudo constants: JPEG (0) and PNG (1).
- targetWidth and targetHeight: These are the width and height, in pixels, to which the captured image should be scaled; it's possible to specify only one of the two options. When both are specified, the image will be scaled to the value that results in the smallest aspect ratio (the aspect ratio of an image describes the proportional relationship between its width and height).
- mediaType: This is a number used to specify what kind of media files are returned when the getPicture method is called using the Camera.PictureSourceType.PHOTOLIBRARY or Camera.PictureSourceType.SAVEDPHOTOALBUM pseudo constants as sourceType; the possible values are stored in the Camera.MediaType object as pseudo constants and are PICTURE (0), VIDEO (1), and ALLMEDIA (2).
- correctOrientation: This is a Boolean value that forces the device camera to correct for the device orientation during the capture.
- cameraDirection: This is a number used to specify which device camera is used during the capture. The values are stored in the Camera.Direction object as pseudo constants and are BACK (0) and FRONT (1).
- popoverOptions: This is an object supported on iOS to specify the anchor element location and arrow direction of the popover used on iPad when selecting images from the library or an album.
- saveToPhotoAlbum: This is a Boolean value (the value is false by default) used to save the captured image in the device's default photo album.
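Putting several of these options together, here is a minimal sketch of a getPicture call (the handler names and the "photo" element id are our own, for illustration):

function capturePhoto() {
    navigator.camera.getPicture(onSuccess, onFail, {
        quality: 75,
        destinationType: Camera.DestinationType.FILE_URI,
        sourceType: Camera.PictureSourceType.CAMERA,
        encodingType: Camera.EncodingType.JPEG,
        targetWidth: 640,
        targetHeight: 480,
        correctOrientation: true
    });
}

function onSuccess(imageURI) {
    // with FILE_URI, the success handler receives the file URI
    document.getElementById('photo').src = imageURI;
}

function onFail(message) {
    console.log('Camera error: ' + message);
}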
The success handler receives an argument that contains either the URI to the file or the data stored in the file as a Base64-encoded string, depending on the value stored in the destinationType property of the options object. The failure handler receives a string containing the device's native error message as an argument.

Similarly, the cleanup method accepts a success handler and a failure handler. The only difference between the two is that the success handler doesn't receive any argument. The cleanup method is supported only on iOS and can be used when the sourceType property value is Camera.PictureSourceType.CAMERA and the destinationType property value is Camera.DestinationType.FILE_URI.

Summary

In this article, we looked at the various properties available with the Camera API.

Resources for Article:

Further resources on this subject:
- Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [article]
- Using Location Data with PhoneGap [article]
- iPhone JavaScript: Installing Frameworks [article]

Detecting Touchscreen Gestures

Packt
06 Aug 2015
18 min read
In this article by Kyle Mew, author of the book Android 5 Programming by Example, we will learn how to:

- Add a GestureDetector to a view
- Add an OnTouchListener and an OnGestureListener
- Detect and refine fling gestures
- Use the DDMS Logcat to observe the MotionEvent class
- Edit the Logcat filter configuration
- Simplify code with a SimpleOnGestureListener
- Add a GestureDetector to an Activity
- Edit the Manifest to control launch behavior
- Hide UI elements
- Create a splash screen
- Lock screen orientation

Adding a GestureDetector to a view

Together, view.GestureDetector and view.View.OnTouchListener are all that are required to provide our ImageView with gesture functionality. The listener contains an onTouch() callback that relays each MotionEvent to the detector. We are going to program the large ImageView so that it can display a small gallery of related pictures that can be accessed by swiping left or right on the image. There are two steps to this task, as before we implement our gesture detector, we need to provide the data for it to work on.

Adding the gallery data

As this app is for demonstration and learning purposes, and so we can progress as quickly as possible, we will only provide extra images for one or two of the ancient sites in the project. Here is how it's done:

1. Open the Ancient Britain project.
2. Open the MainData.java file.
3. Add the following arrays:

static Integer[] hengeArray = {R.drawable.henge_large, R.drawable.henge_2, R.drawable.henge_3, R.drawable.henge_4};
static Integer[] horseArray = {};
static Integer[] wallArray = {R.drawable.wall_large, R.drawable.wall_2};
static Integer[] skaraArray = {};
static Integer[] towerArray = {};
static Integer[][] galleryArray = {hengeArray, horseArray, wallArray, skaraArray, towerArray};

4. Either download the project files from the Packt website or find four of your own images (around 640 x 480 px). Name them henge_2, henge_3, henge_4, and wall_2 and place them in your res/drawable directory.

This is all very straightforward, and the code that accompanies it allows you to have individual arrays of any length. This is all we need to add to our gallery data. Now, we need to code our GestureDetector and OnTouchListener.

Adding the GestureDetector

Along with the OnTouchListener that we will define for our ImageView, the GestureDetector has its own listeners. Here we will use GestureDetector.OnGestureListener to detect a fling gesture and collect the MotionEvents that describe it. Follow these steps to program your ImageView to respond to fling gestures:

1. Open the DetailActivity.java file.
2. Declare the following class fields:

private static final int MIN_DISTANCE = 150;
private static final int OFF_PATH = 100;
private static final int VELOCITY_THRESHOLD = 75;
private GestureDetector detector;
View.OnTouchListener listener;
private int ImageIndex;

3. In the onCreate() method, assign both the detector and listener like this:

detector = new GestureDetector(this, new GalleryGestureDetector());
listener = new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return detector.onTouchEvent(event);
    }
};

4. Beneath this, add the following line:

ImageIndex = 0;

5. Beneath the line detailImage = (ImageView) findViewById(R.id.detail_image);, add the following line:

detailImage.setOnTouchListener(listener);

6. Create the following inner class:

class GalleryGestureDetector implements GestureDetector.OnGestureListener {
}

7. Before dealing with the errors this generates, add the following field to the class:

private int item;
{
    item = MainActivity.currentItem;
}

8. Click anywhere on the line registering the error and press Alt + Enter. Then select Implement Methods, making sure that you have the Copy JavaDoc and Insert @Override boxes checked.

9. Complete the onDown() method like this:

@Override
public boolean onDown(MotionEvent e) {
    return true;
}

10. Fill in the onShowPress() method:

@Override
public void onShowPress(MotionEvent e) {
    detailImage.setElevation(4);
}

11. Then fill in the onFling() method:

@Override
public boolean onFling(MotionEvent event1, MotionEvent event2, float velocityX, float velocityY) {
    if (Math.abs(event1.getY() - event2.getY()) > OFF_PATH)
        return false;
    if (MainData.galleryArray[item].length != 0) {
        // Swipe left
        if (event1.getX() - event2.getX() > MIN_DISTANCE && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
            ImageIndex++;
            if (ImageIndex == MainData.galleryArray[item].length)
                ImageIndex = 0;
            detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
        } else {
            // Swipe right
            if (event2.getX() - event1.getX() > MIN_DISTANCE && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
                ImageIndex--;
                if (ImageIndex < 0)
                    ImageIndex = MainData.galleryArray[item].length - 1;
                detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
            }
        }
    }
    detailImage.setElevation(0);
    return true;
}

12. Test the project on an emulator or handset.

The process of gesture detection in the preceding code begins when the OnTouchListener's onTouch() method is called. It then passes the MotionEvent to our gesture detector class, GalleryGestureDetector, which monitors motion events, sometimes stringing them together and timing them, until one of the recognized gestures is detected. At this point, we can enter our own code to control how our app responds, as we did here with the onDown(), onShowPress(), and onFling() callbacks. It is worth taking a quick look at these methods in turn.

It may seem, at first glance, that the onDown() method is redundant; after all, it's the fling gesture that we are trying to catch. In fact, overriding the onDown() method and returning true from it is essential in all gesture detection, as all gestures begin with an onDown() event.

The purpose of the onShowPress() method may also appear unclear, as it seems to do little more than onDown(). As the JavaDoc states, this method is handy for adding some form of feedback to the user, acknowledging that their touch has been received. The Material Design guidelines strongly recommend such feedback, and here we have raised the view's elevation slightly.
Without our own code, the onFling() method would recognize almost any movement across the bounding view that ends in the user's finger being raised, regardless of direction or speed. We do not want very small or very slow motions to result in action; furthermore, we want to be able to differentiate between vertical and horizontal movement as well as between left and right swipes. The MIN_DISTANCE and OFF_PATH constants are in pixels and VELOCITY_THRESHOLD is in pixels per second. These values will need tweaking according to the target device and personal preference. The first MotionEvent argument in onFling() refers to the preceding onDown() event and, like any MotionEvent, its coordinates are available through its getX() and getY() methods. The MotionEvent class contains dozens of useful methods for querying various event properties, for example, getDownTime(), which returns the time in milliseconds since the current onDown() event.

In this example, we used GestureDetector.OnGestureListener to capture our gesture. However, the GestureDetector has three such nested classes, the other two being SimpleOnGestureListener and OnDoubleTapListener. SimpleOnGestureListener provides a more convenient way to detect gestures, as we only need to implement the methods that relate to the gestures we are interested in capturing. We will shortly edit our Activity so that it implements the SimpleOnGestureListener instead, allowing us to tidy our code and remove the four callbacks that we do not need. The reason for taking this detour, rather than applying the simple listener to begin with, was to see all of the gestures available to us through a gesture listener and to demonstrate how useful JavaDoc comments can be, particularly if we are new to the framework. For example, take a look at the following screenshot.

Another very handy tool is the Dalvik Debug Monitor Server (DDMS), which allows us to see what is going on inside our apps while they are running. The workings of our gesture listener are a good place to use it, as most of its methods operate invisibly.

Viewing gesture activity with DDMS

To view the workings of our OnGestureListener with DDMS, we need to first create a tag to identify our messages and then a filter to view them. The following steps demonstrate how to do this:

1. Open the DetailActivity.java file.
2. Declare the following constant:

private static final String DEBUG_TAG = "tag";

3. Add the following line inside the onDown() method:

Log.d(DEBUG_TAG, "onDown");

4. Add the line Log.d(DEBUG_TAG, "onShowPress"); to the onShowPress() method and do the same for each of our OnGestureDetector methods.
5. Add the following lines to the appropriate clauses in onFling():

Log.d(DEBUG_TAG, "left");
Log.d(DEBUG_TAG, "right");

6. Open the Android DDMS pane from the Android tab at the bottom of the window or by pressing Alt + 6.
7. If the Logcat is not visible, it can be opened with the icon to the right of the top-right drop-down menu.
8. Click on this drop-down menu and select Edit Filter Configuration.
9. Complete the dialog as shown in the following screenshot.

You can now run the project on a handset or emulator and view, in the Logcat, which gestures are being triggered and how.
Your output should resemble the one here:

02-17 14:39:00.990 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:01.039 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onSingleTapUp
02-17 14:39:03.503 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:03.601 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onShowPress
02-17 14:39:04.101 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onLongPress
02-17 14:39:10.484 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:10.541 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onScroll
02-17 14:39:11.091 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onScroll
02-17 14:39:11.232 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onFling
02-17 14:39:11.680 1430-1430/com.example.kyle.ancientbritain D/tag﹕ right

DDMS is an invaluable tool when it comes to debugging our apps and seeing what is going on beneath the hood. Once a log tag has been defined in the code, we can create a filter for it so that we see only the messages we are interested in. The Log class contains several methods to report information based on its level of importance. We used Log.d, which stands for debug. All these methods take the same two parameters: Log.[method](String tag, String message). The full list of these methods is as follows:

- Log.v: Verbose
- Log.d: Debug
- Log.i: Information
- Log.w: Warning
- Log.e: Error
- Log.wtf: Unexpected error

It is worth noting that most debug messages will be ignored during packaging for distribution, except for the verbose messages; thus, it is essential to remove those before your final build.

Having seen a little more of the inner workings of our gesture detector and listener, we can now strip our code of unused methods by implementing GestureDetector.SimpleOnGestureListener.

Implementing a SimpleOnGestureListener

It is very simple to convert our gesture detector from one class of listener to another. All we need to do is change the class declaration and delete the unwanted methods. To do this, perform the following steps:

1. Open the DetailActivity file.
2. Change the class declaration of our gesture detector class to the following:

class GalleryGestureDetector extends GestureDetector.SimpleOnGestureListener {

3. Delete the onShowPress(), onSingleTapUp(), onScroll(), and onLongPress() methods.

This is all you need to do to switch to the SimpleOnGestureListener. We have now successfully constructed and edited a gesture detector to allow the user to browse a series of images.

You will have noticed that there is no onDoubleTap() method in the gesture listener. Double-taps can, in fact, be handled with the third GestureDetector listener, OnDoubleTapListener, which operates in a very similar way to the other two. However, Google, in its UI guidelines, recommends that a long press be used instead whenever possible.

Before moving on to multitouch events, we will take a look at how to attach a GestureDetector listener to an entire Activity by adding a splash screen to our project. In the process, we will also see how to create a full-screen Activity and how to edit the Manifest file so that our app launches with the splash screen.

Adding a GestureDetector to an Activity

The method we have employed so far allows us to attach a GestureDetector listener to any view or views, and this, of course, applies to ViewGroups such as Layouts.
There are times when we may want gestures to be detected across the whole screen. For this purpose, we will create a splash screen that can be dismissed with a long press. There are two things we need to do before implementing the gesture detector: creating a layout and editing the Manifest file so that the app launches with our splash screen.

Designing the splash screen layout

The main difference between processing gestures for a whole Activity and for an individual widget is that we do not need an OnTouchListener, as we can override the Activity's own onTouchEvent(). Here is how it is done:

1. Create a new Blank Activity from the Project Explorer context menu called SplashActivity.java.
2. The Activity wizard should have created an associated XML layout called activity_splash.xml. Open this and view it using the Text tab.
3. Remove all the padding properties from the root layout so that it looks similar to this:

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.kyle.ancientbritain.SplashActivity">

4. Here we will need an image to act as the background for our splash screen. If you have not downloaded the project files from the Packt website, find an image roughly of the size and aspect ratio of your target device's screen, upload it to the project drawable folder, and call it splash. The file I used is 480 x 800 px.
5. Remove the TextView that the wizard placed inside the layout and replace it with this ImageView:

<ImageView
    android:id="@+id/splash_image"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/splash"/>

6. Create a TextView beneath this, such as the following:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    android:layout_centerHorizontal="true"
    android:layout_marginBottom="40dp"
    android:gravity="center_horizontal"
    android:textAppearance="?android:attr/textAppearanceLarge"
    android:textColor="#fffcfcbd"/>

7. Add the following text property:

android:text="Welcome to <b>Ancient Britain</b>\npress and hold\nanywhere on the screen\nto start"

8. To save time adding string resources to the strings.xml file, enter a hardcoded string such as the preceding one and heed the warning from the editor to have the string extracted for you.

There is nothing in this layout that we have not encountered before. We removed all the padding so that our splash image will fill the layout; however, you will see from the preview that this does not appear to be the case. We will deal with this next in our Java code, but we need to edit our Manifest first so that the app gets launched with our SplashActivity.

Editing the Manifest

It is very simple to configure the AndroidManifest file so that an app will get launched with whichever Activity we choose; it does so with an intent. While we are editing the Manifest, we will also configure the display to fill the screen. Simply follow these steps:

1. Open the res/values-v21/styles.xml file and add the following style:

<style name="SplashTheme" parent="android:Theme.Material.NoActionBar.Fullscreen">
</style>

2. Open the AndroidManifest.xml file.
3. Cut and paste the <intent-filter> element from MainActivity to SplashActivity.
4. Include the following properties so that the entire <activity> node looks similar to this:

<activity
    android:name=".SplashActivity"
    android:theme="@style/SplashTheme"
    android:screenOrientation="portrait"
    android:configChanges="orientation|screenSize"
    android:label="Old UK" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

We have encountered themes and styles before and, here, we took advantage of a built-in theme designed for full-screen activities. In many cases, we might have designed a landscape layout here but, as is often the case with splash screens, we locked the orientation with the android:screenOrientation property. The android:configChanges line is not actually needed here, but it is useful to know about. Configuring an attribute like this prevents the system from automatically reloading the Activity whenever the device is rotated or the screen size changes. Instead of the Activity restarting, the onConfigurationChanged() method is called. This was not needed here, as the screen size and orientation were taken care of in the previous lines of code; this line was only included as a point of interest. Finally, we changed the value of android:label. You may have noticed that, depending on the screen size of the device you are using, the name of our app is not displayed in full on the home screen or in the app drawer. In such cases, when you want to use a shortened name for your app, it can be inserted here.

With everything else in place, we can get on with adding our gesture detector. This is not dissimilar to the way we did it before but, this time, we will apply the detector to the whole screen and will be listening for a long press rather than a fling.

Adding the GestureDetector

Along with implementing a gesture detector for the entire Activity here, we will also take the final step in configuring our splash screen so that the image fills the screen but maintains its aspect ratio. Follow these steps to complete the app's splash screen:

1. Open the SplashActivity file.
2. Declare a GestureDetector as we did in the earlier exercise:

private GestureDetector detector;

3. In the onCreate() method, assign and configure our splash image and gesture detector like this:

ImageView imageView = (ImageView) findViewById(R.id.splash_image);
imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
detector = new GestureDetector(this, new SplashListener());

4. Now, override the Activity's onTouchEvent() like this:

@Override
public boolean onTouchEvent(MotionEvent event) {
    this.detector.onTouchEvent(event);
    return super.onTouchEvent(event);
}

5. Create the following SimpleOnGestureListener class:

private class SplashListener extends GestureDetector.SimpleOnGestureListener {
    @Override
    public boolean onDown(MotionEvent e) {
        return true;
    }

    @Override
    public void onLongPress(MotionEvent e) {
        startActivity(new Intent(getApplicationContext(), MainActivity.class));
    }
}

6. Build and run the app on your phone or an emulator.

The way a gesture detector is implemented across an entire Activity should be familiar by this point, as should the capturing of the long press event. The ImageView.setScaleType(ImageView.ScaleType) method is essential here; it is a very useful method in general. The CENTER_CROP constant scales the image to fill the view while maintaining the aspect ratio, cropping the edges when necessary.
There are several similar ScaleTypes, such as CENTER_INSIDE, which scales the image to the maximum size possible without cropping it, and CENTER, which does not scale the image at all. The beauty of CENTER_CROP is that we don't have to design a separate image for every possible aspect ratio on the numerous devices our apps will end up running on. Provided that we make allowances for very wide or very narrow screens by not placing essential information too close to the edges, we only need to provide a handful of images of varying pixel densities to maintain image quality on large, high-resolution devices. The scale type of an ImageView can also be set from within XML, with android:scaleType="centerCrop", for example.

You may have wondered why we did not use the built-in Full-Screen Activity from the wizard; we could easily have done so. The template code the wizard creates for a Full-Screen Activity provides far more features than we needed for this exercise. Nevertheless, the template is worth taking a look at, especially if you want a full screen that brings the status bar and other components into view when the user interacts with the Activity.

That brings us to the end of this article. Not only have we seen how to make our apps interact with touch events and gestures, but also how to send debug messages to the IDE and how to make a full-screen Activity.

Summary

We began this article by adding a GestureDetector to our project. We then edited it so that we could filter out meaningful touch events (swipes right and left, in this case). We went on to see how the SimpleOnGestureListener can save us a lot of time when we are only interested in catching a subset of the recognized gestures. We also saw how to use DDMS to pass debug messages during runtime and how, through a combination of XML and Java, the status and action bars can be hidden and the entire screen filled with a single view or view group.

Resources for Article:

Further resources on this subject:
- Speeding up Gradle builds for Android [Article]
- Saying Hello to Unity and Android [Article]
- Testing with the Android SDK [Article]

Simplify Deployment with an Infrastructure Manifest, Part 2

Cody A.
06 Aug 2015
8 min read
This is the second part of a post on using a Manifest of your infrastructure for automation. The first part described how to use your Cloud API to transform Application Definitions into an Infrastructure Manifest. This post will show examples of automation tools built using an Infrastructure Manifest. In particular, we'll explore application deployment and load balancer configuration management.

Recall our example Infrastructure Manifest from Part 1:

{
  "prod": {
    "us-east-1": {
      "appserve01ea1": {
        "applications": [
          "appserve"
        ],
        "zone": "us-east-1a",
        "fqdn": "ec2-1-2-3-4.compute-1.amazonaws.com",
        "private ip": "10.9.8.7",
        "public ip": "1.2.3.4",
        "id": "i-a1234bc5"
      },
      ...
    },
    ...
}

As I mentioned previously, this Manifest can form the basis for numerous automations. Some tools my team at Signal has built on top of this concept are automated deployments, load balancing, security group management, and DNS.

Application Deployment

Let's see how an Infrastructure Manifest can simplify application deployment. Although we'll use Fabric as the basis for our deployment system, the concept should work with Chef and many other push-based deployment systems as well.

from json import load as json_decode
from urllib2 import urlopen
from fabric.api import env, task, roles

MANIFEST = json_decode(urlopen(env.manifest))

for hostname, meta in MANIFEST.iteritems():
    for app in meta['applications']:
        env.roledefs[app].append(hostname)

Note: For this to work, you must set the manifest URL in Fabric's environment as env.manifest. For example, you can set this in the ~/.fabricrc file or pass it on the command line:

manifest=http://manifest:5000/api/prod/us-east-1/manifest

That's all Fabric really requires to know where to deploy each application! Given the manifest above, this would add the "appserve" role so that you can run tasks on these instances simultaneously. For example, to deploy the "appserve" application to all the hosts with this role:

@task
@roles('appserve')
def deploy_appserve():
    # standard Fabric deploy logic here
    pass

Now calling fab deploy_appserve will run the commands to deploy the "appserve" application on each host with the "appserve" role. Easy, right?

You might want to deploy some applications to every host in your infrastructure. Instead of adding these special roles to every Application Definition, you can include them here. For example, if you have a custom monitoring application ("mymon"), then you can read the list of all hosts from the Manifest and add them to the "mymon" role (this also makes env.roledefs a defaultdict(list), which the loop above relies on):

from collections import defaultdict

# set up special cases for roledefs:
env.roledefs = defaultdict(list, {
    'mymon': list(MANIFEST.keys()),
})

Now, after adding a deploy_mymon task, you'll be able to easily deploy "mymon" to all hosts in your infrastructure. Even if you auto-deploy using a specialized git receiver, Jenkins hooks, or similar, this approach will enable you to make your deployments cloud-aware, deploying each application to the appropriate hosts in your cloud. That's it! Deployments can't be much simpler than this.

Load Balancer Configuration Management

A common challenge in cloud environments is maintaining the list of all hosts for load balancer configurations. If you don't want to lock in to a vendor or cloud-specific solution such as Amazon ELB, you may choose an open source software load balancer such as HAProxy. However, this leaves you with the challenge of maintaining the configurations as hosts appear and disappear in your cloud-based infrastructure.
This problem is amplified when you use software-based load balancers between each set of services (or each tier) in your application. Using the Infrastructure Manifest, a first-pass solution can be quite simple. You can revision-control the configuration templates and interpolate the application ports and host information from the Manifest. Then you can periodically update the generated configuration files and distribute them using your existing configuration management software (such as Puppet or Chef).

Let's say you want to generate an HAProxy configuration for your load balancer. The complete configuration file might look like this:

global
    user haproxy
    group haproxy
    daemon

frontend main_vip
    bind *:80
    # ACLs for basic name-based virtual-hosts
    acl appserve_host hdr_beg(host) -i app.example.com
    acl uiserve_host hdr_beg(host) -i portal.example.com
    use_backend appserve if appserve_host
    use_backend uiserve if uiserve_host
    default_backend uiserve

backend appserve
    balance roundrobin
    option httpclose
    option httpchk GET /hc
    http-check disable-on-404
    server appserve01ea1 10.42.1.91:8080 check
    server appserve02ea1 10.42.1.92:8080 check
    server appserve03ea1 10.42.1.93:8080 check

backend uiserve
    balance roundrobin
    option httpclose
    option httpchk GET /hc
    server uiserve01ea1 10.42.1.111:8082 check
    server uiserve02ea1 10.42.1.112:8082 check

The simplest way to produce this configuration file is to generate it from a template. There are many templating solutions to choose from. I'm fond of Jinja2, so we'll use that for exploring this solution in Python. We want to load the template from a file located in a "templates" directory, so we start by creating a Jinja2 loader and environment:

from jinja2 import Environment, FileSystemLoader
import os

loader = FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates'))
environment = Environment(loader=loader, lstrip_blocks=True)

The template corresponding to this output could look like the following. We'll call it 'lb.txt' since it's for the lb server group:

global
    user haproxy
    group haproxy
    daemon

frontend main_vip
    bind *:80
    # ACLs for basic name-based virtual-hosts
    acl appserve_host hdr_beg(host) -i app.example.com
    acl uiserve_host hdr_beg(host) -i portal.example.com
    use_backend appserve if appserve_host
    use_backend uiserve if uiserve_host
    default_backend uiserve

backend appserve
    balance roundrobin
    option httpclose
    option httpchk GET {{ vips.appserve.healthcheck_resource }}
    http-check disable-on-404
    {%- for server in vips.appserve.servers %}
    server {{ server['name'] }} {{ server.details['private_ip'] }}:{{ vips.appserve.backend_port }} check
    {%- endfor %}

backend uiserve
    balance roundrobin
    option httpclose
    option httpchk GET {{ vips.uiserve.healthcheck_resource }}
    {%- for server in vips.uiserve.servers %}
    server {{ server['name'] }} {{ server.details['private_ip'] }}:{{ vips.uiserve.backend_port }} check
    {%- endfor %}

You can see by examining the template that it expects only a single variable: vips. This is a map of application names to their load balancer configurations. Specifically, each vip contains a backend port, a healthcheck resource (that is, an HTTP path), and a list of servers (with a server name and private IP address for each). Conveniently, all of this information is available in the Infrastructure Manifest and Application Definitions we developed in Part 1. We can easily fetch this information from the webapp.
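The fetching code below relies on an /api/applications endpoint that the note after it leaves as an exercise; a minimal sketch, assuming the Flask app and the config() helper from Part 1, might look like this:

from flask import jsonify

@app.route('/api/applications')
def applications():
    # expose the raw Application Definitions from the app's configuration
    return jsonify(config()['APPLICATIONS'])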
from requests import get

def main(manifest_host, env, region, server_group):
    manifest = get('http://%s/api/%s/%s/manifest' % (manifest_host, env, region)).json()
    applications = get('http://%s/api/applications' % manifest_host).json()
    print generate_haproxy(manifest, applications, server_group)

Note: we didn't actually add the /api/applications endpoint last week, so it's left as an exercise for the reader; hint: jsonify(config()['APPLICATIONS']).

Now we can dive into the meat of this tool, the generate_haproxy function. As you might guess, this uses the Jinja2 environment to render the template. But first it must merge the Application Definitions and the Manifest into the vips variable that the template expects:

def generate_haproxy(manifest, applications, server_group):
    apps = {}
    for application, meta in applications.iteritems():
        app_object = {
            'servers': [],
            'frontend_port': meta['frontend'],
            'backend_port': meta['backend'],
            'healthcheck_resource': meta['healthcheck']['resource']
        }
        for server in manifest:
            if application in manifest[server]['applications']:
                app_object['servers'].append({'name': server, 'details': manifest[server]})
        app_object['servers'].sort(key=lambda e: e['name'])
        apps[application] = app_object
    return environment.get_template("%s.txt" % server_group).render(vips=apps)

There's not much going on here. We iterate through all the applications and create a vip (app_object) with all the needed variables for each one. Then we render the server_group's template with Jinja2. Finally, we can call the main function we created above to see this in action:

main('localhost:5000', 'prod', 'us-east-1', 'lb')

This will print the HAProxy configuration for the lb load balancer group for your production us-east-1 region. (It assumes that the Manifest webapp is running on the same host.) Depending on what hosts you have in your cloud infrastructure, this should print something like the complete HAProxy configuration file shown at the top.

To keep your load balancer configurations up to date, you could run this regularly for each environment and region and distribute the generated files using your existing configuration management system. Alternatively, if your load balancers support programmatic rule updates, that would be even cleaner than this simple first-pass approach, which relies on configuration file updates.

I hope this spurs your imagination and shows the benefit of using an Infrastructure Manifest to automate all the things.

About the author

Cody A. Ray is an inquisitive, tech-savvy, entrepreneurially-spirited dude. Currently, he is a software engineer at Signal, an amazing startup in downtown Chicago, where he gets to work with a dream team that's changing the service model underlying the Internet.

Rundown Example

Packt
06 Aug 2015
10 min read
In this article by Miguel Oliveira, author of the book Microsoft System Center Orchestrator 2012 R2 Essentials, we will get started with the Runbook creation process and be guided through how to address and connect all the pieces together in order to successfully create a Runbook.

Runbook for Active Directory User Account Provisioning

For this Runbook, we've been challenged by our HR department to come up with a solution for them to be able to create new user accounts for recently joined employees. The request was specifically drawn up with the target for HR to be able to:

- Provide the first and last name
- Provide the department name
- Get that user added to the proper department group and get all the information of the user
- Send the newly created account to the IT department to provide a machine, a phone, and an e-mail address

With these requirements at the back of our heads, let's see which activities we need in our Runbook. I'll lay these out in steps for this example so it's easy to follow:

- Data input: We definitely need an activity to allow HR to feed the information into the Runbook. For this, we can use the Initialize Data activity (Runbook Control category), or we could work with a monitored file and read the data from a line, or even from a SharePoint list. But to keep it simple for now, let's use Initialize Data.
- Data processing: Here, the idea is to take the Department given by HR and process it to retrieve the group (the Get Group activity from the Active Directory category) and include our user (the Add User To Group activity from the Active Directory category) in the group we've retrieved; but in between, we'll need to create the user account (the Create User activity from the Active Directory category) and generate a password (the Generate Random Text activity from the Utilities category).
- Data output: At the very end of all this, we send an e-mail (the Send Email activity from the Email category) back to HR with the account information and the status of its creation, and inform our IT department (for security reasons) about the account that has been created.

We're also going to closely watch for errors with a few activities that will show us whether an error occurs. Let's look at this Runbook from a structural point of view (this is actually almost how it's going to look in the end), and we'll detail the activities and the options within them step by step from there. Here's the Runbook structured with the activities properly linked so that the data bus can flow and transport the published data from the beginning to the end.

As described in the steps, we start with an Initialize Data activity in which we're going to request some input from the person executing the Runbook. To create a user, we'll need his First Name and Last Name and also the Department. For that, we'll fill in the following information in the Fetch User Details activity seen in the previous screenshot. To avoid errors, the HR department should have a proper list of departments that we know will translate into a proper group in the upcoming activities.

After filling in the information, the processing of the information begins and, with it, our automation process that will find the group for that department, create the user account, set a password, force a password change on the first login, add the user to the group, and enable the account.
For that, we'll start with the Get Group activity, in which we'll fill in the following: set up the proper configuration in the Get Group Properties window for the Active Directory Domain in which you'll want this to execute, and in the Filters options, set a filter on the Sam Account Name of the group equal to the Department filled in by the HR department.

Now we'll set another prerequisite to create the account: the password! For this, we'll get the Generate Random Text activity and set it with the following parameters. These values should be set to accommodate your existing security policy and the minimum password requirements for your domain.

These previous activities are all we need to have the necessary values to proceed with the account creation by using the Create User activity. These should be the parameters filled in. All of these parameters are actually retrieved from the Published Data of the previous activities. As the list is long, we'll detail the parameters here for your better understanding. Everything between {} is Published Data:

- Common Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
- Department: {Display Name from "Get Group"}
- Display Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
- First Name: {First Name from "Fetch User Details"}
- Last Name: {Last Name from "Fetch User Details"}
- Password: {Random text from "Generate Random Text"}
- User Must Change Password: True
- SAM Account Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}
- User Principal Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.local
- Email: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.com
- Manager: {Managed By from "Get Group"}

As said previously, most of the data comes from the Published Data, and we've created subscriptions in all these fields to retrieve it. The only fields whose data is not pure Published Data are User Must Change Password, User Principal Name (UPN), and Email. User Must Change Password is a Boolean field that will display only Yes or No, and in the UPN and e-mail we've appended the domain information (@test.local and @test.com) to the Published Data.

Depending on the Create User activity's output, it will trigger a different activity. For now, let's assume that the activity returns a success on execution; this will make the Runbook follow the smart link that goes on to the Get User activity. The Get User activity will retrieve all the information concerning the newly created user account, which will be useful for the next activities down the line. In order to retrieve the proper information, we'll need to configure the following in the Filters area within the activity: add a filter selecting Sam Account Name, with Relation set to Equals, and Value set to the subscribed Sam Account Name data that comes out of the Create User activity.

From here, we'll link to the Add User to Group activity (here renamed to Add User to Department), and within that activity we're going to specify the group and the user so that the activity can add the user to the group. It should look exactly like the screenshot that follows:

We'll once again assume that everything runs as expected and prepare our next activity, which is to enable the user account; for this one, we'll use the Enable User activity.
The configuration of the activity can be seen in the next screenshot. Once again, we'll get the information out of the Published Data and feed it into the activity. After this activity is completed, we're going to log the execution and information output into the platform with the Send Platform Event activity, so we can see any necessary information available from the execution. Here is a sample of the configuration for the message output:

To get the Details text box expanded this way, right-click on it and select Expand… from the menu; then you can format and include the data that's most important for you to see.

Then we'll send an e-mail to the HR team with the account creation details so they can communicate them to the newly arrived employee, and another e-mail to the IT department with only the account name and the department (plus the group name), for security reasons. Here are the samples of these two activities, starting with the HR e-mail:

Let's go point by point through this configuration sample. In the Details section, we've set the following:

- Subject: Account {Sam Account Name from "Get User"} Created
- Recipients: to: hr.dept@test.com
- Message: The message description is given in the following screenshot:

There is also a Message option that consists of choosing the Priority of the message (high, normal, or low) and setting the necessary SMTP authentication parameters (account, password, and domain) so that you can send the message through your e-mail service. If you have an application e-mail service relay, you can leave the SMTP authentication without any configuration. In the Connect option, you'll find the place to configure the e-mail address that you want the user to see and the SMTP connection (server, port, and SSL) through which you'll send your messages.

Our Send Email IT activity will be more or less the same, with the exception of the destination and the message itself. It should look a little more like the following screenshot:

By now you've got the idea and you're pumped to create new Runbooks, but we still have to do some error control on some of these tasks; although they're chained, if one fails, everything fails. So for this Runbook, we'll create error control on the two tasks that, if we observe well, are more or less the only two that can fail. One is the Create User activity, which can fail because the user account already exists or because of some issue with privileges on its creation. The other is Add User To Department, which might fail to add the user to the group for some reason.

For this, we'll create two Send Event Log Message notification activities, which we'll rename to User Account Error and Group Error respectively. If we look into the User Account Error activity, we'll set something more or less like the following screenshot. A quick explanation of the settings is as follows:

- Computer: This is the computer into whose Windows Event Viewer we're going to write the event. In this case, we'll concentrate on our Management Server, but you might have a logging server for this.
- Message: The message that gets logged into the Windows Event Viewer. Here, we can subscribe to the error data coming out of the last activity executed.
- Severity: This is usually Error. You can set Information or Warning if you are deploying these activities to keep track of each given step.

For our Group Error Properties, the philosophy will be the same.
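To make the data flow concrete before wiring the links: outside Orchestrator, the equivalent provisioning logic would look roughly like this in C#, a hedged sketch using the .NET System.DirectoryServices.AccountManagement API. The domain, names, and password here are illustrative placeholders, not values from the Runbook itself.

using System;
using System.DirectoryServices.AccountManagement;

// A hedged sketch (illustrative names and domain): the same steps the
// Runbook performs - create the user, set a password that must be changed
// at first logon, add the user to the department group, then enable it.
class ProvisionUser
{
    static void Main()
    {
        using (var ctx = new PrincipalContext(ContextType.Domain, "test.local"))
        {
            var user = new UserPrincipal(ctx)
            {
                GivenName = "John",
                Surname = "Doe",
                DisplayName = "John Doe",
                SamAccountName = "John.Doe",
                UserPrincipalName = "John.Doe@test.local",
                Enabled = false // enabled as the last step, like the Runbook
            };
            user.SetPassword("R4nd0m!Generated"); // stands in for Generate Random Text
            user.ExpirePasswordNow();             // user must change password at first logon
            user.Save();

            // Get Group + Add User To Group
            var group = GroupPrincipal.FindByIdentity(ctx, "HR"); // department from the request
            group.Members.Add(user);
            group.Save();

            // Enable User
            user.Enabled = true;
            user.Save();
        }
    }
}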
Now that we are all set, we'll need to work our smart links so that they direct the Runbook execution flow into the following activity depending on the previous activity's output (success or error). In the end, your Runbook should look a little more like this:

That's it for the Runbook for Active Directory User Account Provisioning. We'll now speed up a little more on the other Runbooks, as you'll have a much clearer understanding after this first sample.

Summary

We've seen one Runbook sample. These Runbooks should serve as the base for real-case scenarios in the environment, help us in the creative process, and also give a better understanding of the configurations necessary on each activity in order to proceed successfully.

Resources for Article:

Further resources on this subject:
- Unpacking System Center 2012 Orchestrator [article]
- Working with VMware Infrastructure [article]
- Unboxing Docker [article]
Blueprinting Your Infrastructure

Packt
05 Aug 2015
22 min read
In this article written by Gourav Shah, author of the book Ansible Playbook Essentials, we will learn about the following topics:

- The anatomy of a playbook
- What plays are and how to write a hosts inventory and search patterns
- Ansible modules and the batteries-included approach
- Orchestrating infrastructure with Ansible
- Ansible as an orchestrator

(For more resources related to this topic, see here.)

Getting introduced to Ansible

Ansible is a simple, flexible, and extremely powerful tool that gives you the ability to automate common infrastructure tasks, run ad hoc commands, and deploy multitier applications spanning multiple machines. Even though you can use Ansible to launch commands on a number of hosts in parallel, the real power lies in managing those hosts using playbooks.

As systems engineers, the infrastructure we typically need to automate contains complex multitier applications. Each tier represents a class of servers, for example, load balancers, web servers, database servers, caching applications, and middleware queues. Since many of these applications have to work in tandem to provide a service, there is topology involved as well. For example, a load balancer would connect to web servers, which in turn read/write to a database and connect to the caching server to fetch in-memory objects. Most of the time, when we launch such application stacks, we need to configure these components in a very specific order.

Here is an example of a very common three-tier web application running a load balancer, a web server, and a database backend:

Ansible lets you translate this diagram into a blueprint, which defines your infrastructure policies. Playbooks are the format used to specify such policies. Example policies, and the sequence in which they are to be applied, are shown in the following steps:

1. Install, configure, and start the MySQL service on the database servers.
2. Install and configure the web servers that run Nginx with PHP bindings.
3. Deploy a Wordpress application on the web servers and add the respective configurations to Nginx.
4. Start the Nginx service on all web servers after deploying Wordpress.
5. Finally, install, configure, and start the haproxy service on the load balancer hosts. Update the haproxy configurations with the hostnames of all the web servers created earlier.

The following is a sample playbook that translates the infrastructure blueprint into policies enforceable by Ansible:

Plays

A playbook consists of one or more plays, which map groups of hosts to well-defined tasks. The preceding example contains three plays, each to configure one layer in the multitiered web application. Plays also define the order in which tasks are configured. This allows us to orchestrate multitier deployments. For example, configure the load balancers only after starting the web servers, or perform a two-phase deployment where the first phase only adds the configurations and the second phase starts the services in the desired order.

YAML – the playbook language

As you may have already noticed, the playbook that we wrote previously resembles a text configuration more than a code snippet. This is because the creators of Ansible chose to use a simple, human-readable, and familiar YAML format to blueprint the infrastructure. This adds to Ansible's appeal, as users of this tool need not learn any special programming language to get started. Ansible code is self-explanatory and self-documenting in nature. A quick crash course on YAML should suffice to understand the basic syntax.
Here is what you need to know about YAML to get started with your first playbook:

- The first line of a playbook should begin with "---" (three hyphens), which indicates the beginning of the YAML document.
- Lists in YAML are represented with a hyphen followed by a white space. A playbook contains a list of plays; they are represented with "- ".
- Each play is an associative array, a dictionary, or a map in terms of key-value pairs.
- Indentations are important. All members of a list should be at the same indentation level.
- Each play can contain key-value pairs separated by ":" to denote hosts, variables, roles, tasks, and so on.

Our first playbook

Equipped with the basic rules explained previously, and assuming you have done a quick dive into YAML fundamentals, we will now begin writing our first playbook. Our problem statement includes the following:

1. Create a devops user on all hosts. This user should be part of the devops group.
2. Install the "htop" utility. Htop is an improved version of top, an interactive system process monitor.
3. Add the Nginx repository to the web servers and start it as a service.

Now, we will create our first playbook and save it as simple_playbook.yml containing the following code:

---
- hosts: all
  remote_user: vagrant
  sudo: yes
  tasks:
    - group:
        name: devops
        state: present
    - name: create devops user with admin privileges
      user:
        name: devops
        comment: "Devops User"
        uid: 2001
        group: devops
    - name: install htop package
      action: apt name=htop state=present update_cache=yes

- hosts: www
  user: vagrant
  sudo: yes
  tasks:
    - name: add official nginx repository
      apt_repository:
        repo: 'deb http://nginx.org/packages/ubuntu/ lucid nginx'
    - name: install nginx web server and ensure its at the latest version
      apt:
        name: nginx
        state: latest
    - name: start nginx service
      service:
        name: nginx
        state: started

Our playbook contains two plays. Each play consists of the following two important parts:

- What to configure: We need to configure a host or group of hosts to run the play against. Also, we need to include useful connection information, such as which user to connect as and whether to use the sudo command.
- What to run: This includes the specification of tasks to be run, including which system components to modify and which state they should be in, for example, installed, started, or latest. This could be represented with tasks and, later on, by roles.

Let's now look at each of these briefly.

Creating a host inventory

Before we even start writing our playbook with Ansible, we need to define an inventory of all hosts that need to be configured, and make it available for Ansible to use. Later, we will start running plays against a selection of hosts from this inventory. If you have an existing inventory, such as cobbler, LDAP, or CMDB software, or wish to pull it from a cloud provider, such as ec2, it can be pulled into Ansible using the concept of a dynamic inventory. For text-based local inventory, the default location is /etc/ansible/hosts. For our learning environment, however, we will create a custom inventory file customhosts in our working directory, the contents of which are shown as follows.
You are free to create your own inventory file:

#customhosts
#inventory configs for my cluster
[db]
192.168.61.11 ansible_ssh_user=vagrant

[www]
www-01.example.com ansible_ssh_user=ubuntu
www-02 ansible_ssh_user=ubuntu

[lb]
lb0.example.com

Now, when our playbook maps a play to the group www (hosts: www), the hosts in that group will be configured. The all keyword will match all hosts from the inventory.

The following are the guidelines for creating inventory files:

- Inventory files follow INI-style configurations, which essentially include configuration blocks that start with host group/class names included in "[ ]". This allows selective execution on classes of systems, for example, [namenodes].
- A single host can be part of multiple groups. In such cases, host variables from both groups will get merged, and the precedence rules apply. We will discuss variables and precedence in detail later.
- Each group contains a list of hosts and connection details, such as the SSH user to connect as, the SSH port number if non-default, SSH credentials/keys, sudo credentials, and so on. Hostnames can also contain globs, ranges, and more, to make it easy to include multiple hosts of the same type that follow some naming pattern.

After creating an inventory of the hosts, it's a good idea to validate connectivity using Ansible's ping module (for example, ansible -m ping all).

Patterns

In the preceding playbook, the following lines decide which hosts to select to run a specific play:

- hosts: all
- hosts: www

The first line will match all hosts, and the second line will match hosts that are part of the www group. Patterns can be any of the following or their combinations:

- Group name: namenodes
- Match all: all or *
- Range: namenode[0:100]
- Hostnames/hostname globs: *.example.com, host01.example.com
- Exclusions: namenodes:!secondarynamenodes
- Intersection: namenodes:&zookeeper
- Regular expressions: ~(nn|zk).*.example.org

Tasks

Plays map hosts to tasks. Tasks are a sequence of actions performed against a group of hosts that match the pattern specified in a play. Each play typically contains multiple tasks that are run serially on each machine that matches the pattern. For example, take a look at the following code snippet:

- group:
    name: devops
    state: present
- name: create devops user with admin privileges
  user:
    name: devops
    comment: "Devops User"
    uid: 2001
    group: devops

In the preceding example, we have two tasks. The first one is to create a group, and the second is to create a user and add it to the group created earlier. If you notice, there is an additional line in the second task, which starts with name:. While writing tasks, it's good to provide a name with a human-readable description of what the task is going to achieve. If not, the action string will be printed instead.

Each action in a task list can be declared by specifying the following:

- The name of the module
- Optionally, the state of the system component being managed
- The optional parameters

With newer versions of Ansible (0.8 onwards), writing an action keyword is now optional. We can directly provide the name of the module instead. So, both of these lines will have a similar action, that is, installing a package with the apt module:

action: apt name=htop state=present update_cache=yes
apt: name=nginx state=latest

Ansible stands out from other configuration management tools with its batteries-included approach. These batteries are "modules."
It's important to understand what modules are before we proceed.

Modules

Modules are the encapsulated procedures that are responsible for managing specific system components on specific platforms. Consider the following examples:

- The apt module for Debian and the yum module for RedHat help manage system packages
- The user module is responsible for adding, removing, or modifying users on the system
- The service module will start/stop system services

Modules abstract the actual implementation from users. They expose a declarative syntax that accepts a list of the parameters and states of the system components being managed. All this can be declared using the human-readable YAML syntax, using key-value pairs. In terms of functionality, modules resemble providers for those of you who are familiar with Chef/Puppet software. Instead of writing procedures to create a user, with Ansible we declare which state our component should be in, that is, which user to create, its state, and its characteristics, such as UID, group, shell, and so on. The actual procedures are inherently known to Ansible via modules, and are executed in the background.

The Command and Shell modules are special ones. They neither take key-value pairs as parameters, nor are they idempotent.

Ansible comes preinstalled with a library of modules, which range from ones that manage basic system resources to more sophisticated ones that send notifications, perform cloud integrations, and so on. If you want to provision an ec2 instance, create a database on a remote PostgreSQL server, and get notifications on IRC, then Ansible has a module for it. Isn't this amazing? No need to worry about finding an external plugin, or struggling to integrate with cloud providers, and so on. To find a list of available modules, you can refer to the Ansible documentation at http://docs.ansible.com/list_of_all_modules.html.

Ansible is extendible too. If you do not find a module that does the job for you, it's easy to write one, and it doesn't have to be in Python. A module can be written for Ansible in the language of your choice. This is discussed in detail at http://docs.ansible.com/developing_modules.html.

The modules and idempotence

Idempotence is an important characteristic of a module. An idempotent module can be applied to your system multiple times and will return deterministic results. It has built-in intelligence. For instance, we have a task that uses the apt module to install Nginx and ensure that it's up to date. Here is what happens if you run it multiple times:

- Every time the task runs, the apt module compares what has been declared in the playbook with the current state of that package on the system.
- The first time it runs, Ansible will determine that Nginx is not installed, and will go ahead with the installation.
- For every consequent run, it will skip the installation part, unless there is a new version of the package available in the upstream repositories.

This allows executing the same task multiple times without resulting in an error state. Most of the Ansible modules are idempotent, except for the command and shell modules. Users will have to make these modules idempotent themselves.

Running the playbook

Ansible comes with the ansible-playbook command to launch a playbook with.
Let's now run the plays we created:

$ ansible-playbook simple_playbook.yml -i customhosts

Here is what happens when you run the preceding command:

- The ansible-playbook command takes the playbook as an argument (simple_playbook.yml) and runs the plays against the hosts
- The simple_playbook.yml file contains the two plays that we created: one for common tasks, and the other for installing Nginx
- The customhosts file is our hosts inventory, which lets Ansible know which hosts, or groups of hosts, to call plays against

Launching the preceding command will start calling plays, orchestrating in the sequence that we described in the playbook. Here is the output of the preceding command:

Let's now analyze what happened:

- Ansible reads the playbook specified as an argument to the ansible-playbook command and starts executing plays in serial order.
- The first play that we declared runs against the "all" hosts. The all keyword is a special pattern that will match all hosts (similar to *). So, the tasks in the first play will be executed on all hosts in the inventory we passed as an argument.
- Before running any of the tasks, Ansible will gather information about the systems that it is going to configure. This information is collected in the form of facts.
- The first play includes the creation of the devops group and user, and installation of the htop package. Since we have three hosts in our inventory, we see one line per host being printed, which indicates whether there was a change in the state of the entity being managed. If the state was not changed, "ok" will be printed.
- Ansible then moves to the next play. This is executed only on one host, as we have specified "hosts: www" in our play, and our inventory contains a single host in the group "www".
- During the second play, the Nginx repository is added, the package is installed, and the service is started.
- Finally, Ansible prints the summary of the playbook run in the "PLAY RECAP" section. It indicates how many modifications were made, whether any of the hosts were unreachable, and whether execution failed on any of the systems.

What if a host is unresponsive, or fails to run tasks? Ansible has built-in intelligence that will identify such issues and take the failed host out of rotation. It will not affect the execution on other hosts.

Orchestrating infrastructure with Ansible

Orchestration can mean different things at different times when used in different scenarios. The following are some of the orchestration scenarios:

- Running ad hoc commands in parallel on a group of hosts, for example, using a for loop to walk over a group of web servers to restart the Apache service. This is the crudest form of orchestration.
- Invoking an orchestration engine to launch another configuration management tool to enforce correct ordering.
- Configuring a multitier application infrastructure in a certain order with the ability to have fine-grained control over each step, and the flexibility to move back and forth while configuring multiple components. For example, installing the database, setting up the web server, coming back to the database, creating a schema, going to the web servers to start services, and more.

Most real-world scenarios are similar to the last one, which involves multitier application stacks and more than one environment, where it's important to bring up and update nodes in a certain order, and in a coordinated way. It's also useful to actually test that the application is up and running before moving on to the next node.
The workflow to set up the stack for the first time versus pushing updates can be different. There can be times when you would not want to update all the servers at once, but do them in batches so that downtime is avoided.

Ansible as an orchestrator

When it comes to orchestration of any sort, Ansible really shines over other tools. Of course, as the creators of Ansible would say, it's more than a configuration management tool, which is true. Ansible can find a place for itself in any of the orchestration scenarios discussed earlier. It was designed to manage complex multitier deployments. Even if you have your infrastructure automated with other configuration management tools, you can consider Ansible to orchestrate those. Let's discuss the specific features that Ansible ships with which are useful for orchestration.

Multiple playbooks and ordering

Unlike most other configuration management systems, Ansible supports running different playbooks at different times to configure or manage the same infrastructure. You can create one playbook to set up the application stack for the first time, and another to push updates over time in a certain manner. Another property of the playbook is that it can contain more than one play, which allows the separation of groups of hosts for each tier in the application stack, and configures them at the same time.

Pre-tasks and post-tasks

We have used pre-tasks and post-tasks earlier, which are very relevant while orchestrating, as these allow us to execute a task or run validations before and after running a play. Let's use the example of updating web servers that are registered with the load balancer. Using pre-tasks, a web server can be taken out of the load balancer, then the role is applied to the web servers to push updates, followed by post-tasks which register the web server back with the load balancer. Moreover, if these servers are being monitored by Nagios, alerts can be disabled during the update process and automatically enabled again using pre-tasks and post-tasks. This can avoid the noise that the monitoring tool may generate in the form of alerts.

Delegation

If you would like tasks to be selectively run on a certain class of hosts, especially ones outside the current play, the delegation feature of Ansible can come in handy. This is relevant to the scenarios discussed previously and is commonly used with pre-tasks and post-tasks. For example, before updating a web server, it needs to be deregistered from the load balancer. Now, this task should be run on the load balancer, which is not part of the play. This dilemma can be solved by using the delegation feature. With pre-tasks, a script can be launched on the load balancer using the delegate_to keyword, which does the deregistering part as follows:

- name: deregister web server from lb
  shell: < script to run on lb host >
  delegate_to: lb

If there is more than one load balancer, an inventory group can be iterated over as follows:

- name: deregister web server from lb
  shell: < script to run on lb host >
  delegate_to: "{{ item }}"
  with_items: groups.lb

Rolling updates

This is also called batch updates or zero-downtime updates. Let's assume that we have 100 web servers that need to be updated. If we define these in an inventory and launch a playbook against them, Ansible will start updating all the hosts in parallel. This can also cause downtime. To avoid complete downtime and have a seamless update, it would make sense to update them in batches, for example, 20 at a time.
While running a playbook, the batch size can be specified by using the serial keyword in the play. Let's take a look at the following code snippet:

- hosts: www
  remote_user: vagrant
  sudo: yes
  serial: 20

Tests

While orchestrating, it's not only essential to configure the applications in order, but also to ensure that they are actually started and functioning as expected. Ansible modules, such as wait_for and uri, help you build that testing into the playbooks, for example:

- name: wait for mysql to be up
  wait_for: host=db.example.org port=3306 state=started
- name: check if a uri returns content
  uri: url=http://{{ inventory_hostname }}/api
  register: apicheck

The wait_for module can additionally be used to test the existence of a file. It's also useful when you would like to wait until a service is available before proceeding.

Tags

Ansible plays map roles to specific hosts. While the plays are run, the entire logic that is called from the main task is executed. While orchestrating, we may need to run only a part of the tasks, based on the phase that we want to bring the infrastructure into. One example is a zookeeper cluster, where it's important to bring up all the nodes in the cluster at the same time, or within a gap of a few seconds. Ansible can orchestrate this easily with a two-phase execution. In the first phase, you can install and configure the application on all nodes, but not start it. The second phase involves starting the application on all nodes almost simultaneously. This can be achieved by tagging individual tasks, for example, configure, install, service, and more. For example, let's take a look at the following screenshot:

While running a playbook, all tasks with a specific tag can be called using --tags as follows:

$ ansible-playbook -i customhosts site.yml --tags install

Tags can be applied not only to tasks, but also to roles, as follows:

{ role: nginx, when: ansible_os_family == 'Debian', tags: 'www' }

If a specific task needs to be executed always, even when filtered out by a tag, use a special tag called always. This will make the task execute unless an overriding option, such as --skip-tags always, is used.

Patterns and limits

Limits can be used to run tasks on a subset of hosts, which are filtered by patterns. For example, the following code would run tasks only on hosts that are part of the db group:

$ ansible-playbook -i customhosts site.yml --limit db

Patterns usually contain a group of hosts to include or exclude. A combination of more than one pattern can be specified as follows:

$ ansible-playbook -i customhosts site.yml --limit db,lb

A colon can be used as a separator to filter hosts further. The following command would run tasks on all hosts except for the ones that belong to the groups www and db:

$ ansible-playbook -i customhosts site.yml --limit 'all:!www:!db'

Note that this usually needs to be enclosed in quotes. In this pattern, we used the all group, which matches all hosts in the inventory and can be replaced with *. That was followed by ! to exclude hosts in the www and db groups. The output of this command is as follows, and it shows that the plays named db and www were skipped, as no hosts matched them due to the filter we used previously:

Let's now see these orchestration features in action. We will begin by tagging the role and doing the multiphase execution, followed by writing a new playbook to manage updates to the WordPress application.

Review questions

Do you think you've understood the article well enough?
Try answering the following questions to test your understanding:

1. What is idempotence when it comes to modules?
2. What is the hosts inventory and why is it required?
3. Playbooks map ___ to ___ (fill in the blanks)
4. What types of patterns can you use while selecting a list of hosts to run plays against?
5. Where is the actual procedure to execute an action on a specific platform defined?
6. Why is it said that Ansible comes with batteries included?

Summary

In this article, you learned about what Ansible playbooks are, what components they are made up of, and how to blueprint your infrastructure with them. We also did a primer on YAML, the language used to create plays. You learned how plays map tasks to hosts, how to create a host inventory, how to filter hosts with patterns, and how to use modules to perform actions on our systems. We then created a simple playbook as a proof of concept. We also learned about orchestration, using Ansible as an orchestrator, and the different tasks that we can perform with Ansible as an orchestrator.

Resources for Article:

Further resources on this subject:
- Advanced Playbooks [article]
- Ansible – An Introduction [article]
- Getting Started with Ansible [article]
Deploy Toshi Bitcoin Node with Docker on AWS

Alex Leishman
05 Aug 2015
8 min read
Toshi is an implementation of the Bitcoin protocol, written in Ruby and built by Coinbase in response to their fast growth and need to build Bitcoin infrastructure at scale. This post will cover:

- How to deploy Toshi to an Amazon AWS instance with Redis and PostgreSQL using Docker
- How to query the data to gain insights into the blockchain

To get the most out of this post you will need some basic familiarity with Linux, SQL, and AWS.

Most Bitcoin nodes run "Bitcoin Core", which is written in C++ and serves as the de-facto standard implementation of the Bitcoin protocol. Its advantages are that it is fast for light-to-medium use and efficiently stores the transaction history of the network (the blockchain) in LevelDB, a key-value datastore developed at Google. It has wallet management features and an easy-to-use JSON RPC interface for communicating with other applications.

However, Bitcoin Core has some shortcomings that make it difficult to use for wallet/address management in at-scale applications. Its database, although efficient, makes it impossible or very difficult to perform certain queries on the blockchain. For example, if you wanted to get the balance of an arbitrary bitcoin address, you would have to write a script to parse the blockchain separately to find the answer. Additionally, Bitcoin Core starts to significantly slow down when it has to manage and monitor large numbers of addresses (> ~10^7). For a web app with hundreds of thousands of users, each regularly generating new addresses, Bitcoin Core is not ideal.

Toshi attempts to address the flexibility and scalability issues facing Bitcoin Core by parsing and storing the entire blockchain in an easily queried PostgreSQL database. Here is a list of tables in Toshi's DB: schema.txt

We will see the direct benefit of this structure when we start querying our data to gain insights from the blockchain. Since Toshi is written in Ruby, it has the added advantage of being developer friendly and easy to customize. The main downside of Toshi is the need for ~10x more storage than Bitcoin Core, as storing and indexing the blockchain in a well-indexed relational DB requires significantly more disk space.

First we will create an instance on Amazon AWS. You will need at least 300GB of storage for the Postgres database. Be sure to auto-assign a public IP and allow incoming TCP connections on port 5000, as this is how we will access the Toshi web interface. Once you get your instance up and running, SSH into the instance using the commands given by Amazon.

First we will set up a user for Toshi:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi
Adding user `toshi' ...
Adding new group `toshi' (1001) ...
Adding new user `toshi' (1001) with group `toshi' ...
Creating home directory `/home/toshi' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for toshi
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

Then we will add the new user to the sudoers group and switch to that user:

ubuntu@ip-172-31-62-77:~$ sudo adduser toshi sudo
Adding user `toshi' to group `sudo' ...
Adding user toshi to group sudo
Done.
ubuntu@ip-172-31-62-77:~$ su - toshi
toshi@ip-172-31-62-77:~$

Next, we will install Docker and all of its dependencies through an automated script available on the Docker website. This will provision our instance with the necessary software packages.
toshi@ip-172-31-62-77:~$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh

Then we will clone the Toshi repo from Github and move into the new directory:

toshi@ip-172-31-62-77:~$ git clone https://github.com/coinbase/toshi.git
toshi@ip-172-31-62-77:~$ cd toshi/

Next, build the coinbase/toshi Docker image from the Dockerfile located in the /toshi directory. Don't forget the dot at the end of the command!!

toshi@ip-172-31-62-77:~/toshi$ sudo docker build -t=coinbase/toshi .
Sending build context to Docker daemon 13.03 MB
...
Removing intermediate container c15dd6c961c2
Step 3 : ADD Gemfile /toshi/Gemfile
INFO[0120] Error getting container dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465 from driver devicemapper: ... no such file or directory

Note, you might see an 'Error getting container' message while this runs. If so, don't worry about it at this point.

Next, we will build and run our Redis and Postgres containers:

toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_db -d postgres
toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_redis -d redis

This will build and run Docker containers named toshi_db and toshi_redis based on standard postgres and redis images pulled from Dockerhub. The '-d' flag indicates that the container will run in the background (daemonized). If you see an 'Error response from daemon: Cannot start container' error while running either of these commands, simply run 'sudo docker start toshi_db' (or 'sudo docker start toshi_redis') again.

To ensure that our containers are running properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID   IMAGE             COMMAND                 CREATED         STATUS         PORTS      NAMES
4de43ccc8e80   redis:latest      "/entrypoint.sh redi"   7 minutes ago   Up 3 minutes   6379/tcp   toshi_redis
6de0418d4e91   postgres:latest   "/docker-entrypoint."   8 minutes ago   Up 2 minutes   5432/tcp   toshi_db

You should see both containers running, along with their port numbers. When we run our Toshi container we need to tell it where to find the Postgres and Redis containers, so we must find the toshi_db and toshi_redis IP addresses. Remember, we have not run a Toshi container yet; we only built the image from the Dockerfile. You can think of a container as a running version of an image. To learn more about Docker, see the docs.

toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_db | grep IPAddress
"IPAddress": "172.17.0.3",
toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_redis | grep IPAddress
"IPAddress": "172.17.0.2",

Now we have everything we need to get our Toshi container up and running. To do this, run:

sudo docker run --name toshi_main -d -p 5000:5000 -e REDIS_URL=redis://172.17.0.2:6379 -e DATABASE_URL=postgres://postgres:@172.17.0.3:5432 -e TOSHI_ENV=production coinbase/toshi sh -c 'bundle exec rake db:create db:migrate; foreman start'

Be sure to replace the IP addresses in the above command with your own. This creates a container named 'toshi_main', runs it as a daemon (-d), and sets three environment variables in the container (-e) which are required for Toshi to run. It also maps port 5000 inside the container to port 5000 of our host (-p).
Lastly, it runs a shell script in the container (sh -c) which creates and migrates the database, and then starts the Toshi web server. To see that it has started properly, run:

toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID   IMAGE                   COMMAND                 CREATED          STATUS          PORTS                    NAMES
017c14cbf432   coinbase/toshi:latest   "sh -c 'bundle exec "   6 seconds ago    Up 5 seconds    0.0.0.0:5000->5000/tcp   toshi_main
4de43ccc8e80   redis:latest            "/entrypoint.sh redi"   43 minutes ago   Up 38 minutes   6379/tcp                 toshi_redis
6de0418d4e91   postgres:latest         "/docker-entrypoint."   43 minutes ago   Up 38 minutes   5432/tcp                 toshi_db

If you have set your AWS security settings properly, you should be able to see the syncing progress of Toshi in your browser. Find your instance's public IP address from the AWS console and then point your browser there using port 5000, for example: 'http://54.174.195.243:5000/'. You can also see the logs of our Toshi container by running:

toshi@ip-172-31-62-77:~$ sudo docker logs -f toshi_main

That's it! We're all up and running. Be prepared to wait a long time for the blockchain to finish syncing. This could take more than a week or two, but you can start playing around with the data right away through the GUI to get a sense of the power you now have.

About the author

Alex Leishman is a software engineer who is passionate about Bitcoin and other digital currencies. He works at MaiCoin.com where he is helping to build the future of money.
Animation features in Unity 5

Packt
05 Aug 2015
16 min read
In this article by Valera Cogut, author of the book Unity 5 for Android Essentials, you will learn about the new Mecanim animation features and the awesome new audio features in Unity 5. (For more resources related to this topic, see here.)

New Mecanim animation features in Unity 5

Unity 5 contains some awesome new possibilities for the Mecanim animation system. Let's look at the shiny new features in Unity 5.

State machine behavior

Now, you can inherit your classes from StateMachineBehaviour in order to be able to attach them to your Mecanim animation states. This class has the following very important callbacks:

- OnStateEnter
- OnStateUpdate
- OnStateExit
- OnStateMove
- OnStateIK

StateMachineBehaviour scripts behave like MonoBehaviour scripts: just as you can attach MonoBehaviours to as many objects as you wish, you can attach StateMachineBehaviours to as many states as you wish. You can use this solution with or without any animation at all.

State machine transition

Unity 5 introduced an awesome new feature for the Mecanim animation system known as state machine transitions, in order to construct a higher abstraction level. In addition, entry and exit nodes were created. Through these two additional nodes to a StateMachine, you can now branch your start or finish state depending on your special conditions and requirements. The following mixes of transitions are possible: StateMachine | StateMachine, State | StateMachine, State | State. In addition, you can also reorder your layers or parameters; the new UI allows this through a very simple and useful drag-and-drop method.

Asset creation API

One more awesome possibility introduced in Unity 5 is using scripts in the Unity Editor to programmatically create assets, such as layers, controllers, states, StateMachines, and blend trees. You can choose between a high-level API, where the Unity engine maintains your assets, and a low-level API, where you manage all your assets manually. You can find more about both API versions on the Unity documentation pages.

Direct blend tree

Another new feature was introduced with the new BlendTree type known as Direct. It provides direct mapping from animator parameters to the weight of BlendTree children.

Possibilities in Unity 5 have also been enhanced with two more useful features for the Mecanim animation system:

- The camera can scale, orbit, and pan
- You can access your parameters at runtime
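Before moving on to the asset-creation snippets, here is a minimal, hedged sketch of the state machine behavior feature described at the start of this section. The class name and log messages are illustrative, not from the original article; the callback signatures are the standard StateMachineBehaviour overrides:

using UnityEngine;

// A minimal sketch (illustrative names): logs when the attached animation
// state is entered and left. Attach it to a state in the Animator window,
// just as you would attach a MonoBehaviour to a GameObject.
public class LogStateBehaviour : StateMachineBehaviour
{
    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Entered state on layer " + layerIndex);
    }

    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Left state");
    }
}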
Programmatically creating assets through the Unity 5 API

The following code snippets are self-explanatory, pretty simple, and straightforward. I list them just as a very useful reminder.

Creating the controller

To create a controller, you can use the following code:

var animatorController = UnityEditor.Animations.AnimatorController.CreateAnimatorControllerAtPath("Assets/Your/Folder/Name/state_machine_transitions.controller");

Adding parameters

To add parameters to the controller, you can use this code:

animatorController.AddParameter("Parameter1", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter2", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter3", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);

Adding state machines

To add state machines, you can use the following code:

var sm1 = animatorController.layers[0].stateMachine;
var sm2 = sm1.AddStateMachine("sm2");
var sm3 = sm1.AddStateMachine("sm3");

Adding states

To add states, you can use the code given here:

var s1 = sm2.AddState("s1");
var s2 = sm3.AddState("s2");
var s3 = sm3.AddState("s3");

Adding transitions

To add transitions, you can use the following code (variable names have been made consistent so the snippet compiles):

var exitTransition = s1.AddExitTransition();
exitTransition.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter1");
exitTransition.duration = 0;

var transition1 = sm2.AddAnyStateTransition(s1);
transition1.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
transition1.duration = 0;

var transition2 = sm3.AddEntryTransition(s2);
transition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
sm3.AddEntryTransition(s3);
sm3.defaultState = s2;

var exitTransition2 = s3.AddExitTransition();
exitTransition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
exitTransition2.duration = 0;

var smt = sm1.AddStateMachineTransition(sm2, sm3);
smt.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
sm2.AddStateMachineTransition(sm1, sm3);

Going deeper into new audio features

Let's start with the amazing new Audio Mixer possibilities. Now, you can do true submixing of audio in Unity 5. In the following figure, you can see a very simple example with the different sound categories required in a game:

Now in Unity 5, you can mix different sound collections within categories and tune up volume control and effects only once, in a single place, so you can save a lot of time and effort. This new awesome audio feature in Unity 5 allows you to create a fantastic mood and atmosphere for your game. Each Audio Mixer can have a hierarchy of AudioGroups:

The Audio Mixer can not only do a lot of useful things, but also mix different sound groups in one place. Different audio effects are applied sequentially in each AudioGroup.

Now you're getting closer to the amazing, awesome, and shiny new features in Unity 5 for the audio system! The OnAudioFilterRead script callback makes it possible to process audio samples directly in your scripts, something that previously could be handled only by native code. Unity now also supports custom plugins for creating different effects. With these innovations, building synthesizer-like applications in Unity has become much easier and more flexible.
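As a quick illustration of the OnAudioFilterRead callback just mentioned, here is a minimal, hedged sketch: a simple gain filter on a GameObject that has an AudioSource. The class name and gain value are illustrative:

using UnityEngine;

// A minimal sketch (illustrative names): OnAudioFilterRead hands the raw
// sample buffer to the script on the audio thread, where each sample can
// be processed in place; here it is simply attenuated.
public class SimpleGainFilter : MonoBehaviour
{
    [Range(0f, 1f)]
    public float gain = 0.5f; // illustrative default

    void OnAudioFilterRead(float[] data, int channels)
    {
        for (int i = 0; i < data.Length; i++)
        {
            data[i] *= gain; // scales every sample in every channel
        }
    }
}

Keep the work inside this callback small: it runs on the audio thread, not the main thread.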
Mood transitions

As mentioned earlier, the mood of the game can be controlled with a mix of sound. This can be achieved by bringing in new stems of music or ambient sounds. Another common way to accomplish this is to transition the state of the mix. A very effective way of taking the mood where you want it to go is by moving the mix's volume levels and effect parameters to different states. Inside the Audio Mixer, this is done with snapshots. Snapshots capture the state of all parameters in the Audio Mixer: everything from effect wet levels to AudioGroup volume levels can be captured and transitioned between. You can even create a complex blend of states between a whole bunch of snapshots in your game, creating all kinds of possibilities and goals. And imagine setting all of this up without having to write a single line of script code.

Physics and particle system effects in Unity 5

Physics for 2D and 3D in Unity are very similar, because they use the same concepts, such as rigidbodies, joints, and colliders. However, Box2D has more features than Unity's 2D physics engine. It is not a problem to mix 2D and 3D physics engines (built-in, custom, third-party) in Unity. So, Unity provides an easy development path for your innovative games and applications. If you need to develop some real-life physics in your project, you should not write your own library, framework, or engine, except for very specific requirements. Instead, you should try existing physics engines, libraries, or frameworks, which come with many features already made.

Let's start our introduction to Unity's built-in physics engine. If you need to put an object under the management of Unity's built-in physics, you just need to attach the Rigidbody component to this object. After that, your object can collide with other entities in its world and gravity will have an effect on it. In other words, the Rigidbody will be simulated physically. In your scripts, you can move any of your Rigidbodies by adding vector forces to them. It is not recommended to move the Transform component of a non-kinematic Rigidbody, because it will not collide correctly with other items. Instead, you can apply forces and torque to your Rigidbody. A Rigidbody can also be used to develop cars, using wheel colliders and some of your scripts to apply forces. Furthermore, a Rigidbody is used not only for vehicles; you can use it for any other physics problem, such as airplanes or robots, with various scripts for applying forces and with joints.

The most useful way to utilize a Rigidbody is in collaboration with some of the primitive colliders (built into Unity), such as BoxCollider and SphereCollider. Next are two things to remember about Rigidbody:

- In your object's hierarchy, you must never have a child and its parent with the Rigidbody component at the same time
- It is not recommended to scale a Rigidbody's parent object

One of the most important and fundamental components of physics in Unity is the Rigidbody component. This component activates physics calculations on the attached object. If you need your object to react to collisions (for example, billiard balls colliding with each other and scattering in different directions), you must also attach a Collider component to your GameObject. If you have attached a Rigidbody component to your object, the object will move through the physics engine, and I recommend that you do not move your object by changing its position or rotation in the Transform component.
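A minimal, hedged sketch of force-driven movement, assuming the script sits on an object with a Rigidbody; the force value and direction are illustrative:

using UnityEngine;

// A minimal sketch (illustrative values): move a physics-driven object
// by applying a force instead of touching its Transform.
public class PushForward : MonoBehaviour
{
    public float force = 10f; // illustrative magnitude

    void FixedUpdate()
    {
        // Apply forces in FixedUpdate, in step with the physics engine
        GetComponent<Rigidbody>().AddForce(transform.forward * force);
    }
}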
If you need to move such an object, you should apply various forces to it so that the Unity physics engine assumes all obligations for calculating collisions and moving dynamic objects. Also, in some situations there is a need for a Rigidbody component, but your object must be moved only by changing its position or rotation properties in the Transform component; that is, your object will be moved by your script or, for example, by running your animation, without the Rigidbody calculating the object's collisions and motion physics. To solve this problem, you should just activate its IsKinematic property. Sometimes a combination of these two modes is required, with IsKinematic turned on at some times and off at others. You can create this symbiosis of the two modes by changing the IsKinematic parameter directly in your code or in your animation. Changing the IsKinematic property very often from your code or your animation can cause performance overhead, so you should use it very carefully and only when you really need it.

A kinematic Rigidbody object is defined by the IsKinematic toggle option. If a Rigidbody is kinematic, the object will not be affected by collisions, gravity, or forces. There is a Rigidbody component for the 3D physics engine and an analogous Rigidbody2D for the 2D physics engine. A kinematic Rigidbody can interact with other non-kinematic Rigidbodies. When using kinematic Rigidbodies, you should translate their position and rotation values through the Transform component in your scripts or animations. When there is a collision between a kinematic and a non-kinematic Rigidbody, the kinematic object will properly wake up the non-kinematic Rigidbody. Furthermore, the first Rigidbody will apply friction to the second Rigidbody if the second object is on top of the first object.

Let's list some possible usage examples of kinematic Rigidbodies (a script sketch follows this list):

- There are situations when you need your objects to be under physics management, but sometimes also to be controlled explicitly from your scripts or animations. As an example, you can attach Rigidbodies to the bones of your animated character and connect them with joints in order to utilize your entity as a ragdoll. While you are controlling your character through Unity's animation system, you should enable the IsKinematic checkbox. Sometimes you may require your hero to be affected by Unity's built-in physics engine, for example when the hero is hit; in this case, you should disable the IsKinematic checkbox.
- If you need a moving item that can push other items, yet is not pushed by them itself. In case you have a moving platform and you need to place some Rigidbody objects on top, you ought to enable the IsKinematic checkbox rather than simply attaching a collider without a Rigidbody.
- You may need to enable the IsKinematic property of your Rigidbody object when it is animated and has a real Rigidbody follower attached through one of the available joints.
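Here is the sketch referenced above: a minimal, hedged example of flipping the kinematic flag from script for the ragdoll case. The class, field, and method names are illustrative:

using UnityEngine;

// A minimal sketch (illustrative names): hand an animated character over
// to the physics engine by disabling isKinematic on its limb rigidbodies.
public class RagdollSwitch : MonoBehaviour
{
    public Rigidbody[] bones; // assign the limb rigidbodies in the Inspector

    public void SetRagdoll(bool physicsDriven)
    {
        foreach (Rigidbody bone in bones)
        {
            bone.isKinematic = !physicsDriven; // kinematic while animated
        }
    }
}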
Earlier, I mentioned the collider; now is the time to discuss this component in more detail. For the Unity physics engine to calculate collisions, you must specify geometric shapes for your object by attaching a Collider component. In most cases, the collider does not have to be the same shape as your mesh with its many polygons. It is desirable to use simple colliders, which will significantly improve your performance; with more complex geometric shapes, you risk significantly increasing the computing time for physics collisions.

Simple colliders in Unity are known as primitive colliders: BoxCollider, BoxCollider2D, SphereCollider, CircleCollider2D, and CapsuleCollider. Also, nothing forbids you from combining different primitive colliders to create a more realistic geometric shape that the physics engine can handle very fast compared to a MeshCollider. Therefore, to accelerate your performance, you should use primitive colliders wherever possible. You can also attach different primitive colliders to child objects, whose position and rotation will change depending on the parent Transform component. The Rigidbody component must be attached only to the root GameObject in the hierarchy of your entity.

Unity provides a MeshCollider component for 3D physics and a PolygonCollider2D component for 2D physics. The MeshCollider component will use your object's mesh for its geometric shape. A PolygonCollider2D can be edited directly in Unity, letting you create any 2D geometry for your 2D physics computations. In order for collisions between different mesh colliders to register, you must enable the Convex property. You will certainly sacrifice performance for more accurate physics calculations, but if you strike the right balance between quality and performance, you can achieve good results with a proper approach.

Objects are static when they have a Collider component without a Rigidbody component. Therefore, you should not move or rotate them by changing properties in their Transform component, because it will leave a heavy imprint on your performance: the physics engine must recalculate many polygons of various objects for correct collisions and ray casts. Dynamic objects are those that have a Rigidbody component. Static objects (with a Collider component and without a Rigidbody component) can interact with dynamic objects (with Collider and Rigidbody components). Furthermore, static objects will not be moved by collisions like dynamic objects.

Also, Rigidbodies can sleep in order to increase performance. Unity provides the ability to control sleep in the Rigidbody component directly from your code using the following functions:

- Rigidbody.IsSleeping()
- Rigidbody.Sleep()
- Rigidbody.WakeUp()

There are two variables defined in the physics manager. You can open the physics manager right from the Unity menu here: Edit | Project Settings | Physics:

- Rigidbody.sleepVelocity: The default value is 0.14. This indicates the lower limit of linear velocity (from zero to infinity) below which objects will sleep.
- Rigidbody.sleepAngularVelocity: The default value is 0.14. This indicates the lower limit of angular velocity (from zero to infinity) below which objects will sleep.

Rigidbodies awaken when:

- Another Rigidbody collides with the sleeping Rigidbody
- Another Rigidbody is connected through a joint
- A property of the Rigidbody is modified
- Force vectors are added

A kinematic Rigidbody can wake other sleeping Rigidbodies, while static objects (with a Collider component and without a Rigidbody component) can't wake your sleeping Rigidbodies.

The PhysX physics engine that is integrated into Unity works well on mobile devices, but mobile devices certainly have far fewer resources than powerful desktops.
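The sleep API listed above can also be driven from script. A minimal, hedged sketch (the logging and the forced wake-up are illustrative):

using UnityEngine;

// A minimal sketch (illustrative): check whether this Rigidbody has gone
// to sleep and force it awake, e.g. before applying a scripted change.
public class SleepProbe : MonoBehaviour
{
    void Update()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        if (rb.IsSleeping())
        {
            Debug.Log(name + " is sleeping");
            rb.WakeUp(); // normally you would let it sleep to save CPU
        }
    }
}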
Let's look at a few points for optimizing the physics engine in Unity:
First of all, note that you can adjust the Fixed Timestep parameter in the time manager in order to control the cost of physics updates. Reducing this value increases the quality and accuracy of physics in your game or application, but it also increases the time spent on processing; this can greatly reduce your productivity, or in other words, increase CPU overhead. The Maximum Allowed Timestep indicates how much time may be spent on physics processing in the worst case. The total processing time for physics depends on the awake rigidbodies and colliders in the scene, as well as on the complexity of the colliders.
Unity provides the ability to use physic materials for setting various properties such as friction and bounciness. For example, a piece of ice in your game may have very low friction, or even the minimum value of zero, while a bouncing ball may have very high friction, up to the maximum value of one, and also very high bounciness. You should experiment with the settings of your physic materials for different objects and choose the solution that suits you best and performs best.
Triggers do not require a lot of processing by the physics engine and can greatly help in improving your performance. Triggers are useful in situations where, for example, your game needs to define zones around all the lights that turn on automatically in the evening or at night when the player is inside the trigger zone, or in other words within the geometric shape of its collider, which you can design as you wish. Unity triggers allow you to write three callbacks, which will be called when your object enters the trigger, while your object stays in the trigger, and when the object leaves the trigger. In any of these functions you can register the necessary instructions, for example, turning on the flashlight when entering the trigger zone and turning it off when exiting the trigger zone.
It is important to know that in Unity, static objects (objects without a Rigidbody component) will not cause your callbacks to fire when entering a trigger zone if the trigger does not contain a Rigidbody component; in other words, at least one of these objects must have a Rigidbody component so that your callbacks are not ignored. In the case of two triggers, at least one of the objects must have a Rigidbody component attached so that your callbacks are not ignored. Remember that when two objects are attached with Rigidbody and Collider components and at least one of them is a trigger, then the trigger callbacks will be called instead of the collision callbacks. I would also like to point out that your callbacks will be called for each object included in the collision or trigger zone. Also, you can directly control whether your collider is a trigger or not by setting the isTrigger flag to true or false in your code. Of course, you can mix both options in order to obtain the best performance. All collision callbacks will be called only if at least one of the two interacting rigidbodies is non-kinematic.
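As a minimal illustration of the three trigger callbacks described above, consider this C# sketch; the Player tag and the light field are assumptions made for the example, while the callback names are the standard Unity ones:

using UnityEngine;

// Illustrative sketch: a trigger zone that switches a light on
// while the player is inside its collider.
public class LightTriggerZone : MonoBehaviour
{
    public Light zoneLight;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            zoneLight.enabled = true;   // entering the zone
    }

    void OnTriggerStay(Collider other)
    {
        // called every physics step while the player stays inside
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            zoneLight.enabled = false;  // leaving the zone
    }
}

Remember that the zone's collider must have isTrigger enabled, and at least one of the interacting objects needs a Rigidbody component for these callbacks to fire.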
Summary
This article covered the new Mecanim animation features in Unity 5. You were also introduced to the awesome new audio features in Unity 5, and we covered many useful details for improving performance with Unity's built-in physics and particle systems.
Resources for Article:
Further resources on this subject:
Speeding up Gradle builds for Android [article]
Saying Hello to Unity and Android [article]
Learning NGUI for Unity [article]

Sentiment Analysis of Twitter data, Part 2

Janu Verma
03 Aug 2015
6 min read
Sentiment analysis aims to determine how a certain person or group reacts to a specific topic. Traditionally, we would run surveys to gather data and do statistical analysis. With Twitter, it works by extracting tweets containing references to the desired topic, computing the sentiment polarity and strength of each tweet, and then aggregating the results for all such tweets. Companies use this to gather public opinion on their products and services, and to make data-informed decisions. In Part 1 we explored what sentiment analysis is and why it is effective, and we looked at the two main methods used: lexical and machine learning. Now, in this Part 2 post, we will examine some actual examples of using sentiment analysis. Let's start by examining the AFINN model.
AFINN Model:
In the AFINN model, the authors have computed sentiment scores for a list of words relevant to microblogging. The sentiment of a tweet is computed from the sentiment scores of the terms in the tweet: it is defined to be the sum of the sentiment scores of each term in the tweet. The AFINN-111 dictionary contains 2477 English words rated for valence with an integer value between -5 and 5. The words were manually labelled by Finn Årup Nielsen in 2009-2010. It has lots of words and phrases from Internet lingo, such as 'wtf', 'wow', 'wowow', 'lol', 'lmao', and 'dipshit'. Think of the AFINN list as more Urban Dictionary than Oxford dictionary. Some of the words are grammatically different versions of the same stem; for example, 'favorite' and 'favorites' are listed as two different words with different valence scores. In the definition of the sentiment of a tweet, words that are not in the AFINN list are assumed to have a sentiment score of zero. Implementations of the AFINN model can be found here.
Naive Bayes Classifier
The Naive Bayes classifier can be trained on a corpus of labeled (positive and negative) tweets and then employed to assign a polarity to a new tweet. The features used in this model are the words, with their frequencies in the tweet strings. You may want to keep or remove URLs, emoticons, and short tokens, depending on the application. This classifier essentially computes the probability of a tweet being positive or negative. One first computes the probability of a word being present in a positive or negative tweet, which can easily be estimated from the training data: Prob(word in +ve tweet) = the fraction of positive tweets that contain this word, and similarly for negative tweets. A tweet contains many words. The probability of a set of words occurring in a positive tweet is defined as the product of the probabilities of the individual words; this is the naive (independence) assumption, without which the computation would not be easy in general. Using these pre-estimated probabilities, you can compute the probability of a tweet being positive or negative using Bayes' theorem. Whenever a new tweet is fed to the classifier, it will predict the polarity of the tweet based on the probability of its having that polarity. An implementation of a Naive Bayes classifier for classifying spam and non-spam messages can be found here; the same script can be used for classifying positive and negative tweets. In most cases, we also want to include a third category of neutral tweets, that is, tweets that have zero polarity with respect to the given topic.
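To make the AFINN scoring concrete, here is a minimal Python sketch; it assumes a local copy of the AFINN-111.txt word list with one tab-separated term and score per line:

# Load the AFINN-111 word list into a dictionary.
afinn = {}
with open('AFINN-111.txt') as f:
    for line in f:
        term, score = line.rsplit('\t', 1)
        afinn[term] = int(score)

def tweet_sentiment(tweet):
    # Terms missing from the AFINN list contribute a score of zero.
    return sum(afinn.get(word, 0) for word in tweet.lower().split())

print(tweet_sentiment('wow this is awesome'))     # positive total
print(tweet_sentiment('this movie is pathetic'))  # negative total

A positive total marks the tweet as positive, a negative total as negative, and a total of zero as neutral.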
The methods described above were chosen for simplicity; several other methods in both categories are prevalent today. Many companies using sentiment analysis employ lexical methods, where they create dictionaries based on their trade algorithms and the domain of the application. For machine learning based analysis, instead of Naive Bayes, one can use more sophisticated algorithms such as SVMs.
Challenges
Sentiment analysis is very useful, but there are many challenges that need to be overcome to achieve good results. The very first step in opinion mining, something I have swept under the rug so far, is that we have to identify tweets that are relevant to our topic. Tweets containing the given word can be a decent choice, although not a perfect one. Once we have identified the tweets to be analyzed, we need to be sure that the tweets DO contain sentiment. Neutral tweets can be a part of our model, but only polarized tweets tell us something subjective. Even if a tweet is polarized, we still need to make sure that the sentiment in the tweet is related to the topic we are studying. For example, suppose we are studying the sentiment related to the movie Mission Impossible, and consider the tweet: "Tom Cruise in Mission Impossible is pathetic!". This tweet has a negative sentiment, but it is directed at the actor rather than the movie. (This is not a perfect example, as the sentiments toward the actor and the movie are related.) The main challenge in sentiment analysis using lexical methods is to build a dictionary that contains words/phrases and their sentiment scores. It is very hard to do so in full generality, and often the best idea is to choose a subject and build a list for that. Thus sentiment analysis is highly domain centric, so the techniques developed for stocks may not work for movies. To solve these problems, you need expertise in NLP and computational linguistics; they correspond to entity extraction, NER, and entity pattern extraction in NLP terminology.
Beyond Twitter
Facebook performed an experiment to measure the effect of removing positive (or negative) posts from people's news feeds on how positive (or negative) their own posts were in the days after these changes were made. They found that the people from whose news feeds negative posts were removed produced a larger percentage of positive words as well as a smaller percentage of negative words in their posts, while the group from whose news feeds positive posts were removed showed the corresponding opposite tendency. The procedure and results of this experiment were published in a paper in the Proceedings of the National Academy of Sciences. Though I don't subscribe to the idea of using users as subjects of a psychological experiment without their knowledge, this is a cool application of the sentiment analysis subject area.
About the Author
Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning, and he leverages tools from these areas to answer questions in biology. Janu holds a Masters in Theoretical Physics from the University of Cambridge in the UK, and dropped out of the mathematics PhD program (after 3 years) at Kansas State University.
Directives and Services of Ionic

Packt
28 Jul 2015
18 min read
In this article by Arvind Ravulavaru, author of Learning Ionic, we are going to take a look at Ionic directives and services, which provide reusable components and functionality that can help us develop applications even faster. (For more resources related to this topic, see here.)
Ionic Platform service
The first service we are going to deal with is the Ionic Platform service ($ionicPlatform). This service provides device-level hooks that you can tap into to better control your application's behavior. We will start off with the very basic ready method. This method is fired once the device is ready, or immediately if the device is already ready. All the Cordova-related code needs to be written inside the $ionicPlatform.ready method, as this is the point in the app life cycle where all the plugins are initialized and ready to be used. To try out the Ionic Platform services, we will scaffold a blank app and then work with the services. Before we scaffold the blank app, we will create a folder named chapter5. Inside that folder, we will run the following command:

ionic start -a "Example 16" -i app.example.sixteen example16 blank

Once the app is scaffolded, if you open www/js/app.js, you should find a section such as this:

.run(function($ionicPlatform) {
  $ionicPlatform.ready(function() {
    // Hide the accessory bar by default (remove this to show the accessory bar
    // above the keyboard for form inputs)
    if(window.cordova && window.cordova.plugins.Keyboard) {
      cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);
    }
    if(window.StatusBar) {
      StatusBar.styleDefault();
    }
  });
})

You can see that the $ionicPlatform service is injected as a dependency into the run method. It is highly recommended to use the $ionicPlatform.ready method inside other AngularJS components, such as controllers and directives, where you are planning to interact with Cordova plugins. In the preceding run method, note that we are hiding the keyboard accessory bar by setting:

cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);

You can override this by setting the value to false. Also, do notice the if condition before the statement; it is always better to check for Cordova-related variables before using them.
The $ionicPlatform service comes with a handy method to detect the hardware back button event. A few (Android) devices have a hardware back button and, if you want to listen to the back button pressed event, you will need to hook into the onHardwareBackButton method of the $ionicPlatform service:

var hardwareBackButtonHandler = function() {
  console.log('Hardware back button pressed');
  // do more interesting things here
}

$ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);

This event needs to be registered inside the $ionicPlatform.ready method, preferably inside AngularJS's run method. The hardwareBackButtonHandler callback will be called whenever the user presses the device's back button. A simple functionality that you can implement with this handler is to ask users whether they really want to quit your app, making sure that they have not accidentally hit the back button. Sometimes this may be annoying, so you can provide a setting in your app whereby users select whether they want to be alerted when they try to quit. Based on that, you can either defer registering the event or unsubscribe from it.
The code for the preceding logic will look something like this:

.run(function($ionicPlatform) {
    $ionicPlatform.ready(function() {
        var alertOnBackPress = localStorage.getItem('alertOnBackPress');

        var hardwareBackButtonHandler = function() {
            console.log('Hardware back button pressed');
            // do more interesting things here
        }

        function manageBackPressEvent(alertOnBackPress) {
            // localStorage stores strings, so normalize before comparing
            if (String(alertOnBackPress) === 'true') {
                $ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);
            } else {
                $ionicPlatform.offHardwareBackButton(hardwareBackButtonHandler);
            }
        }

        // when the app boots up
        manageBackPressEvent(alertOnBackPress);

        // later in the code/controller when you let
        // the user update the setting
        function updateSettings(alertOnBackPressModified) {
            localStorage.setItem('alertOnBackPress', alertOnBackPressModified);
            manageBackPressEvent(alertOnBackPressModified);
        }
    });
})

In the preceding code snippet, we look in localStorage for the value of alertOnBackPress. Next, we create a handler named hardwareBackButtonHandler, which will be triggered when the back button is pressed. Finally, a utility method named manageBackPressEvent() takes in a Boolean-like value that decides whether to register or deregister the callback for the hardware back button. With this set up, when the app starts we call the manageBackPressEvent method with the value from localStorage. If the value is present and equal to true, we register the event; otherwise, we do not. Later on, we can have a settings controller that lets users change this setting. When a user changes the state of alertOnBackPress, we call the updateSettings method, passing in whether the user wants to be alerted or not. The updateSettings method updates localStorage with this setting and calls the manageBackPressEvent method, which takes care of registering or deregistering the callback for the hardware back button pressed event.
This is one powerful example that showcases the power of AngularJS when combined with Cordova, providing APIs to manage your application easily. This example may seem a bit complex at first, but most of the services that you are going to consume will be quite similar: there will be events that you need to register and deregister conditionally, based on preferences. So, I thought this would be a good place to share an example such as this, assuming that this concept will grow on you.
registerBackButtonAction
The $ionicPlatform service also provides a method named registerBackButtonAction. This is another API that lets you control the way your application behaves when the back button is pressed. By default, pressing the back button executes one task. For example, if you have a multi-page application and you navigate from page one to page two and then press the back button, you will be taken back to page one. In another scenario, when a user navigates from page one to page two and page two displays a pop-up dialog when it loads, pressing the back button will only hide the pop-up dialog but will not navigate to page one. The registerBackButtonAction method provides a hook to override this behavior.
The registerBackButtonAction method takes the following three arguments:
callback: This is the method to be called when the event is fired
priority: This is a number that indicates the priority of the listener
actionId (optional): This is the ID assigned to the action
By default, the priorities are as follows:
Previous view = 100
Close side menu = 150
Dismiss modal = 200
Close action sheet = 300
Dismiss popup = 400
Dismiss loading overlay = 500
So, if you want certain functionality or custom code to override the default behavior of the back button, you will write something like this:

var cancelRegisterBackButtonAction = $ionicPlatform.registerBackButtonAction(backButtonCustomHandler, 201);

This listener will override (take precedence over) all the default listeners with priority values below 201, that is, dismiss modal, close side menu, and previous view, but not those above this priority value. When the $ionicPlatform.registerBackButtonAction method executes, it returns a function. We have assigned that function to the cancelRegisterBackButtonAction variable; executing cancelRegisterBackButtonAction deregisters the registerBackButtonAction listener.
The on method
Apart from the preceding handy methods, $ionicPlatform has a generic on method that can be used to listen to all of Cordova's events (https://cordova.apache.org/docs/en/edge/cordova_events_events.md.html). You can set up hooks for application pause, application resume, volumedownbutton, volumeupbutton, and so on, and execute custom functionality accordingly. You can set up these listeners inside the $ionicPlatform.ready method as follows:

var cancelPause = $ionicPlatform.on('pause', function() {
    console.log('App is sent to background');
    // do stuff to save power
});

var cancelResume = $ionicPlatform.on('resume', function() {
    console.log('App is retrieved from background');
    // re-init the app
});

// Supported only in BlackBerry 10 & Android
var cancelVolumeUpButton = $ionicPlatform.on('volumeupbutton', function() {
    console.log('Volume up button pressed');
    // moving a slider up
});

var cancelVolumeDownButton = $ionicPlatform.on('volumedownbutton', function() {
    console.log('Volume down button pressed');
    // moving a slider down
});

The on method returns a function that, when executed, deregisters the event. Now you know how to control your app better when dealing with mobile OS events and hardware keys.
Content
Next, we will take a look at content-related directives. The first is the ion-content directive.
Navigation
The next component we are going to take a look at is the navigation component. The navigation component has a bunch of directives as well as a couple of services. The first directive we are going to look at is ion-nav-view. When the app boots up, $stateProvider will look for the default state and then try to load the corresponding template inside the ion-nav-view.
Tabs and side menu
To understand navigation a bit better, we will explore the tabs directive and the side menu directive. We will scaffold the tabs template and go through the directives related to tabs; run this:

ionic start -a "Example 19" -i app.example.nineteen example19 tabs

Using the cd command, go to the example19 folder and run this:

ionic serve

This will launch the tabs app.
If you open the www/index.html file, you will notice that this template uses ion-nav-bar to manage the header, with ion-nav-back-button inside it. Next, open www/js/app.js and you will find the application states configured:

.state('tab.dash', {
    url: '/dash',
    views: {
      'tab-dash': {
        templateUrl: 'templates/tab-dash.html',
        controller: 'DashCtrl'
      }
    }
  })

Do notice that the views object has a named object: tab-dash. This will be used when we work with the tabs directive; the name will be used to load a given view, when a tab is selected, into the ion-nav-view directive with the name tab-dash. If you open www/templates/tabs.html, you will find the markup for the tabs component:

<ion-tabs class="tabs-icon-top tabs-color-active-positive">

  <!-- Dashboard Tab -->
  <ion-tab title="Status" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash">
    <ion-nav-view name="tab-dash"></ion-nav-view>
  </ion-tab>

  <!-- Chats Tab -->
  <ion-tab title="Chats" icon-off="ion-ios-chatboxes-outline" icon-on="ion-ios-chatboxes" href="#/tab/chats">
    <ion-nav-view name="tab-chats"></ion-nav-view>
  </ion-tab>

  <!-- Account Tab -->
  <ion-tab title="Account" icon-off="ion-ios-gear-outline" icon-on="ion-ios-gear" href="#/tab/account">
    <ion-nav-view name="tab-account"></ion-nav-view>
  </ion-tab>

</ion-tabs>

The tabs.html will be loaded before any of the child tabs load, since the tab state is defined as an abstract route. The ion-tab directive is nested inside ion-tabs, and every ion-tab directive has an ion-nav-view directive nested inside it. When a tab is selected, the route with the same name as the name attribute on the ion-nav-view will be loaded inside the corresponding tab. Very neatly structured!
You can read more about the tabs directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionTabs/.
Next, we are going to scaffold an app using the side menu template and go through the navigation inside it; run this:

ionic start -a "Example 20" -i app.example.twenty example20 sidemenu

Using the cd command, go to the example20 folder and run this:

ionic serve

This will launch the side menu app. We start off exploring with www/index.html. This file has only the ion-nav-view directive inside the body. Next, we open www/js/app.js. Here, the routes are defined as expected, but one thing to notice is the name of the views for search, browse, and playlists; it is the same, menuContent, for all of them:

.state('app.search', {
    url: "/search",
    views: {
      'menuContent': {
        templateUrl: "templates/search.html"
      }
    }
})

If we open www/templates/menu.html, you will notice the ion-side-menus directive. It has two children, ion-side-menu-content and ion-side-menu. The ion-side-menu-content displays the content for each menu item inside the ion-nav-view named menuContent; this is why all the menu items in the state router have the same view name. The ion-side-menu is displayed on the left-hand side of the page. You can set the side attribute of ion-side-menu to right to show the side menu on the right, or you can have two side menus. Do notice the menu-toggle directive on the button inside ion-nav-buttons; this directive is used to toggle the side menu.
If you want to have the menu on both sides, your menu.html will look as follows:

<ion-side-menus enable-menu-with-back-views="false">
  <ion-side-menu-content>
    <ion-nav-bar class="bar-stable">
      <ion-nav-back-button>
      </ion-nav-back-button>

      <ion-nav-buttons side="left">
        <button class="button button-icon button-clear ion-navicon" menu-toggle="left">
        </button>
      </ion-nav-buttons>
      <ion-nav-buttons side="right">
        <button class="button button-icon button-clear ion-navicon" menu-toggle="right">
        </button>
      </ion-nav-buttons>
    </ion-nav-bar>
    <ion-nav-view name="menuContent"></ion-nav-view>
  </ion-side-menu-content>

  <ion-side-menu side="left">
    <ion-header-bar class="bar-stable">
      <h1 class="title">Left</h1>
    </ion-header-bar>
    <ion-content>
      <ion-list>
        <ion-item menu-close ng-click="login()">
          Login
        </ion-item>
        <ion-item menu-close href="#/app/search">
          Search
        </ion-item>
        <ion-item menu-close href="#/app/browse">
          Browse
        </ion-item>
        <ion-item menu-close href="#/app/playlists">
          Playlists
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-side-menu>
  <ion-side-menu side="right">
    <ion-header-bar class="bar-stable">
      <h1 class="title">Right</h1>
    </ion-header-bar>
    <ion-content>
      <ion-list>
        <ion-item menu-close ng-click="login()">
          Login
        </ion-item>
        <ion-item menu-close href="#/app/search">
          Search
        </ion-item>
        <ion-item menu-close href="#/app/browse">
          Browse
        </ion-item>
        <ion-item menu-close href="#/app/playlists">
          Playlists
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-side-menu>
</ion-side-menus>

You can read more about the side menu directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionSideMenus/. This concludes our journey through the navigation directives and services. Next, we will move on to Ionic loading.
Ionic loading
The first service we are going to take a look at is $ionicLoading. This service is highly useful when you want to block the user's interaction with the main page and indicate that there is some activity going on in the background. To test this, we will scaffold a new blank template and implement $ionicLoading; run this:

ionic start -a "Example 21" -i app.example.twentyone example21 blank

Using the cd command, go to the example21 folder and run this:

ionic serve

This will launch the blank template in the browser. We will create an app controller and define the show and hide methods inside it. Open www/js/app.js and add the following code:

.controller('AppCtrl', function($scope, $ionicLoading, $timeout) {

    $scope.showLoadingOverlay = function() {
        $ionicLoading.show({
            template: 'Loading...'
        });
    };
    $scope.hideLoadingOverlay = function() {
        $ionicLoading.hide();
    };

    $scope.toggleOverlay = function() {
        $scope.showLoadingOverlay();

        // wait for 3 seconds and hide the overlay
        $timeout(function() {
            $scope.hideLoadingOverlay();
        }, 3000);
    };

})

We have a function named showLoadingOverlay, which will call $ionicLoading.show(), and a function named hideLoadingOverlay(), which will call $ionicLoading.hide().
We have also created a utility function named toggleOverlay(), which will call showLoadingOverlay() and, after 3 seconds, will call hideLoadingOverlay(). We will update our www/index.html body section as follows:

<body ng-app="starter" ng-controller="AppCtrl">
    <ion-header-bar class="bar-stable">
        <h1 class="title">$ionicLoading service</h1>
    </ion-header-bar>
    <ion-content class="padding">
        <button class="button button-dark" ng-click="toggleOverlay()">
            Toggle Overlay
        </button>
    </ion-content>
</body>

We have a button that calls toggleOverlay(). If you save all the files, head back to the browser, and click on the Toggle Overlay button, you will see the following screenshot:
As you can see, the overlay is shown until the hide method is called on $ionicLoading. You can also move the preceding logic inside a service and reuse it across the app. The service will look like this:

.service('Loading', function($ionicLoading, $timeout) {
    this.show = function() {
        $ionicLoading.show({
            template: 'Loading...'
        });
    };
    this.hide = function() {
        $ionicLoading.hide();
    };

    this.toggle = function() {
        var self = this;
        self.show();

        // wait for 3 seconds and hide the overlay
        $timeout(function() {
            self.hide();
        }, 3000);
    };

})

Now, once you inject the Loading service into your controller or directive, you can use Loading.show(), Loading.hide(), or Loading.toggle(). If you would like to show only a spinner icon instead of text, you can call the $ionicLoading.show method without any options:

$scope.showLoadingOverlay = function() {
    $ionicLoading.show();
};

Then, you will see this:
You can configure the show method further. More information is available at http://ionicframework.com/docs/nightly/api/service/$ionicLoading/. You can also use the $ionicBackdrop service to show just a backdrop; read more about $ionicBackdrop at http://ionicframework.com/docs/nightly/api/service/$ionicBackdrop/. You can also check out the $ionicModal service at http://ionicframework.com/docs/api/service/$ionicModal/; it is quite similar to the loading service.
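Since $ionicModal follows the same pattern as the loading service, a minimal sketch of its usage might look like this (the template name my-modal.html is an assumption made for the example):

.controller('ModalCtrl', function($scope, $ionicModal) {
    // load the modal template and bind it to this controller's scope
    $ionicModal.fromTemplateUrl('my-modal.html', {
        scope: $scope,
        animation: 'slide-in-up'
    }).then(function(modal) {
        $scope.modal = modal;
    });

    $scope.openModal = function() {
        $scope.modal.show();
    };

    $scope.closeModal = function() {
        $scope.modal.hide();
    };

    // clean up the modal when the scope is destroyed
    $scope.$on('$destroy', function() {
        $scope.modal.remove();
    });
})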
Popover and Popup services
Popover is a contextual view that generally appears next to the selected item. This component is used to show contextual information or to show more information about a component. To test this service, we will be scaffolding a new blank app:

ionic start -a "Example 23" -i app.example.twentythree example23 blank

Using the cd command, go to the example23 folder and run this:

ionic serve

This will launch the blank template in the browser. We will add a new controller to the blank project named AppCtrl. We will be adding our controller code in www/js/app.js:

.controller('AppCtrl', function($scope, $ionicPopover) {

    // init the popover
    $ionicPopover.fromTemplateUrl('button-options.html', {
        scope: $scope
    }).then(function(popover) {
        $scope.popover = popover;
    });

    $scope.openPopover = function($event, type) {
        $scope.type = type;
        $scope.popover.show($event);
    };

    $scope.closePopover = function() {
        $scope.popover.hide();
        // if you are navigating away from the page once
        // an option is selected, make sure to call
        // $scope.popover.remove();
    };

});

We are using the $ionicPopover service and setting up a popover from a template named button-options.html. We are assigning the current controller scope as the scope of the popover. We have two methods on the controller scope that show and hide the popover. The openPopover method receives two arguments: one is the event, and the second is the type of the button we are clicking (more on this in a moment). Next, we update our www/index.html body section as follows:

<body ng-app="starter" ng-controller="AppCtrl">
    <ion-header-bar class="bar-positive">
        <h1 class="title">Popover Service</h1>
    </ion-header-bar>
    <ion-content class="padding">
        <button class="button button-block button-dark" ng-click="openPopover($event, 'dark')">
            Dark Button
        </button>
        <button class="button button-block button-assertive" ng-click="openPopover($event, 'assertive')">
            Assertive Button
        </button>
        <button class="button button-block button-calm" ng-click="openPopover($event, 'calm')">
            Calm Button
        </button>
    </ion-content>
    <script id="button-options.html" type="text/ng-template">
        <ion-popover-view>
            <ion-header-bar>
                <h1 class="title">{{type}} options</h1>
            </ion-header-bar>
            <ion-content>
                <div class="list">
                    <a href="#" class="item item-icon-left">
                        <i class="icon ion-ionic"></i> Option One
                    </a>
                    <a href="#" class="item item-icon-left">
                        <i class="icon ion-help-buoy"></i> Option Two
                    </a>
                    <a href="#" class="item item-icon-left">
                        <i class="icon ion-hammer"></i> Option Three
                    </a>
                    <a href="#" class="item item-icon-left" ng-click="closePopover()">
                        <i class="icon ion-close"></i> Close
                    </a>
                </div>
            </ion-content>
        </ion-popover-view>
    </script>
</body>

Inside ion-content, we have created three buttons, each themed with a different mood (dark, assertive, and calm). When a user clicks on a button, we show a popover that is specific to that button. For this example, all we are doing is passing in the name of the mood and showing the mood name as the heading in the popover, but you can definitely do more. Do notice that we have wrapped our template content inside ion-popover-view; this takes care of positioning the popover appropriately, and the template must be wrapped inside ion-popover-view for the popover to work correctly. When we save all the files and head back to the browser, we will see the three buttons. Depending on the button you click, the heading of the popover changes, but the options remain the same for all of them. Then, when we click anywhere on the page or on the close option, the popover closes. If you are navigating away from the page when an option is selected, make sure to call:

$scope.popover.remove();

You can read more about Popover at http://ionicframework.com/docs/api/controller/ionicPopover/.
Our GitHub organization
With the ever-changing frontend world, keeping up with the latest in the business is quite essential. During the course of the book, Cordova, Ionic, and the Ionic CLI have evolved a lot, and we predict that they will keep evolving until they become stable. So, we have created a GitHub organization named Learning Ionic (https://github.com/learning-ionic), which consists of code for all the chapters.
You can raise issues or submit pull requests, and we will also try to keep it updated with the latest changes. So, you can always refer back to the GitHub organization for the possible changes.
Summary
In this article, we looked at various Ionic directives and services that help us develop applications easily.
Resources for Article:
Further resources on this subject:
Mailing with Spring Mail [article]
Implementing Membership Roles, Permissions, and Features [article]
Time Travelling with Spring [article]

Programmable DC Motor Controller with an LCD

Packt
23 Jul 2015
23 min read
In this article by Don Wilcher, author of the book Arduino Electronics Blueprints, we will see how a programmable logic controller (PLC) is used to operate various electronic and electromechanical devices that are wired to I/O wiring modules. The PLC receives signals from sensors, transducers, and electromechanical switches that are wired to its input wiring module and processes the electrical data by using a microcontroller. The embedded software that is stored in the microcontroller's memory can control external devices, such as electromechanical relays, motors (both AC and DC types), solenoids, and visual displays that are wired to its output wiring module. The PLC programmer programs the industrial computer by using a special programming language known as ladder logic. PLC ladder logic is a graphical programming language that uses computer instruction symbols for automation and controls to operate robots, industrial machines, and conveyor systems. The PLC, along with the ladder logic software, is very expensive. However, with off-the-shelf electronic components, an Arduino can be used as an alternative mini industrial controller for Maker-type robotics and machine control projects. In this article, we will see how an Arduino can operate as a mini PLC that is capable of controlling a small electric DC motor with a simple two-step programming procedure. Details on how to interface a transistor DC motor driver and a discrete digital logic circuit to an Arduino, and how to write the control cursor selection code, will be provided as well. This article will also provide the build instructions for a programmable motor controller. The LCD will provide the programming directions that are needed to operate an electric motor. The parts that are required to build the programmable motor controller are shown in the next section.
Parts list
The following list comprises the parts that are required to build the programmable motor controller:
Arduino Uno: one unit
A 1 kilo ohm resistor (brown, black, red, gold): three units
A 10 ohm resistor (brown, black, black, gold): one unit
A 10 kilo ohm resistor (brown, black, orange, gold): one unit
A 100 ohm resistor (brown, black, brown, gold): one unit
A 0.01 µF capacitor: one unit
An LCD module: one unit
A 74LS08 Quad AND Logic Gate Integrated Circuit: one unit
A 1N4001 general purpose silicon diode: one unit
A DC electric motor (3 V rated): one unit
Single-pole double-throw (SPDT) electric switches: two units
1.5 V batteries: two units
A 3 V battery holder: one unit
A breadboard
Wires
A programmable motor controller block diagram
The block diagram of the programmable DC motor controller with a Liquid Crystal Display (LCD) can be imagined as a remote control box with two slide switches and an LCD, as shown in the following diagram:
The Remote Control Box provides the control signals to operate a DC motor. This box is not able to provide the right amount of electrical current to directly operate the DC motor; therefore, a transistor motor driver circuit is needed. This circuit has sufficient current gain (hfe) to operate a small DC motor; a typical hfe value of 100 is sufficient. The Enable slide switch is used to set the remote control box to the ready mode. The Program switch allows the DC motor to be set to an ON or OFF operating condition by using a simple selection sequence. The LCD displays the ON or OFF selection prompts that help you operate the DC motor. The remote control box diagram is shown in the next image.
The idea behind the concept diagram is to illustrate how a simple programmable motor controller can be built by using basic electrical and electronic components. The Arduino is placed inside the remote control box and wired to the Enable/Program switches and the LCD. External wires are attached to the transistor motor driver, the DC motor, and the Arduino. The block diagram of the programmable motor controller is an engineering tool that is used to convey a complete product design by using simple graphics. It also allows ease in planning the breadboard for prototyping and testing the programmable motor controller in a maker workshop or on a laboratory bench. A final observation regarding the block diagram is that it follows the basic computer convention of inputs on the left, the processor in the middle, and the outputs on the right-hand side of the design layout. As shown, the SPDT switches are on the left-hand side, the Arduino is located in the middle, and the transistor motor driver with the DC motor is on the right-hand side of the block diagram. The LCD is shown towards the right of the block diagram because it is an output device; it allows visual selection between the ON/OFF operations of the DC motor by using the Program switch. This left-to-right design method allows ease in building the programmable motor controller as well as in troubleshooting errors during the testing phase of the project. The block diagram for the programmable motor controller is as follows:
Building the programmable motor controller
The block diagram of the programmable motor controller has more circuits than the block diagram of the sound effects machine. As discussed previously, there are a variety of ways to build (prototype) electronic devices. For instance, they can be built on a Printed Circuit Board (PCB) or an experimenter's/prototype board. The construction base that was used to build this device was a solderless breadboard, which is shown in the next image. The placement of the electronic parts shown in the image is not restricted to the solderless breadboard layout; rather, it should be used as a guideline. Another method of placing the parts is to follow the block diagram shown earlier; this arrangement allows ease in testing each subcircuit separately. For example, the Program/Enable SPDT switches' subcircuits can be tested by using a DC voltmeter: placing a DC voltmeter across the Program switch and its 1 kilo ohm resistor and toggling the switch several times will show a voltage swing between 0 V and +5 V. The same testing method can be carried out on the Enable switch as well. The transistor motor driver circuit is tested by placing a +5 V signal on the base of the 2N3904 NPN transistor; when you apply +5 V to the transistor's base, the DC motor turns on. The final test for the programmable DC motor controller is to adjust the contrast control (the 10 kilo ohm potentiometer) to see whether the individual pixels are visible on the LCD. This electrical testing method, used to check that the programmable DC motor controller is functioning properly, will minimize electronic I/O wiring errors. Also, the electrical testing phase ensures that all the I/O circuits used in the device are working properly, thereby allowing the maker to focus on coding the software.
Following is the wiring diagram of the programmable DC motor controller with the LCD using a solderless breadboard:
As shown in the wiring diagram, the electronic components used to build the programmable DC motor controller with the LCD circuit are placed on the solderless breadboard for ease in wiring the Arduino, the LCD, and the DC motor. The transistor shown in the preceding image is a 2N3904 NPN device with a pin-out arrangement consisting of an emitter, a base, and a collector, respectively; if the transistor pins are wired incorrectly, the DC motor will not turn on. The LCD module is used as a visual display, which allows operating selection of the DC motor. The Program slide switch turns the DC motor ON or OFF. Although most 16-pin LCD modules have the same electrical pin-out names, consult the manufacturer's datasheet of the device at hand. There is also a 10 kilo ohm potentiometer to control the LCD's contrast. On wiring the LCD to the Arduino, supply power to the microcontroller board by using the USB cable connected to a desktop PC or a notebook, and adjust the 10 kilo ohm potentiometer until a row of square pixels is visible on the LCD.
The 74LS08 Quad AND gate is a 14-pin Integrated Circuit (IC) that is used to enable the DC motor, that is, to get the electronic controller ready to operate the DC motor; therefore, the Program slide switch must be in the ON position for the electronic controller to operate properly. The 1N4001 diode is used to protect the 2N3904 NPN transistor from peak currents stored in the DC motor's windings while turning the DC motor on. When the DC motor is turned off, the 1N4001 diode directs the peak current to flow through the DC motor's windings, thereby suppressing transient electrical noise and preventing damage to the transistor. Therefore, it is important to include this electronic component in the design, as shown in the wiring diagram. Besides the wiring diagram, the circuit's schematic diagram will aid in building the programmable motor controller device.
Let's build it!
In order to build the programmable DC motor controller, follow these steps:
1. Wire the programmable DC motor controller with the LCD circuit on a solderless breadboard, as shown in the previous image as well as in the circuit's schematic diagram shown in the next image.
2. Upload the software of the programmable motor controller to the Arduino by using the sketch shown next.
3. Close both the Program and Enable switches; the motor will spin.
4. Open the Enable switch; the motor stops.
The LCD message tells you how to set the Program switch for ON and OFF motor control. The Program switch allows you to select between the ON and OFF motor control functions. With the Program switch closed, toggling the Enable switch will turn the motor ON and OFF. Opening the Program switch will prevent the motor from turning on. The next few sections will explain additional details of the I/O interfacing of discrete digital logic circuits and a small DC motor with the Arduino. A sketch is a unit of code that is uploaded to and run on an Arduino board. The sketch is as follows:
/*
 * programmable DC motor controller w/LCD allows the user
 * to select ON and OFF operations using a slide switch. To
 * enable the selected operation another slide switch is used
 * to initiate the selected choice.
 * Program Switch wired to pin 6.
 * Output select wired to pin 7.
 * LCD used to display programming choices (ON or OFF).
 * created 24 Dec 2012
 * by Don Wilcher
 */

// include the library code:
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

// constants won't change. They're used here to
// set pin numbers:
const int ProgramPin = 6; // pin number for PROGRAM input control signal
const int OUTPin = 7;     // pin number for OUTPUT control signal

// variable will change:
int ProgramStatus = 0;    // variable for reading Program input status

void setup() {
  // initialize the following pin as an output:
  pinMode(OUTPin, OUTPUT);

  // initialize the following pin as an input:
  pinMode(ProgramPin, INPUT);

  // set up the LCD's number of rows and columns:
  lcd.begin(16, 2);

  // set cursor for messages and print Program select messages on the LCD.
  lcd.setCursor(0, 0);
  lcd.print("1. ON");
  lcd.setCursor(0, 1);
  lcd.print("2. OFF");
}

void loop() {
  // read the status of the Program Switch value:
  ProgramStatus = digitalRead(ProgramPin);

  // check if Program select choice is 1.ON.
  if (ProgramStatus == HIGH) {
    digitalWrite(OUTPin, HIGH);
  } else {
    digitalWrite(OUTPin, LOW);
  }
}

The schematic diagram of the circuit that is used to build the programmable DC motor controller and upload the sketch to the Arduino is shown in the following image:
Interfacing a discrete digital logic circuit with Arduino
The Enable switch, along with the Arduino, is wired to a discrete digital Integrated Circuit (IC) that is used to turn on the transistor motor driver. The discrete digital IC used to turn on the transistor motor driver is a 74LS08 Quad AND gate. The AND gate provides a high output signal when both of its inputs are equal to +5 V. The Arduino provides a high input signal to the 74LS08 AND gate IC based on the following line of code:

digitalWrite(OUTPin, HIGH);

The OUTPin constant name is declared in the Arduino sketch by using the following declaration statement:

const int OUTPin = 7;     // pin number for OUTPUT control signal

The Enable switch is also used to provide a +5 V input signal to the 74LS08 AND gate IC. The Enable switch circuit schematic diagram is as follows:
Another way of observing the operation of the AND gate, based on its Boolean Logic Expression, is by setting the value of the circuit's inputs to 1. Its output has the same binary bit value. The truth table graphically shows the results of the Boolean Logic expression of the AND gate. A common application of the AND logic gate is the Enable circuit. The output of the Enable circuit will only be turned on when both the inputs are on. When the Enable circuit is wired correctly on the solderless breadboard and is working properly, the transistor driver circuit will turn on the DC motor that is wired to it. The operation of the programmable DC motor controller's Enable circuit is shown in the following truth table:   The basic computer circuit that makes the decision to operate the DC motor is the AND logic gate. The previous schematic diagram of the Enable Switch circuit shows the electrical wiring to the specific pins of the 74LS08 IC, but internally, the AND logic gate is the main circuit component for the programmable DC motor controller's Enable function. Following is the diagram of 74LS08 AND Logic Gate IC:   To test the Enable circuit function of the programmable DC motor controller, the Program switch is required. The schematic diagram of the circuit that is required to wire the Program Switch to the Arduino is shown in the following diagram. The Program and Enable switch circuits are identical to each other because two 5 V input signals are required for the AND logic gate to work properly. The Arduino sketch that was used to test the Enable function of the programmable DC motor is shown in the following diagram:   The program for the discrete digital logic circuit with an Arduino is as follows: // constants won't change. They're used here to // set pin numbers: const int ProgramPin = 6;   // pin number for PROGRAM input control signal const int OUTPin = 7;     // pin number for OUTPUT control signal   // variable will change: int ProgramStatus = 0;       // variable for reading Program input status   void setup() { // initialize the following pin as an output: pinMode(OUTPin, OUTPUT);   // initialize the following pin as an input: pinMode(ProgramPin, INPUT); }   void loop(){   // read the status of the Program Switch value: ProgramStatus = digitalRead(ProgramPin);   // check if Program switch is ON. if(ProgramStatus == HIGH) {    digitalWrite(OUTPin, HIGH);   } else{      digitalWrite(OUTPin,LOW);   } } Connect a DC voltmeter's positive test lead to the D7 pin of the Arduino. Upload the preceding sketch to the Arduino and close the Program and Enable switches. The DC voltmeter should approximately read +5 V. Opening the Enable switch will display 0 V on the DC voltmeter. The other input conditions of the Enable circuit can be tested by using the truth table of the AND Gate that was shown earlier. Although the DC motor is not wired directly to the Arduino, by using the circuit schematic diagram shown previously, the truth table will ensure that the programmed Enable function is working properly. Next, connect the DC voltmeter to the pin 3 of the 74LS08 IC and repeat the truth table test again. The pin 3 of the 74LS08 IC will only be ON when both the Program and Enable switches are closed. If the AND logic gate IC pin generates wrong data on the DC voltmeter when compared to the truth table, recheck the wiring of the circuit carefully and properly correct the mistakes in the electrical connections. 
When the corrections are made, repeat the truth table test to verify proper operation of the Enable circuit.
Interfacing a small DC motor with a digital logic gate
The 74LS08 AND Logic Gate IC provides an electrical interface between the Enable switch trigger and the Arduino's digital output pin, D7. With both input pins (1 and 2) of the 74LS08 AND logic gate set to binary 1, the small 14-pin IC's output pin 3 will be high. Although the logic gate IC's output pin has a +5 V source present, it will not be able to drive a small DC motor: the 74LS08 logic gate's sourcing current is not sufficient to directly operate one. To solve this problem, a transistor is used to operate the small DC motor. The transistor has sufficient current gain (hfe) to operate the DC motor, and the DC motor will be turned on when the transistor is biased properly. Biasing is a transistor circuit technique whereby providing an input voltage that is greater than the base-emitter junction voltage (VBE) turns on the semiconductor device; a typical value for VBE is 700 mV. Once the transistor is biased properly, any electrical device that is wired between the collector and +VCC (the collector supply voltage) will be turned on: an electrical current will flow from +VCC through the DC motor's windings and the collector-emitter junction to ground. The circuit that is used to operate a small DC motor is called a transistor motor driver, and is shown in the following diagram:
The Arduino code that is responsible for the operation of the transistor motor driver circuit is as follows:

void loop() {
  // read the status of the Program Switch value:
  ProgramStatus = digitalRead(ProgramPin);

  // check if Program switch is ON.
  if (ProgramStatus == HIGH) {
    digitalWrite(OUTPin, HIGH);
  } else {
    digitalWrite(OUTPin, LOW);
  }
}

Although the transistor motor driver circuit is not wired directly to the Arduino, the output pin of the microcontroller prototyping platform indirectly controls the electromechanical part through the 74LS08 AND logic gate IC. A tip to keep in mind when using transistors is to ensure that the semiconductor device can handle the current requirements of the DC motor wired to it. If the DC motor requires more than 500 mA of current, consider using a power Metal Oxide Semiconductor Field Effect Transistor (MOSFET) instead. A power MOSFET device such as the IRF521 (N-Channel) or IRF520 (N-Channel) can handle up to 1 A of current quite easily and generates very little heat. The low heat dissipation of the power MOSFET (PMOSFET) makes it more suitable for operating high-current motors than a general-purpose transistor. A simple PMOSFET DC motor driver circuit can easily be built with a handful of components and tested on a solderless breadboard, as shown in the following image; the circuit schematic for the solderless breadboard diagram is shown after the breadboard image as well. Sliding the Single-Pole Double-Throw (SPDT) switch to one position biases the PMOSFET and turns on the DC motor; sliding the switch to the opposite position turns off the PMOSFET and the DC motor.
Once this circuit has been tested on the solderless breadboard, replace the 2N3904 transistor in the programmable DC motor controller project with the power-efficient PMOSFET component mentioned earlier.
As an additional reference, the schematic diagram of the transistor relay driver circuit is as follows:
A sketch of the LCD selection cursor
The LCD provides a simple user interface for the operation of the DC motor wired to the Arduino-based programmable DC motor controller. The LCD provides the two basic motor operations of ON and OFF. Although the LCD shows the two DC motor operation options, the display doesn't provide any visual indication of which one is selected when using the Program switch. An enhancement to the LCD is to show which DC motor operation has been selected by adding a selection symbol; this feature provides a visual indicator of the DC motor operation selected by the Program switch. It can easily be implemented for the programmable DC motor controller's LCD by adding a > symbol to the Arduino sketch. After uploading the original sketch from the Let's build it section of this article, the LCD will display the two DC motor operation options, as shown in the following image:
The enhancement concept sketch of the new LCD selection feature is as follows:
The selection symbol points to the DC motor operation that corresponds to the Program switch position. (For reference, see the schematic diagram of the programmable DC motor controller circuit.)
The partial programmable DC motor controller sketch without the LCD selection feature
Comparing the original LCD DC motor operation selection with the new sketch, the differences in the programming features are as follows:

void loop() {
  // read the status of the Program Switch value:
  ProgramStatus = digitalRead(ProgramPin);

  // check if Program switch is ON.
  if (ProgramStatus == HIGH) {
    digitalWrite(OUTPin, HIGH);
  } else {
    digitalWrite(OUTPin, LOW);
  }
}

The partial programmable DC motor controller sketch with the LCD selection feature
This code feature provides a selection cursor on the LCD to choose the programmable DC motor controller's operating mode:

// set cursor for messages and print Program select messages on the LCD.
lcd.setCursor(0, 0);
lcd.print(">1.Closed(ON)");
lcd.setCursor(0, 1);
lcd.print(">2.Open(OFF)");

void loop() {
  // read the status of the Program Switch value:
  ProgramStatus = digitalRead(ProgramPin);

  // check if Program select choice is 1.ON.
  if (ProgramStatus == HIGH) {
    digitalWrite(OUTPin, HIGH);
    lcd.setCursor(0, 0);
    lcd.print(">1.Closed(ON)");
    lcd.setCursor(0, 1);
    lcd.print(" 2.Open(OFF) ");
  } else {
    digitalWrite(OUTPin, LOW);
    lcd.setCursor(0, 1);
    lcd.print(">2.Open(OFF)");
    lcd.setCursor(0, 0);
    lcd.print(" 1.Closed(ON) ");
  }
}

The most obvious difference between the two partial Arduino sketches is that the LCD selection feature has several more lines of code than the original. As the slide position of the Program switch changes, the LCD's selection symbol instantly moves to the correct operating mode. Although the DC motor can be observed directly, the LCD confirms the operating mode of the electromechanical device. The complete LCD selection sketch is shown in the following section. As a design-related challenge, try displaying an actual arrow for the DC motor operating mode on the LCD; an arrow can be built by using keyboard symbols or the American Standard Code for Information Interchange (ASCII) code.
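As a starting point for that challenge, here is a small sketch using the LiquidCrystal library's createChar method to define a custom arrow glyph; the byte pattern shown is only one possible arrow shape, assumed for this example:

// define a custom right-arrow character (one possible 5x8 pixel pattern)
byte arrow[8] = {
  B00000,
  B00100,
  B00110,
  B11111,
  B00110,
  B00100,
  B00000,
  B00000
};

// in setup(), after lcd.begin(16, 2):
lcd.createChar(0, arrow);   // store the glyph in CGRAM slot 0
lcd.setCursor(0, 0);
lcd.write(byte(0));         // print the arrow instead of the > symbol
lcd.print("1.Closed(ON)");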
/*
* programmable DC motor controller w/LCD allows the user to select ON and OFF operations using a slide switch. To
* enable the selected operation another slide switch is used to initiate the selected choice.
* Program Switch wired to pin 6.
* Output select wired to pin 7.
* LCD used to display programming choices (ON or OFF) with selection arrow.
* created 28 Dec 2012
* by Don Wilcher
*/

// include the library code:
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

// constants won't change. They're used here to
// set pin numbers:
const int ProgramPin = 6;   // pin number for PROGRAM input control signal
const int OUTPin = 7;       // pin number for OUTPUT control signal

// variable will change:
int ProgramStatus = 0;      // variable for reading Program input status

void setup() {
  // initialize the following pin as an output:
  pinMode(OUTPin, OUTPUT);

  // initialize the following pin as an input:
  pinMode(ProgramPin, INPUT);

  // set up the LCD's number of rows and columns:
  lcd.begin(16, 2);

  // set cursor for messages and print Program select messages on the LCD.
  // (the loop below immediately repaints the correct selection symbol)
  lcd.setCursor(0, 0);
  lcd.print(">1.Closed(ON)");
  lcd.setCursor(0, 1);
  lcd.print(">2.Open(OFF)");
}

void loop(){
  // read the status of the Program Switch value:
  ProgramStatus = digitalRead(ProgramPin);
  // check if Program select choice is 1.ON.
  if(ProgramStatus == HIGH) {
    digitalWrite(OUTPin, HIGH);
    lcd.setCursor(0, 0);
    lcd.print(">1.Closed(ON)");
    lcd.setCursor(0, 1);
    lcd.print(" 2.Open(OFF) ");
  }
  else{
    digitalWrite(OUTPin, LOW);
    lcd.setCursor(0, 1);
    lcd.print(">2.Open(OFF)");
    lcd.setCursor(0, 0);
    lcd.print(" 1.Closed(ON) ");
  }
}

Congratulations on building your programmable motor controller device!

Summary

In this article, a programmable motor controller was built using an Arduino, an AND gate, and a transistor motor driver. The fundamentals of digital electronics, including the concepts of Boolean logic expressions and truth tables, were explained. The AND gate is not able to control a small DC motor directly because of the high amount of current the motor needs to operate properly. A PMOSFET such as the IRF521 can operate a small DC motor because of its high current sourcing capability. The circuit used to wire a transistor to a small DC motor is called a transistor DC motor driver. The DC motor's operating mode is selected with the Program switch and confirmed by the LCD cursor selection feature of the programmable DC motor controller.

Resources for Article:

Further resources on this subject:
Arduino Development [article]
Prototyping Arduino Projects using Python [article]
The Arduino Mobile Robot [article]
A Simple Pathfinding Algorithm for a Maze

Packt
23 Jul 2015
10 min read
In this article, Mário Kašuba, author of the book Lua Game Development Cookbook, explains that maze pathfinding can be used effectively in many types of games, such as side-scrolling platform games or top-down, gauntlet-like games. The point is to find the shortest viable path from one point on the map to another. This can be used for moving NPCs and players as well. (For more resources related to this topic, see here.)

Getting ready

This article will use a simple maze environment to find a path starting at the start point and ending at the exit point. You can either prepare one by yourself or let the computer create one for you. A map will be represented by a 2D map structure where each cell consists of a cell type and cell connections. The cell type values are as follows:

0 means a wall
1 means an empty cell
2 means the start point
3 means the exit point

Cell connections will use a bitmask value to get information about which cells are connected to the current cell. The following diagram contains cell connection bitmask values with their respective positions:

Now, a quite common problem in programming is how to implement an efficient data structure for 2D maps. Usually, this is done either with a relatively large one-dimensional array or with an array of arrays. All these arrays have a specified static size, so map dimensions are fixed. The problem arises when you use a simple 1D array and you need to change the map size during gameplay, or when the map size should be unlimited. This is where map cell indexing comes into place. Often you can use this formula to compute the cell index from 2D map coordinates:

local index = x + y * map_width
map[index] = value

There's nothing wrong with this approach when the map size is definite. However, changing the map size would invalidate the whole data structure, as the map_width variable would change its value. A solution to this is to use indexing that's independent of the map size. This way you can ensure consistent access to all elements even if you resize the 2D map. You can use some kind of hashing algorithm that packs map cell coordinates into one value that can be used as a unique key. Another way to accomplish this is to use the Cantor pairing function, which is defined for two input coordinates k1 and k2 as:

cantor(k1, k2) = 0.5 * (k1 + k2) * (k1 + k2 + 1) + k2

Index value distribution is shown in the following diagram:

The Cantor pairing function ensures that there are no key collisions no matter what coordinates you use. What's more, it can be trivially extended to support three or more input coordinates. To illustrate the usage of the Cantor pairing function for more dimensions, its primitive form will be defined as a function cantor(k1, k2), where k1 and k2 are input coordinates. The pairing function for three dimensions will look like this:

local function cantor3D(k1, k2, k3)
  return cantor(cantor(k1, k2), k3)
end

Keep in mind that the Cantor pairing function always returns one integer value. With a higher number of dimensions, you'll soon get very large values in the results. This may pose a problem because the Lua language can offer 52 bits for integer values. For example, for the 2D coordinates (83114015, 11792250) you'll get the value 0x000FFFFFFFFFFFFF, which can still fit into a 52-bit integer without rounding errors. Larger coordinates will return inaccurate values, and subsequently you'd get key collisions. Value overflow can be avoided by dividing large maps into smaller ones, where each one uses the full address space that Lua numbers can offer.
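Before building on the pairing function, here is a quick numeric sanity check of the formula; the sample coordinates are chosen purely for illustration:

-- Cantor pairing: cantor(k1, k2) = 0.5 * (k1 + k2) * (k1 + k2 + 1) + k2
local function cantor(k1, k2)
  return 0.5 * (k1 + k2) * ((k1 + k2) + 1) + k2
end

print(cantor(2, 3)) --> 18
print(cantor(3, 2)) --> 17; the function is order-sensitive, so (2,3) and (3,2) never collide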
Continuing the overflow point, you can use another coordinate to identify submaps. This article will use a specialized data structure for a 2D map with the Cantor pairing function for internal cell indexing. You can use the following code to prepare this type of data structure:

function map2D(defaultValue)
  local t = {}
  -- Cantor pair function
  local function cantorPair(k1, k2)
    return 0.5 * (k1 + k2) * ((k1 + k2) + 1) + k2
  end
  setmetatable(t, {
    __index = function(_, k)
      if type(k)=="table" then
        local i = rawget(t, cantorPair(k[1] or 1, k[2] or 1))
        return i or defaultValue
      end
    end,
    __newindex = function(_, k, v)
      if type(k)=="table" then
        rawset(t, cantorPair(k[1] or 1, k[2] or 1), v)
      else
        rawset(t, k, v)
      end
    end,
  })
  return t
end

The maze generator as well as the pathfinding algorithm will need a stack data structure (a minimal implementation is sketched at the end of this section).

How to do it…

This section is divided into two parts, where each one solves very similar problems from the perspective of the maze generator and the maze solver.

Maze generation

You can either load a maze from a file or generate a random one. The following steps will show you how to generate a unique maze. First, you'll need to grab a maze generator library from the GitHub repository with the following command:

git clone https://github.com/soulik/maze_generator

This maze generator uses the depth-first approach with backtracking. You can use it in the following steps. First, you'll need to set up maze parameters such as the maze size and the entry and exit points.

local mazeGenerator = require 'maze'
local maze = mazeGenerator {
  width = 50,
  height = 25,
  entry = {x = 2, y = 2},
  exit = {x = 30, y = 4},
  finishOnExit = false,
}

The final step is to iteratively generate the maze map until it's finished or a certain step count is reached. The number of steps should always be one order of magnitude greater than the total number of maze cells, mainly due to backtracking. Note that it's not necessary for each maze to connect entry and exit points in this case.

for i=1,12500 do
  local result = maze.generate()
  if result == 1 then
    break
  end
end

Now you can access each maze cell with the maze.map variable in the following manner:

local cell = maze.map[{x, y}]
local cellType = cell.type
local cellConnections = cell.connections

Maze solving

This article will show you how to use a modified Trémaux's algorithm, which is based on depth-first search and path marking. This method guarantees finding the path to the exit point if there is one. It relies on two pieces of information at each step: the current position and the unvisited neighbors. The algorithm uses three state variables: the current position, a set of visited cells, and the current path from the starting point:

local currentPosition = {maze.entry.x, maze.entry.y}
local visitedCells = map2D(false)
local path = stack()

The whole maze solving process will be placed into one loop. This algorithm is always finite, so you can use the infinite while loop.

-- A placeholder for the neighbours function that will be defined later
local neighbours

-- testing function for passable cells
local cellTestFn = function(cell, position)
  return (cell.type >= 1) and (not visitedCells[position])
end

-- include starting point into path
visitedCells[currentPosition] = true
path.push(currentPosition)

while true do
  local currentCell = maze.map[currentPosition]
  -- is current cell an exit point?
  if currentCell and
     (currentCell.type == 3 or currentCell.type == 4) then
    break
  else
    -- have a look around and find viable cells
    local possibleCells = neighbours(currentPosition, cellTestFn)
    if #possibleCells > 0 then
      -- let's try the first available cell
      currentPosition = possibleCells[1]
      visitedCells[currentPosition] = true
      path.push(currentPosition)
    elseif not path.empty() then
      -- get back one step
      currentPosition = path.pop()
    else
      -- there's no solution
      break
    end
  end
end

This fairly simple algorithm uses the neighbours function to obtain a list of cells that haven't been visited yet:

-- A shorthand for direction coordinates
local neighbourLocations = {
  [0] = {0, 1},
  [1] = {1, 0},
  [2] = {0, -1},
  [3] = {-1, 0},
}

local function neighbours(position0, fn)
  local neighbours = {}
  local currentCell = maze.map[position0]
  if type(currentCell)=='table' then
    local connections = currentCell.connections
    for i=0,3 do
      -- is this cell connected? (bit.band needs a bit-operations
      -- library, for example LuaJIT's bit module)
      if bit.band(connections, 2^i) >= 1 then
        local neighbourLocation = neighbourLocations[i]
        local position1 = {position0[1] + neighbourLocation[1],
          position0[2] + neighbourLocation[2]}
        if (position1[1]>=1 and position1[1] <= maze.width and
            position1[2]>=1 and position1[2] <= maze.height) then
          if type(fn)=="function" then
            if fn(maze.map[position1], position1) then
              table.insert(neighbours, position1)
            end
          else
            table.insert(neighbours, position1)
          end
        end
      end
    end
  end
  return neighbours
end

When this algorithm finishes, a valid path between the entry and exit points is stored in the path variable, represented by the stack data structure. The path variable will contain an empty stack if there's no solution for the maze.

How it works…

This pathfinding algorithm uses two main steps. First, it looks around the current maze cell to find cells that are connected to it with a passage, which results in a list of possible cells that haven't been visited yet. In this case, the algorithm always uses the first available cell from this list. Each step is recorded in the stack structure, so in the end, you can reconstruct the whole path from the exit point back to the entry point. If there are no unvisited cells to go to, it heads back to the previous cell taken from the stack. The most important part is the neighbours function, which determines where to go from the current point. It uses two input parameters: the current position and a cell testing function. It looks around the current cell in four directions in clockwise order: up, right, down, and left. There must be a passage from the current cell to each surrounding cell; otherwise, it just skips to the next direction. Another step determines whether the cell is within the rectangular maze region. Finally, the cell is passed into the user-defined testing function, which determines whether to include the cell in the list of usable cells. The maze cell testing function consists of a simple Boolean expression. It returns true if the cell has a correct cell type (not a wall) and hasn't been visited yet. A positive result leads to the inclusion of the cell in the list of usable cells. Note that even if this pathfinding algorithm finds a path to the exit point, it doesn't guarantee that this path is the shortest possible.
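The listings above call a stack() constructor that this excerpt leaves undefined. A minimal implementation along the following lines would work; this is a sketch matching the push/pop/empty calls used here, not necessarily the book's version:

-- a minimal LIFO stack built on a Lua table
local function stack()
  local items, top = {}, 0
  local s = {}
  function s.push(v)
    top = top + 1
    items[top] = v
  end
  function s.pop()
    if top == 0 then return nil end
    local v = items[top]
    items[top] = nil
    top = top - 1
    return v
  end
  function s.empty()
    return top == 0
  end
  return s
end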
Summary

We have learned how pathfinding works in games by solving a simple maze. With a pathfinding algorithm, you can create intelligent game opponents that won't jump into a lava lake at the first opportunity.

Resources for Article:

Further resources on this subject:
Mesh animation [article]
Getting into the Store [article]
Creating a Direct2D game window class [article]

Elasticsearch – Spicing Up a Search Using Geo

Packt
23 Jul 2015
18 min read
A geo point refers to the latitude and longitude of a point on Earth. Each location on Earth has its own unique latitude and longitude. Elasticsearch is aware of geo-based points and allows you to perform various operations on top of them. In many contexts, it's also necessary to consider a geo location component to obtain various functionalities. For example, say you need to search for all the nearby restaurants that serve Chinese food, or to find the nearest free cab. In some other situation, you may need to find which state a particular geo point belongs to, to understand where you are currently standing. This article by Vineeth Mohan, author of the book Elasticsearch Blueprints, is modeled such that all the examples relate to a real-life scenario, restaurant search, for better understanding. Here, we take the example of sorting restaurants based on geographical preferences. A number of cases, ranging from the simple, such as finding the nearest restaurant, to the more complex, such as categorization of restaurants based on distance, are covered in this article. What makes Elasticsearch unique and powerful is the fact that you can combine a geo operation with any other normal search query to yield results clubbed with both the location data and the query data. (For more resources related to this topic, see here.)

Restaurant search

Let's consider creating a search portal for restaurants. The following are its requirements:

To find the nearest restaurant with Chinese cuisine, which has the word ChingYang in its name.
To decrease the importance of all restaurants outside city limits.
To find the distance between the restaurant and the current point for each of the preceding restaurant matches.
To find whether the person is within a particular city's limits or not.
To aggregate all restaurants within a distance of 10 km. That is, for a radius of the first 10 km, we have to compute the number of restaurants. For the next 10 km, we need to compute the number of restaurants, and so on.

Data modeling for restaurants

First, we need to look at the aspects of the data and model it as a JSON document for Elasticsearch to make sense of it. A restaurant has a name, location information, and a rating. To store the location information, Elasticsearch has a provision to understand latitude and longitude, and it has features to conduct searches based on them. Hence, it would be best to use this feature. Let's see how we can do this. First, let's see what our document should look like:

{
  "name" : "Tamarind restaurant",
  "location" : {
    "lat" : 1.10,
    "lon" : 1.54
  }
}

Now, let's define the schema for the same:

curl -X PUT "http://$hostname:9200/restaurants" -d '{
  "index": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "analysis": {
    "analyzer": {
      "flat" : {
        "type" : "custom",
        "tokenizer" : "keyword",
        "filter" : "lowercase"
      }
    }
  }
}'

curl -X PUT "http://$hostname:9200/restaurants/restaurant/_mapping" -d '{
  "restaurant" : {
    "properties" : {
      "name" : { "type" : "string" },
      "location" : { "type" : "geo_point", "accuracy" : "1km" }
    }
  }
}'

Let's now index some documents in this index. An example would be the Tamarind restaurant data shown in the previous section.
We can index the data as follows:

curl -XPOST 'http://localhost:9200/restaurants/restaurant' -d '{
  "name": "Tamarind restaurant",
  "location": {
    "lat": 1.1,
    "lon": 1.54
  }
}'

Likewise, we can index any number of documents. For the sake of convenience, we have indexed only a total of five restaurants for this article. The latitude and longitude should be of this format; Elasticsearch also accepts two other formats (geohash and lat_lon), but let's stick to this one. As we have mapped the field location to the type geo_point, Elasticsearch is aware of what this information means and how to act upon it.

The nearest hotel problem

Let's assume that we are at a particular point where the latitude is 1.234 and the longitude is 2.132. We need to find the restaurants nearest to this location. For this purpose, the function_score query is the best option. We can use the decay (gauss) functionality of the function_score query to achieve this:

curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{
  "query": {
    "function_score": {
      "functions": [
        {
          "gauss": {
            "location": {
              "scale": "1km",
              "origin": [
                1.231,
                1.012
              ]
            }
          }
        }
      ]
    }
  }
}'

Here, we tell Elasticsearch to give a higher score to the restaurants near the reference point we gave it. The closer a restaurant is, the higher its importance.

Maximum distance covered

Now, let's move on to another example: finding restaurants that are within 10 km of my current position. Restaurants beyond 10 km are of no interest to me, so the search area amounts to a circle with a radius of 10 km around my current position, as shown in the following map:

Our best bet here is the geo distance filter. It can be used as follows:

curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{
  "query": {
    "filtered": {
      "filter": {
        "geo_distance": {
          "distance": "10km",
          "location": {
            "lat": 1.232,
            "lon": 1.112
          }
        }
      }
    }
  }
}'

Inside city limits

Next, I need to consider only those restaurants that are inside a particular city's limits; the rest are of no interest to me. As the city shown in the following map is rectangular in shape, this makes my job easier:

To see whether a geo point is inside a rectangle, we can use the bounding box filter. A rectangle is defined by feeding it the top-left point and the bottom-right point. Let's assume that the city is within the following rectangle, with the top-left point as X and Y and the bottom-right point as A and B:

curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "geo_bounding_box": {
          "location": {
            "top_left": {
              "lat": 2,
              "lon": 0
            },
            "bottom_right": {
              "lat": 0,
              "lon": 2
            }
          }
        }
      }
    }
  }
}'

Distance values between the current point and each restaurant

Now, consider the scenario where you need to find the distance between the user's location and each restaurant. How can we achieve this requirement? We can use scripts: the current geo coordinates are passed to the script, and then the query to find the distance to each restaurant is run, as in the following code.
Here, the current location is given as (1, 2):

curl -XPOST 'http://localhost:9200/restaurants/_search?pretty' -d '{
  "script_fields": {
    "distance": {
      "script": "doc['"'"'location'"'"'].arcDistanceInKm(1, 2)"
    }
  },
  "fields": [
    "name"
  ],
  "query": {
    "match": {
      "name": "chinese"
    }
  }
}'

We have used the function called arcDistanceInKm in the preceding query, which accepts the geo coordinates and then returns the distance between that point and the locations matched by the query. Note that the unit of the calculated distance is kilometers (km). You might have noticed the long run of single and double quotes before and after location in the script. This is shell escaping: the request body sits inside a single-quoted shell string, so each single quote around location has to be closed, wrapped in double quotes, and reopened; without this, the shell would mangle the request and processing would fail with a format error. The distances are calculated from the current point to the filtered hotels and are returned in the distance field of the response, as shown in the following code:

{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 0.7554128,
    "hits" : [ {
      "_index" : "restaurants",
      "_type" : "restaurant",
      "_id" : "AU08uZX6QQuJvMORdWRK",
      "_score" : 0.7554128,
      "fields" : {
        "distance" : [ 112.92927483176413 ],
        "name" : [ "Great chinese restaurant" ]
      }
    }, {
      "_index" : "restaurants",
      "_type" : "restaurant",
      "_id" : "AU08uZaZQQuJvMORdWRM",
      "_score" : 0.7554128,
      "fields" : {
        "distance" : [ 137.61635969665923 ],
        "name" : [ "Great chinese restaurant" ]
      }
    } ]
  }
}

Note that the distances measured from the current point to the hotels are direct distances and not road distances.

Restaurant out of city limits

One of my friends called and asked me to join him on his journey to the next city. As we were leaving, he was particular that he wanted to eat at some restaurant beyond our city's limits, but before the next city. The requirement translates to any restaurant that is a minimum of 15 km and a maximum of 100 km from the center of the city. Hence, we have something like a donut in which we have to conduct our search, as shown in the following map:

The area inside the donut is a match, but the area outside is not. For this donut-area calculation, we have the geo_distance_range filter to our rescue. Here, we can supply the minimum and maximum distances in the from and to fields to populate the results, as shown in the following code:

curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      },
      "filter": {
        "geo_distance_range": {
          "from": "15km",
          "to": "100km",
          "location": {
            "lat": 1.232,
            "lon": 1.112
          }
        }
      }
    }
  }
}'

Restaurant categorization based on distance

In an e-commerce restaurant search, it helps to increase the searchable characteristics of the application. This means that if we are able to give a snapshot of the results beyond the top-10 hits, it adds to the searchable characteristics of the search. For example, if we are able to show how many restaurants serve Indian, Thai, or other cuisines, it would actually help the user get a better idea of the result set.
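Ordering by distance is a closely related nicety. The introduction mentioned sorting restaurants by geographical preference; although this excerpt doesn't demonstrate it, the standard way is the _geo_distance sort, sketched below with the same index and origin point used in the earlier examples:

curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{
  "sort": [
    {
      "_geo_distance": {
        "location": {
          "lat": 1.231,
          "lon": 1.012
        },
        "order": "asc",
        "unit": "km"
      }
    }
  ],
  "query": {
    "match_all": {}
  }
}'

Each hit then comes back with its distance from the origin in the sort field, nearest first.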
Along the same lines, if we can tell the user whether a restaurant is near, at a medium distance, or far away, we can really strike a chord in the restaurant search user experience, as shown in the following map:

Implementing this is not hard, as we have the geo distance aggregation. In this aggregation type, we can handcraft the distance ranges we are interested in and create a bucket for each of them. We can also define the key names we need, as shown in the following code:

curl -XPOST 'http://localhost:9200/restaurants/_search' -d '{
  "aggs": {
    "distanceRanges": {
      "geo_distance": {
        "field": "location",
        "origin": "1.231, 1.012",
        "unit": "meters",
        "ranges": [
          {
            "key": "Near by Locations",
            "to": 200
          },
          {
            "key": "Medium distance Locations",
            "from": 200,
            "to": 2000
          },
          {
            "key": "Far Away Locations",
            "from": 2000
          }
        ]
      }
    }
  }
}'

In the preceding code, we categorized the restaurants under three distance ranges: nearby hotels (less than 200 meters), medium-distance hotels (between 200 and 2,000 meters), and faraway ones (greater than 2,000 meters). This logic was translated into the Elasticsearch query, from which we received the following results:

{
  "took": 44,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 5,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "distanceRanges": {
      "buckets": [
        {
          "key": "Near by Locations",
          "from": 0,
          "to": 200,
          "doc_count": 1
        },
        {
          "key": "Medium distance Locations",
          "from": 200,
          "to": 2000,
          "doc_count": 0
        },
        {
          "key": "Far Away Locations",
          "from": 2000,
          "doc_count": 4
        }
      ]
    }
  }
}

In the results, the doc_count field indicates how many restaurants fall in each distance range.

Aggregating restaurants based on their nearness

In the previous example, we saw the aggregation of restaurants into three categories based on their distance from the current point. Now, we can consider another scenario, in which we classify the restaurants on the basis of the geohash grids that they belong to. This kind of classification can be advantageous if the user would like to get a geographical picture of how the restaurants are distributed. Here is the code for a geohash-based aggregation of restaurants:

curl -XPOST 'http://localhost:9200/restaurants/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "DifferentGrids": {
      "geohash_grid": {
        "field": "location",
        "precision": 6
      },
      "aggs": {
        "restaurants": {
          "top_hits": {}
        }
      }
    }
  }
}'

You can see from the preceding code that we used the geohash_grid aggregation, named DifferentGrids, with the precision set to 6. The precision value can vary within the range of 1 to 12, with 1 being the lowest and 12 the highest precision. We also used another aggregation, named restaurants, inside the DifferentGrids aggregation. The restaurants aggregation uses the top_hits query to fetch the aggregated details from the DifferentGrids aggregation, which would otherwise return only the key and doc_count values.
So, running the preceding code gives us the following result:

{
  "took": 5,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 5,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "DifferentGrids": {
      "buckets": [
        {
          "key": "s009",
          "doc_count": 2,
          "restaurants": {... }
        },
        {
          "key": "s01n",
          "doc_count": 1,
          "restaurants": {... }
        },
        {
          "key": "s00x",
          "doc_count": 1,
          "restaurants": {... }
        },
        {
          "key": "s00p",
          "doc_count": 1,
          "restaurants": {... }
        }
      ]
    }
  }
}

As we can see from the response, there are four buckets with the key values s009, s01n, s00x, and s00p. These key values represent the different geohash grids that the restaurants belong to. From the preceding result, we can evidently say that the s009 grid contains two restaurants and each of the other grids contains one. A pictorial representation of the aggregation would be like the one shown on the following map:

Summary

We found that Elasticsearch can handle geo points and various geo-specific operations. The geo-specific and geo point operations covered in this article were searching for nearby restaurants (restaurants inside a circle), searching for restaurants within a distance range (restaurants inside a donut between two concentric circles), searching for restaurants inside a city (restaurants inside a rectangle), and categorization of restaurants by proximity. Apart from these, we can use Kibana, the flexible and powerful visualization tool from the Elasticsearch ecosystem, for geo-based visualizations.

Resources for Article:

Further resources on this subject:
Elasticsearch Administration [article]
Extending ElasticSearch with Scripting [article]
Indexing the Data [article]


An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js

Packt
23 Jul 2015
21 min read
In this article, Muzzamil Hussain, the author of the book Mastering JavaScript Promises, introduces us to promises in JavaScript and their implementation in Angular.js. (For more resources related to this topic, see here.)

Anyone who works with JavaScript knows that it demands mastery of asynchronous coding, and this skill doesn't come easily. You have to understand callbacks, and once you do, you soon realize that managing them is not an easy task and that, on their own, they are not an effective way of doing asynchronous programming. For those of you who have been through this experience, promises are not that new; even if you haven't used them in a recent project, you would really want to go for them. For those who use neither callbacks nor promises, understanding promises, and the difference between callbacks and promises, can be a hard task. Some of you have used promises in JavaScript through popular and mature tools such as Node.js, jQuery, or WinRT; you are already aware of the advantages of promises and how they make your work efficient and your code beautiful. For all three groups of professionals, gathering information on promises and their implementations in different libraries is quite a task: much of your time is spent collecting the right information about how to attach an error handler to a promise, what a deferred object is, and how to pass it on to different functions. Having the right information at the time you need it is the best virtue one could ask for. Keeping all these elements in mind, we have written a book named Mastering JavaScript Promises. This book is all about JavaScript and how promises are implemented in some of the most renowned libraries in the world. It provides a foundation for JavaScript and gradually takes you through the fruitful journey of learning promises in JavaScript. The composition of chapters is engineered so that it provides knowledge from the novice level to an advanced level. The book covers a wide range of topics, with both theoretical and practical content in place. You will learn about the evolution of JavaScript, different programming models, the asynchronous model, and how JavaScript uses it. The book then takes you right into implementation mode, with a whole set of chapters on the promise implementations of WinRT, Node.js, Angular.js, and jQuery. With easy-to-follow example code and simple language, you will absorb a huge amount of information on this topic. Needless to say, books on such topics are themselves an evolutionary process, so your suggestions are more than welcome. Here are a few extracts from the book to give you a glimpse of what we have in store for you; most of this section will focus on Angular.js and how promises are implemented in it. Let's start our journey with programming models.

Models

Models are basically templates upon which logic is designed and fabricated within a compiler/interpreter of a programming language, so that software engineers can use this logic in writing their software. Every programming language we use is designed on a particular programming model. Since software engineers are asked to solve a particular problem or to automate a particular service, they adopt programming languages as per the need.
There is no set rule that assigns a particular language to create products; engineers adopt any language based on the need.

The asynchronous programming model

Within the asynchronous programming model, tasks are interleaved with one another in a single thread of control. This single thread may have multiple embedded threads, and each thread may contain several tasks linked up one after another. This model is simpler than the threaded case, as the programmer always knows the priority of the task executing at a given slot of time. Consider a scenario in which an OS (or an application within the OS) decides how much time to allot to a task before giving the same chance to others. The behavior of the OS taking control from one task and passing it on to another task is called preempting.

Promise

The beauty of working with JavaScript's asynchronous events is that the program continues its execution even when it doesn't yet have a value it needs, because the work producing that value is still in progress. Such values are not-yet-known values from unfinished work, and this can make working with asynchronous events in JavaScript challenging. Promises are a programming construct that represents a value that is still unknown. Promises in JavaScript enable us to write asynchronous code in a manner that parallels synchronous code.

How to implement promises

So far, we have learned the concept of promise, its basic ingredients, and some of the basic functions it has to offer in nearly all of its implementations, but how are these implementations using it? Well, it's quite simple. Every implementation, either in the language or in the form of a library, maps the basic concept of promise to its compiler/interpreter or code. This allows the written code or functions to behave in the paradigm of promise, which ultimately presents its implementation. Promises are now part of the standard package for many languages, and, naturally, each has implemented them in its own way as per the need.

Implementing promises in Angular.js

Promise is all about how async behavior can be applied to a certain part of an application or to the whole. There is a long list of other JavaScript libraries where the concept of promises exists, but in Angular.js it's present in a much more efficient way than in most other client-side implementations. Promises come in two flavors in Angular.js: one is $q and the other is Q. What is the difference between them? We will explore that in detail in the following sections. For now, we will look at what promise means to Angular.js. There are many possible ways to implement promises in Angular.js. The most common one is to use the $q service, which is inspired by Kris Kowal's Q library. Mainly, Angular.js uses it to provide implementations of asynchronous methods. With Angular.js, the sequence of services is top to bottom, starting with $q, which is considered the top class; within it, many helper methods are embedded, for example, $q.reject() or $q.resolve(). Everything related to promises in Angular.js must follow the $q paradigm. Starting with the $q.when() method: it seems as if it creates a promise immediately, but rather it only normalizes the value it is given, which may or may not be a promise. If the value provided is a promise, $q.when() passes it through; if it's not, $q.when() wraps it in one.
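As a concrete illustration, here is a hedged sketch of that normalizing behavior, assuming an Angular 1.x component with $q and $http injected:

// $q.when() normalizes its argument: promises pass through,
// plain values are wrapped in an already-resolved promise
var p1 = $q.when(42);                    // plain value -> resolved promise
var p2 = $q.when($http.get('/api/x'));   // promise -> passed through

p1.then(function (value) {
  console.log(value); // 42
});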
The schematics of using promises in Angular.js

Since Kris Kowal's Q library is the inspiration for promise-style callback returns, Angular.js also follows it for its promise implementation. Many Angular.js services are promise-oriented in return type by default. This includes $interval, $http, and $timeout. However, there is a proper mechanism for using promises in Angular.js. Look at the following code and see how a promise maps itself within Angular.js:

var promise = AngularjsBackground();
promise.then(
  function(response) {
    // promise process
  },
  function(error) {
    // error reporting
  },
  function(progress) {
    // send progress
  });

All of the mentioned services in Angular.js return a single promise object. They might differ in the parameters they take, but in return all of them respond with a single promise object carrying multiple keys. For example, $http.get returns a single object whose success handler receives the four parameters data, status, headers, and config:

$http.get('/api/tv/serials/sherlockHolmes')
  .success(function(data, status, headers, config) {
    $scope.movieContent = data;
  });

If we employ the promises concept here, the same code will be rewritten as:

var promise = $http.get('/api/tv/serials/sherlockHolmes')
promise.then(
  function(payload) {
    $scope.serialContent = payload.data;
  });

The preceding code is more concise and easier to maintain than the version before it, which makes Angular.js more adaptable for the engineers using it.

Promise as a handle for callback

The implementation of promise in Angular.js defines your use of promise as a callback handle. The implementations not only define how to use promise in Angular.js, but also what steps one should take to make services "promise-returning". This states that you do something asynchronously, and once your job is completed, you trigger the then() method to either conclude your task or pass it on to another then() method: asynchronous_task.then().then().done(). In simpler form, you can do this to achieve the concept of promise as a handle for callbacks:

angular.module('TVSerialApp', [])
  .controller('GetSerialsCtrl',
    function($log, $scope, TeleService) {
      $scope.getserialListing = function(serial) {
        var promise =
          TeleService.getserial('SherlockHolmes');
        promise.then(
          function(payload) {
            $scope.listingData = payload.data;
          },
          function(errorPayload) {
            $log.error('failure loading serial', errorPayload);
          });
      };
    })
  .factory('TeleService', function($http) {
    return {
      getserial: function(id) {
        return $http.get('/api/tv/serials/sherlockHolmes' + id);
      }
    }
  });

Blindly passing arguments and nested promises

Whatever promise service you use, you must be very sure of what you are passing and how it can affect the overall working of your promise function. Blindly passing arguments can cause confusion for the controller, as it has to deal with its own results while also handling other requests. Say we are dealing with the $http.get service and you blindly pass too much load to it. Since it has to deal with its own results in parallel, it might get confused, which may result in callback hell. However, if you want to post-process the result instead, you have to deal with an additional parameter, $http.error. That way, the controller doesn't have to deal with its own result, and calls such as 404s and redirects will be handled for you.
You can also redo the preceding scenario by building your own promise and bringing back a result of your choice, with the payload that you want, as in the following code:

.factory('TVSerialApp', function($http, $log, $q) {
  return {
    getSerial: function(serial) {
      var deferred = $q.defer();
      $http.get('/api/tv/serials/sherlockHolmes' + serial)
        .success(function(data) {
          deferred.resolve({
            title: data.title,
            cost: data.price});
        }).error(function(msg, code) {
          deferred.reject(msg);
          $log.error(msg, code);
        });
      return deferred.promise;
    }
  }
});

By building a custom promise, you gain many advantages. You can control inputs and outputs, log error messages, transform inputs into desired outputs, and share progress status by using the deferred.notify(msg) method.

Deferred objects or composed promises

Since custom promises in Angular.js can sometimes be hard to handle and can malfunction in the worst case, promise offers another way to implement itself: transform the response within a then() method and return the transformed result to the calling method in an autonomous way. Considering the same code we used in the previous section:

this.getSerial = function(serial) {
  return $http.get('/api/tv/serials/sherlockHolmes' + serial)
    .then(
      function (response) {
        return {
          title: response.data.title,
          cost: response.data.price
        };
      });
};

The output we get from the preceding method is a chained, transformed promise. You can reuse the output for another transformation, chain it to another promise, or simply display the result. The controller can then be reduced to the following lines of code:

$scope.getSerial = function(serial) {
  service.getSerial(serial)
    .then(function(serialData) {
      $scope.serialData = serialData;
    });
};

This has significantly reduced the lines of code. It also helps in maintaining the service level, since the built-in failsafe of then() turns any error into a rejected promise and keeps the rest of the code intact.

Dealing with the nested calls

While using internal return values in the success function, you might sense that the promise code is missing one obvious thing: the error handler. A missing error handler can cause your code to stand still or run into a catastrophe from which it might not recover. If you want to overcome this, simply throw the errors. How? See the following code:

this.getserial = function(serial) {
  return $http.get('/api/tv/serials/sherlockHolmes' + serial)
    .then(
      function (response) {
        return {
          title: response.data.title,
          cost: response.data.price
        };
      },
      function (httpError) {
        // translate the error
        throw httpError.status + " : " + httpError.data;
      });
};

Now, whenever the code runs into an error situation, it returns a single string, not a bunch of $http statuses or config details. This can also save your entire code from going into standstill mode and help you in debugging. Also, if you attach log services, you can pinpoint the location that caused the error.
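On the subject of surfacing errors, $q promises also offer catch() and finally() as shorthand. The following sketch reuses the TeleService factory from the earlier listing, with $scope and $log assumed to be injected as before (in very old, ES3-era browsers, these reserved words had to be written as promise['catch'] and promise['finally']):

// catch(fn) is shorthand for then(null, fn); finally(fn) runs either way
TeleService.getserial('sherlockHolmes')
  .then(function (payload) {
    $scope.serialData = payload.data;
  })
  .catch(function (err) {
    $log.error('failed to load serial', err);
  })
  .finally(function () {
    $scope.loading = false;
  });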
Concurrency in Angular.js

We all want to achieve maximum output in a single slot of time by invoking multiple services and getting results from them together. Angular.js provides this functionality via its $q.all service; you can invoke many services at a time, and if you want to join all or any of them, you just need then() to bring them together in the sequence you want. Let's define the payload array first:

[
  { url: 'myUr1.html' },
  { url: 'myUr2.html' },
  { url: 'myUr3.html' }
]

And now this array will be used by the following code:

service('asyncService', function($http, $q) {
  return {
    getDataFrmUrls: function(urls) {
      var deferred = $q.defer();
      var collectCalls = [];
      angular.forEach(urls, function(url) {
        collectCalls.push($http.get(url.url));
      });

      $q.all(collectCalls)
        .then(
          function(results) {
            deferred.resolve(
              JSON.stringify(results))
          },
          function(errors) {
            deferred.reject(errors);
          },
          function(updates) {
            deferred.notify(updates);
          });
      return deferred.promise;
    }
  };
});

A promise is created by executing $http.get for each URL and is added to an array. The $q.all function takes the array of promises as input and combines all the results into a single promise containing each answer. This gets converted to JSON and passed on to the caller function. The result might look like this:

[
  promiseOneResultPayload,
  promiseTwoResultPayload,
  promiseThreeResultPayload
]

The combination of success and error

$http returns a promise, and you can define success and error handlers on it. Many think that these functions are a standard part of promise, but in reality they are not as they seem to be. Using a promise means you are calling then(), which takes two parameters: a callback function for success and a callback function for failure. Imagine this code:

$http.get("/api/tv/serials/sherlockHolmes")
  .success(function(name) {
    console.log("The tele serial name is : " + name);
  })
  .error(function(response, status) {
    console.log("Request failed " + response + " status code: " + status);
  });

This can be rewritten with then() as:

$http.get("/api/tv/serials/sherlockHolmes")
  .then(function(response) {
    console.log("The tele serial name is :" + response.data);
  }, function(result) {
    console.log("Request failed : " + result);
  });

One can use either the success or error function depending on the situation, but there is a benefit in using $http: it's convenient. The error function provides response and status, and the success function provides the response data. This is not considered a standard part of a promise.
Anyone can add their own versions of these functions to promises, as shown in the following code (config is assumed to be in scope, as it is inside $http):

// my own created promise of success function
promise.success = function(fn) {
  promise.then(function(res) {
    fn(res.data, res.status, res.headers, config);
  });
  return promise;
};

// my own created promise of error function
promise.error = function(fn) {
  promise.then(null, function(res) {
    fn(res.data, res.status, res.headers, config);
  });
  return promise;
};

The safe approach

So the real matter of discussion is: what to use with $http, success or error? Keep in mind that there is no standard way of writing promises; we have to look at many possibilities. If you change your code so that your promise is no longer returned from $http, for instance when you load data from a cache, your code will break if it expects success or error to be there. So, the best way is to use then() whenever possible. This not only generalizes the overall approach of writing promises, but also removes guesswork from your code.

Route your promise

Angular.js also has a feature to resolve promises as part of routing. This feature is helpful when you are dealing with more than one promise at a time. Here is how you can set up routing:

$routeProvider
  .when('/api/', {
    templateUrl: 'index.php',
    controller: 'IndexController'
  })
  .when('/video/', {
    templateUrl: 'movies.php',
    controller: 'moviesController'
  })

As you can observe, we have two routes: the api route takes us to the index page, with IndexController, and the video route takes us to the movies page.

app.controller('moviesController', function($scope, MovieService) {
  $scope.name = null;

  MovieService.getName().then(function(name) {
    $scope.name = name;
  });
});

There is a problem: until MovieService gets the name from the backend, name is null. This means that if our view binds to name, it is first empty and then set. This is where the router's resolve feature comes in; it solves the problem of name starting out null. Here's how we can do it:

var getName = function(MovieService) {
  return MovieService.getName();
};

$routeProvider
  .when('/api/', {
    templateUrl: 'index.php',
    controller: 'IndexController'
  })
  .when('/video/', {
    templateUrl: 'movies.php',
    controller: 'moviesController',
    resolve: {
      name: getName
    }
  })

After adding the resolve, we can revisit our code for the controller; because the route does not render until name is resolved, the controller receives the value directly:

app.controller('moviesController', function($scope, name) {
  $scope.name = name;
});

You can also define multiple resolves for a route to get the best possible output:

$routeProvider
  .when('/video', {
    templateUrl: '/MovieService.php',
    controller: 'MovieServiceController',
    // multiple resolves live in a single resolve map
    resolve: {
      name: getName,
      MovieService: getMovieService,
      anythingElse: getSomeThing,
      someThing: getMoreSomeThing
    }
  })
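Since a resolve function is injected like any other Angular factory function, it can also receive $route to read URL parameters. The following is a hypothetical sketch, not from the book; MovieService.getMovie and the :movieId parameter are assumptions made for illustration:

// hypothetical: load one movie based on a :movieId route parameter
var getMovie = function (MovieService, $route) {
  return MovieService.getMovie($route.current.params.movieId);
};

$routeProvider.when('/video/:movieId', {
  templateUrl: 'movie.php',
  controller: 'movieController',
  resolve: {
    movie: getMovie   // injected into the controller as 'movie'
  }
});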
We are now fully aware of what the pros and cons of using JavaScript are, which has brought us here to implement the use of promise. Summary This article/post is just to give an understanding of what we have in our book for you; focusing on just Angular.js doesn't mean we have only one technology covered in the entire book for implementation of promise, it's just to give you an idea about how the flow of information goes from simple to advanced level, and how easy it is to keep on following the context of chapters. Within this book, we have also learned about Node.js, jQuery, and WinRT so that even readers from different experience levels can read, understand, and learn quickly and become an expert in promises. Resources for Article: Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article] Installing jQuery [article] Cordova Plugins [article]