
How-To Tutorials

Templates for Web Pages

Packt
13 Oct 2015
13 min read
In this article, by Kai Nacke, author of the book D Web Development, we will learn that every website has some recurring elements, often called a theme. Templates are an easy way to define these elements only once and then reuse them. A template engine is included in vibe.dwith the so-called Diet templates. The template syntax is based on the Jade templates (http://jade-lang.com/), which you might already know about. In this article, you will learn the following: Why are templates useful Key concepts of Diet templates: inheritance, include and blocks How to use filters and how to create your own filter (For more resources related to this topic, see here.) Using templates Let's take a look at the simple HTML5 page with a header, footer, navigation bar and some content in the following: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Demo site</title> <link rel="stylesheet" type="text/css" href="demo.css" /> </head> <body> <header> Header </header> <nav> <ul> <li><a href="link1">Link 1</a></li> <li><a href="link2">Link 2</a></li> <li><a href="link3">Link 3</a></li> </ul> </nav> <article> <h1>Title</h1> <p>Some content here.</p> </article> <footer> Footer </footer> </body> </html> The formatting is done with a CSS file, as shown in the following: body { font-size: 1em; color: black; background-color: white; font-family: Arial; } header { display: block; font-size: 200%; font-weight: bolder; text-align: center; } footer { clear: both; display: block; text-align: center; } nav { display: block; float: left; width: 25%; } article { display: block; float: left; } Despite being simple, this page has elements that you often find on websites. If you create a website with more than one page, then you will use this structure on every page in order to provide a consistent user interface. From the 2nd page, you will violate the Don't Repeat Yourself(DRY) principle: the header and footer are the elements with fixed content. The content of the navigation bar is also fixed but not every item is always displayed. Only the real content of the page (in the article block) changes with every page. Templates solve this problem. A common approach is to define a base template with the structure. For each page, you will define a template that inherits from the base template and adds the new content. Creating your first template In the following sections, you will create a Diet template from the HTML page using different techniques. Turning the HTML page into a Diet template Let's start with a one-to-one translation of the HTML page into a Diet template. The syntax is based on the Jade templates. It looks similar to the following: doctype html html head meta(charset='utf-8') title Demo site link(rel='stylesheet', type='text/css', href='demo.css') body header | Header nav ul li a(href='link1') Link 1 li a(href='link2') Link 2 li a(href='link3') Link 3 article h1 Title p Some content here. footer | Footer The template resembles the HTML page. Here are the basic syntax rules for a template: The first word on a line is an HTML tag Attributes of an HTML tag are written as a comma-separated list surrounded by parenthesis A tag may be followed by plain text that may contain the HTML code Plain text on a new line starts with the pipe symbol Nesting of elements is done by increasing the indentation. If you want to see the result of this template, save the code as index.dt and put it together with the demo.css CSS file in the views folder. The Jade templates have a special syntax for the nested elements. 
The list item/anchor pair from the preceding code could be written in one line, as follows: li: a(href='link1') Link1 This syntax is currently not supported by vibe.d. Now, you need to create a small application to see the result of the template by following the given steps: Create a new project template with dub, using the following command: $ dub init template vibe.d Save the template as the views/index.dt file. Copy the demo.css CSS file in the public folder. Change the generated source/app.d application to the following: import vibe.d; shared static this() { auto router = new URLRouter; router.get("/", staticTemplate!"index.dt"); router.get("*", serveStaticFiles("public/")); auto settings = new HTTPServerSettings; settings.port = 8080; settings.bindAddresses = ["::1", "127.0.0.1"]; listenHTTP(settings, router); logInfo("Please open http://127.0.0.1:8080/ in your browser."); } Run dub inside the project folder to start the application and then browse to http://127.0.0.1:8080/ to see the resulting page. The application uses a new URLRouter class. This class is used to map a URL to a web page. With the router.get("/", staticTemplate!"index.dt");statement, every request for the base URL is responded with rendering of the index.dt template. The router.get("*", serveStaticFiles("public/")); statement uses a wild card to serve all other requests as static files that are stored in the public folder. Adding inheritance Up to now, the template is only a one-to-one translation of the HTML page. The next step is to split the file into two, layout.dt and index.dt. The layout.dtfile defines the general structure of a page while index.dt inherits from this file and adds new content. The key to template inheritance is the definition of a block. A block has a name and contains some template code. A child template may replace the block, append, or prepend the content to a block. In the following layout.dt file, four blocks are defined: header, navigation, content and footer. For all the blocks, except content, a default text is defined, as follows: doctype html html head meta(charset='utf-8') title Demo site link(rel='stylesheet', type='text/css', href='demo.css') body block header header Header block navigation nav ul li <a href="link1">Link 1</a> li <a href="link2">Link 2</a> li <a href="link3">Link 3</a> block content block footer footer Footer The template in the index.dt file inherits this layout and replaces the block content, as shown here: extends layout block content article h1 Title p Some content here. You can put both the files into the views folder and run dub again. The rendered page in your browser still looks the same. You can now add more pages and reuse the layout. It is also possible to change the common elements that you defined in the header, footer and navigation blocks. There is no restriction on the level of inheritance. This allows you to construct very sophisticated template systems. Using include Inheritance is not the only way to avoid repetition of template code. With the include keyword, you insert the content of another file. This allows you to put the reusable template code in separate files. 
As an example, just put the following navigation in a separate navigation.dtfile: nav     ul         li <a href="link1">Link 1</a>         li <a href="link2">Link 2</a>         li <a href="link3">Link 3</a> The index.dt file uses the include keyword to insert the navigation.dt file, as follows: doctype html html     head        meta(charset='utf-8')         title Demo site         link(rel='stylesheet', type='text/css', href='demo.css')     body         header Header         include navigation         article             h1 Title             p Some content here.         footer Footer Just as with the inheritance example, you can put both the files into the views folder and run dub again. The rendered page in your browser still looks the same. The Jade templates allow you to apply a filter to the included content. This is not yet implemented. Integrating other languages with blocks and filters So far, the templates only used the HTML content. However, a web application usually builds on a bunch of languages, most often integrated in a single document, as follows: CSS styles inside the style element JavaScript code inside the script element Content in a simplified markup language such as Markdown Diet templates have two mechanisms that are used for integration of other languages. If a tag is followed by a dot, then the block is treated as plain text. For example, the following template code: p. Some text And some more text It translates into the following: <p>     Some text     And some more text </p> The same can also be used for scripts and styles. For example, you can use the following script tag with the JavaScript code in it: script(type='text/javascript').     console.log('D is awesome') It translates to the following: <script type="text/javascript"> console.log('D is awesome') </script> An alternative is to use a filter. You specify a filter with a colon that is followed by the filter name. The script example can be written with a filter, as shown in the following: :javascript     console.log('D is awesome') This is translated to the following: <script type="text/javascript">     //<![CDATA[     console.log('D is aewsome')     //]]> </script> The following filters are provided by vibe.d: javascript for JavaScript code css for CSS styles markdown for content written in Markdown syntax htmlescapeto escape HTML symbols The css filter works in the same way as the javascript filter. The markdown filter accepts the text written in the Markdown syntax and translates it into HTML. Markdown is a simplified markup language for web authors. The syntax is available on the internet at http://daringfireball.net/projects/markdown/syntax. Here is our template, this time using the markdown filter for the navigation and the article content: doctype html html     head         meta(charset='utf-8')         title Demo site         link(rel='stylesheet', type='text/css', href='demo.css')     body        header Header         nav             :markdown                 - [Link 1](link1)                 - [Link 2](link2)                 - [Link 3](link3)         article             :markdown                 Title                 =====                 Some content here.         footer Footer The rendered HTML page is still the same. The advantage is that you have less to type, which is good if you produce a lot of content. The disadvantage is that you have to remember yet another syntax. A normal plain text block can contain HTML tags, as follows: p. 
Click this <a href="link">link</a> This is rendered as the following: <p>     Click this <a href="link">link</a> </p> There are situations where you want to treat even the HTML tags as plain text, for example, if you want to explain HTML syntax. In this case, you use the htmlescape filter, as follows: p     :htmlescape         Link syntax: <a href="url target">text to display</a> This is rendered as the following: <p>     Link syntax: &lt;a href="url target"&gt;text to display&lt;/a&gt; </p> You can also add your own filters. The registerDietTextFilter() function is provided by vibe.d to register new filters. This function takes the name of the filter and a pointer to the filter function. The filter function is called with the text to filter and the indentation level, and it returns the filtered text. For example, you can use this functionality for pretty printing of D code, as follows: Create a new project with dub, using the following command: $ dub init filter vibe.d Create the index.dt template file in the views folder. Use the new dcode filter to format the D code, as shown in the following: doctype html head title Code filter example :css .keyword { color: #0000ff; font-weight: bold; } body p You can create your own functions. :dcode T max(T)(T a, T b) { if (a > b) return a; return b; } Implement the filter function in the app.d file in the source folder. The filter function outputs the text inside a <pre> tag. Identified keywords are put inside the <span class="keyword"> element to allow custom formatting. The whole application is as follows: import vibe.d; string filterDCode(string text, size_t indent) { import std.regex; import std.array; auto dst = appender!string; filterHTMLEscape(dst, text, HTMLEscapeFlags.escapeQuotes); auto re = regex(r"(^|\s)(if|return)(;|\s)"); text = replaceAll(dst.data, re, "$1<span class=\"keyword\">$2</span>$3"); auto lines = splitLines(text); string indent_string = "\n"; while (indent-- > 0) indent_string ~= "\t"; string ret = indent_string ~ "<pre>"; foreach (ln; lines) ret ~= indent_string ~ ln; ret ~= indent_string ~ "</pre>"; return ret; } shared static this() { registerDietTextFilter("dcode", &filterDCode); auto settings = new HTTPServerSettings; settings.port = 8080; settings.bindAddresses = ["::1", "127.0.0.1"]; listenHTTP(settings, staticTemplate!"index.dt"); logInfo("Please open http://127.0.0.1:8080/ in your browser."); } Compile and run this application to see that the keywords are rendered bold and blue. Summary In this article, we have seen how to create a Diet template using different techniques such as translating the HTML page into a Diet template, adding inheritance, using include, and integrating other languages with blocks and filters. Resources for Article: Further resources on this subject: MODx Web Development: Creating Lists [Article] MODx 2.0: Web Development Basics [Article] Ruby with MongoDB for Web Development [Article]

Getting Places

Packt
13 Oct 2015
8 min read
In this article by Nafiul Islam, the author of Mastering Pycharm, we'll learn all about navigation. It is divided into three parts. The first part is called Omni, which deals with getting to anywhere from any place. The second is called Macro, which deals with navigating to places of significance. The third and final part is about moving within a file and it is called Micro. By the end of this article, you should be able to navigate freely and quickly within PyCharm, and use the right tool for the job to do so. Veteran PyCharm users may not find their favorite navigation tool mentioned or explained. This is because the methods of navigation described throughout this article will lead readers to discover their own tools that they prefer over others. (For more resources related to this topic, see here.) Omni In this section, we will discuss the tools that PyCharm provides for a user to go from anywhere to any place. You could be in your project directory one second, the next, you could be inside the Python standard library or a class in your file. These tools are generally slow or at least slower than more precise tools of navigation provided. Back and Forward The Back and Forward actions allow you to move your cursor back to the place where it was previously for more than a few seconds or where you've made edits. This information persists throughout sessions, so even if you exit the IDE, you can still get back to the positions that you were in before you quit. This falls into the Omni category because these two actions could potentially get you from any place within a file to any place within a file in your directory (that you have been to) to even parts of the standard library that you've looked into as well as your third-party Python packages. The Back and Forward actions are perhaps two of my most used navigation actions, and you can use Keymap. Or, one can simply click on the Navigate menu to see the keyboard shortcuts: Macro The difference between Macro and Omni is subtle. Omni allows you to go to the exact location of a place, even a place of no particular significance (say, the third line of a documentation string) in any file. Macro, on the other hand, allows you to navigate anywhere of significance, such as a function definition, class declaration, or particular class method. Go to definition or navigate to declaration Go to definition is the old name for Navigate to Declaration in PyCharm. This action, like the one previously discussed, could lead you anywhere—a class inside your project or a third party library function. What this action does is allow you to go to the source file declaration of a module, package, class, function, and so on. Keymap is once again useful in finding the shortcut for this particular action. Using this action will move your cursor to the file where the class or function is declared, may it be in your project or elsewhere. Just place your cursor on the function or class and invoke the action. Your cursor will now be directly where the function or class was declared. There is, however, a slight problem with this. If one tries to go to the declaration of a .so object, such as the datetime module or the select module, what one will encounter is a stub file (discussed in detail later). These are helper files that allow PyCharm to give you the code completion that it does. Modules that are .so files are indicated by a terminal icon, as shown here: Search Everywhere The action speaks for itself. You search for classes, files, methods, and even actions. 
Universally invoked using double Shift (pressing Shift twice in quick succession), this nifty action looks similar to any other search bar. Search Everywhere searches only inside your project, by default; however, one can also use it to search non-project items as well. Not using this option leads to faster search and a lower memory footprint. Search Everywhere is a gateway to other search actions available in PyCharm. In the preceding screenshot, one can see that Search Everywhere has separate parts, such as Recent Files and Classes. Each of these parts has a shortcut next to their section name. If you find yourself using Search Everywhere for Classes all the time, you might start using the Navigate Class action instead which is much faster. The Switcher tool The Switcher tool allows you to quickly navigate through your currently open tabs, recently opened files as well as all of your panels. This tool is essential since you always navigate between tabs. A star to the left indicates open tabs; everything else is a recently opened or edited file. If you just have one file open, Switcher will show more of your recently opened files. It's really handy this way since almost always the files that you want to go to are options in Switcher. The Project panel The Project panel is what I use to see the structure of my project as well as search for files that I can't find with Switcher. This panel is by far the most used panel of all, and for good reason. The Project panel also supports search; just open it up and start typing to find your file. However, the Project panel can give you even more of an understanding of what your code looks similar to if you have Show Members enabled. Once this is enabled, you can see the classes as well as the declared methods inside your files. Note that search works just like before, meaning that your search is limited to only the files/objects that you can see; if you collapse everything, you won't be able to search either your files or the classes and methods in them. Micro Micro deals with getting places within a file. These tools are perhaps what I end up using the most in my development. The Structure panel The Structure panel gives you a bird's eye view of the file that you are currently have your cursor on. This panel is indispensable when trying to understand a project that one is not familiar with. The yellow arrow indicates the option to show inherited fields and methods. The red arrow indicates the option to show field names, meaning if that it is turned off, you will only see properties and methods. The orange arrow indicates the option to scroll to and from the source. If both are turned on (scroll to and scroll from), where your cursor is will be synchronized with what method, field, or property is highlighted in the structure panel. Inherited fields are grayed out in the display. Ace Jump This is my favorite navigation plugin, and was made by John Lindquist who is a developer at JetBrains (creators of PyCharm). Ace Jump is inspired from the Emacs mode with the same name. It allows you to jump from one place to another within the same file. Before one can use Ace Jump, one has to install the plugin for it. Ace Jump is usually invoked using Ctrl or command + ; (semicolon). You can search for Ace Jump in Keymap as well, and is called Ace Jump. Once invoked, you get a small box in which you can input a letter. Choose a letter from the word that you want to navigate to, and you will see letters on that letter pop up immediately. 
If we were to hit D, the cursor would move to the position indicated by D. This might seem long winded, but it actually leads to really fast navigation. If we wanted to select the word indicated by the letter, then we'd invoke Ace Jump twice before entering a letter. This turns the Ace Jump box red. Upon hitting B, the named parameter rounding will be selected. Often, we don't want to go to a word, but rather the beginning or the end of a line. In order to do this, just hit invoke Ace Jump and then the left arrow for line beginnings or the right arrow for line endings. In this case, we'd just hit V to jump to the beginning of the line that starts with num_type. This is an example, where we hit left arrow instead of the right one, and we get line-ending options. Summary In this article, I discussed some of the best tools for navigation. This is by no means an exhaustive list. However, these tools will serve as a gateway to more precise tools available for navigation in PyCharm. I generally use Ace Jump, Back, Forward, and Switcher the most when I write code. The Project panel is always open for me, with the most used files having their classes and methods expanded for quick search. Resources for Article: Further resources on this subject: Enhancing Your Blog with Advanced Features [article] Adding a developer with Django forms [article] Deployment and Post Deployment [article]

Swift Power and Performance

Packt
12 Oct 2015
14 min read
In this article by Kostiantyn Koval, author of the book Swift High Performance, we will learn about Swift, its performance and optimization, and how to achieve high performance. (For more resources related to this topic, see here.) Swift speed I could guess you are interested in Swift speed and are probably wondering "How fast can Swift be?" Before we even start learning Swift and discovering all the good things about it, let's answer this right here and right now. Let's take an array of 100,000 random numbers, sort it in Swift, Objective-C, and C using a standard sort function from the standard library (sort for Swift, qsort for C, and compare for Objective-C), and measure how much time it takes. In order to sort an array with 100,000 integer elements, the following are the timings: Swift 0.00600 sec C 0.01396 sec Objective-C 0.08705 sec The winner is Swift! Swift is 14.5 times faster than Objective-C and 2.3 times faster than C. In other examples and experiments, C is usually faster than Objective-C, and Swift is faster still. Comparing the speed of functions You know how functions and methods are implemented and how they work. Let's compare the performance and speed of global functions and different method types. For our test, we will use a simple add function. Take a look at the following code snippet: func add(x: Int, y: Int) -> Int { return x + y } class NumOperation { func addI(x: Int, y: Int) -> Int class func addC(x: Int, y: Int) -> Int static func addS(x: Int, y: Int) -> Int } class BigNumOperation: NumOperation { override func addI(x: Int, y: Int) -> Int override class func addC(x: Int, y: Int) -> Int } For the measurement and code analysis, we use a simple loop in which we call those different methods: measure("addC") { var result = 0 for i in 0...2000000000 { result += NumOperation.addC(i, y: i + 1) // result += test different method } print(result) } Here are the results. All the methods perform exactly the same. Even more so, their assembly code looks exactly the same, except for the name of the function call: Global function: add(10, y: 11) Static: NumOperation.addS(10, y: 11) Class: NumOperation.addC(10, y: 11) Static subclass: BigNumOperation.addS(10, y: 11) Overridden subclass: BigNumOperation.addC(10, y: 11) Even though the BigNumOperation addC class function overrides the NumOperation addC function, when you call it directly there is no need for a vtable lookup. The instance method call looks a bit different: Instance: let num = NumOperation() num.addI(10, y: 11) Subclass overridden instance: let bigNum = BigNumOperation() bigNum.addI(10, y: 11) One difference is that we need to initialize a class and create an instance of the object. In our example, this is not so expensive an operation because we do it outside the loop and it takes place only once. The loop that calls the instance method looks exactly the same. As you can see, there is almost no difference between the global function and the static and class methods. The instance method looks a bit different, but it doesn't have any major impact on performance. Also, even though this is true for simple use cases, there is a difference between them in more complex examples. Let's take a look at the following code snippet: let baseNumType = arc4random_uniform(2) == 1 ? BigNumOperation.self : NumOperation.self for i in 0...loopCount { result += baseNumType.addC(i, y: i + 1) } print(result) The only difference we incorporated here is that instead of specifying the NumOperation class type at compile time, we choose it randomly at runtime.
And because of this, the Swift compiler doesn't know what method should be called at compile time—BigNumOperation.addC or NumOperation.addC. This small change has an impact on the generated assembly code and performance. A summary of the usage of functions and methods Global functions are the simplest and give the best performance. Too many global functions, however, make the code hard to read and reason. Static type methods, which can't be overridden have the same performance as global functions, but they also provide a namespace (its type name), so our code looks clearer and there is no adverse effect on performance. Class methods, which can be overridden could lead to a decrease in performance, and they should be used when you need class inheritance. In other cases, static methods are preferred. The instance method operates on the instance of the object. Use instance methods when you need to operate on the data of that instance. Make methods final when you don't need to override them. This gives an extra tip for the compiler for optimization, and performance could be increased because of it. Intelligent code Because Swift is a static and strongly typed language, it can read, understand, and optimize code very well. It tries to avoid the execution of all unnecessary code. For a better explanation, let's take a look at this simple example: class Object { func nothing() { } } let object = Object() object.nothing() object.nothing() We create an instance of the Object class and call a nothing method. The nothing method is empty, and calling it does nothing. The Swift compiler understands this and removes those method calls. After this, we have only one line of code: let object = Object() The Swift compiler can also remove the objects created that are never used. It reduces memory usage and unnecessary function calls, which also reduces CPU usage. In our example, the object instance is not used after removing the nothing method call and the creation of object can be removed as well. In this way, Swift removes all three lines of code and we end up with no code to execute at all. Objective-C, in comparison, can't do this optimization. Because it has a dynamic runtime, the nothing method's implementation can be changed to do some work at runtime. That's why Objective-C can't remove empty method calls. This optimization might not seem like a big win but let's take a look at another—a bit more complex—example that uses more memory: class Object { let x: Int let y: Int let z: Int init(x: Int) { self.x = x self.y = x * 2 self.z = y * 2 } func nothing() { } } We have added some Int data to our Object class to increase memory usage. Now, the Object instance would use at least 24 bytes (3 * int size; Int uses 4 bytes in the 64 bit architecture). Let's also try to increase the CPU usage by adding more instructions, using a loop: for i in 0...1_000_000 { let object = Object(x: i) object.nothing() object.nothing() } print("Done") Integer literals can use the underscore sign (_) to improve readability. So, 1_000_000_000 is the same as 1000000000. Now, we have 3 million instructions and we would use 24 million bytes (about 24 MB). This is quite a lot for a type of operation that actually doesn't do anything. As you can see, we don't use the result of the loop body. For the loop body, Swift does the same optimization as in previous example and we end up with an empty loop: for i in 0...1_000_000 { } The empty loop can be skipped as well. 
As a result, we have saved 24 MB of memory usage and 3 million method calls. Dangerous functions There are some functions and instructions that sometimes don't provide any value for the application but the Swift compiler can't skip them because that could have a very negative impact on performance. Console print Printing a statement to the console is usually used for debugging purposes. The print and debugPrint instructions aren't removed from the application in release mode. Let's explore this code: for i in 0...1_000_000 { print(i) } The Swift compiler treats print and debugPrint as valid and important instructions that can't be skipped. Even though this code does nothing, it can't be optimized, because Swift doesn't remove the print statement. As a result, we have 1 million unnecessary instructions. As you can see, even very simple code that uses the print statement could decrease an application's performance very drastically. The loop with the 1_000_000 print statement takes 5 seconds, and that's a lot. It's even worse if you run it in Xcode; it would take up to 50 seconds. It gets all the more worse if you add a print instruction to the nothing method of an Object class from the previous example: func nothing() { print(x + y + z) } In that case, a loop in which we create an instance of Object and call nothing can't be eliminated because of the print instruction. Even though Swift can't eliminate the execution of that code completely, it does optimization by removing the creation instance of Object and calling the nothing method, and turns it into simple loop operation. The compiled code after optimization looks like this: // Initial Source Code for i in 0...1_000 { let object = Object(x: i) object.nothing() object.nothing() } // Optimized Code var x = 0, y = 0, z = 0 for i in 0...1_000_000 { x = i y = x * 2 z = y * 2 print(x + y + z) print(x + y + z) } As you can see, this code is far from perfect and has a lot of instructions that actually don't give us any value. There is a way to improve this code, so the Swift compiler does the same optimization as without print. Removing print logs To solve this performance problem, we have to remove the print statements from the code before compiling it. There are different ways of doing this. Comment out The first idea is to comment out all print statements of the code in release mode: //print("A") This will work but the next time when you want to enable logs, you will need to uncomment that code. This is a very bad and painful practice. But there is a better solution to it. Commented code is bad practice in general. You should be using a source code version control system, such as Git, instead. In this way, you can safely remove the unnecessary code and find it in the history if you need it later. Using a build configuration We can enable print only in debug mode. To do this, we will use a build configuration to conditionally exclude some code. First, we need to add a Swift compiler custom flag. To do this, select a project target and then go to Build Settings | Other Swift Flags. In the Swift Compiler - Custom Flags section and add the –D DEBUG flag for debug mode, like this: After this, you can use the DEBUG configuration flag to enable code only in debug mode. We will define our own print function. It will generate a print statement only in debug mode. 
In release mode, this function will be empty, and the Swift compiler will successfully eliminate it: func D_print(items: Any..., separator: String = " ", terminator: String = "n") { #if DEBUG print(items, separator: separator, terminator: terminator) #endif } Now, everywhere instead of print, we will use D_print: func nothing() { D_print(x + y + z) } You can also create a similar D_debugPrint function. Swift is very smart and does a lot of optimization, but we also have to make our code clear for people to read and for the compiler to optimize. Using a preprocessor adds complexity to your code. Use it wisely and only in situations when normal if conditions won't work, for instance, in our D_print example. Improving speed There are a few techniques that can simply improve code performance. Let's proceed directly to the first one. final You can create a function and property declaration with the final attribute. Adding the final attribute makes it non-overridable. The subclasses can't override that method or property. When you make a method non-overridable, there is no need to store it in vtable and the call to that function can be performed directly without any function address lookup in vtable: class Animal { final var name: String = "" final func feed() { } } As you have seen, final methods perform faster than non-final methods. Even such small optimization could improve an application's performance. It not only improves performance but also makes the code more secure. This way, you prevent a method from being overridden and prevent unexpected and incorrect behavior. Enabling the Whole Module Optimization setting would achieve very similar optimization results, but it's better to mark a function and property declaration explicitly as final, which would reduce the compiler's work and speed up the compilation. The compilation time for big projects with Whole Module Optimization could be up to 5 minutes in Xcode 7. Inline functions As you have seen, Swift can do optimization and inline some function calls. This way, there is no performance penalty for calling a function. You can manually enable or disable inline functions with the @inline attribute: @inline(__always) func someFunc () { } @inline(never) func someFunc () { } Even though you can manually control inline functions, it's usually better to leave it to the Swift complier to do this. Depending on the optimization settings, the Swift compiler applies different inlining techniques. The use-case for @inline(__always) would be very simple one-line functions that you always want to be inline. Value objects and reference objects There are many benefits of using immutable value types. Value objects make code not only safer and clearer but also faster. They have better speed and performance than reference objects; here is why. Memory allocation A value object can be allocated in the stack memory instead of the heap memory. Reference objects need to be allocated in the heap memory because they can be shared between many owners. Because value objects have only one owner, they can be allocated safely in the stack. Stack memory is way faster than heap memory. The second advantage is that value objects don't need reference counting memory management. As they can have only one owner, there is no such thing as reference counting for value objects. With Automatic Reference Counting (ARC) we don't think much about memory management, and it mostly looks transparent for us. 
Even though code looks the same when using reference objects and value objects, ARC adds extra retain and release method calls for reference objects. Avoiding Objective-C In most cases, Objective-C, with its dynamic runtime, performs slower than Swift. The interoperability between Swift and Objective-C is done so seamlessly that sometimes we may use Objective-C types and its runtime in the Swift code without knowing it. When you use Objective-C types in Swift code, Swift actually uses the Objective-C runtime for method dispatch. Because of that, Swift can't do the same optimization as for pure Swift types. Lets take a look at a simple example: for _ in 0...100 { _ = NSObject() } Let's read this code and make some assumptions about how the Swift compiler would optimize it. The NSObject instance is never used in the loop body, so we could eliminate the creation of an object. After that, we will have an empty loop; this can be eliminated as well. So, we remove all of the code from execution, but actually no code gets eliminated. This happens because Objective-C types use dynamic runtime method dispatch, called message sending. All standard frameworks, such as Foundation and UIKit, are written in Objective-C, and all types such as NSDate, NSURL, UIView, and UITableView use the Objective-C runtime. They do not perform as fast as Swift types, but we get all of these frameworks available for usage in Swift, and this is great. There is no way to remove the Objective-C dynamic runtime dispatch from Objective-C types in Swift, so the only thing we can do is learn how to use them wisely. Summary In this article, we covered many powerful features of Swift related to Swift's performance and gave some tips on how to solve performance-related issues. Resources for Article: Further resources on this subject: Flappy Swift[article] Profiling an app[article] Network Development with Swift [article]

Running Firefox OS Simulators with WebIDE

Packt
12 Oct 2015
9 min read
In this article by Tanay Pant, the author of the book, Learning Firefox OS Application Development, you will learn how to use WebIDE and its features. We will start by installing Firefox OS simulators in the WebIDE so that we can run and test Firefox OS applications in it. Then, we will study how to install and create new applications with WebIDE. Finally, we will cover topics such as using developer tools for applications that run in WebIDE, and uninstalling applications in Firefox OS. In brief, we will go through the following topics: Getting to know about WebIDE Installing Firefox OS simulator Installing and creating new apps with WebIDE Using developer tools inside WebIDE Uninstalling applications in Firefox OS (For more resources related to this topic, see here.) Introducing WebIDE It is now time to have a peek at Firefox OS. You can test your applications in two ways, either by running it on a real device or by running it in Firefox OS Simulator. Let's go ahead with the latter option since you might not have a Firefox OS device yet. We will use WebIDE, which comes preinstalled with Firefox, to accomplish this task. If you haven't installed Firefox yet, you can do so from https://www.mozilla.org/en-US/firefox/new/. WebIDE allows you to install one or several runtimes (different versions) together. You can use WebIDE to install different types of applications, debug them using Firefox's Developer Tools Suite, and edit the applications/manifest using the built-in source editor. After you install Firefox, open WebIDE. You can open it by navigating to Tools | Web Developer | WebIDE. Let's now take a look at the following screenshot of WebIDE: You will notice that on the top-right side of your window, there is a Select Runtime option. When you click on it, you will see the Install Simulator option. Select that option, and you will see a page titled Extra Components. It presents a list of Firefox OS simulators. We will install the latest stable and unstable versions of Firefox OS. We installed two versions of Firefox OS because we would need both the latest and stable versions to test our applications in the future. After you successfully install both the simulators, click on Select Runtime. This will now show both the OS versions listed, as shown in the following screenshot:. Let's open Firefox OS 3.0. This will open up a new window titled B2G. You should now explore Firefox OS, take a look at its applications, and interact with them. It's all HTML, CSS and JavaScript. Wonderful, isn't it? Very soon, you will develop applications like these:` Installing and creating new apps using WebIDE To install or create a new application, click on Open App in the top-left corner of the WebIDE window. You will notice that there are three options: New App, Open Packaged App, and Open Hosted App. For now, think of Hosted apps like websites that are served from a web server and are stored online in the server itself but that can still use appcache and indexeddb to store all their assets and data offline, if desired. Packaged apps are distributed in a .zip format and they can be thought of as the source code of the website bundled and distributed in a ZIP file. Let's now head to the first option in the Open App menu, which is New App. Select the HelloWorld template, enter Project Name, and click on OK. After completing this, the WebIDE will ask you about the directory where you want to store the application. I have made a new folder named Hello World for this purpose on the desktop. 
Now, click on Open button and finally, click again on the OK button. This will prepare your app and show details, such as Title, Icon, Description, Location and App ID of your application. Note that beneath the app title, it says Packaged Web. Can you figure out why? As we discussed, it is because of the fact that we are not serving the application online, but from a packaged directory that holds its source code. This covers the right-hand side panel. In the left-hand side panel, we have the directory listing of the application. It contains an icon folder that holds different-sized icons for different screen resolutions It also contains the app.js file, which is the engine of the application and will contain the functionality of the application; index.html, which will contain the markup data for the application; and finally, the manifest.webapp file, which contains crucial information and various permissions about the application. If you click on any filename, you will notice that the file opens in an in-browser editor where you can edit the files to make changes to your application and save them from here itself. Let's make some edits in the application— in app.js and index.html. I have replaced World with Firefox everywhere to make it Hello Firefox. Let's make the same changes in the manifest file. The manifest file contains details of your application, such as its name, description, launch path, icons, developer information, and permissions. These details are used to display information about your application in the WebIDE and Firefox Marketplace. The manifest file is in JSON format. I went ahead and edited developer information in the application as well, to include my name and my website. After saving all the files, you will notice that the information of the app in the WebIDE has changed! It's now time to run the application in Firefox OS. Click on Select Runtime and fire up Firefox OS 3.0. After it is launched, click on the Play button in the WebIDE hovering on which is the prompt that says Install and Run. Doing this will install and launch the application on your simulator! Congratulations, you installed your first Firefox OS application! Using developer tools inside WebIDE WebIDE allows you to use Firefox's awesome developer tools for applications that run in the Simulator via WebIDE as well. To use them, simply click on the Settings icon (which looks like a wrench) beside the Install and Run icon that you had used to get the app installed and running. The icon says Debug App on hovering the cursor over it. Click on this to reveal developer tools for the app that is running via WebIDE. Click on Console, and you will see the message Hello Firefox, which we gave as the input in console.log() in the app.js file. Note that it also specifies the App ID of our application while displaying Hello Firefox. You may have noticed in the preceding illustration that I sent a command via the console alert('Hello Firefox'); and it simultaneously executed the instruction in the app running in the simulator. As you may have noticed, Firefox OS customizes the look and feel of components, such as the alert box (this is browser based). Our application is running in an iframe in Gaia. Every app, including the keyboard application, runs in an iframe for security reasons. You should go through these tools to get a hang of the debugging capabilities if you haven't done so already! 
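To make the manifest details described above more concrete, here is a minimal sketch of what a manifest.webapp for the Hello Firefox app could contain. The exact values (description text, icon path and size, developer URL) are illustrative assumptions rather than the file generated by the HelloWorld template:

    {
        "name": "Hello Firefox",
        "description": "A minimal packaged app used to explore WebIDE",
        "launch_path": "/index.html",
        "icons": {
            "128": "/icons/icon128x128.png"
        },
        "developer": {
            "name": "Tanay Pant",
            "url": "http://example.com"
        },
        "default_locale": "en"
    }

Since the manifest is plain JSON, edits like these are picked up by WebIDE once you save the file, which is why the app details shown in the project summary change as described above.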
One more important thing that you should keep in mind is that inline scripts (for example, <a href="#" onclick="alert(this)">Click Me</a>) are forbidden in Firefox OS apps, due to Content Security Policy (CSP) restrictions. CSP restrictions include the remote scripts, inline scripts, javascript URIs, function constructor, dynamic code execution, and plugins, such as Flash or Shockwave. Remote styles are also banned. Remote Web Workers and eval() operators are not allowed for security reasons and they show 400 error and security errors respectively upon usage. You are warned about CSP violations when submitting your application to the Firefox OS Marketplace. CSP warnings in the validator will not impact whether your app is accepted into the Marketplace. However, if your app is privileged and violates the CSP, you will be asked to fix this issue in order to get your application accepted. Browsing other runtime applications You can also take a look at the source code of the preinstalled/runtime apps that are present in Firefox OS or Gaia, to be precise. For example, the following is an illustration that shows how to open them: You can click on the Hello World button (in the same place where Open App used to exist), and this will show you the whole list of Runtime Apps as shown in the preceding illustration. I clicked on the Camera application and it showed me the source code of its main.js file. It's completely okay if you are daunted by the huge file. If you find these runtime applications interesting and want to contribute to them, then you can refer to Mozilla Developer Network's articles on developing Gaia, which you can find at https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia. Our application looks as follows in the App Launcher of the operating system: Uninstalling applications in Firefox OS You can remove the project from WebIDE by clicking on the Remove Project button in the home page of the application. However, this will not uninstall the application from Firefox OS Simulator. The uninstallation system of the operating system is quite similar to iOS. You just have to double tap in OS X to get the Edit screen, from where you can click on the cross button on the top-left of the app icon to uninstall the app. You will then get a confirmation screen that warns you that all the data of the application will also be deleted along with the app. This will take you back to the Edit screen where you can click on Done to get back to the home screen. Summary In this article, you learned about WebIDE, how to install Firefox OS simulator in WebIDE, using Firefox OS and installing applications in it, and creating a skeleton application using WebIDE. You then learned how to use developer tools for applications that run in the simulator, browsing other preinstalled runtime applications present in Firefox OS. Finally, you learned about removing a project from WebIDE and uninstalling an application from the operating system. Resources for Article: Further resources on this subject: Learning Node.js for Mobile Application Development [Article] Introducing Web Application Development in Rails [Article] One-page Application Development [Article]

Distributing your HTML5 Game

Marco Stagni
12 Oct 2015
11 min read
So, you've just completed your awesome project, and your game is ready to be played: it features tons of awesome characters and effects, a beautiful plot, an incredible soundtrack, and it sparkles at night. The only problem you're facing is: how should I distribute my game? How do I reach my players? Fortunately, this is an awesome world, and there are plenty of opportunities for you to reach millions of players. But before you are even able to reach the world, you need to package your game. For "packaging" your game, I will cover both the integration inside a web page and the actual packaging as a standalone application. In the first case, the only thing you need to do is host your HTML page wherever you want, making it accessible to everybody in the world. Then, you will spread the word through every channel you know, and hopefully your website will be reached by a lot of interested people. If you used an HTML5 game engine (something like this), then you should have a lot of tools to easily embed your game inside an HTML page. Even if you're using a much more evolved game engine, such as Unity3D, you will always be able to incorporate the game into your website (of course, you just need to make sure that your chosen game engine supports this feature). In the Unity3D case, the user is prompted to download (or give access to, if already downloaded) the Unity3D Web Plugin, which allows the user to play the game with no problems at all. If you don't own a website, or you don't want to use your personal web space to distribute your product, there are a lot of available alternatives out there. Using an external service So, as I said, there are a lot of services that give you the ability to publish and distribute your game, both as a standalone application and as a "normal" HTML page. I will introduce and explain the most famous ones, since they're the most important platforms, and they're full of potential players. Itch.io Itch.io is a world-famous platform and marketplace for indie games. It has a complete web interface, and unlike Steam, itch.io doesn't provide a standalone client; users download game files directly from the platform. It accepts any kind of game: mobile, desktop, and web games. Itch.io can be considered a game marketplace, since every developer is able to set a minimum price for his/her product: every player has the ability to choose the amount they think the game deserves. This can be seen as a downside of the platform, but you have to consider that you can set a minimum price, and giving this ability to players can spread the knowledge of your game more easily. Every developer can freely upload his/her content with no restrictions (except the usual ones, like those concerning copyright infringement), with no entrance fee and with a fast profile creation procedure. Itch.io takes a 10% fee on developers' revenue (which is an absolutely amazing ratio) and allows developers to self-manage their products, even allowing pre-orders if your game is still in development. Another awesome feature is that you don't need an account to play hosted games. This makes Itch.io a very interesting platform for those interested in selling a game. Another alternative, very similar to Itch.io, is gamejolt.com. GameJolt has basically the same features as Itch.io, although players are forced to create an account in order to play. An interesting feature of Game Jolt is that it shares advertising revenue with developers.
Gog.com Gog.com is a more restrictive platform, since it only accepts a few games each month, but if you're one of the lucky guys whose work has been accepted, you will get a lot of promotion from the platform itself: your new release will be publicly shown on social media channels, and a great number of players will surely reach your game. This platform tries to help developers build their own community with on-site forums: every game has its own space inside the platform, where players and developers can share thoughts and discuss topics such as new releases or bugs to be fixed. Another interesting feature is that Gog.com helps you reach the end of your development path: they give you the possibility of an advance in royalties, to help your game reach its final stage. Finally, we can talk about the most famous and popular game distribution service of all time, Steam. Steam Steam is famous. Almost very player in the world knows it, and even if you don't download anything from the platform, its name is always recognizable. Steam provides a huge platform where developers and teams from all over the world can upload their works (both games and softwares), to get an income. Steam splits the revenue with a 70/30 model, which is considered a pretty fair ratio. The actual process of game uploading is called Steam GreenLight. Its main feature is being the most famous and biggest game marketplace of the world, and it provides its own client, available for Mac, Windows and Linux. The only difference from itch.io, gog.com and gamejolt.com is that only standalone applications are submittable. Building a standalone application Since this is still a beautiful world with a lot of possibilities, we can now build our own standalone game in a very easy way. At the end of the process I'm about to explain, you will obtain a single program that will contain your game, and you will be able to submit it to Steam or your favorite marketplace. The technology we're about to use is called "node-webkit", and you can find a lot of useful resources almost everywhere on the web about it. What is node-webkit? Node-webkit is like a browser that only loads your HTML content. You can think about it as a very small and super simplified version of Google Chrome, with the same amazing features. What makes node-webkit an incredible piece of software is that it's based on Chromium and Webkit, and seamlessly fully integrates Node.js, so you can access Node.js while interacting with your browser (this means that almost every node.js module you already use is available for you in node-webkit). What is more important for you, is that node-webkit is cross-platform: this means you can build your game for Windows, OS X and Linux very easily. Creating your first node-webkit application So, you now know that node-webkit is a powerful tool, and you will certainly be able to deploy your game on Steam GreenLight. The question is, how do you build a game? I will now show you what is a simple node-webkit application structure, and how to build your game for each platform. Project structure Node-webkit requires a very simple structure. The only requirements are a "package.json" file and your "index.html". So, if your project structure contains a index.html file, with all of its static assets (scripts, stylesheets, images), you have to provide a "package.json" near the index page. 
A very basic structure for this file should be something like this:    { "name":"My Awesome Game", "description":"My very first game, built with amazing software", "author":"Walter White", "main":"index.html", "version":"1.0.0", "nodejs":true, "window":{ "title":"Awesomeness 3D", "width":1280, "height":720, "toolbar":false, "position":"center" } } It's very easy to understand what this package.json file says: it's telling node-webkit how to render our window, which file is the main one, and provides a few fields describing our application (its name, its author, its description and so on). The node-webkit wiki provides a very detailed description of the right way to write your package.json file, here. When you're still in debugging mode, setting the "toolbar" option to "true" is a great deal: you will gain access to the Chrome Developers Tool. So, now you have your completed game. To start the packaging process, you have to zip all your files inside a .nw package, like this:    zip -r -X app.nw . -x *.git* This will recursively zip your game inside a package called "app.nw", removing all the git related files (I assume you're using git, if you're not using it you can simplify the command and run zip -r -X app.nw .). We now need to download the node-webkit executable, in order to create our own executable. Every platform needs its own path to be completed, and each one will be explained separately. OS X Since the OS X version of node-webkit is an .app package, the operations you need to follow are very simple. First of all, right-click the .app file, and select Show Package Contents. What you have to do right now, is to copy and paste your "app.nw" package inside Contents/Resources . If you now try to launch the .app file, your game will start immediately. You're now free to customize the application you created: you can change the Info.plist file to match the properties of your game. Be careful of what you change, or you might be forced to start from the beginning. You can user your favorite text editor. You can also update the application icon by replacing the nw.icns file inside Contents/Resources. Your game is now ready to run on Mac OS X. Linux Linux is also very easy to setup. First of all, after you download the Linux version of node-webkit, you have to unzip all of the files and copy your "node-webkit" package (your "app.nw") inside the root of the folder you just unzipped. Now, simply prompt using this command: cat nw app.nw > app chmod +x app And launch it using: ./app To distribute your linux package, zip the executable along with a few required files: nw.pak libffmpegsumo.so credits.html The last one is legally required due to multiple third-party open source libraries used by node-webkit. Windows The last one, Windows. First of all, download the Windows version of node-webkit and unzip it. Copy your app.nw file inside the folder, which contains the nw.exe file. Using the Windows command line tool (CMD), prompt this command: copy /b nw.exe+app.nw app.exe You can now launch your created .exe file to see your game running. To distribute your game in the easiest way, you can zip your executable with a few required files: nw.pak ffmpegsumo.dll icudtl.dat libEGL.dll libGLESv2.dll Your Windows package is ready to be distributed. You can use a powerful tool available to create Windows Installers, which is WixEdit. Conclusions Now you have your own executable, ready to be downloaded and played by millions of players. 
Your game is available for almost every platform (I won't cover the details of how to prepare your game for mobile; since it's a pretty detailed topic, it will probably need another post), both online and offline. It's always been completely up to you to decide where your game will live, but now you have plenty of choices: you can upload your HTML content to an online platform or to your own website, or pack it into a standalone application and distribute it through any channel you know to whoever you desire. Your other option is to upload it to a marketplace like Steam. This is pretty much everything you need to know in order to get started. The purpose of this post was to give you enough information to help you decide which distribution route is the most suitable for you: there are still a lot of alternatives, and the ones described here are probably the most famous. I hope your games will reach and engage every player in this world, and I wish you all the success you deserve.
About the Author
Marco Stagni is an Italian frontend and mobile developer with a Bachelor's Degree in Computer Engineering. He's completely in love with JavaScript, and he's trying to push his knowledge of the language in every possible direction. After a few years as a frontend and Android developer, working with both Italian startups and web agencies, he's now deepening his knowledge of game programming. His efforts are currently aimed at completing his biggest project: a JavaScript game engine built on top of THREE.js and Physijs (the project is still in alpha, but already downloadable via http://npmjs.org/package/wage). You can also follow him on Twitter at @marcoponds or on GitHub at http://github.com/marco-ponds. Marco is a big NBA fan.

Creating a graph application with Python, Neo4j, Gephi & Linkurious.js

Greg Roberts
12 Oct 2015
13 min read
I love Python, and to celebrate Packt Python week, I’ve spent some time developing an app using some of my favorite tools. The app is a graph visualization of Python and related topics, as well as showing where all our content fits in. The topics are all StackOverflow tags, related by their co-occurrence in questions on the site. The app is available to view at http://gregroberts.github.io/ and in this blog, I’m going to discuss some of the techniques I used to construct the underlying dataset, and how I turned it into an online application. Graphs, not charts Graphs are an incredibly powerful tool for analyzing and visualizing complex data. In recent years, many different graph database engines have been developed to make use of this novel manner of representing data. These databases offer many benefits over traditional, relational databases because of how the data is stored and accessed. Here at Packt, I use a Neo4j graph to store and analyze data about our business. Using the Cypher query language, it’s easy to express complicated relations between different nodes succinctly. It’s not just the technical aspect of graphs which make them appealing to work with. Seeing the connections between bits of data visualized explicitly as in a graph helps you to see the data in a different light, and make connections that you might not have spotted otherwise. This graph has many uses at Packt, from customer segmentation to product recommendations. In the next section, I describe the process I use to generate recommendations from the database. Make the connection For product recommendations, I use what’s known as a hybrid filter. This considers both content based filtering (product x and y are about the same topic) and collaborative filtering (people who bought x also bought y). Each of these methods has strengths and weaknesses, so combining them into one algorithm provides a more accurate signal. The collaborative aspect is straightforward to implement in Cypher. For a particular product, we want to find out which other product is most frequently bought alongside it. We have all our products and customers stored as nodes, and purchases are stored as edges. Thus, the Cypher query we want looks like this: MATCH (n:Product {title:’Learning Cypher’})-[r:purchased*2]-(m:Product) WITH m.title AS suggestion, count(distinct r)/(n.purchased+m.purchased) AS alsoBought WHERE m<>n RETURN* ORDER BY alsoBought DESC and will very efficiently return the most commonly also purchased product. When calculating the weight, we divide by the total units sold of both titles, so we get a proportion returned. We do this so we don’t just get the titles with the most units; we’re effectively calculating the size of the intersection of the two titles’ audiences relative to their overall audience size. The content side of the algorithm looks very similar: MATCH (n:Product {title:’Learning Cypher’})-[r:is_about*2]-(m:Product) WITH m.title AS suggestion, count(distinct r)/(length(n.topics)+length(m.topics)) AS alsoAbout WHERE m<>n RETURN * ORDER BY alsoAbout DESC Implicit in this algorithm is knowledge that a title is_about a topic of some kind. This could be done manually, but where’s the fun in that? In Packt’s domain there already exists a huge, well moderated corpus of technology concepts and their usage: StackOverflow. 
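Before moving on to the tag data, here is a minimal sketch of how a query like the collaborative-filtering one above could be run from Python with py2neo (the same graph.cypher.execute call that appears later in this post). The Customer label, the purchased relationship direction and the product title are illustrative assumptions rather than the exact schema:
from py2neo import Graph

# Assumes a Neo4j instance listening on the default localhost port.
graph = Graph()

query = """
MATCH (n:Product {title: {title}})<-[:purchased]-(:Customer)-[:purchased]->(m:Product)
WHERE m <> n
RETURN m.title AS suggestion, count(*) AS strength
ORDER BY strength DESC
LIMIT 5
"""

# py2neo records expose their columns as attributes, e.g. record.suggestion.
for record in graph.cypher.execute(query, {"title": "Learning Cypher"}):
    print(record.suggestion, record.strength)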
The tagging system on StackOverflow not only tells us about all the topics developers across the world are using, it also tells us how those topics are related, by looking at the co-occurrence of tags in questions. So in our graph, StackOverflow tags are nodes in their own right, which represent topics. These nodes are connected via edges, which are weighted to reflect their co-occurrence on StackOverflow: edge_weight(n,m) = (Number of questions tagged with both n & m)/(Number questions tagged with n or m) So, to find topics related to a given topic, we could execute a query like this: MATCH (n:StackOverflowTag {name:'Matplotlib'})-[r:related_to]-(m:StackOverflowTag) RETURN n.name, r.weight, m.name ORDER BY r.weight DESC LIMIT 10 Which would return the following: | n.name | r.weight | m.name ----+------------+----------+-------------------- 1 | Matplotlib | 0.065699 | Plot 2 | Matplotlib | 0.045678 | Numpy 3 | Matplotlib | 0.029667 | Pandas 4 | Matplotlib | 0.023623 | Python 5 | Matplotlib | 0.023051 | Scipy 6 | Matplotlib | 0.017413 | Histogram 7 | Matplotlib | 0.015618 | Ipython 8 | Matplotlib | 0.013761 | Matplotlib Basemap 9 | Matplotlib | 0.013207 | Python 2.7 10 | Matplotlib | 0.012982 | Legend There are many, more complex relationships you can define between topics like this, too. You can infer directionality in the relationship by looking at the local network, or you could start constructing Hyper graphs using the extensive StackExchange API. So we have our topics, but we still need to connect our content to topics. To do this, I’ve used a two stage process. Step 1 – Parsing out the topics We take all the copy (words) pertaining to a particular product as a document representing that product. This includes the title, chapter headings, and all the copy on the website. We use this because it’s already been optimized for search, and should thus carry a fair representation of what the title is about. We then parse this document and keep all the words which match the topics we’ve previously imported. #...code for fetching all the copy for all the products key_re = 'W(%s)W' % '|'.join(re.escape(i) for i in topic_keywords) for i in documents tags = re.findall(key_re, i[‘copy’]) i['tags'] = map(lambda x: tag_lookup[x],tags) Having done this for each product, we have a bag of words representing each product, where each word is a recognized topic. Step 2 – Finding the information From each of these documents, we want to know the topics which are most important for that document. To do this, we use the tf-idf algorithm. tf-idf stands for term frequency, inverse document frequency. The algorithm takes the number of times a term appears in a particular document, and divides it by the proportion of the documents that word appears in. The term frequency factor boosts terms which appear often in a document, whilst the inverse document frequency factor gets rid of terms which are overly common across the entire corpus (for example, the term ‘programming’ is common in our product copy, and whilst most of the documents ARE about programming, this doesn’t provide much discriminating information about each document). To do all of this, I use python (obviously) and the excellent scikit-learn library. Tf-idf is implemented in the class sklearn.feature_extraction.text.TfidfVectorizer. This class has lots of options you can fiddle with to get more informative results. 
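Before looking at the scikit-learn implementation used below, here is a toy, hand-rolled sketch of the tf-idf idea just described. The tag bags are hypothetical, and the exact weighting and normalisation differ from what TfidfVectorizer actually does:
import math

docs = [['python', 'numpy', 'pandas'],
        ['python', 'django'],
        ['python', 'numpy', 'scipy']]

def tf_idf(term, doc, corpus):
    tf = float(doc.count(term)) / len(doc)                    # term frequency within this document
    df = sum(1.0 for d in corpus if term in d) / len(corpus)  # share of documents containing the term
    return tf * math.log(1.0 / df)                            # terms found everywhere are damped to zero

print(tf_idf('numpy', docs[0], docs))   # distinctive topic -> positive weight (about 0.14)
print(tf_idf('python', docs[0], docs))  # appears in every document -> weight 0.0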
import sklearn.feature_extraction.text as skt tagger = skt.TfidfVectorizer(input = 'content', encoding = 'utf-8', decode_error = 'replace', strip_accents = None, analyzer = lambda x: x, ngram_range = (1,1), max_df = 0.8, min_df = 0.0, norm = 'l2', sublinear_tf = False) It’s a good idea to use the min_df & max_df arguments of the constructor so as to cut out the most common/obtuse words, to get a more informative weighting. The ‘analyzer’ argument tells it how to get the words from each document, in our case, the documents are already lists of normalized words, so we don’t need anything additional done. #create vectors of all the documents vectors = tagger.fit_transform(map(lambda x: x['tags'],rows)).toarray() #get back the topic names to map to the graph t_map = tagger.get_feature_names() jobs = [] for ind, vec in enumerate(vectors): features = filter(lambda x: x[1]>0, zip(t_map,vec)) doc = documents[ind] for topic, weight in features: job = ‘’’MERGE (n:StackOverflowTag {name:’%s’}) MERGE (m:Product {id:’%s’}) CREATE UNIQUE (m)-[:is_about {source:’tf_idf’,weight:%d}]-(n) ’’’ % (topic, doc[‘id’], weight) jobs.append(job) We then execute all of the jobs using Py2neo’s Batch functionality. Having done all of this, we can now relate products to each other in terms of what topics they have in common: MATCH (n:Product {isbn10:'1783988363'})-[r:is_about]-(a)-[q:is_about]-(m:Product {isbn10:'1783289007'}) WITH a.name as topic, r.weight+q.weight AS weight RETURN topic ORDER BY weight DESC limit 6 Which returns: | topic ---+------------------ 1 | Machine Learning 2 | Image 3 | Models 4 | Algorithm 5 | Data 6 | Python Huzzah! I now have a graph into which I can throw any piece of content about programming or software, and it will fit nicely into the network of topics we’ve developed. Take a breath So, that’s how the graph came to be. To communicate with Neo4j from Python, I use the excellent py2neo module, developed by Nigel Small. This module has all sorts of handy abstractions to allow you to work with nodes and edges as native Python objects, and then update your Neo instance with any changes you’ve made. The graph I’ve spoken about is used for many purposes across the business, and has grown in size and scope significantly over the last year. For this project, I’ve taken from this graph everything relevant to Python. I started by getting all of our content which is_about Python, or about a topic related to python: titles = [i.n for i in graph.cypher.execute('''MATCH (n)-[r:is_about]-(m:StackOverflowTag {name:'Python'}) return distinct n''')] t2 = [i.n for i in graph.cypher.execute('''MATCH (n)-[r:is_about]-(m:StackOverflowTag)-[:related_to]-(m:StackOverflowTag {name:'Python'}) where has(n.name) return distinct n''')] titles.extend(t2) then hydrated this further by going one or two hops down each path in various directions, to get a large set of topics and content related to Python. Visualising the graph Since I started working with graphs, two visualisation tools I’ve always used are Gephi and Sigma.js. Gephi is a great solution for analysing and exploring graphical data, allowing you to apply a plethora of different layout options, find out more about the statistics of the network, and to filter and change how the graph is displayed. Sigma.js is a lightweight JavaScript library which allows you to publish beautiful graph visualizations in a browser, and it copes very well with even very large graphs. 
Gephi has a great plugin which allows you to export your graph straight into a web page which you can host, share and adapt. More recently, Linkurious have made it their mission to bring graph visualization to the masses. I highly advise trying the demo of their product. It really shows how much value it’s possible to get out of graph based data. Imagine if your Customer Relations team were able to do a single query to view the entire history of a case or customer, laid out as a beautiful graph, full of glyphs and annotations. Linkurious have built their product on top of Sigma.js, and they’ve made available much of the work they’ve done as the open source Linkurious.js. This is essentially Sigma.js, with a few changes to the API, and an even greater variety of plugins. On Github, each plugin has an API page in the wiki and a downloadable demo. It’s worth cloning the repository just to see the things it’s capable of! Publish It! So here’s the workflow I used to get the Python topic graph out of Neo4j and onto the web. Use Py2neo to graph the subgraph of content and topics pertinent to Python, as described above Add to this some other topics linked to the same books to give a fuller picture of the Python “world” Add in topic-topic edges and product-product edges to show the full breadth of connections observed in the data Export all the nodes and edges to csv files Import node and edge tables into Gephi. The reason I’m using Gephi as a middle step is so that I can fiddle with the visualisation in Gephi until it looks perfect. The layout plugin in Sigma is good, but this way the graph is presentable as soon as the page loads, the communities are much clearer, and I’m not putting undue strain on browsers across the world! The layout of the graph has been achieved using a number of plugins. Instead of using the pre-installed ForceAtlas layouts, I’ve used the OpenOrd layout, which I feel really shows off the communities of a large graph. There’s a really interesting and technical presentation about how this layout works here. Export the graph into gexf format, having applied some partition and ranking functions to make it more clear and appealing. Now it’s all down to Linkurious and its various plugins! You can explore the source code of the final page to see all the details, but here I’ll give an overview of the different plugins I’ve used for the different parts of the visualisation: First instantiate the graph object, pointing to a container (note the CSS of the container, without this, the graph won’t display properly: <style type="text/css"> #container { max-width: 1500px; height: 850px; margin: auto; background-color: #E5E5E5; } </style> … <div id="container"></div> … <script> s= new sigma({ container: 'container', renderer: { container: document.getElementById('container'), type: 'canvas' }, settings: { … } }); sigma.parsers.gexf - used for (trivially!) importing a gexf file into a sigma instance sigma.parsers.gexf( 'static/data/Graph1.gexf', s, function(s) { //callback executed once the data is loaded, use this to set up any aspects of the app which depend on the data }); sigma.plugins.filter - Adds the ability to very simply hide nodes/edges based on a callback function which returns a boolean. This powers the filtering widgets on the page. 
<input class="form-control" id="min-degree" type="range" min="0" max="0" value="0"> … function applyMinDegreeFilter(e) { var v = e.target.value; $('#min-degree-val').textContent = v; filter .undo('min-degree') .nodesBy( function(n, options) { return this.graph.degree(n.id) >= options.minDegreeVal; },{ minDegreeVal: +v }, 'min-degree' ) .apply(); }; $('#min-degree').change(applyMinDegreeFilter); sigma.plugins.locate - Adds the ability to zoom in on a single node or collection of nodes. Very useful if you’re filtering a very large initial graph function locateNode (nid) { if (nid == '') { locate.center(1); } else { locate.nodes(nid); } }; sigma.renderers.glyphs - Allows you to add custom glyphs to each node. Useful if you have many types of node. Outro This application has been a very fun little project to build. The improvements to Sigma wrought by Linkurious have resulted in an incredibly powerful toolkit to rapidly generate graph based applications with a great degree of flexibility and interaction potential. None of this would have been possible were it not for Python. Python is my right (left, I’m left handed) hand which I use for almost everything. Its versatility and expressiveness make it an incredibly robust Swiss army knife in any data-analysts toolkit.

Exploring Windows PowerShell 5.0

Packt
12 Oct 2015
16 min read
In this article by Chendrayan Venkatesan, the author of the book Windows PowerShell for .NET Developers, we will cover the following topics: Basics of Desired State Configuration (DSC) Parsing structured objects using PowerShell Exploring package management Exploring PowerShell Get-Module Exploring other enhanced features (For more resources related to this topic, see here.) Windows PowerShell 5.0 has many significant benefits, to know more features about its features refer to the following link: http://go.microsoft.com/fwlink/?LinkID=512808 A few highlights of Windows PowerShell 5.0 are as follows: Improved usability Backward compatibility Class and Enum keywords are introduced Parsing structured objects are made easy using ConvertFrom string command We have some new modules introduced in Windows PowerShell 5.0, such as Archive, Package Management (this was formerly known as OneGet) and so on ISE supported transcriptions Using PowerShell Get-Module cmdlet, we can find, install, and publish modules Debug at runspace can be done using Microsoft.PowerShell.Utility module Basics of Desired State Configuration Desired State Configuration also known as DSC is a new management platform in Windows PowerShell. Using DSC, we can deploy and manage configuration data for software servicing and manage the environment. DSC can be used to streamline datacenters and this was introduced along with Windows Management Framework 4.0 and it heavily extended into Windows Management Framework 5.0. Few highlights of DSC in April 2015 Preview are as follows: New cmdlets are introduced in WMF 5.0 Few DSC commands are updated and remarkable changes are made to the configuration management platform in PowerShell 5.0 DSC resources can be built using class, so no need of MOF file It's not mandatory to know PowerShell to learn DSC but it's a great added advantage. Similar to function we can also use configuration keyword but it has a huge difference because in DSC everything is declarative, which is a cool thing in Desired State Configuration. So before beginning this exercise, I created a DSCDemo lab machine in Azure cloud with Windows Server 2012 and it's available out of the box. So, the default PowerShell version is 4.0. For now let's create and define a simple configuration, which creates a file in the local host. Yeah! A simple New-Item command can do that but it's an imperative cmdlet and we need to write a program to tell the computer to create it, if it does not exist. Structure of the DSC configuration is as follows: Configuration Name { Node ComputerName { ResourceName <String> { } } } To create a simple text file with contents, we use the following code: Configuration FileDemo { Node $env:COMPUTERNAME { File FileDemo { Ensure = 'Present' DestinationPath = 'C:TempDemo.txt' Contents = 'PowerShell DSC Rocks!' Force = $true } } } Look at the following screenshot: Following are the steps represented in the preceding figure: Using the Configuration keyword, we are defining a configuration with the name FileDemo—it's a friendly name. Inside the Configuration block we created a Node block and also a file on the local host. File is the resource name. FileDemo is a friendly name of a resource and it's also a string. Properties of the file resource. This creates MOF file—we call this similar to function. But wait, here a code file is not yet created. We just created a MOF file. 
Look at the MOF file structure in the following image: We can manually edit the MOF and use it on another machine that has PS 4.0 installed on it. It's not mandatory to use PowerShell for generating MOF, if you are comfortable with PowerShell, you can directly write the MOF file. To explore the available DSC resources you can execute the following command: Get-DscResource The output is illustrated in the following image: Following are the steps represented in the preceding figure: Shows you how the resources are implemented. Binary, Composite, PowerShell, and so on. In the preceding example, we created a DSC Configuration that's FileDemo and that is listed as Composite. Name of the resource. Module name the resource belongs to. Properties of the resource. To know the Syntax of a particular DSC resource we can try the following code: Get-DscResource -Name Service -Syntax The output is illustrated in the following figure ,which shows the resource syntax in detail: Now, let's see how DSC works and its three different phases: The authoring phase. The staging phase. The "Make it so" phase. The authoring phase In this phase we will create a DSC Configuration using PowerShell and this outputs a MOF file. We saw a FileDemo example to create a configuration is considered to be an authoring phase. The staging phase In this phase the declarative MOF will be staged and it's as per node. DSC has a push and pull model, where push is simply pushing the configuration to target nodes. The custom providers need to be manually placed in target machines whereas in pull mode, we need to build an IIS Server that will have MOF for target nodes and this is well defined by the OData interface. In pull mode, the custom providers are downloaded to target system. The "Make it so" phase This is the phase for enacting the configuration, that is applying the configuration on the target nodes. Before we summarize the basics of DSC, let's see a few more DSC Commands. We can do this by executing the following command: Get-Command -Noun DSC* The output is as follows: We are using a PowerShell 4.0 stable release and not 5.0, so the version will not be available. Local Configuration Manager (LCM) is the engine for DSC and it runs on all nodes. LCM is responsible to call the configuration resources that are included in a DSC configuration script. Try executing Get-DscLocalConfigurationManager cmdlet to explore its properties. To Apply the LCM settings on target nodes we can use Set-DscLocalConfigurationManager cmdlet. Use case of classes in WMF 5.0 Using classes in PowerShell makes IT professionals, system administrators, and system engineers to start learning development in WMF. It's time for us to switch back to Windows PowerShell 5.0 because the Class keyword is supported from version 5.0 onwards. Why do we need to write class in PowerShell? Is there any special need? May be we will answer this in this section but this is one reason why I prefer to say that, PowerShell is far more than a scripting language. When the Class keyword was introduced, it mainly focused on creating DSC resources. But using class we can create objects like in any other object oriented programming language. The documentation that reads New-Object is not supported. But it's revised now. Indeed it supports the New-Object. The class we create in Windows PowerShell is a .NET framework type. How to create a PowerShell Class? It's easy, just use the Class keyword! The following steps will help you to create a PowerShell class. 
Create a class named ClassName {}—this is an empty class. Define properties in the class as Class ClassName {$Prop1 , $prop2} Instantiate the class as $var = [ClassName]::New() Now check the output of $var: Class ClassName { $Prop1 $Prop2 } $var = [ClassName]::new() $var Let's now have a look at how to create a class and its advantages. Let us define the properties in class: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' } $var = New-Object Catalog $var The following image shows the output of class, its members, and setting the property value: Now, by changing the property value, we get the following output: Now let's create a method with overloads. In the following example we have created a method name SetInformation that accepts two arguments $mdl and $mfgr and these are of string type. Using $var.SetInformation command with no parenthesis will show the overload definitions of the method. The code is as follows: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' SetInformation([String]$mdl,[String]$mfgr) { $this.Manufacturer = $mfgr $this.Model = $mdl } } $var = New-Object -TypeName Catalog $var.SetInformation #Output OverloadDefinitions ------------------- void SetInformation(string mdl, string mfgr) Let's set the model and manufacturer using set information, as follows: Class Catalog { #Properties $Model = 'Fujitsu' $Manufacturer = 'Life Book S Series' SetInformation([String]$mdl,[String]$mfgr) { $this.Manufacturer = $mfgr $this.Model = $mdl } } $var = New-Object -TypeName Catalog $var.SetInformation('Surface' , 'Microsoft') $var The output is illustrated in following image: Inside the PowerShell class we can use PowerShell cmdlets as well. The following code is just to give a demo of using PowerShell cmdlet. Class allows us to validate the parameters as well. Let's have a look at the following example: Class Order { [ValidateSet("Red" , "Blue" , "Green")] $color [ValidateSet("Audi")] $Manufacturer Book($Manufacturer , $color) { $this.color = $color $this.Manufacturer = $Manufacturer } } The parameter $Color and $Manufacturer has ValidateSet property and has a set of values. Now let's use New-Object and set the property with an argument which doesn't belong to this set, shown as follows: $var = New-Object Order $var.color = 'Orange' Now, we get the following error: Exception setting "color": "The argument "Orange" does not belong to the set "Red,Blue,Green" specified by the ValidateSet attribute. Supply an argument that is in the set and then try the command again." Let's set the argument values correctly to get the result using Book method, as follows: $var = New-Object Order $var.Book('Audi' , 'Red') $var The output is illustrated in the following figure: Constructors A constructor is a special type of method that creates new objects. It has the same name as the class and the return type is void. Multiple constructors are supported, but each one takes different numbers and types of parameters. In the following code, let's see the steps to create a simple constructor in PowerShell that simply creates a user in the active directory. Class ADUser { $identity $Name ADUser($Idenity , $Name) { New-ADUser -SamAccountName $Idenity -Name $Name $this.identity = $Idenity $this.Name = $Name } } $var = [ADUser]::new('Dummy' , 'Test Case User') $var We can also hide the properties in PowerShell class, for example let's create two properties and hide one. 
In theory, it just hides the property but we can use the property as follows: Class Hide { [String]$Name Hidden $ID } $var = [Hide]::new() $var The preceding code is illustrated in the following figure: Additionally, we can carry out operations, such as Get and Set, as shown in the following code: Class Hide { [String]$Name Hidden $ID } $var = [Hide]::new() $var.Id = '23' $var.Id This returns output as 23. To explore more about class use help about_Classes -Detailed. Parsing structured objects using PowerShell In Windows PowerShell 5.0 a new cmdlet ConvertFrom-String has been introduced and it's available in Microsoft.PowerShell.Utility. Using this command, we can parse the structured objects from any given string content. To see information, use help command with ConvertFrom-String -Detailed command. The help has an incorrect parameter as PropertyName. Copy paste will not work, so use help ConvertFrom-String –Parameter * and read the parameter—it's actually PropertyNames. Now, let's see an example of using ConvertFrom-String. Let us examine a scenario where a team has a custom code which generates log files for daily health check-up reports of their environment. Unfortunately, the tool delivered by the vendor is an EXE file and no source code is available. The log file format is as follows: "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 Lync" , "Warning 4356 SharePoint" , "Information 5432 Exchange" There are many ways to manipulate this record but let's see how PowerShell cmdlet ConvertFrom-String helps us. Using the following code, we will simply extract the Type, EventID, and Server: "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 Lync" , "Warning 4356 SharePoint" , "Information 5432 Exchange" | ConvertFrom-String -PropertyNames Type , EventID, Server Following figure shows the output of the code we just saw: Okay, what's interesting in this? It's cool because now your output is a PSCustom object and you can manipulate it as required. "Error 4356 Lync" , "Warning 6781 SharePoint" , "Information 5436 Exchange", "Error 3432 SharePoint" , "Warning 4356 SharePoint" , "Information 5432 Exchange" | ConvertFrom-String -PropertyNames Type , EventID, Server | ? {$_.Type -eq 'Error'} An output in Lync and SharePoint has some error logs that needs to be taken care of on priority. Since, requirement varies you can use this cmdlet as required. ConvertFrom-String has a delimiter parameter, which helps us to manipulate the strings as well. In the following example let's use the –Delimiter parameter that removes white space and returns properties, as follows: "Chen V" | ConvertFrom-String -Delimiter "s" -PropertyNames "FirstName" , "SurName" This results FirstName and SurName – FirstName as Chen and SurName as V In the preceding example, we walked you through using template file to manipulate the string as we need. To do this we need to use the parameter –Template Content. Use help ConvertFrom-String –Parameter Template Content Before we begin we need to create a template file. To do this let's ping a web site. 
Ping www.microsoft.com and the output returned is, as shown: Pinging e10088.dspb.akamaiedge.net [2.21.47.138] with 32 bytes of data: Reply from 2.21.47.138: bytes=32 time=37ms TTL=51 Reply from 2.21.47.138: bytes=32 time=35ms TTL=51 Reply from 2.21.47.138: bytes=32 time=35ms TTL=51 Reply from 2.21.47.138: bytes=32 time=36ms TTL=51 Ping statistics for 2.21.47.138: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 35ms, Maximum = 37ms, Average = 35ms Now, we have the information in some structure. Let's extract IP and bytes; to do this I replaced the IP and Bytes as {IP*:2.21.47.138} Pinging e10088.dspb.akamaiedge.net [2.21.47.138] with 32 bytes of data: Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=37ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=35ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=36ms TTL=51 Reply from {IP*:2.21.47.138}: bytes={[int32]Bytes:32} time=35ms TTL=51 Ping statistics for 2.21.47.138: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 35ms, Maximum = 37ms, Average = 35ms ConvertFrom-String has a debug parameter using which we can debug our template file. In the following example let's see the debugging output: ping www.microsoft.com | ConvertFrom-String -TemplateFile C:TempTemplate.txt -Debug As we mentioned earlier PowerShell 5.0 is a Preview release and has few bugs. Let's ignore those for now and focus on the features, which works fine and can be utilized in environment. Exploring package management In this topic, we will walk you through the features of package management, which is another great feature of Windows Management Framework 5.0. This was introduced in Windows 10 and was formerly known as OneGet. Using package management we can automate software discovery, installation of software, and inventorying. Do not think about Software Inventory Logging (SIL) for now. As we know, in Windows Software Installation, technology has its own way of doing installations, for example MSI type, MSU type, and so on. This is a real challenge for IT professionals and developers, to think about the unique automation of software installation or deployment. Now, we can do it using package management module. To begin with, let's see the package management Module using the following code: Get-Module -Name PackageManagement The output is illustrated as follows: Yeah, well we got an output that is a binary module. Okay, how to know the available cmdlets and their usage? PowerShell has the simplest way to do things, as shown in the following code: Get-Module -Name PackageManagement The available cmdlets are shown in the following image: Package Providers are the providers connected to package management (OneGet) and package sources are registered for providers. To view the list of providers and sources we use the following cmdlets: Now, let's have a look at the available packages—in the following example I am selecting the first 20 packages, for easy viewing: Okay, we have 20 packages so using Install-Package cmdlet, let us now install WindowsAzurePowerShell on our Windows 2012 Server. We need to ensure that the source are available prior to any installation. To do this just execute the cmdlet Get-PackageSource. If the chocolatey source didn't come up in the output, simply execute the following code—do not change any values. This code will install chocolatey package manager on your machine. 
Once the installation is done we need to restart the PowerShell: Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')) Find-Package -Name WindowsAzurePowerShell | Install-Package -Verbose The command we just saw shows the confirmation dialog for chocolatey, which is the package source, as shown in the following figure: Click on Yes and install the package. Following are the steps represented in the figure that we just saw: Installs the prerequisites. Creates a temporary folder. Installation successful. Windows Server 2012 has .NET 4.5 in the box by default, so the verbose turned up as False for .NET 4.5, which says PowerShell not installed but WindowsAzurePowerShell is installed successfully. If you are trying to install the same package and the same version that is available on your system – the cmdlet will skip the installation. Find-Package -Name PowerShell Here | Install-Package -Verbose VERBOSE: Skipping installed package PowerShellHere 0.0.3 Explore all the package management cmdlets and automate your software deployments. Exploring PowerShell Get-Module PowerShell Get-Module is a module available in Windows PowerShell 5.0 preview. Following are few more modules: Search through modules in the gallery with Find-Module Save modules to your system from the gallery with Save-Module Install modules from the gallery with Install-Module Update your modules to the latest version with Update-Module Add your own custom repository with Register-PSRepository The following screenshot shows the additional cmdlets that are available: This will allow us to find a module from PowerShell gallery and install it in our environment. PS gallery is a repository of modules. Using Find-Module cmdlet we get a list of module available in the PS gallery. Pipe and install the required module, alternatively we can save the module and examine it before installation, to do this use Save-Module cmdlet. The following screenshot illustrates the installation and deletion of the xJEA module: We can also publish module in the PS gallery, which will be available over the internet to others. This is not a great module. All it does is get user-information from an active directory for the same account name—creates a function and saves it as PSM1 in module folder. In order to publish the module in PS gallery, we need to ensure that the module has manifest. Following are the steps to publish your module: Create a PSM1 file. Create a PSD1 file that is a manifest module (also known as data file). Get your NuGet API key from the PS gallery link shared above. Publish your module using the Publish-PSModule cmdlet. Following figure shows modules that are currently published: Following figure shows the commands to publish modules: Summary In this article, we saw that Windows PowerShell 5.0 preview has got a lot more significant features, such as enhancement in PowerShell DSC, cmdlets improvements and new cmdlets, ISE support transcriptions, support class, and using class. We can create Custom DSC resources with easy string manipulations, A new Network Switch module is introduced using which we can automate and manage Microsoft signed network switches. Resources for Article: Further resources on this subject: Windows Phone 8 Applications[article] The .NET Framework Primer[article] Unleashing Your Development Skills with PowerShell [article]

Securing Your Data

Packt
12 Oct 2015
6 min read
In this article by Tyson Cadenhead, author of Socket.IO Cookbook, we will explore several topics related to security in Socket.IO applications. These topics will cover the gambit, from authentication and validation to how to use the wss:// protocol for secure WebSockets. As the WebSocket protocol opens innumerable opportunities to communicate more directly between the client and the server, people often wonder if Socket.IO is actually as secure as something such as the HTTP protocol. The answer to this question is that it depends entirely on how you implement it. WebSockets can easily be locked down to prevent malicious or accidental security holes, but as with any API interface, your security is only as tight as your weakest link. In this article, we will cover the following topics: Locking down the HTTP referrer Using secure WebSockets (For more resources related to this topic, see here.) Locking down the HTTP referrer Socket.IO is really good at getting around cross-domain issues. You can easily include the Socket.IO script from a different domain on your page, and it will just work as you may expect it to. There are some instances where you may not want your Socket.IO events to be available on every other domain. Not to worry! We can easily whitelist only the http referrers that we want so that some domains will be allowed to connect and other domains won't. How To Do It… To lock down the HTTP referrer and only allow events to whitelisted domains, follow these steps: Create two different servers that can connect to our Socket.IO instance. We will let one server listen on port 5000 and the second server listen on port 5001: var express = require('express'), app = express(), http = require('http'), socketIO = require('socket.io'), server, server2, io; app.get('/', function (req, res) { res.sendFile(__dirname + '/index.html'); }); server = http.Server(app); server.listen(5000); server2 = http.Server(app); server2.listen(5001); io = socketIO(server); When the connection is established, check the referrer in the headers. If it is a referrer that we want to give access to, we can let our connection perform its tasks and build up events as normal. If a blacklisted referrer, such as the one on port 5001 that we created, attempts a connection, we can politely decline and perhaps throw an error message back to the client, as shown in the following code: io.on('connection', function (socket) { switch (socket.request.headers.referer) { case 'http://localhost:5000/': socket.emit('permission.message', 'Okay, you're cool.'); break; default: returnsocket.emit('permission.message', 'Who invited you to this party?'); break; } }); On the client side, we can listen to the response from the server and react as appropriate using the following code: socket.on('permission.message', function (data) { document.querySelector('h1').innerHTML = data; }); How It Works… The referrer is always available in the socket.request.headers object of every socket, so we will be able to inspect it there to check whether it was a trusted source. In our case, we will use a switch statement to whitelist our domain on port 5000, but we could really use any mechanism at our disposal to perform the task. For example, if we need to dynamically whitelist domains, we can store a list of them in our database and search for it when the connection is established. Using secure WebSockets WebSocket communications can either take place over the ws:// protocol or the wss:// protocol. 
In similar terms, they can be thought of as the HTTP and HTTPS protocols in the sense that one is secure and one isn't. Secure WebSockets are encrypted by the transport layer, so they are safer to use when you handle sensitive data. In this recipe, you will learn how to force our Socket.IO communications to happen over the wss:// protocol for an extra layer of encryption. Getting Ready… In this recipe, we will need to create a self-signing certificate so that we can serve our app locally over the HTTPS protocol. For this, we will need an npm package called Pem. This allows you to create a self-signed certificate that you can provide to your server. Of course, in a real production environment, we would want a true SSL certificate instead of a self-signed one. To install Pem, simply call npm install pem –save. As our certificate is self-signed, you will probably see something similar to the following screenshot when you navigate to your secure server: Just take a chance by clicking on the Proceed to localhost link. You'll see your application load using the HTTPS protocol. How To Do It… To use the secure wss:// protocol, follow these steps: First, create a secure server using the built-in node HTTPS package. We can create a self-signed certificate with the pem package so that we can serve our application over HTTPS instead of HTTP, as shown in the following code: var https = require('https'), pem = require('pem'), express = require('express'), app = express(), socketIO = require('socket.io'); // Create a self-signed certificate with pem pem.createCertificate({ days: 1, selfSigned: true }, function (err, keys) { app.get('/', function(req, res){ res.sendFile(__dirname + '/index.html'); }); // Create an https server with the certificate and key from pem var server = https.createServer({ key: keys.serviceKey, cert: keys.certificate }, app).listen(5000); vario = socketIO(server); io.on('connection', function (socket) { var protocol = 'ws://'; // Check the handshake to determine if it was secure or not if (socket.handshake.secure) { protocol = 'wss://'; } socket.emit('hello.client', { message: 'This is a message from the server. It was sent using the ' + protocol + ' protocol' }); }); }); In your client-side JavaScript, specify secure: true when you initialize your WebSocket as follows: var socket = io('//localhost:5000', { secure: true }); socket.on('hello.client', function (data) { console.log(data); }); Now, start your server and navigate to https://localhost:5000. Proceed to this page. You should see a message in your browser developer tools that shows, This is a message from the server. It was sent using the wss:// protocol. How It Works… The protocol of our WebSocket is actually set automatically based on the protocol of the page that it sits on. This means that a page that is served over the HTTP protocol will send the WebSocket communications over ws:// by default, and a page that is served by HTTPS will default to using the wss:// protocol. However, by setting the secure option to true, we told the WebSocket to always serve through wss:// no matter what. Summary In this article, we gave you an overview of the topics related to security in Socket.IO applications. Resources for Article: Further resources on this subject: Using Socket.IO and Express together[article] Adding Real-time Functionality Using Socket.io[article] Welcome to JavaScript in the full stack [article]

Basics of Jupyter Notebook and Python

Packt Editorial Staff
11 Oct 2015
28 min read
In this article by Cyrille Rossant, coming from his book, Learning IPython for Interactive Computing and Data Visualization - Second Edition, we will see how to use IPython console, Jupyter Notebook, and we will go through the basics of Python. Originally, IPython provided an enhanced command-line console to run Python code interactively. The Jupyter Notebook is a more recent and more sophisticated alternative to the console. Today, both tools are available, and we recommend that you learn to use both. [box type="note" align="alignleft" class="" width=""]The first chapter of the book, Chapter 1, Getting Started with IPython, contains all installation instructions. The main step is to download and install the free Anaconda distribution at https://www.continuum.io/downloads (the version of Python 3 64-bit for your operating system).[/box] Launching the IPython console To run the IPython console, type ipython in an OS terminal. There, you can write Python commands and see the results instantly. Here is a screenshot: IPython console The IPython console is most convenient when you have a command-line-based workflow and you want to execute some quick Python commands. You can exit the IPython console by typing exit. [box type="note" align="alignleft" class="" width=""]Let's mention the Qt console, which is similar to the IPython console but offers additional features such as multiline editing, enhanced tab completion, image support, and so on. The Qt console can also be integrated within a graphical application written with Python and Qt. See http://jupyter.org/qtconsole/stable/ for more information.[/box] Launching the Jupyter Notebook To run the Jupyter Notebook, open an OS terminal, go to ~/minibook/ (or into the directory where you've downloaded the book's notebooks), and type jupyter notebook. This will start the Jupyter server and open a new window in your browser (if that's not the case, go to the following URL: http://localhost:8888). Here is a screenshot of Jupyter's entry point, the Notebook dashboard: The Notebook dashboard [box type="note" align="alignleft" class="" width=""]At the time of writing, the following browsers are officially supported: Chrome 13 and greater; Safari 5 and greater; and Firefox 6 or greater. Other browsers may work also. Your mileage may vary.[/box] The Notebook is most convenient when you start a complex analysis project that will involve a substantial amount of interactive experimentation with your code. Other common use-cases include keeping track of your interactive session (like a lab notebook), or writing technical documents that involve code, equations, and figures. In the rest of this section, we will focus on the Notebook interface. [box type="note" align="alignleft" class="" width=""]Closing the Notebook server To close the Notebook server, go to the OS terminal where you launched the server from, and press Ctrl + C. You may need to confirm with y.[/box] The Notebook dashboard The dashboard contains several tabs which are as follows: Files: shows all files and notebooks in the current directory Running: shows all kernels currently running on your computer Clusters: lets you launch kernels for parallel computing A notebook is an interactive document containing code, text, and other elements. A notebook is saved in a file with the .ipynb extension. This file is a plain text file storing a JSON data structure. A kernel is a process running an interactive session. When using IPython, this kernel is a Python process. 
There are kernels in many languages other than Python. [box type="note" align="alignleft" class="" width=""]We follow the convention to use the term notebook for a file, and Notebook for the application and the web interface.[/box] In Jupyter, notebooks and kernels are strongly separated. A notebook is a file, whereas a kernel is a process. The kernel receives snippets of code from the Notebook interface, executes them, and sends the outputs and possible errors back to the Notebook interface. Thus, in general, the kernel has no notion of the Notebook. A notebook is persistent (it's a file), whereas a kernel may be closed at the end of an interactive session and it is therefore not persistent. When a notebook is re-opened, it needs to be re-executed. In general, no more than one Notebook interface can be connected to a given kernel. However, several IPython consoles can be connected to a given kernel. The Notebook user interface To create a new notebook, click on the New button, and select Notebook (Python 3). A new browser tab opens and shows the Notebook interface as follows: A new notebook Here are the main components of the interface, from top to bottom: The notebook name, which you can change by clicking on it. This is also the name of the .ipynb file. The Menu bar gives you access to several actions pertaining to either the notebook or the kernel. To the right of the menu bar is the Kernel name. You can change the kernel language of your notebook from the Kernel menu. The Toolbar contains icons for common actions. In particular, the dropdown menu showing Code lets you change the type of a cell. Following is the main component of the UI: the actual Notebook. It consists of a linear list of cells. We will detail the structure of a cell in the following sections. Structure of a notebook cell There are two main types of cells: Markdown cells and code cells, and they are described as follows: A Markdown cell contains rich text. In addition to classic formatting options like bold or italics, we can add links, images, HTML elements, LaTeX mathematical equations, and more. A code cell contains code to be executed by the kernel. The programming language corresponds to the kernel's language. We will only use Python in this book, but you can use many other languages. You can change the type of a cell by first clicking on a cell to select it, and then choosing the cell's type in the toolbar's dropdown menu showing Markdown or Code. Markdown cells Here is a screenshot of a Markdown cell: A Markdown cell The top panel shows the cell in edit mode, while the bottom one shows it in render mode. The edit mode lets you edit the text, while the render mode lets you display the rendered cell. We will explain the differences between these modes in greater detail in the following section. Code cells Here is a screenshot of a complex code cell: Structure of a code cell This code cell contains several parts, as follows: The Prompt number shows the cell's number. This number increases every time you run the cell. Since you can run cells of a notebook out of order, nothing guarantees that code numbers are linearly increasing in a given notebook. The Input area contains a multiline text editor that lets you write one or several lines of code with syntax highlighting. The Widget area may contain graphical controls; here, it displays a slider. 
The Output area can contain multiple outputs, here: Standard output (text in black) Error output (text with a red background) Rich output (an HTML table and an image here) The Notebook modal interface The Notebook implements a modal interface similar to some text editors such as vim. Mastering this interface may represent a small learning curve for some users. Use the edit mode to write code (the selected cell has a green border, and a pen icon appears at the top right of the interface). Click inside a cell to enable the edit mode for this cell (you need to double-click with Markdown cells). Use the command mode to operate on cells (the selected cell has a gray border, and there is no pen icon). Click outside the text area of a cell to enable the command mode (you can also press the Esc key). Keyboard shortcuts are available in the Notebook interface. Type h to show them. We review here the most common ones (for Windows and Linux; shortcuts for Mac OS X may be slightly different). Keyboard shortcuts available in both modes Here are a few keyboard shortcuts that are always available when a cell is selected: Ctrl + Enter: run the cell Shift + Enter: run the cell and select the cell below Alt + Enter: run the cell and insert a new cell below Ctrl + S: save the notebook Keyboard shortcuts available in the edit mode In the edit mode, you can type code as usual, and you have access to the following keyboard shortcuts: Esc: switch to command mode Ctrl + Shift + -: split the cell Keyboard shortcuts available in the command mode In the command mode, keystrokes are bound to cell operations. Don't write code in command mode or unexpected things will happen! For example, typing dd in command mode will delete the selected cell! Here are some keyboard shortcuts available in command mode: Enter: switch to edit mode Up or k: select the previous cell Down or j: select the next cell y / m: change the cell type to code cell/Markdown cell a / b: insert a new cell above/below the current cell x / c / v: cut/copy/paste the current cell dd: delete the current cell z: undo the last delete operation Shift + =: merge the cell below h: display the help menu with the list of keyboard shortcuts Spending some time learning these shortcuts is highly recommended. References Here are a few references: Main documentation of Jupyter at http://jupyter.readthedocs.org/en/latest/ Jupyter Notebook interface explained at http://jupyter-notebook.readthedocs.org/en/latest/notebook.html A crash course on Python If you don't know Python, read this section to learn the fundamentals. Python is a very accessible language and is even taught to school children. If you have ever programmed, it will only take you a few minutes to learn the basics. Hello world Open a new notebook and type the following in the first cell: In [1]: print("Hello world!") Out[1]: Hello world! Here is a screenshot: "Hello world" in the Notebook [box type="note" align="alignleft" class="" width=""]Prompt string Note that the convention chosen in this article is to show Python code (also called the input) prefixed with In [x]: (which shouldn't be typed). This is the standard IPython prompt. Here, you should just type print("Hello world!") and then press Shift + Enter.[/box] Congratulations! You are now a Python programmer. Variables Let's use Python as a calculator. In [2]: 2 * 2 Out[2]: 4 Here, 2 * 2 is an expression statement. This operation is performed, the result is returned, and IPython displays it in the notebook cell's output. 
[box type="note" align="alignleft" class="" width=""]Division In Python 3, 3 / 2 returns 1.5 (floating-point division), whereas it returns 1 in Python 2 (integer division). This can be source of errors when porting Python 2 code to Python 3. It is recommended to always use the explicit 3.0 / 2.0 for floating-point division (by using floating-point numbers) and 3 // 2 for integer division. Both syntaxes work in Python 2 and Python 3. See http://python3porting.com/differences.html#integer-division for more details.[/box] Other built-in mathematical operators include +, -, ** for the exponentiation, and others. You will find more details at https://docs.python.org/3/reference/expressions.html#the-power-operator. Variables form a fundamental concept of any programming language. A variable has a name and a value. Here is how to create a new variable in Python: In [3]: a = 2 And here is how to use an existing variable: In [4]: a * 3 Out[4]: 6 Several variables can be defined at once (this is called unpacking): In [5]: a, b = 2, 6 There are different types of variables. Here, we have used a number (more precisely, an integer). Other important types include floating-point numbers to represent real numbers, strings to represent text, and booleans to represent True/False values. Here are a few examples: In [6]: somefloat = 3.1415 sometext = 'pi is about' # You can also use double quotes. print(sometext, somefloat) # Display several variables. Out[6]: pi is about 3.1415 Note how we used the # character to write comments. Whereas Python discards the comments completely, adding comments in the code is important when the code is to be read by other humans (including yourself in the future). String escaping String escaping refers to the ability to insert special characters in a string. For example, how can you insert ' and ", given that these characters are used to delimit a string in Python code? The backslash is the go-to escape character in Python (and in many other languages too). Here are a few examples: In [7]: print("Hello "world"") print("A list:n* item 1n* item 2") print("C:pathonwindows") print(r"C:pathonwindows") Out[7]: Hello "world" A list: * item 1 * item 2 C:pathonwindows C:pathonwindows The special character n is the new line (or line feed) character. To insert a backslash, you need to escape it, which explains why it needs to be doubled as . You can also disable escaping by using raw literals with a r prefix before the string, like in the last example above. In this case, backslashes are considered as normal characters. This is convenient when writing Windows paths, since Windows uses backslash separators instead of forward slashes like on Unix systems. A very common error on Windows is forgetting to escape backslashes in paths: writing "C:path" may lead to subtle errors. You will find the list of special characters in Python at https://docs.python.org/3.4/reference/lexical_analysis.html#string-and-bytes-literals. Lists A list contains a sequence of items. You can concisely instruct Python to perform repeated actions on the elements of a list. Let's first create a list of numbers as follows: In [8]: items = [1, 3, 0, 4, 1] Note the syntax we used to create the list: square brackets [], and commas , to separate the items. 
The built-in function len() returns the number of elements in a list: In [9]: len(items) Out[9]: 5 [box type="note" align="alignleft" class="" width=""]Python comes with a set of built-in functions, including print(), len(), max(), functional routines like filter() and map(), and container-related routines like all(), any(), range(), and sorted(). You will find the full list of built-in functions at https://docs.python.org/3.4/library/functions.html.[/box] Now, let's compute the sum of all elements in the list. Python provides a built-in function for this: In [10]: sum(items) Out[10]: 9 We can also access individual elements in the list, using the following syntax: In [11]: items[0] Out[11]: 1 In [12]: items[-1] Out[12]: 1 Note that indexing starts at 0 in Python: the first element of the list is indexed by 0, the second by 1, and so on. Also, -1 refers to the last element, -2, to the penultimate element, and so on. The same syntax can be used to alter elements in the list: In [13]: items[1] = 9 items Out[13]: [1, 9, 0, 4, 1] We can access sublists with the following syntax: In [14]: items[1:3] Out[14]: [9, 0] Here, 1:3 represents a slice going from element 1 included (this is the second element of the list) to element 3 excluded. Thus, we get a sublist with the second and third element of the original list. The first-included/last-excluded asymmetry leads to an intuitive treatment of overlaps between consecutive slices. Also, note that a sublist refers to a dynamic view of the original list, not a copy; changing elements in the sublist automatically changes them in the original list. Python provides several other types of containers: Tuples are immutable and contain a fixed number of elements: In [15]: my_tuple = (1, 2, 3) my_tuple[1] Out[15]: 2 Dictionaries contain key-value pairs. They are extremely useful and common: In [16]: my_dict = {'a': 1, 'b': 2, 'c': 3} print('a:', my_dict['a']) Out[16]: a: 1 In [17]: print(my_dict.keys()) Out[17]: dict_keys(['c', 'a', 'b']) There is no notion of order in a dictionary. However, the native collections module provides an OrderedDict structure that keeps the insertion order (see https://docs.python.org/3.4/library/collections.html). Sets, like mathematical sets, contain distinct elements: In [18]: my_set = set([1, 2, 3, 2, 1]) my_set Out[18]: {1, 2, 3} A Python object is mutable if its value can change after it has been created. Otherwise, it is immutable. For example, a string is immutable; to change it, a new string needs to be created. A list, a dictionary, or a set is mutable; elements can be added or removed. By contrast, a tuple is immutable, and it is not possible to change the elements it contains without recreating the tuple. See https://docs.python.org/3.4/reference/datamodel.html for more details. Loops We can run through all elements of a list using a for loop: In [19]: for item in items: print(item) Out[19]: 1 9 0 4 1 There are several things to note here: The for item in items syntax means that a temporary variable named item is created at every iteration. This variable contains the value of every item in the list, one at a time. Note the colon : at the end of the for statement. Forgetting it will lead to a syntax error! The statement print(item) will be executed for all items in the list. Note the four spaces before print: this is called the indentation. You will find more details about indentation in the next subsection. 
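As a brief aside (a sketch that is not in the original text, reusing the my_dict dictionary defined above), the same for syntax also works on the other containers; for example, you can iterate over the key-value pairs of a dictionary with the items() method:

for key, value in my_dict.items():
    # key takes the values 'a', 'b', 'c' (in no guaranteed order),
    # and value takes the corresponding numbers
    print(key, value)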
Python supports a concise syntax to perform a given operation on all elements of a list, as follows: In [20]: squares = [item * item for item in items] squares Out[20]: [1, 81, 0, 16, 1] This is called a list comprehension. A new list is created here; it contains the squares of all numbers in the list. This concise syntax leads to highly readable and Pythonic code. Indentation Indentation refers to the spaces that may appear at the beginning of some lines of code. This is a particular aspect of Python's syntax. In most programming languages, indentation is optional and is generally used to make the code visually clearer. But in Python, indentation also has a syntactic meaning. Particular indentation rules need to be followed for Python code to be correct. In general, there are two ways to indent some text: by inserting a tab character (also referred to as t), or by inserting a number of spaces (typically, four). It is recommended to use spaces instead of tab characters. Your text editor should be configured such that the Tab key on the keyboard inserts four spaces instead of a tab character. In the Notebook, indentation is automatically configured properly; so you shouldn't worry about this issue. The question only arises if you use another text editor for your Python code. Finally, what is the meaning of indentation? In Python, indentation delimits coherent blocks of code, for example, the contents of a loop, a conditional branch, a function, and other objects. Where other languages such as C or JavaScript use curly braces to delimit such blocks, Python uses indentation. Conditional branches Sometimes, you need to perform different operations on your data depending on some condition. For example, let's display all even numbers in our list: In [21]: for item in items: if item % 2 == 0: print(item) Out[21]: 0 4 Again, here are several things to note: An if statement is followed by a boolean expression. If a and b are two integers, the modulo operand a % b returns the remainder from the division of a by b. Here, item % 2 is 0 for even numbers, and 1 for odd numbers. The equality is represented by a double equal sign == to avoid confusion with the assignment operator = that we use when we create variables. Like with the for loop, the if statement ends with a colon :. The part of the code that is executed when the condition is satisfied follows the if statement. It is indented. Indentation is cumulative: since this if is inside a for loop, there are eight spaces before the print(item) statement. Python supports a concise syntax to select all elements in a list that satisfy certain properties. Here is how to create a sublist with only even numbers: In [22]: even = [item for item in items if item % 2 == 0] even Out[22]: [0, 4] This is also a form of list comprehension. Functions Code is typically organized into functions. A function encapsulates part of your code. Functions allow you to reuse bits of functionality without copy-pasting the code. Here is a function that tells whether an integer number is even or not: In [23]: def is_even(number): """Return whether an integer is even or not.""" return number % 2 == 0 There are several things to note here: A function is defined with the def keyword. After def comes the function name. A general convention in Python is to only use lowercase characters, and separate words with an underscore _. A function name generally starts with a verb. The function name is followed by parentheses, with one or several variable names called the arguments. 
These are the inputs of the function. There is a single argument here, named number. No type is specified for the argument. This is because Python is dynamically typed; you could pass a variable of any type. This function would work fine with floating point numbers, for example (the modulo operation works with floating point numbers in addition to integers). The body of the function is indented (and note the colon : at the end of the def statement). There is a docstring wrapped by triple quotes """. This is a particular form of comment that explains what the function does. It is not mandatory, but it is strongly recommended to write docstrings for the functions exposed to the user. The return keyword in the body of the function specifies the output of the function. Here, the output is a Boolean, obtained from the expression number % 2 == 0. It is possible to return several values; just use a comma to separate them (in this case, a tuple of Booleans would be returned). Once a function is defined, it can be called like this: In [24]: is_even(3) Out[24]: False In [25]: is_even(4) Out[25]: True Here, 3 and 4 are successively passed as arguments to the function. Positional and keyword arguments A Python function can accept an arbitrary number of arguments, called positional arguments. It can also accept optional named arguments, called keyword arguments. Here is an example: In [26]: def remainder(number, divisor=2): return number % divisor The second argument of this function, divisor, is optional. If it is not provided by the caller, it will default to the number 2, as shown here: In [27]: remainder(5) Out[27]: 1 There are two equivalent ways of specifying a keyword argument when calling a function. They are as follows: In [28]: remainder(5, 3) Out[28]: 2 In [29]: remainder(5, divisor=3) Out[29]: 2 In the first case, 3 is understood as the second argument, divisor. In the second case, the name of the argument is given explicitly by the caller. This second syntax is clearer and less error-prone than the first one. Functions can also accept arbitrary sets of positional and keyword arguments, using the following syntax: In [30]: def f(*args, **kwargs): print("Positional arguments:", args) print("Keyword arguments:", kwargs) In [31]: f(1, 2, c=3, d=4) Out[31]: Positional arguments: (1, 2) Keyword arguments: {'c': 3, 'd': 4} Inside the function, args is a tuple containing positional arguments, and kwargs is a dictionary containing keyword arguments. Passage by assignment When passing a parameter to a Python function, a reference to the object is actually passed (passage by assignment): If the passed object is mutable, it can be modified by the function If the passed object is immutable, it cannot be modified by the function Here is an example: In [32]: my_list = [1, 2] def add(some_list, value): some_list.append(value) add(my_list, 3) my_list Out[32]: [1, 2, 3] The add() function modifies an object defined outside it (in this case, the object my_list); we say this function has side-effects. A function with no side-effects is called a pure function: it doesn't modify anything in the outer context, and it deterministically returns the same result for any given set of inputs. Pure functions are to be preferred over functions with side-effects. Knowing this can help you spot out subtle bugs. There are further related concepts that are useful to know, including function scopes, naming, binding, and more. 
Here are a couple of links: Passage by reference at https://docs.python.org/3/faq/programming.html#how-do-i-write-a-function-with-output-parameters-call-by-reference Naming, binding, and scope at https://docs.python.org/3.4/reference/executionmodel.html Errors Let's discuss errors in Python. As you learn, you will inevitably come across errors and exceptions. The Python interpreter will most of the time tell you what the problem is, and where it occurred. It is important to understand the vocabulary used by Python so that you can more quickly find and correct your errors. Let's see the following example: In [33]: def divide(a, b): return a / b In [34]: divide(1, 0) Out[34]: --------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-2-b77ebb6ac6f6> in <module>() ----> 1 divide(1, 0) <ipython-input-1-5c74f9fd7706> in divide(a, b) 1 def divide(a, b): ----> 2 return a / b ZeroDivisionError: division by zero Here, we defined a divide() function, and called it to divide 1 by 0. Dividing a number by 0 is an error in Python. Here, a ZeroDivisionError exception was raised. An exception is a particular type of error that can be raised at any point in a program. It is propagated from the innards of the code up to the command that launched the code. It can be caught and processed at any point. You will find more details about exceptions at https://docs.python.org/3/tutorial/errors.html, and common exception types at https://docs.python.org/3/library/exceptions.html#bltin-exceptions. The error message you see contains the stack trace, the exception type, and the exception message. The stack trace shows all function calls between the raised exception and the script calling point. The top frame, indicated by the first arrow ---->, shows the entry point of the code execution. Here, it is divide(1, 0), which was called directly in the Notebook. The error occurred while this function was called. The next and last frame is indicated by the second arrow. It corresponds to line 2 in our function divide(a, b). It is the last frame in the stack trace: this means that the error occurred there. Object-oriented programming Object-oriented programming (OOP) is a relatively advanced topic. Although we won't use it much in this book, it is useful to know the basics. Also, mastering OOP is often essential when you start to have a large code base. In Python, everything is an object. A number, a string, or a function is an object. An object is an instance of a type (also known as class). An object has attributes and methods, as specified by its type. An attribute is a variable bound to an object, giving some information about it. A method is a function that applies to the object. For example, the object 'hello' is an instance of the built-in str type (string). The type() function returns the type of an object, as shown here: In [35]: type('hello') Out[35]: str There are native types, like str or int (integer), and custom types, also called classes, that can be created by the user. In IPython, you can discover the attributes and methods of any object with the dot syntax and tab completion. For example, typing 'hello'.u and pressing Tab automatically shows us the existence of the upper() method: In [36]: 'hello'.upper() Out[36]: 'HELLO' Here, upper() is a method available to all str objects; it returns an uppercase copy of a string. A useful string method is format(). 
This simple and convenient templating system lets you generate strings dynamically, as shown in the following example: In [37]: 'Hello {0:s}!'.format('Python') Out[37]: Hello Python The {0:s} syntax means "replace this with the first argument of format(), which should be a string". The variable type after the colon is especially useful for numbers, where you can specify how to display the number (for example, .3f to display three decimals). The 0 makes it possible to replace a given value several times in a given string. You can also use a name instead of a position—for example 'Hello {name}!'.format(name='Python'). Some methods are prefixed with an underscore _; they are private and are generally not meant to be used directly. IPython's tab completion won't show you these private attributes and methods unless you explicitly type _ before pressing Tab. In practice, the most important thing to remember is that appending a dot . to any Python object and pressing Tab in IPython will show you a lot of functionality pertaining to that object. Functional programming Python is a multi-paradigm language; it notably supports imperative, object-oriented, and functional programming models. Python functions are objects and can be handled like other objects. In particular, they can be passed as arguments to other functions (also called higher-order functions). This is the essence of functional programming. Decorators provide a convenient syntax construct to define higher-order functions. Here is an example using the is_even() function from the previous Functions section: In [38]: def show_output(func): def wrapped(*args, **kwargs): output = func(*args, **kwargs) print("The result is:", output) return wrapped The show_output() function transforms an arbitrary function func() to a new function, named wrapped(), that displays the result of the function, as follows: In [39]: f = show_output(is_even) f(3) Out[39]: The result is: False Equivalently, this higher-order function can also be used with a decorator, as follows: In [40]: @show_output def square(x): return x * x In [41]: square(3) Out[41]: The result is: 9 You can find more information about Python decorators at https://en.wikipedia.org/wiki/Python_syntax_and_semantics#Decorators and at http://www.thecodeship.com/patterns/guide-to-python-function-decorators/. Python 2 and 3 Let's finish this section with a few notes about Python 2 and Python 3 compatibility issues. There are still some Python 2 code and libraries that are not compatible with Python 3. Therefore, it is sometimes useful to be aware of the differences between the two versions. One of the most obvious differences is that print is a statement in Python 2, whereas it is a function in Python 3. Therefore, print "Hello" (without parentheses) works in Python 2 but not in Python 3, while print("Hello") works in both Python 2 and Python 3. 
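As a small illustration (a sketch that is not in the original text), the __future__ module listed among the options below lets you write a print call that behaves the same way in both versions:

# In Python 2, this import turns print into a function, as in Python 3.
# In Python 3, the import is accepted and has no effect.
from __future__ import print_function

print("Hello world!")  # works identically in Python 2 and Python 3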
There are several non-mutually exclusive options to write portable code that works with both versions:

__future__: A built-in module supporting backward-incompatible Python syntax
2to3: A built-in Python module to port Python 2 code to Python 3
six: An external lightweight library for writing compatible code

Here are a few references:

Official Python 2/3 wiki page at https://wiki.python.org/moin/Python2orPython3
The Porting to Python 3 book, by CreateSpace Independent Publishing Platform at http://www.python3porting.com/bookindex.html
2to3 at https://docs.python.org/3.4/library/2to3.html
six at https://pythonhosted.org/six/
__future__ at https://docs.python.org/3.4/library/__future__.html

The IPython Cookbook contains an in-depth recipe about choosing between Python 2 and 3, and how to support both.

Going beyond the basics

You now know the fundamentals of Python, the bare minimum that you will need in this book. As you can imagine, there is much more to say about Python. Following are a few further basic concepts that are often useful and that we cannot cover here, unfortunately. You are highly encouraged to have a look at them in the references given at the end of this section:

range and enumerate
pass, break, and continue, to be used in loops
Working with files
Creating and importing modules
The Python standard library, which provides a wide range of functionality (OS, network, file systems, compression, mathematics, and more)

Here are some slightly more advanced concepts that you might find useful if you want to strengthen your Python skills:

Regular expressions for advanced string processing
Lambda functions for defining small anonymous functions
Generators for controlling custom loops
Exceptions for handling errors
with statements for safely handling contexts
Advanced object-oriented programming
Metaprogramming for modifying Python code dynamically
The pickle module for persisting Python objects on disk and exchanging them across a network

Finally, here are a few references:

Getting started with Python: https://www.python.org/about/gettingstarted/
A Python tutorial: https://docs.python.org/3/tutorial/index.html
The Python Standard Library: https://docs.python.org/3/library/index.html
Interactive tutorial: http://www.learnpython.org/
Codecademy Python course: http://www.codecademy.com/tracks/python
Language reference (expert level): https://docs.python.org/3/reference/index.html
Python Cookbook, by David Beazley and Brian K. Jones, O'Reilly Media (advanced level, highly recommended if you want to become a Python expert)

Summary

In this article, we have seen how to launch the IPython console and Jupyter Notebook, the different aspects of the Notebook and its user interface, the structure of the notebook cell, keyboard shortcuts that are available in the Notebook interface, and the basics of Python.

article-image-how-to-develop-for-wearable-tech-with-pebblejs
Eugene Safronov
09 Oct 2015
7 min read

How to Develop for Today's Wearable Tech with Pebble.js

Pebble is a smartwatch that pairs with both Android and iPhone devices via Bluetooth. It has an e-paper display with LED backlight, accelerometer and compass sensors. On top of that battery lasts up to a week between charges. From the beginning Pebble team embraced the Developer community which resulted in powerful SDK. Although a primary language for apps development is C, there is a room for JavaScript developers as well. PebbleKit JS The PebbleKit JavaScript framework expands the ability of Pebble app to run JavaScript logic on the phone. It allows fast access to data location from the phone and has API for getting data from the web. Unfortunately app development still requires programming in C. I could recommend some great articles on how to get started. Pebble.js Pebble.js, in contrast to PebbleKit JS, allows developers to create watchapp using only JavaScript code. It is simple yet powerful enough for creating watch apps that fetch and display data from various web services or remotely control other smartdevices. The downside of that approach is connected to the way how Pebble.js works. It is built on top of the standard Pebble SDK and PebbleKit JS. It consists of a C app that runs on the watch and interacts with the phone in order to process user actions. The Pebble.js library provides an API to build user interface and then remotely controls the C app to display it. As a consequence of the described approach watchapp couldn't function without a connection to the phone. On a side note, I would mention that library is open source and still in beta, so breaking API changes are possible. First steps There are 2 options getting started with Pebble.js: Install Pebble SDK on your local machine. This option allows you to customize Pebble.js. Create a CloudPebble account and work with your appwatch projects online. It is the easiest way to begin Pebble development. CloudPebble The CloudPebble environment allows you to write, build and deploy your appwatch applications both on a simulator and a physical device. Everything is stored in the cloud so no headache with compilers, virtual machines or python dependencies (in my case installation of boost-python end up with errors on MacOS). Hello world As an introduction let's build the Hello World application with Pebble.js. Create a new project in CloudPebble: Then write the following code in the app.js file: // Import the UI elements var UI = require('ui'); // Create a simple Card var card = new UI.Card({ title: 'Hello World', body: 'This is your first Pebble app!' }); // Display to the user card.show(); Start the code on Pebble watch or simulator and you will get the same screen as below: StackExchange profile Getting some data from web services like weather or news is easy with an Ajax library call. For example let's construct an app view of your StackExchange profile: var UI = require('ui'); var ajax = require('ajax'); // Create a Card with title and subtitle var card = new UI.Card({ title:'Profile', subtitle:'Fetching...' }); // Display the Card card.show(); // Construct URL // https://api.stackexchange.com/docs/me#order=desc&sort=reputation&filter=default&site=stackoverflow&run=true var API_TOKEN = 'put api token here'; var API_KEY = 'secret key'; var URL = 'https://api.stackexchange.com/2.2/me?key=' + API_KEY + '&order=desc&sort=reputation&access_token=' + API_TOKEN + '&filter=default'; // Make the request ajax( { url: URL, type: 'json' }, function(data) { // Success! 
console.log('Successfully fetched StackOverflow profile'); var profile = data.items[0]; var badges = 'Badges: ' + profile.badge_counts.gold + ' ' + profile.badge_counts.silver + ' ' + profile.badge_counts.bronze; // Show to user card.subtitle('Rep: ' + profile.reputation); card.body(badges + 'nDaily change:' + profile.reputation_change_day + 'nWeekly change:' + profile.reputation_change_week + 'nMonthly change:' + profile.reputation_change_month); }, function(error) { // Failure! console.log('Failed fetching Stackoverflow data: ' + JSON.stringify(error)); } ); Egg timer Lastly I would like to create a small real life watchapp. I will demonstrate how to compose a Timer app for boiling eggs. Let's start with the building blocks that we need: Window is the basic building block in Pebble application. It allows you to add different elements and specify a position and size for them. A menu is a type of Window that displays a standard Pebble menu. Vibe allows you to trigger vibration on the wrist. It will signal that eggs are boiled. Egg size screen Users are able to select eggs size from options: Medium Large Extra-large var UI = require('ui'); var menu = new UI.Menu({ sections: [{ title: 'Egg size', items: [{ title: 'Medium', }, { title: 'Large' }, { title: 'Extra-large' }] }] }); menu.show(); Timer selection screen On the next step user selects timer duration. It depends whether he wants soft-boiled or hard-boiled eggs. The second level menu for medium size looks like: var mdMenu = new UI.Menu({ sections: [{ title: 'Egg timer', items: [{ title: 'Runny', subtitle: '2m' }, { title: 'Soft', subtitle: '3m' }, { title: 'Firm', subtitle: '4m' }, { title: 'Hard', subtitle: '5m' }] }] }); // open second level menu from the main menu.on('select', function(e) { if (e.itemIndex === 0){ mdMenu.show(); } else if (e.itemIndex === 1){ lgMenu.show(); } else { xlMenu.show(); } }); Timer screen When timer duration is selected we start a countdown. mdMenu.on('select', onTimerSelect); lgMenu.on('select', onTimerSelect); xlMenu.on('select', onTimerSelect); // timeouts mapping from header to seconds var timeouts = { '2m': 120, '3m': 180, '4m': 240, '5m': 300, '6m': 360, '7m': 420 }; function onTimerSelect(e){ var timeout = timeouts[e.item.subtitle]; timer(timeout); } The final bit of the watchapp is to display a timer, show a message and notify the user with vibration on the wrist when time is elapsed. var readyMessage = new UI.Card({ title: 'Done', body: 'Your eggs are ready!' }); function timer(timerInSec){ var intervalId = setInterval(function(){ timerInSec--; // notify with double vibration if (timerInSec == 1){ Vibe.vibrate('double'); } if (timerInSec > 0){ timerText.text(getTimeString(timerInSec)); } else { readyMessage.show(); timerWindow.hide(); clearInterval(intervalId); // notify with long vibration Vibe.vibrate('long'); } }, 1000); var timerWindow = new UI.Window(); var timerText = new UI.Text({ position: new Vector2(0, 50), size: new Vector2(144, 30), font: 'bitham-42-light', text: getTimeString(timerInSec), textAlign: 'center' }); timerWindow.add(timerText); timerWindow.show(); timerWindow.on('hide', function(){ clearInterval(intervalId); }); } // format remaining time into 00:00 string function getTimeString(timeInSec){ var minutes = parseInt(timeInSec / 60); var seconds = timeInSec % 60; return minutes + ':' + (seconds < 10 ? 
('0' + seconds) : seconds); } Conclusion You can do much more with Pebble.js: Get accelerometer values Display complex UI mixing geometric elements, text and images Animate elements on the screen Use the GPS and LocalStorage on the phone Timeline API is coming Pebble.js is best suited for quick prototyping and applications that require access to the Internet. The unfortunate part of the JavaScript written applications is the requirement of a constant connection to the phone. Usually Pebble.js apps need more power and respond slower than a similar native app. Useful links Pebble.js tutorial Pebble blog Pebble.js docs Egg timer code About the author Eugene Safronov is a software engineer with a proven record of delivering high quality software. He has an extensive experience building successful teams and adjusting development processes to the project’s needs. His primary focuses are Web (.NET, node.js stacks) and cross-platform mobile development (native and hybrid). He can be found on Twitter @sejoker.
article-image-asynchronous-communication-between-components
Packt
09 Oct 2015
12 min read

Asynchronous Communication between Components

In this article by Andreas Niedermair, the author of the book Mastering ServiceStack, we will see the communication between asynchronous components. The recent release of .NET has added several new ways to further embrace asynchronous and parallel processing by introducing the Task Parallel Library (TPL) and async and await. (For more resources related to this topic, see here.) The need for asynchronous processing has been there since the early days of programming. Its main concept is to offload the processing to another thread or process to release the calling thread from waiting and it has become a standard model since the rise of GUIs. In such interfaces only one thread is responsible for drawing the GUI, which must not be blocked in order to remain available and also to avoid putting the application in a non-responding state. This paradigm is a core point in distributed systems, at some point, long running operations are offloaded to a separate component, either to overcome blocking or to avoid resource bottlenecks using dedicated machines, which also makes the processing more robust against unexpected application pool recycling and other such issues. A synonym for "fire-and-forget" is "one-way", which is also reflected by the design of static routes of ServiceStack endpoints, where the default is /{format}/oneway/{service}. Asynchronism adds a whole new level of complexity to our processing chain, as some callers might depend on a return value. This problem can be overcome by adding callback or another event to your design. Messaging or in general a producer consumer chain is a fundamental design pattern, which can be applied within the same process or inter-process, on the same or a cross machine to decouple components. Consider the following architecture: The client issues a request to the service, which processes the message and returns a response. The server is known and is directly bound to the client, which makes an on-the-fly addition of servers practically impossible. You'd need to reconfigure the clients to reflect the collection of servers on every change and implement a distribution logic for requests. Therefore, a new component is introduced, which acts as a broker (without any processing of the message, except delivery) between the client and service to decouple the service from the client. This gives us the opportunity to introduce more services for heavy load scenarios by simply registering a new instance to the broker, as shown in the following figure:. I left out the clustering (scaling) of brokers and also the routing of messages on purpose at this stage of introduction. In many cross process scenarios a database is introduced as a broker, which is constantly polled by services (and clients, if there's a response involved) to check whether there's a message to be processed or not. Adding a database as a broker and implementing your own logic can be absolutely fine for basic systems, but for more advanced scenarios it lacks some essential features, which Messages Queues come shipped with. Scalability: Decoupling is the biggest step towards a robust design, as it introduces the possibility to add more processing nodes to your data flow. Resilience: Messages are guaranteed to be delivered and processed as automatic retrying is available for non-acknowledged (processed) messages. If the retry count is exceeded, failed messages are stored in a Dead Letter Queue (DLQ) to be inspected later and are requeued after fixing the issue that caused the failure. 
In case of a partial failure of your infrastructure, clients can still produce messages that get delivered and processed as soon as there is even a single consumer back online. Pushing instead of polling: This is where asynchronism comes into play, as clients do not constantly poll for messages but instead it gets pushed by the broker when there's a new message in their subscribed queue. This minimizes the spinning and wait time, when the timer ticks only for 10 seconds. Guaranteed order: Most Message Queues offer a guaranteed order of the processing under defined conditions (mostly FIFO). Load balancing: With multiple services registered for messages, there is an inherent load balancing so that the heavy load scenarios can be handled better. In addition to this round-robin routing there are other routing logics, such as smallest-mailbox, tail-chopping, or random routing. Message persistence: Message Queues can be configured to persist their data to disk and even survive restarts of the host on which they are running. To overcome the downtime of the Message Queue you can even setup a cluster to offload the demand to other brokers while restarting a single node. Built-in priority: Message Queues usually have separate queues for different messages and even provide a separate in queue for prioritized messages. There are many more features, such as Time to live, security and batching modes, which we will not cover as they are outside the scope of ServiceStack. In the following example we will refer to two basic DTOs: public class Hello : ServiceStack.IReturn<HelloResponse> { public string Name { get; set; } } public class HelloResponse { public string Result { get; set; } } The Hello class is used to send a Name to a consumer that generates a message, which will be enqueued in the Message Queue as well. RabbitMQ RabbitMQ is a mature broker built on top of the Advanced Message Queuing Protocol (AMQP), which makes it possible to solve even more complex scenarios, as shown here: The messages will survive restarts of the RabbitMQ service and the additional guaranty of delivery is accomplished by depending upon an acknowledgement of the receipt (and processing) of the message, by default it is done by ServiceStack for typical scenarios. The client of this Message Queue is located in the ServiceStack.RabbitMq object's NuGet package (it uses the official client in the RabbitMQ.Client package under the hood). You can add additional protocols to RabbitMQ, such as Message Queue Telemetry Transport (MQTT) and Streaming Text Oriented Messaging Protocol (STOMP), with plugins to ease Interop scenarios. Due to its complexity, we will focus on an abstracted interaction with the broker. There are many books and articles available for a deeper understanding of RabbitMQ. A quick overview of the covered scenarios is available at https://www.rabbitmq.com/getstarted.html. The method of publishing a message with RabbitMQ does not differ much from RedisMQ: using ServiceStack; using ServiceStack.RabbitMq; using (var rabbitMqServer = new RabbitMqServer()) { using (var messageProducer = rabbitMqServer.CreateMessageProducer()) { var hello = new Hello { Name = "Demo" }; messageProducer.Publish(hello); } } This will create a Helloobject and publish it to the corresponding queue in RabbitMQ. 
To retrieve this message, we need to register a handler, as shown here: using System; using ServiceStack; using ServiceStack.RabbitMq; using ServiceStack.Text; var rabbitMqServer = new RabbitMqServer(); rabbitMqServer.RegisterHandler<Hello>(message => { var hello = message.GetBody(); var name = hello.Name; var result = "Hello {0}".Fmt(name); result.Print(); return null; }); rabbitMqServer.Start(); "Listening for hello messages".Print(); Console.ReadLine(); rabbitMqServer.Dispose(); This registers a handler for Hello objects and prints a message to the console. In favor of a straightforward example we are omitting all the parameters with default values of the constructor of RabbitMqServer, which will connect us to the local instance at port 5672. To change this, you can either provide a connectionString parameter (and optional credentials) or use a RabbitMqMessageFactory object to customize the connection. Setup Setting up RabbitMQ involves a bit of effort. At first you need to install Erlang from http://www.erlang.org/download.html, which is the runtime for RabbitMQ due to its functional and concurrent nature. Then you can grab the installer from https://www.rabbitmq.com/download.html, which will set RabbitMQ up and running as a service with a default configuration. Processing chain Due to its complexity, the processing chain with any mature Message Queue is different from what you know from RedisMQ. Exchanges are introduced in front of queues to route the messages to their respective queues according to their routing keys: The default exchange name is mx.servicestack (defined in ServiceStack.Messaging.QueueNames.Exchange) and is used in any Publish to call an IMessageProducer or IMessageQueueClient object. With IMessageQueueClient.Publish you can inject a routing key (queueName parameter), to customize the routing of a queue. Failed messages are published to the ServiceStack.Messaging.QueueNames.ExchangeDlq (mx.servicestack.dlq) and routed to queues with the name mq:{type}.dlq. Successful messages are published to ServiceStack.Messaging.QueueNames.ExchangeTopic (mx.servicestack.topic) and routed to the queue mq:{type}.outq. Additionally, there's also a priority queue to the in-queue with the name mq:{type}.priority. If you interact with RabbitMQ on a lower level, you can directly publish to queues and leave the routing via an exchange out of the picture. Each queue has features to define whether the queue is durable, deletes itself after the last consumer disconnected, or which exchange is to be used to publish dead messages with which routing key. More information on the concepts, different exchange types, queues, and acknowledging messages can be found at https://www.rabbitmq.com/tutorials/amqp-concepts.html. Replying directly back to the producer Messages published to a queue are dequeued in FIFO mode, hence there is no guarantee if the responses are delivered to the issuer of the initial message or not. 
To force a response to the originator you can make use of the ReplyTo property of a message: using System; using ServiceStack; using ServiceStack.Messaging; using ServiceStack.RabbitMq; using ServiceStack.Text; var rabbitMqServer = new RabbitMqServer(); var messageQueueClient = rabbitMqServer.CreateMessageQueueClient(); var queueName = messageQueueClient.GetTempQueueName(); var hello = new Hello { Name = "reply to originator" }; messageQueueClient.Publish(new Message<Hello>(hello) { ReplyTo = queueName }); var message = messageQueueClient.Get<HelloResponse>(queueName); var helloResponse = message.GetBody(); This code is more or less identical to the RedisMQ approach, but it does something different under the hood. The messageQueueClient.GetTempQueueName object creates a temporary queue, whose name is generated by ServiceStack.Messaging.QueueNames.GetTempQueueName. This temporary queue does not survive a restart of RabbitMQ, and gets deleted as soon as the consumer disconnects. As each queue is a separate Erlang process, you may encounter the process limits of Erlang and the maximum amount of file descriptors of your OS. Broadcasting a message In many scenarios a broadcast to multiple consumers is required, for example if you need to attach multiple loggers to a system it needs a lower level of implementation. The solution to this requirement is to create a fan-out exchange that will forward the message to all the queues instead of one connected queue, where one queue is consumed exclusively by one consumer, as shown: using ServiceStack; using ServiceStack.Messaging; using ServiceStack.RabbitMq; var fanoutExchangeName = string.Concat(QueueNames.Exchange, ".", ExchangeType.Fanout); var rabbitMqServer = new RabbitMqServer(); var messageProducer= (RabbitMqProducer) rabbitMqServer.CreateMessageProducer(); var channel = messageProducer.Channel; channel.ExchangeDeclare(exchange: fanoutExchangeName, type: ExchangeType.Fanout, durable: true, autoDelete: false, arguments: null); With the cast to RabbitMqProducer we have access to lower level actions, we need to declare and exchange this with the name mx.servicestack.fanout, which is durable and does not get deleted. Now, we need to bind a temporary and an exclusive queue to the exchange: var messageQueueClient = (RabbitMqQueueClient) rabbitMqServer.CreateMessageQueueClient(); var queueName = messageQueueClient.GetTempQueueName(); channel.QueueBind(queue: queueName, exchange: fanoutExchangeName, routingKey: QueueNames<Hello>.In); The call to messageQueueClient.GetTempQueueName() creates a temporary queue, which lives as long as there is just one consumer connected. This queue is bound to the fan-out exchange with the routing key mq:Hello.inq, as shown here: To publish the messages we need to use the RabbitMqProducer object (messageProducer): var hello = new Hello { Name = "Broadcast" }; var message = new Message<Hello>(hello); messageProducer.Publish(queueName: QueueNames<Hello>.In, message: message, exchange: fanoutExchangeName); Even though the first parameter of Publish is named queueName, it is propagated as the routingKey to the underlying PublishMessagemethod call. 
This will publish the message on the newly generated exchange with mq:Hello.inq as the routing key: Now, we need to encapsulate the handling of the message as: var messageHandler = new MessageHandler<Hello>(rabbitMqServer, message => { var hello = message.GetBody(); var name = hello.Name; name.Print(); return null; }); The MessageHandler<T> class is used internally in all the messaging solutions and looks for retries and replies. Now, we need to connect the message handler to the queue. using System; using System.IO; using System.Threading.Tasks; using RabbitMQ.Client; using RabbitMQ.Client.Exceptions; using ServiceStack.Messaging; using ServiceStack.RabbitMq; var consumer = new RabbitMqBasicConsumer(channel); channel.BasicConsume(queue: queueName, noAck: false, consumer: consumer); Task.Run(() => { while (true) { BasicGetResult basicGetResult; try { basicGetResult = consumer.Queue.Dequeue(); } catch (EndOfStreamException) { // this is ok return; } catch (OperationInterruptedException) { // this is ok return; } var message = basicGetResult.ToMessage<Hello>(); messageHandler.ProcessMessage(messageQueueClient, message); } }); This creates a RabbitMqBasicConsumer object, which is used to consume the temporary queue. To process messages we try to dequeuer from the Queue property in a separate task. This example does not handle the disconnects and reconnects from the server and does not integrate with the services (however, both can be achieved). Integrate RabbitMQ in your service The integration of RabbitMQ in a ServiceStack service does not differ overly from RedisMQ. All you have to do is adapt to the Configure method of your host. using Funq; using ServiceStack; using ServiceStack.Messaging; using ServiceStack.RabbitMq; public override void Configure(Container container) { container.Register<IMessageService>(arg => new RabbitMqServer()); container.Register<IMessageFactory>(arg => new RabbitMqMessageFactory()); var messageService = container.Resolve<IMessageService>(); messageService.RegisterHandler<Hello> (this.ServiceController.ExecuteMessage); messageService.Start(); } The registration of an IMessageService is needed for the rerouting of the handlers to your service; and also, the registration of an IMessageFactory is relevant if you want to publish a message in your service with PublishMessage. Summary In this article the messaging pattern was introduced along with all the available clients of existing Message Queues. Resources for Article: Further resources on this subject: ServiceStack applications[article] Web API and Client Integration[article] Building a Web Application with PHP and MariaDB – Introduction to caching [article]

article-image-creating-city-information-app-customized-table-views
Packt
08 Oct 2015
19 min read

Creating a City Information App with Customized Table Views

In this article by Cecil Costa, the author of Swift 2 Blueprints, we will cover the following: Project overview Setting it up The first scene Displaying cities information (For more resources related to this topic, see here.) Project overview The idea of this app is to give users information about cities such as the current weather, pictures, history, and cities that are around. How can we do it? Firstly, we have to decide on how the app is going to suggest a city to the user. Of course, the most logical city would be the city where the user is located, which means that we have to use the Core Location framework to retrieve the device's coordinates with the help of GPS. Once we have retrieved the user's location, we can search for cities next to it. To do this, we are going to use a service from http://www.geonames.org/. Other information that will be necessary is the weather. Of course, there are a lot of websites that can give us information on the weather forecast, but not all of them offer an API to use it for your app. In this case, we are going to use the Open Weather Map service. What about pictures? For pictures, we can use the famous Flickr. Easy, isn't it? Now that we have the necessary information, let's start with our app. Setting it up Before we start coding, we are going to register the needed services and create an empty app. First, let's create a user at geonames. Just go to http://www.geonames.org/login with your favorite browser, sign up as a new user, and confirm it when you receive a confirmation e-mail. It may look like everything has been done, however, you still need to upgrade your account to use the API services. Don't worry, it's free! So, open http://www.geonames.org/manageaccount and upgrade your account. Don't use the user demo provided by geonames, even for development. This user exceeds its daily quota very frequently. With geonames, we can receive information on cities by their coordinates, but we don't have the weather forecast and pictures. For weather forecasts, open http://openweathermap.org/register and register a new user and API. Lastly, we need a service for the cities' pictures. In this case, we are going to use Flickr. Just create a Yahoo! account and create an API key at https://www.flickr.com/services/apps/create/. While creating a new app, try to investigate the services available for it and their current status. Unfortunately, the APIs change a lot like their prices, their terms, and even their features. Now, we can start creating the app. Open Xcode, create a new single view application for iOS, and call it Chapter 2 City Info. Make sure that Swift is the main language like the following picture: The first task here is to add a library to help us work with JSON messages. In this case, a library called SwiftyJSON will solve our problem. Otherwise, it would be hard work to navigate through the NSJSONSerialization results. Download the SwiftyJSON library from https://github.com/SwiftyJSON/SwiftyJSON/archive/master.zip, then uncompress it, and copy the SwiftyJSON.swift file in your project. Another very common way of installing third party libraries or frameworks would be to use CocoaPods, which is commonly known as just PODs. This is a dependency manager, which downloads the desired frameworks with their dependencies and updates them. Check https://cocoapods.org/ for more information. Ok, so now it is time to start coding. We will create some functions and classes that should be common for the whole program. 
As you know, many functions return NSError if something goes wrong. However, sometimes there are errors that are detected by the code itself, such as when you receive a JSON message with an unexpected structure. For this reason, we are going to create a class that creates custom NSError objects. Add a new file to the project (command + N) called ErrorFactory.swift and add the following code:

import Foundation

class ErrorFactory {
    static let Domain = "CityInfo"

    enum Code:Int {
        case WrongHttpCode = 100,
             MissingParams = 101,
             AuthDenied = 102,
             WrongInput = 103
    }

    class func error(code:Code) -> NSError {
        let description:String
        let reason:String
        let recovery:String

        switch code {
        case .WrongHttpCode:
            description = NSLocalizedString("Server replied wrong code (not 200, 201 or 304)", comment: "")
            reason = NSLocalizedString("Wrong server or wrong api", comment: "")
            recovery = NSLocalizedString("Check if the server is the right one", comment: "")
        case .MissingParams:
            description = NSLocalizedString("There are some missing params", comment: "")
            reason = NSLocalizedString("Wrong endpoint or API version", comment: "")
            recovery = NSLocalizedString("Check the url and the server version", comment: "")
        case .AuthDenied:
            description = NSLocalizedString("Authorization denied", comment: "")
            reason = NSLocalizedString("User must accept the authorization for using its feature", comment: "")
            recovery = NSLocalizedString("Open user auth panel.", comment: "")
        case .WrongInput:
            description = NSLocalizedString("A parameter was wrong", comment: "")
            reason = NSLocalizedString("Probably a cast wasn't correct", comment: "")
            recovery = NSLocalizedString("Check the input parameters.", comment: "")
        }

        return NSError(domain: ErrorFactory.Domain,
                       code: code.rawValue,
                       userInfo: [
                           NSLocalizedDescriptionKey: description,
                           NSLocalizedFailureReasonErrorKey: reason,
                           NSLocalizedRecoverySuggestionErrorKey: recovery
                       ])
    }
}

The previous code shows the usage of NSError, which requires a domain: a string that differentiates the error type/origin and avoids collisions between error codes. The error code is just an integer that represents the error that occurred. We used an enumeration based on integer values, which makes the codes easier for the developer to remember and allows us to convert an enumeration case to an integer easily with the rawValue property. The third argument of the NSError initializer is a dictionary that contains messages, which can be useful to the user (actually, to the developer). Here, we have three keys:

NSLocalizedDescriptionKey: This contains a basic description of the error
NSLocalizedFailureReasonErrorKey: This explains what caused the error
NSLocalizedRecoverySuggestionErrorKey: This suggests what can be done to avoid or recover from this error

As you might have noticed, for these strings we used a function called NSLocalizedString, which will retrieve the message in the corresponding language if it is set in the Localizable.strings file.

So, let's add a new file to our app, call it Helpers.swift, and click on it for editing.

URLs have special character combinations that represent special characters; for example, a whitespace in a URL is sent as the combination %20 and an open parenthesis is sent as the combination %28. The stringByAddingPercentEncodingWithAllowedCharacters string method allows us to do this character conversion. If you need more information on percent encoding, you can check the Wikipedia entry at https://en.wikipedia.org/wiki/Percent-encoding.
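As a quick, standalone illustration (a sketch that is not part of the original listings; the "São Paulo" value is just an example string), this is what the method does to text containing a space and an accented character:

let city = "São Paulo"
// Characters that are not allowed in a URL host component are replaced
// by percent escapes, so the space becomes %20 and "ã" becomes %C3%A3.
let encoded = city.stringByAddingPercentEncodingWithAllowedCharacters(.URLHostAllowedCharacterSet())
print(encoded!)   // the method returns an optional String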
As we are going to work with web APIs, we will need to encode some text before we send it to the corresponding server. Type the following function to convert a dictionary into a URL-encoded string:

func toUriEncoded(params: [String:String]) -> String {
    var records = [String]()
    for (key, value) in params {
        let valueEncoded = value.stringByAddingPercentEncodingWithAllowedCharacters(.URLHostAllowedCharacterSet())
        records.append("\(key)=\(valueEncoded!)")
    }
    return "&".join(records)
}

Another common task is to dispatch work onto the main queue. You might have already used code like dispatch_async(dispatch_get_main_queue(), {() -> () in … }); however, it is too long. We can reduce it to something like M{…}. So, here is the function for it:

func M(completion: () -> ()) {
    dispatch_async(dispatch_get_main_queue(), completion)
}

A further common task is requesting JSON messages. To do so, we just need to know the endpoint, the required parameters, and the callback. So, we can start with this function as follows:

func requestJSON(urlString:String, params:[String:String] = [:], completion:(JSON, NSError?) -> Void){
    let fullUrlString = "\(urlString)?\(toUriEncoded(params))"
    if let url = NSURL(string: fullUrlString) {
        NSURLSession.sharedSession().dataTaskWithURL(url) { (data, response, error) -> Void in
            if error != nil {
                completion(JSON(NSNull()), error)
                return
            }
            var jsonData = data!
            var jsonString = NSString(data: jsonData, encoding: NSUTF8StringEncoding)!

Here, we have to add some tricky code, because the Flickr API always wraps the corresponding JSON in a callback function called jsonFlickrApi. This callback must be removed before the JSON text is parsed. We can handle this by adding the following code:

            // if it is the Flickr response we have to remove the callback function jsonFlickrApi()
            // from the JSON string
            if (jsonString as String).characters.startsWith("jsonFlickrApi(".characters) {
                jsonString = jsonString.substringFromIndex("jsonFlickrApi(".characters.count)
                let end = (jsonString as String).characters.count - 1
                jsonString = jsonString.substringToIndex(end)
                jsonData = jsonString.dataUsingEncoding(NSUTF8StringEncoding)!
            }

Now, we can complete this function by creating a JSON object and calling the callback:

            let json = JSON(data:jsonData)
            completion(json, nil)
        }.resume()
    } else {
        completion(JSON(NSNull()), ErrorFactory.error(.WrongInput))
    }
}

At this point, the app has a good skeleton, which means that, from now on, we can code the app itself.

The first scene

Create a project group (command + option + N) for the view controllers and move the ViewController.swift file (created by Xcode) into this group. As we are going to have more than one view controller, it is also a good idea to rename it to InitialViewController.swift:

Now, open this file and rename its class from ViewController to InitialViewController:

class InitialViewController: UIViewController {

Once the class is renamed, we need to update the corresponding view controller in the storyboard by:

Clicking on the storyboard.
Selecting the view controller (the only one we have till now).
Going to the Identity inspector by using the command + option + 3 combination. Here, you can update the class name to the new one.
Pressing enter and confirming that the module name is automatically updated from None to the product name.

The following picture demonstrates where you should do this change and how it should look after the change:

Great! Now, we can draw the scene. Firstly, let's change the view background color.
To do it, select the view that hangs from the view controller. Go to the Attribute Inspector by pressing command+ option + 4, look for background color, and choose other, as shown in the following picture: When the color dialog appears, choose the Color Sliders option at the top and select the RGB Sliders combo box option. Then, you can change the color as per your choice. In this case, let's set it to 250 for the three colors: Before you start a new app, create a mockup of every scene. In this mockup, try to write down the color numbers for the backgrounds, fonts, and so on. Remember that Xcode still doesn't have a way to work with styles as websites do with CSS, meaning that if you have to change the default background color, for example, you will have to repeat it everywhere. On the storyboard's right-hand side, you have the Object Library, which can be easily accessed with the command + option + control + 3 combination. From there, you can search for views, view controllers, and gestures, and drag them to the storyboard or scene. The following picture shows a sample of it: Now, add two labels, a search bar, and a table view. The first label should be the app title, so let's write City Info on it. Change its alignment to center, the font to American Typewriter, and the font size to 24. On the other label, let's do the same, but write Please select your city and its font size should be 18. The scene must result in something similar to the following picture: Do we still need to do anything else on this storyboard scene? The answer is yes. Now it is time for the auto layout, otherwise the scene components will be misplaced when you start the app. There are different ways to add auto layout constraints to a component. An easy way of doing it is by selecting the component by clicking on it like the top label. With the control key pressed, drag it to the other component on which the constraint will be based like the main view. The following picture shows a sample of a constraint being created from a table to the main view: Another way is by selecting the component and clicking on the left or on the middle button, which are to the bottom-right of the interface builder screen. The following picture highlights these buttons: Whatever is your favorite way of adding constraints, you will need the following constraints and values for the current scene: City Info label Center X equals to the center of superview (main view), value 0 City Info label top equals to the top layout guide, value 0 Select your city label top vertical spacing of 8 to the City Info label Select your city label alignment center X to superview, value 0 Search bar top value 8 to select your city label Search bar trailing and leading space 0 to superview Table view top space (space 0) to the search bar Table view trailing and leading space 0 to the search bar Table view bottom 0 to superview Before continuing, it is a good idea to check whether the layout suits for every resolution. To do it, open the assistant editor with command + option + .return and change its view to Preview: Here, you can have a preview of your screen on the device. You can also rotate the screens by clicking on the icon with a square and a arched arrow over it: Click on the plus sign to the bottom-left of the assistant editor to add more screens: Once you are happy with your layout, you can move on to the next step. Although the storyboard is not yet done, we are going to leave it for a while. Click on the InitialViewController.swift file. 
Let's start receiving information about where the device is by using the GPS. To do it, import the Core Location framework and set the view controller as a delegate: import CoreLocation class InitialViewController: UIViewController, CLLocationManagerDelegate { After this, we can set the core location manager as a property and initialize it in the viewDidLoad method. Type the following code to set locationManager and initialize InitialViewController: var locationManager = CLLocationManager() override func viewDidLoad() { super.viewDidLoad() locationManager.delegate = self locationManager.desiredAccuracy = kCLLocationAccuracyThreeKilometers locationManager.distanceFilter = 3000 if locationManager.respondsToSelector(Selector("requestWhenInUseAuthorization")) { locationManager.requestWhenInUseAuthorization() } locationManager.startUpdatingLocation() } After initializing the location manager, we have to check whether the GPS is working or not by implementing the didUpdateLocations method. Right now, we are going to print the last location and nothing more: func locationManager(manager: CLLocationManager!, didUpdateLocations locations: [CLLocation]!){ let lastLocation = locations.last! print(lastLocation) } Now, we can test the app. However, we still need to perform one more step. Go to your Info.plist file by pressing command + option + J and typing the file name. Add a new entry with the NSLocationWhenInUseUsageDescription key, change its type to String, and set its value to This app needs to know your location. This last step has been mandatory since iOS 8. Press play and check that you receive coordinates; they will not arrive very frequently because of the distance filter. Displaying cities information The next step is to create a class to store the information received from the Internet. In this case, we can do it in a straightforward manner by copying the JSON object properties into our class properties. Create a new group called Models and, inside it, a file called CityInfo.swift. There you can code CityInfo as follows: class CityInfo { var fcodeName:String? var wikipedia:String? var geonameId: Int! var population:Int? var countrycode:String? var fclName:String? var lat: Double! var lng: Double! var fcode: String? var toponymName:String? var name:String! var fcl:String? init?(json:JSON){ // if any required field is missing we must not create the object. if let name = json["name"].string, geonameId = json["geonameId"].int, lat = json["lat"].double, lng = json["lng"].double { self.name = name self.geonameId = geonameId self.lat = lat self.lng = lng }else{ return nil } self.fcodeName = json["fcodeName"].string self.wikipedia = json["wikipedia"].string self.population = json["population"].int self.countrycode = json["countrycode"].string self.fclName = json["fclName"].string self.fcode = json["fcode"].string self.toponymName = json["toponymName"].string self.fcl = json["fcl"].string } } Note that our initializer has a question mark in its declaration; this is called a failable initializer. Traditional initializers always return a new instance of the requested object. However, with failable initializers, you can return a new instance or a nil value, indicating that the object couldn't be constructed. In this initializer, we used an object of the JSON type, which is a class that belongs to the SwiftyJSON library/framework. You can easily access its members by using brackets with string indices to access the members of a json object, like json["field name"], or using brackets with integer indices to access elements of a json array.
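To see the failable initializer and the bracket-based access in action, here is a small, hedged sketch; the sample payload and the "geonames" key are assumptions used purely for illustration:

// Build a JSON value from a hand-written string just to exercise the initializer.
let sampleText = "{\"geonames\": [{\"name\": \"London\", \"geonameId\": 2643743, \"lat\": 51.5074, \"lng\": -0.1278}]}"
let sample = JSON(data: sampleText.dataUsingEncoding(NSUTF8StringEncoding)!)

if let city = CityInfo(json: sample["geonames"][0]) {
    // All required fields were present, so the object was created.
    print("Created \(city.name) at \(city.lat), \(city.lng)")
} else {
    // A required field was missing, so the initializer returned nil.
    print("Could not create a CityInfo from this JSON")
}

If any of the required fields (name, geonameId, lat, or lng) is missing from the JSON, the initializer returns nil and no object is created.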
No matter which accessor you use, the returned value will always be a JSON object, which can't be directly assigned to variables of other built-in types, such as integers, strings, and so on. Casting from a JSON object to a basic type can be done by accessing properties with the same name as the destination type, such as .string for casting to string objects, .int for casting to int objects, .array for an array of JSON objects, and so on. Now, we have to think about how this information is going to be displayed. As we have to display this information repeatedly, a good way to do so would be with a table view. Therefore, we will create a custom table view cell for it. Go to your project navigator, create a new group called Cells, and add a new file called CityInfoCell.swift. Here, we are going to implement a class that inherits from UITableViewCell. Note that the whole object can be configured just by setting the cityInfo property: import UIKit class CityInfoCell:UITableViewCell { @IBOutlet var nameLabel:UILabel! @IBOutlet var coordinates:UILabel! @IBOutlet var population:UILabel! @IBOutlet var infoImage:UIImageView! private var _cityInfo:CityInfo! var cityInfo:CityInfo { get { return _cityInfo } set (cityInfo){ self._cityInfo = cityInfo self.nameLabel.text = cityInfo.name if let population = cityInfo.population { self.population.text = "Pop: \(population)" }else { self.population.text = "" } self.coordinates.text = String(format: "%.02f, %.02f", cityInfo.lat, cityInfo.lng) if let _ = cityInfo.wikipedia { self.infoImage.image = UIImage(named: "info") } } } } Return to the storyboard and add a table view cell from the object library to the table view by dragging it. Click on this table view cell and add three labels and one image view to it. Try to organize it with something similar to the following picture: Change the labels' font family to American Typewriter, and the font size to 16 for the city name and 12 for the population and the location label. Drag the info.png and noinfo.png images to your Images.xcassets project. Go back to your storyboard and set the image to noinfo in the UIImageView attribute inspector, as shown in the following screenshot: As you know, we have to set the auto layout constraints. Just remember that the constraints will take the table view cell as superview. So, here you have the constraints that need to be set: City name label leading equals 0 to the leading margin (left) City name label top equals 0 to the super view top margin City name label bottom equals 0 to the super view bottom margin City label horizontal space 8 to the population label Population leading equals 0 to the superview center X Population top equals to -8 to the superview top Population trailing (right) equals 8 to the noinfo image Population bottom equals 0 to the location top Population leading equals 0 to the location leading Location height equals to 21 Location trailing equals 8 to the image leading Location bottom equals 0 to the image bottom Image trailing equals 0 to the superview trailing margin Image aspect ratio width equals 0 to the image height Image bottom equals -8 to the superview bottom Image top equals -8 to the superview top Has everything been done for this table view cell? Of course not. We still need to set its class and connect each component. Select the table view cell and change its class to CityInfoCell: While we are here, let's also do a similar task: change the cell identifier to cityinfocell.
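As a preview of how this cell will be used, here is a brief, hypothetical sketch of the table view data source methods in InitialViewController (the view controller adopts UITableViewDataSource in the next step); the cities array is an assumed property, not defined in the original text, that would hold the downloaded CityInfo objects:

var cities = [CityInfo]()   // assumed storage for the downloaded results

func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return cities.count
}

func tableView(tableView: UITableView,
               cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    // The identifier must match the one set in the storyboard: cityinfocell.
    let cell = tableView.dequeueReusableCellWithIdentifier("cityinfocell",
                   forIndexPath: indexPath) as! CityInfoCell
    cell.cityInfo = cities[indexPath.row]   // the cell configures itself
    return cell
}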
With this identifier in place, we can easily dequeue and configure the cell from our code, as sketched above. Now, you can connect the cell components with the ones we have in the CityInfoCell class and also connect the table view with the view controller: @IBOutlet var tableView: UITableView! There are different ways to connect a view with the corresponding property. An easy way is to open the assistant editor with the command + option + return combination, leaving the storyboard on the left-hand side and the Swift file on the right-hand side. Then, you just need to drag the circle that appears on the left-hand side of the @IBOutlet or the @IBAction attribute and connect it with the corresponding visual object on the storyboard. After this, we need to set the table view delegate and data source, and also the search bar delegate, to the view controller. It means that the InitialViewController class needs to have the following header. Replace the current InitialViewController header with: class InitialViewController: UIViewController, CLLocationManagerDelegate, UITableViewDataSource, UITableViewDelegate, UISearchBarDelegate { Connect the table view and search bar delegates and the data source with the view controller by control-dragging from the table view to the view controller's icon at the top of the screen, as shown in the following screenshot: Summary In this article, you learned how to create custom NSError, which is the traditional way of reporting that something went wrong. Every time a function returns NSError, you should try to solve the problem or report what has happened to the user. We could also see the new way of trapping errors with try and catch a few times. This is a new feature in Swift 2, but it doesn't mean that it will replace NSError. They will be used in different situations. Resources for Article: Further resources on this subject: Nodes[article] Network Development with Swift[article] Playing with Swift [article]

Linux Shell Scripting

Packt
08 Oct 2015
22 min read
This article is by Ganesh Naik, the author of the book Learning Linux Shell Scripting, published by Packt Publishing (https://www.packtpub.com/networking-and-servers/learning-linux-shell-scripting). Whoever works with Linux will come across the shell as one of the first programs to work with. Graphical User Interface (GUI) usage has become very popular due to its ease of use, but those who want to take advantage of the power of Linux will use the shell program by default. The shell is a program that gives the user direct interaction with the operating system. Let's understand the stages in the evolution of the Linux operating system. Linux was developed as a free and open source substitute for the UNIX OS. The chronology is as follows: The UNIX operating system was developed by Ken Thompson and Dennis Ritchie in 1969. It was released in 1970. They rewrote UNIX in the C language in 1972. In 1991, Linus Torvalds developed the Linux kernel for the free operating system. (For more resources related to this topic, see here.) Comparison of shells Initially, the UNIX OS used a shell program called the Bourne Shell. Eventually, many more shell programs were developed for different flavors of UNIX. The following is brief information about the different shells: Sh: Bourne Shell Csh: C Shell Ksh: Korn Shell Tcsh: Enhanced C Shell Bash: GNU Bourne Again Shell Zsh: extension to Bash, Ksh, and Tcsh Pdksh: extension to Ksh A brief comparison of various shells is presented in the following table:
Feature | Bourne | C | TC | Korn | Bash
Aliases | no | yes | yes | yes | yes
Command-line editing | no | no | yes | yes | yes
Advanced pattern matching | no | no | no | yes | yes
Filename completion | no | yes | yes | yes | yes
Directory stacks (pushd and popd) | no | yes | yes | no | yes
History | no | yes | yes | yes | yes
Functions | yes | no | no | yes | yes
Key binding | no | no | yes | no | yes
Job control | no | yes | yes | yes | yes
Spelling correction | no | no | yes | no | yes
Prompt formatting | no | no | yes | no | yes
What we see here is that, generally, the syntax of all these shells is about 95% similar. Tasks done by shell Whenever we type any text in the shell terminal, it is the responsibility of the shell to execute the command properly. The activities done by the shell are as follows: Reading text and parsing the entered command Evaluating meta-characters such as wildcards, special characters, or history characters Processing I/O redirection, pipes, and background processing Signal handling Initializing programs for execution Working in shell Let's get started by opening the terminal, and we will familiarize ourselves with the Bash shell environment. Open the Linux terminal and type in: $ echo $SHELL /bin/bash The preceding output in the terminal says that the current shell is /bin/bash, that is, the Bash shell. $ bash --version GNU bash, version 2.05.0(1)-release (i386-redhat-linux-gnu) Our first script – Hello World We will now write our first shell script called hello.sh. You can use any editor of your choice, such as vi, gedit, nano, and similar. I prefer to use the vi editor. Create a new file hello.sh as follows: #!/bin/bash # This is comment line echo "Hello World" ls date Save the newly created file. The #!/bin/bash line is called the shebang line. The combination of the characters # and ! is called the magic number. The shell uses this to call the intended shell, such as /bin/bash in this case. This should always be the first line in a shell script. The next few lines in the shell script are self-explanatory: Any line starting with # will be treated as a comment line.
Exception to this would be the 1st line with #!/bin/bash echo command will print Hello World on screen ls will display directory content on console date command will show current date and time We can execute the newly created file giving following command: Technique one:$ bash hello.sh Technique two:$ chmod +x hello.sh By running the preceding command, we are adding executable permission to our newly created file. You will learn more about file permission in following sections of this same chapter. $ ./hello.sh By running the preceding command, we are executing hello.sh as executable file. By technique one, we passed filename as an argument to bash shell. The output of executing hello.sh will be as follows: Hello World hello.sh Sun Jan 18 22:53:06 IST 2015 Compiler and interpreter – difference in process In any program development, the following are the two options: Compilation: Using a compiler-based language, such as C, C++, Java and other similar languages Interpreter: Using interpreter-based languages, such as Bash shell scripting When we use compiler-based language, we compile the complete source code, and as a result of compilation, we get a binary executable file. We then execute the binary to check the performance of our program. On the contrary, when we develop shell script, such as an interpreter-based program, every line of the program is input to bash shell. The lines of Shell Script are executed one by one sequentially. Even if the second line of a script has an error, the first line will be executed by the shell interpreter. Command substitution In keyboard, there is one interesting key backward quote such as `. This key is normally situated below the Esc key. If we place text between two successive back quotes, then echo will execute those as commands instead of processing them as plane text. Alternate syntax for $(command) that uses the backtick character `: $(command) or `command` Let's see an example: $ echo "Today is date" Output: Today is date Now modify the command as follows: $ echo "Today is `date`" Or: $ echo "Today is $(date)" Output: Today is Fri Mar 20 15:55:58 IST 2015 Pipes Pipes are used for inter-process communication: $ command_1 | command_2 In this case, the output of command_1 will be send as input to command_2: A simple example is as follows: $ who | wc The preceding simple command will be doing three different activities. First, it will copy output of who command to temporary file. Then, wc will read temporary file and will display the result. Finally, the temporary file will be deleted. Understanding variables Let's learn about creating variables in shell. Declaring variables in Linux is very easy. We just need to use variable names and initialize it with required content: $ person="Ganesh Naik" To get the content of variable, we need to prefix $ before variable: $ echo person person $ echo $person Ganesh Naik The unset command can be used to delete a variable: $ a=20 $ echo $a $ unset a Command unset will clear or remove the variable from shell environment as well: $ person="Ganesh Naik" $ echo $person $ set The set command will show all variables declared in shell. Exporting variable By using export command, we are making variables available in child process or subshell. But if we export variables in child process, the variables will not be available in parent process. 
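As a quick, hypothetical illustration of this behaviour (the variable names below are made up for the example):

#!/bin/bash
CITY="Pune"              # not exported: visible only in the current shell
export COUNTRY="India"   # exported: copied into every child process
# The child shell sees COUNTRY but not CITY:
bash -c 'echo "child sees COUNTRY=$COUNTRY and CITY=$CITY"'
# A variable exported inside the child does not reach the parent:
bash -c 'export CHILD_VAR=42'
echo "parent sees CHILD_VAR=$CHILD_VAR"   # prints an empty value

The child shell prints the exported COUNTRY but an empty CITY, and the parent prints an empty CHILD_VAR, confirming that exported values only flow from parent to child.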
We can view environment variables by either of the following command: $ env $ printenv Whenever we create a shell script and execute it, a new shell process is created and shell script runs in that process. Any exported variable values are available to the new shell or to any subprocess. We can export any variable by either of the following: $ export NAME $ declare -x NAME Interactive shell scripts – reading user input The read command is a shell built-in command for reading data from a file or a keyboard. The read command receives the input from a keyboard or a file, till it receives a newline character. Then it converts the newline character in null character. Read a value and store it in the variable: read VARIABLE echo $VARIABLE This will receive text from keyboard. The received text will be stored in shell variable VARIABLE. Working with command-line arguments Command line arguments are required for the following reasons: It informs the utility or command which file or group of files to process (reading/writing of files) Command line arguments tell the command/utility which option to use Check the following command line: $ my_program arg1 arg2 arg3 If my_command is a bash shell script, then we can access every command line positional parameters inside the script as following: $0 would contain "my_program" # Command $1 would contain "arg1" # First parameter $2 would contain "arg2" # Second parameter $3 would contain "arg3" # Third parameter Let's create a param.sh script as follows: #!/bin/bash echo "Total number of parameters are = $#" echo "Script name = $0" echo "First Parameter is $1" echo "Second Parameter is $2" echo "Third Parameter is $3" echo "Fourth Parameter is $4" echo "Fifth Parameter is $5" echo "All parameters are = $*" Run the script as follows: $ param.sh London Washington Delhi Dhaka Paris Output: Total number of parameters are = 5 Command is = ./hello.sh First Parameter is London Second Parameter is Washington Third Parameter is Delhi Fourth Parameter is Dhaka Fifth Parameter is Paris All parameters are = London Washington Delhi Dhaka Paris Understanding set Many a times we may not pass arguments on the command line; but, we may need to set parameters internally inside the script. We can declare parameters by the set command as follows: $ set USA Canada UK France $ echo $1 USA $ echo $2 Canada $ echo $3 UK $ echo $4 France Working with arrays Array is a list of variables. For example, we can create array FRUIT, which will contain many fruit names. The array does not have a limit on how many variables it may contain. It can contain any type of data. The first element in an array will have the index value as 0. $ FRUITS=(Mango Banana Apple) $ echo ${FRUITS[*]} Mango Banana Apple $ echo $FRUITS[*] Mango[*] $ echo ${FRUITS[2]} Apple $ FRUITS[3]=Orange $ echo ${FRUITS[*]} Mango Banana Apple Orange Debugging – Tracing execution (option -x) The -x option, short for xtrace, or execution trace, tells the shell to echo each command after performing the substitution steps. This will enable us to see the value of variables and commands. 
We can trace execution of shell script as follows: $ bash –x hello.sh Instead of the preceding way, we can enable debugging by the following way also: #!/bin/bash -x Let's test the earlier script as follows $ bash –x hello.sh Output: $ bash –x hello.sh + echo Hello student Hello student ++ date + echo The date is Fri May 1 00:18:52 IST 2015 The date is Fri May 1 00:18:52 IST 2015 + echo Your home shell is /bin/bash Your home shell is /bin/bash + echo Good-bye student Good-bye student Summary of various debugging options Option Description -n Syntax error checking. No commands execution. -v Runs in verbose mode. The shell will echo each command prior to executing the command. -x Tracing execution. The shell will echo each command after performing the substitution steps. Thus, we will see the value of variables and commands. Checking exit status of commands Automation using shell scripts involves checking, if the earlier command executed successfully or failed, if a file is present or not, and similar. You will learn various constructs such as if, case, and similar, where you will need to check certain conditions if they are true or false. Accordingly, our script should conditionally execute various commands. Let's enter the following command: $ ls Using bash shell, we can check if the preceding command executed successfully or failed as follows: $ echo $? The preceding command will return 0, if the command ls executed successfully. The result will be non-zero such as 1 or 2 or any other non-zero number, if the command has failed. Bash shell stores the status of the last command execution in the variable. If we need to check the status of the last command execution, then we should check the content of the variable. Understanding test command Let's learn the following example to check the content or value of the expressions. $ test $name = Ganesh $ echo $? 0 if success and 1 if failure. In the preceding example, we want to check if the content of the variable name is the same as Ganesh. To check this, we used the test command. The test will store the results of comparison in the variable. We can use the following syntax for the preceding test command. In this case, we used [ ] instead of the test command. We enclosed the expression to be evaluated in square brackets. $ [ $name = Ganesh ] # Brackets replace the test command $ echo $? 0 String comparison options for the test command The following is the summary of various options for string comparison using test. Test operator Tests true if -n string True if the length of the string is non-zero. -z string True if the length of the string is zero string1 != string2 True if the strings are not equal. string1 == string2 string1 = string2 True if the strings are equal. string1 > string2 True if string1 sorts after string2 lexicographically. string1 < string2 True if string1 sorts before string2 lexicographically. Suppose we want to check, whether the length of the string is non-zero, then we can check it as follows: test –n $string Or [ –n $string ] echo $? If the result is 0, then we can conclude that the string length is non-zero. If the content of ? is non-zero, then the string has length 0. Numerical comparison operators for the test command The following is the summary of various options for numerical comparison using test. 
Test operator and when it tests true:
[ integer_1 -eq integer_2 ] integer_1 is equal to integer_2
[ integer_1 -ne integer_2 ] integer_1 is not equal to integer_2
[ integer_1 -gt integer_2 ] integer_1 is greater than integer_2
[ integer_1 -ge integer_2 ] integer_1 is greater than or equal to integer_2
[ integer_1 -lt integer_2 ] integer_1 is less than integer_2
[ integer_1 -le integer_2 ] integer_1 is less than or equal to integer_2
Let's write a shell script to learn the usage of the various numerical test operators: #!/bin/bash num1=10 num2=30 echo $(($num1 < $num2)) # compare for less than [ $num1 -lt $num2 ] # compare for less than echo $? [ $num1 -ne $num2 ] # compare for not equal echo $? [ $num1 -eq $num2 ] # compare for equal to echo $? File test options for the test command The following are the various options for file handling operations using the test command. Test operator and when it tests true:
-b file_name This checks if the file is a block special file
-c file_name This checks if the file is a character special file
-d file_name This checks if the directory exists
-e file_name This checks if the file exists
-f file_name This checks if the file is a regular file and not a directory
-G file_name This checks if the file exists and is owned by the effective group ID
-g file_name This checks if the file has a set-group-ID bit set
-k file_name This checks if the file has a sticky bit set
-L file_name This checks if the file is a symbolic link
-p file_name This checks if the file is a named pipe
-O file_name This checks if the file exists and is owned by the effective user ID
-r file_name This checks if the file is readable
-S file_name This checks if the file is a socket
-s file_name This checks if the file has a nonzero size
-t fd This checks if the file descriptor fd is opened on a terminal
-u file_name This checks if the file has a set-user-ID bit set
-w file_name This checks if the file is writable
-x file_name This checks if the file is executable
File testing binary operators The following are the various options for binary file operations using test. Test operator and when it tests true:
[ file_1 -nt file_2 ] This checks if file_1 is newer than file_2
[ file_1 -ot file_2 ] This checks if file_1 is older than file_2
[ file_1 -ef file_2 ] This checks if file_1 and file_2 have the same device or inode numbers
Let's write a script for testing basic file attributes, such as whether it is a file or a folder and whether its size is greater than 0: #!/bin/bash # Check if file is Directory [ -d work ] echo $? # Check if it is a File [ -f test.txt ] echo $? # Check if File has size greater than 0 [ -s test.txt ] echo $? Logical test operators The following are the various options for logical operations using test. Test operator and when it tests true:
[ string_1 -a string_2 ] Both string_1 and string_2 are true
[ string_1 -o string_2 ] Either string_1 or string_2 is true
[ ! string_1 ] Not a string_1 match
[[ pattern_1 && pattern_2 ]] Both pattern_1 and pattern_2 are true
[[ pattern_1 || pattern_2 ]] Either pattern_1 or pattern_2 is true
[[ ! pattern ]] Not a pattern match
Reference: Bash reference manual: http://www.gnu.org/software/bash/ Conditional constructs – if else We use the if command for checking the pattern or command status, and accordingly, we can make certain decisions to execute scripts or commands. The syntax of the if conditional is as follows: if command then command command fi From the preceding syntax, we can clearly understand the working of the if conditional construct.
Initially, the if statement will execute the command block. If the result of the command execution is true, that is, 0, then all the commands enclosed between then and fi will be executed. If the status of the command execution after if is false, that is, non-zero, then all the commands after then will be ignored and the control of execution will go directly to fi. A simple example of checking the status of the last executed command using the if construct is as follows: #!/bin/bash if [ $? -eq 0 ] then echo "Command was successful." else echo "Command failed." fi Whenever we run any command, the exit status of the command will be stored in the $? variable. The if construct is very useful for checking the status of the last command. Switching case Apart from simple decision making with if, it is also possible to process multiple decision-making operations using the case command. In a case statement, the expression contained in a variable is compared with a number of expressions, and for each expression matched, a command is executed. A case statement has the following structure: case variable in value1) command(s) ;; value2) command(s) ;; *) command(s) ;; esac To illustrate the case statement, we will write a script as follows. We will ask the user to enter any one number from the range 1 to 4. We will check the entered number with the case command. If the user enters any other number, then we will display a message saying that some other number was entered. #!/bin/bash echo "Please enter any number from 1 to 4" read number case $number in 1) echo "ONE" ;; 2) echo "TWO" ;; 3) echo "THREE" ;; 4) echo "FOUR" ;; *) echo "SOME OTHER NUMBER" ;; esac Output: Please enter any number from 1 to 4 2 TWO Looping with the for command For iterative operations, the bash shell uses three types of loops: for, while, and until. By using the for looping command, we can execute a set of commands a finite number of times, once for every item in a list. In the for command, a user-defined variable is specified, and after the in keyword, a list of values can be specified. The user-defined variable will get its value from that list, and all the statements between do and done get executed, until the end of the list is reached. The purpose of the for loop is to process a list of elements. A simple script with the for loop could be as follows: for command in clear date cal do sleep 1 $command done In the preceding script, the commands clear, date, and cal will be called one after the other. The sleep command will pause for one second before every command. If we need to loop continuously or infinitely, then the following is the syntax: for ((;;)) do command done Let's write a simple script as follows. In this script, we will print the value of the variable var 10 times. #!/bin/bash for var in {1..10} do echo $var done Exiting from the current loop iteration with continue With the help of the continue command, it is possible to exit from the current iteration of the loop and resume with the next iteration. We use the for, while, or until commands for loop iterations. The following is a script with the for loop and the continue command, to skip a certain part of the loop commands: #!/bin/bash for x in 1 2 3 do echo before $x continue 1 echo after $x done exit 0 Exiting from a loop with break In the previous section, we discussed how continue can be used to exit from the current iteration of a loop. The break command is another way to introduce a new condition within a loop.
Unlike continue, however, it causes the loop to be terminated altogether if the condition is met. In the following script, we are checking the directory content. If directory is found, then we exit the loop and display the message that the first directory is found. #!/bin/bash rm -rf sample* echo > sample_1 echo > sample_2 mkdir sample_3 echo > sample_4 for file in sample* do if [ -d "$file" ]; then break; fi done echo The first directory is $file rm -rf sample* exit 0 Working with loop using do while Similar to the for command, while is also the command for loop operations. The condition or expression next to while is evaluated. If it is a success or 0, then the commands inside do and done are executed. The purpose of a loop is to test a certain condition and execute a given command while the condition is true (while loop) or until the condition becomes true (until loop). In the following script, we are printing numbers 1 to 10 on the screen by using the while loop. #!/bin/bash declare -i x x=0 while [ $x -le 10 ] do echo $x x=$((x+1)) done Using until The until command is similar to the while command. The given statements in loop are executed, as long as it evaluates the condition as true. As soon as the condition becomes false, then the loop is exited. Syntax: until command do command(s) done In the following script, we are printing numbers 0 to 9 on the screen. When the value of variable x becomes 10, then the until loop stops executing. #!/bin/bash x=0 until [ $x -eq 10 ] do echo $x x=`expr $x + 1` done Understanding functions In the real-word scripts, we break down big tasks or scripts in smaller logical tasks. This modularization of scripts helps in better development and understanding of code. The smaller logical block of script can be written as a function. Functions can be defined on command line or inside scripts. Syntax for defining functions on command line is as follows: functionName { command_1; command_2; . . . } Let's write a very simple function for illustrating the preceding syntax. $ hello() { echo 'Hello world!' ; } We can use the preceding defined function as follows: $ hello Output: Hello world! Functions should be defined at the beginning of script. Command source and '.' Normally, whenever we enter any command, a new process is created. If we want to make functions from scripts to be made available in the current shell, then we need a technique that will run script in the current shell instead of creating a new shell environment. The solution to this problem is using the source or . commands. The commands source and . can be used to run shell script in the current shell instead of creating new process. This helps in declaring function and variables in current shell. The syntax is as follows: $ source filename [arguments] Or: $ . filename [arguments] $ source functions.sh Or: $ . functions.sh If we pass command line arguments, those will be handled inside script as $1, $2 …and similar: $ source functions.sh arg1 arg2 or $ . /path/to/functions.sh arg1 arg2 The source command does not create new shell; but runs the shell scripts in current shell, so that all the variables and functions will be available in current shell for usage. System startup, inittab, and run levels When we power on the Linux system, the shell scripts are run one after another and the Linux system is initialized. These scripts start various services, daemons, start databases, mount discs, and many more applications are started. 
Even during the shutting down of system, certain shell scripts are executed so that important system data and information can be saved to the disk and applications are properly shut down. These are called as boot, startup, and shutdown scripts. These scripts are copied during the installation of the Linux operating system in our computer. As a developer or administrator, understanding these scripts may help you in understating and debugging the Linux system, and if required, you can customize these scripts. During the system startup, as per run level, various scripts are called to initialize the basic operating system. Once the basic operating system in initialized, the user login process starts. This process is explained in the following topics. System-wide settings scripts In the /etc/ folder, the following files are related to user level initialization: /etc/profile: Few distributions will have additional folder such as /etc/profile.d/. All the scripts from the profile.d folder will be executed. /etc/bash.bashrc: Scripts in the /etc/ folder will be called for all the users. Particular user specific initialization scripts are located the in HOME folder of each user. These are as follows: $HOME/.bash_profile: This contains user specific bash environment default settings. This script is called during login process. $HOME/.bash_login: Second user environment initialization script, called during the login process. $HOME/.profile: If present, this script internally calls the .bashrc script file. $HOME/.bashrc: This is an interactive shell or terminal initialization script. If we customize .bashrc such as added new alias commands or declare new function or environment variables, then we should execute .bashrc to take its effect. Summary In this article, you have learned about basic the Linux Shell Scripting along with the shell environment, creating and using variables, command line arguments, various debugging techniques, decision-making techniques and various looping techniques while testing for numeric, strings, logical and file handling-related operations, and the about writing functions. You also learned about system initialization and various initializing script and about how to customizing them. This article has been a very short introduction to Linux shell scripting. For complete information with detailed explanation and numerous sample scripts, you can refer to the book at https://www.packtpub.com/networking-and-servers/learning-linux-shell-scripting. Resources for Article: Further resources on this subject: CoreOS – Overview and Installation[article] Getting Started[article] An Introduction to the Terminal [article]

First Principle and a Useful Way to Think

Packt
08 Oct 2015
8 min read
In this article, by Timothy Washington, author of the book Clojure for Finance, we will cover the following topics: Modeling the stock price activity Function evaluation First-Class functions Lazy evaluation Basic Clojure functions and immutability Namespace modifications and creating our first function (For more resources related to this topic, see here.) Modeling the stock price activity There are many types of banks. Commercial entities (large stores, parking areas, hotels, and so on) that collect and retain credit card information, are either quasi banks, or farm out credit operations to bank-like processing companies. There are more well-known consumer banks, which accept demand deposits from the public. There are also a range of other banks such as commercial banks, insurance companies and trusts, credit unions, and in our case, investment banks. As promised, this article will slowly build up a set of lagging price indicators that follow a moving stock price time series. In order to do that, I think it's useful to touch on stock markets, and to crudely model stock price activity. A stock (or equity) market, is a collection of buyers and sellers trading economic assets (usually companies). The stock (or shares) of those companies can be equities listed on an exchange (New York Stock Exchange, London Stock Exchange, and others), or may be those traded privately. In this exercise, we will do the following: Crudely model the stock price movement, which will give us a test bed for writing our lagging price indicators Introduce some basic features of the Clojure language Function evaluation The Clojure website has a cheatsheet (http://clojure.org/cheatsheet) with all of the language's core functions. The first function we'll look at is rand, a function that randomly gives you a number within a given range. So in your edgar/ project, launch a repl with the lein repl shell command. After a few seconds, you will enter repl (Read-Eval-Print-Loop). Again, Clojure functions are executed by being placed in the first position of a list. The function's arguments are placed directly afterwards. In your repl, evaluate (rand 35) or (rand 99) or (rand 43.21) or any number you fancy Run it many times to see that you can get any different floating point number, within 0 and the upper bound of the number you provided First-Class functions The next functions we'll look at are repeatedly and fn. repeatedly is a function that takes another function and returns an infinite (or length n if supplied) lazy sequence of calls to the latter function. This is our first encounter of a function that can take another function. We'll also encounter functions that return other functions. Described as First-Class functions, this falls out of lambda calculus and is one of the central features of functional programming. As such, we need to wrap our previous (rand 35) call in another function. fn is one of Clojure's core functions, and produces an anonymous, unnamed function. We can now supply this function to repeatedly. In your repl, if you evaluate (take 25 (repeatedly (fn [] (rand 35)))), you should see a long list of floating point numbers with the list's tail elided. Lazy evaluation We only took the first 25 of the (repeatedly (fn [] (rand 35))) result list, because the list (actually a lazy sequence) is infinite. Lazy evaluation (or laziness) is a common feature in functional programming languages. 
Being infinite, Clojure chooses to delay evaluating most of the list until it's needed by some other function that pulls out some values. Laziness benefits us by increasing performance and letting us more easily construct control flow. We can avoid needless calculation, repeated evaluations, and potential error conditions in compound expressions. Let's try to pull out some values with the take function. take itself, returns another lazy sequence, of the first n items of a collection. Evaluating (take 25 (repeatedly (fn [] (rand 35)))) will pull out the first 25 repeatedly calls to rand which generates a float between 0 and 35. Basic Clojure functions and immutability There's many operations we can perform over our result list (or lazy sequence). One of the main approaches of functional programming is to take a data structure, and perform operations over top of it to produce a new data structure, or some atomic result (a string, number, and so on). This may sound inefficient at first. But most FP languages employ something called immutability to make these operations efficient. Immutable data structures are the ones that cannot change once they've been created. This is feasible as most immutable, FP languages use some kind of structural data sharing between an original and a modified version. The idea is that if we run evaluate (conj [1 2 3] 4), the resulting [1 2 3 4] vector shares the original vector of [1 2 3]. The only additional resource that's assigned is for any novelty that's been introduced to the data structure (the 4). There's a more detailed explanation of (for example) Clojure's persistent vectors here: conj: This conjoins an element to a collection—the collection decides where. So conjoining an element to a vector (conj [1 2 3] 4) versus conjoining an element to a list (conj '(1 2 3) 4) yield different results. Try it in your repl. map: This passes a function over one or many lists, yielding another list. (map inc [1 2 3]) increments each element by 1. reduce (or left fold): This passes a function over each element, accumulating one result. (reduce + (take 100 (repeatedly (fn [] (rand 35))))) sums the list. filter: This constrains the input by some condition. >=: This is a conditional function, which tests whether the first argument is greater than or equal to the second function. Try (>= 4 9) and (>= 9 1). fn: This is a function that creates a function. This unnamed or anonymous function can have any instructions you choose to put in there. So if we only want numbers above 12, we can put that assertion in a predicate function. Try entering the below expression into your repl: (take 25 (filter (fn [x] (>= x 12)) (repeatedly (fn [] (rand 35))))) Modifying the namespaces and creating our first function We now have the basis for creating a function. It will return a lazy infinite sequence of floating point numbers, within an upper and lower bound. defn is a Clojure function, which takes an anonymous function, and binds a name to it in a given namespace. A Clojure namespace is an organizational tool for mapping human-readable names to things like functions, named data structures and such. Here, we're going to bind our function to the name generate-prices in our current namespace. You'll notice that our function is starting to span multiple lines. This will be a good time to author the code in your text editor of choice. I'll be using Emacs: Open your text editor, and add this code to the file called src/edgar/core.clj. Make sure that (ns edgar.core) is at the top of that file. 
After adding the following code, you can then restart repl. (load "edgaru/core") uses the load function to load the Clojure code in your in src/edgaru/core.clj: (defn generate-prices [lower-bound upper-bound] (filter (fn [x] (>= x lower-bound)) (repeatedly (fn [] (rand upper-bound))))) The Read-Eval-Print-Loop In our repl, we can pull in code in various namespaces, with the require function. This applies to the src/edgar/core.clj file we've just edited. That code will be in the edgar.core namespace: In your repl, evaluate (require '[edgar.core :as c]). c is just a handy alias we can use instead of the long name. You can then generate random prices within an upper and lower bound. Take the first 10 of them like this (take 10 (c/generate-prices 12 35)). You should see results akin to the following output. All elements should be within the range of 12 to 35: (29.60706184716407 12.507593971664075 19.79939384292759 31.322074615579716 19.737852534147326 25.134649707849572 19.952195022152488 12.94569843904663 23.618693004455086 14.695872710062428) There's a subtle abstraction in the preceding code that deserves attention. (require '[edgar.core :as c]) introduces the quote symbol. ' is the reader shorthand for the quote function. So the equivalent invocation would be (require (quote [edgar.core :as c])). Quoting a form tells the Clojure reader not to evaluate the subsequent expression (or form). So evaluating '(a b c) returns a list of three symbols, without trying to evaluate a. Even though those symbols haven't yet been assigned, that's okay, because that expression (or form) has not yet been evaluated. But that begs a larger question. What is reader? Clojure (and all Lisps) are what's known as homoiconic. This means that Clojure code is also data. And data can be directly output and evaluated as code. The reader is the thing that parses our src/edgar/core.clj file (or (+ 1 1) input from the repl prompt), and produces the data structures that are evaluated. read and eval are the 2 essential processes by which Clojure code runs. The evaluation result is printed (or output), usually to the standard output device. Then we loop the process back to the read function. So, when the repl reads, your src/edgar/two.clj file, it's directly transforming that text representation into data and evaluating it. A few things fall out of that. For example, it becomes simpler for Clojure programs to directly read, transform and write out other Clojure programs. The implications of that will become clearer when we look at macros. But for now, know that there are ways to modify or delay the evaluation process, in this case by quoting a form. Summary In this article, we learned about basic features of the Clojure language and how to model the stock price activity. Besides these, we also learned function evaluation, First-Class functions, the lazy evaluation method, namespace modifications and creating our first function. Resources for Article: Further resources on this subject: Performance by Design[article] Big Data[article] The Observer Pattern [article]

Integrating Elasticsearch with the Hadoop ecosystem

Packt
07 Oct 2015
14 min read
In this article by Vishal Shukla, author of the book Elasticsearch for Hadoop, we will take a look at how ES-Hadoop can integrate with Pig and Spark with ease. Elasticsearch is great in getting insights into the indexed data. The Hadoop ecosystem does a great job in making Hadoop easily usable for different users by providing a comfortable interface. Some of the examples are Hive and Pig. Apart from these, Hadoop integrates well with other computing engines and platforms, such as Spark and Cascading. (For more resources related to this topic, see here.) Pigging out Elasticsearch For many use cases, Pig is one of the easiest ways to fiddle around with the data in the Hadoop ecosystem. Pig wins when it comes to ease of use and simple syntax for designing data flow pipelines without getting into complex programming. Assuming that you know Pig, we will cover how to move the data to and from Elasticsearch. If you don't know Pig yet, never mind. You can still carry on with the steps, and by the end of the article, you will at least know how to use Pig to perform data ingestion and reading with Elasticsearch. Setting up Apache Pig for Elasticsearch Let's start by setting up Apache Pig. At the time of writing this article, the latest Pig version available is 0.15.0. You can use the following steps to set up the same version: First, download the Pig distribution using the following command: $ sudo wget –O /usr/local/pig.tar.gz http://mirrors.sonic.net/apache/pig/pig-0.15.0/pig-0.15.0.tar.gz Then, extract Pig to the desired location and rename it to a convenient name: $ cd /userusr/local $ sudo tar –xvf pig.tar.gz $ sudo mv pig-0.15.0 pig Now, export the required environment variables by appending the following two lines in the /home/eshadoop/.bashrc file: export PIG_HOME=/usr/local/pig export PATH=$PATH:$PIG_HOME/bin You can either log out and relogin to see the newly set environment variables or source the environment configuration with the following command: $ source ~/.bashrc Now, start the job history server daemon with the following command: $ mr-jobhistory-daemon.sh start historyserver You should see the Pig console with the following command: $ pig grunt> It's easy to forget to start the job history daemon once you restart your machine or VM. You may make this daemon run on start up, or you need to ensure this manually. Now, we have Pig up and running. In order to use Pig with Elasticsearch, we must ensure that the ES-Hadoop JAR file is available in the Pig classpath. Let's take the ES-Hadoop JAR file and and import it to HDFS using the following steps: First, download the ES-Hadoop JAR used to develop the examples in this article, as shown in the following command: $ wget http://central.maven.org/maven2/org/elasticsearch/elasticsearch-hadoop/2.1.1/elasticsearch-hadoop-2.1.1.jar Then, move the downloaded JAR to a convenient name as follows: $ sudo mkdir /opt/lib Now, import the JAR to HDFS: $ hadoop fs –mkdir /lib $ hadoop fs –put elasticsearch-hadoop-2.1.1.jar /lib/elasticsearch-hadoop-2.1.1.jar Throughout this article, we will use a crime dataset that is tailored from the open dataset provided at https://data.cityofchicago.org/. This tailored dataset can be downloaded from http://www.packtpub.com/support, where all the code files required for this article are available. Once you have downloaded the dataset, import it to HDFS at /ch07/crime_data.csv. Importing data to Elasticsearch Let's import the crime dataset to Elasticsearch using Pig with ES-Hadoop. 
This provides the EsStorage class as Pig Storage. In order to use the EsStorage class, you need to have a registered ES-Hadoop JAR with Pig. You can register the JAR located in the local filesystem, HDFS, or other shared filesystems. The REGISTER command registers a JAR file that contains UDFs (User-defined functions) with Pig, as shown in the following code: grunt> REGISTER hdfs://localhost:9000/lib/elasticsearch-hadoop-2.1.1.jar; Then, load the CSV data file as a relation with the following code: grunt> SOURCE = load '/ch07/crimes_dataset.csv' using PigStorage(',') as (id:chararray, caseNumber:chararray, date:datetime, block:chararray, iucr:chararray, primaryType:chararray, description:chararray, location:chararray, arrest:boolean, domestic:boolean, lat:double,lon:double); This command reads the CSV fields and maps each token in the data to the respective field in the preceding command. The resulting relation, SOURCE, represents a relation with the Bag data structure that contains multiple Tuples. Now, generate the target Pig relation that has the structure that matches closely to the target Elasticsearch index mapping, as shown in the following code: grunt> TARGET = foreach SOURCE generate id, caseNumber, date, block, iucr, primaryType, description, location, arrest, domestic, TOTUPLE(lon, lat) AS geoLocation; Here, we need the nested object with the geoLocation name in the target Elasticsearch document. We can achieve this with a Tuple to represent the lat and lon fields. TOTUPLE() helps us to create this tuple. We then assigned the geoLocation alias for this tuple. Let's store the TARGET relationto the Elasticsearch index with the following code: grunt> STORE TARGET INTO 'esh_pig/crimes' USING org.elasticsearch.hadoop.pig.EsStorage('es.http.timeout = 5m', 'es.index.auto.create = true', 'es.mapping.names=arrest:isArrest, domestic:isDomestic', 'es.mapping.id=id'); We can specify the target index and type to store indexed documents. The EsStorage class can accept multiple Elasticsearch configurations.es.mapping.names maps the Pig field name to Elasticsearch document's field name. You can use Pig's field id to assign a custom _id value for the Elasticsearch document using the es.mapping.id option. Similarly, you can set the _ttl and _timestamp metadata fields as well. Pig uses just one reducer in the default configuration. It is recommended to change this behavior to have a parallelism that matches the number of shards available, as shown in the following command: grunt> SET default_parallel 5; Pig also combines the input splits, irrespective of its size. This makes it efficient for small files by reducing the number of mappers. However, this will give performance issues for large files. You can disable this behavior in the Pig script, as shown in the following command: grunt> SET pig.splitCombination FALSE; Executing the preceding commands will create the Elasticsearch index and import crime data documents. If you observe the created documents in Elasticsearch, you can see the geoLocation value isan array in the [-87.74274476, 41.87404405]format. This is because by default, ES-Hadoop ignores the tuple field names and simply converts them as an ordered array. 
If you wish to make your geoLocation field look similar to the key/value-based object with the lat/lon keys, you can do so by including the following configuration in EsStorage: es.mapping.pig.tuple.use.field.names=true Writing from the JSON source If you have inputs as a well-formed JSON file, you can avoid conversion and transformations and directly pass the JSON document to Elasticsearch for indexing purposes. You may have the JSON data in Pig as chararray, bytearray, or in any other form that translates to well-formed JSON by calling the toString() method, as shown in the following code: grunt> JSON_DATA = LOAD '/ch07/crimes.json' USING PigStorage() AS (json:chararray); grunt> STORE JSON_DATA INTO 'esh_pig/crimes_json' USING org.elasticsearch.hadoop.pig.EsStorage('es.input.json=true'); Type conversions Take a look at the the type mapping of the esh_pig index in Elasticsearch. It maps the geoLocation type to double. This is done because Elasticsearch inferred the double type based on the field type we specified in Pig. To map geoLocation to geo_point, you must create the Elasticsearch mapping for it manually before executing the script. Although Elasticsearch provides a data type detection based on the type of field in the incoming document, it is always good to create the type mapping beforehand in Elasticsearch. This is a one-time activity that you should do. Then, you can run the MapReduce, Pig, Hive, Cascading, or Spark jobs multiple times. This will avoid any surprises in the type detection. For your reference, here is a list of some of the field types of Pig and Elasticsearch that map to each other. The table doesn't list no-brainer and absolutely intuitive type mappings: Pig type Elasticsearch type chararray This specifies string bytearray This indicates binary tuple This denotes an array(default) or object bag This specifies an array map This denotes an object bigdecimal This indicates Not supported biginteger This denotes Not supported Reading data from Elasticsearch Reading data from Elasticsearch using Pig is as simple as writing a single command with the Elasticsearch query. Here is a snippet of how to print tuples that has crimes related to theft: grunt> REGISTER hdfs://localhost:9000/lib/elasticsearch-hadoop-2.1.1.jar grunt> ES = LOAD 'esh_pig/crimes' using org.elasticsearch.hadoop.pig.EsStorage('{"query" : { "term" : { "primaryType" : "theft" } } }'); grunt> dump ES; Executing the preceding commands will print the tuples Pig console. Giving Spark to Elasticsearch Spark is a distributed computing system that provides huge performance boost compared to Hadoop MapReduce. It works on an abstraction of RDD (Resilient-distributed Datasets). This can be created for any data residing in Hadoop. Without any surprises, ES-Hadoop provides easy integration with Spark by enabling the creation of RDD from the data in Elasticsearch. Spark's increasing support of integrating with various data sources, such as HDFS, Parquet, Avro, S3, Cassandra, relational databases, and streaming data makes it special when it comes to data integration. This means that when you use ES-Hadoop with Spark, you can make all these sources integrate with Elasticsearch easily. 
Setting up Spark In order to set up Apache Spark in order to execute a job, you can perform the following steps: First, download the Apache Spark distribution with the following command: $ sudo wget –O /usr/local/spark.tgzhttp://www.apache.org/dyn/closer.cgi/spark/spark-1.4.1/spark-1.4.1-bin-hadoop2.4.tgz Then, extract Spark to the desired location and rename it to a convenient name, as shown in the following command: $ cd /user/local $ sudo tar –xvf spark.tgz $ sudo mv spark-1.4.1-bin-hadoop2.4 spark Importing data to Elasticsearch To import the crime dataset to Elasticsearch with Spark, let's see how we can write a Spark job. We will continue using Java to write Spark jobs for consistency. Here are the driver program's snippets: SparkConf conf = new SparkConf().setAppName("esh-spark").setMaster("local[4]"); conf.set("es.index.auto.create", "true"); JavaSparkContext context = new JavaSparkContext(conf); Set up the SparkConf object to configure the spark job. As always, you can also set most options (such as es.index.auto.create) and other configurations that we have seen throughout the article. Using this configuration, we created the JavaSparkContext object as follows: JavaRDD<String> textFile = context.textFile("hdfs://localhost:9000/ch07/crimes_dataset.csv"); Read the crime data CSV file as JavaRDD. Here, RDD is still of the type String that represents each line: JavaRDD<Crime> dataSplits = textFile.map(new Function<String, Crime>() { @Override public Crime call(String line) throws Exception { CSVParser parser = CSVParser.parse(line, CSVFormat.RFC4180); Crime c = new Crime(); CSVRecord record = parser.getRecords().get(0); c.setId(record.get(0)); .. .. String lat = record.get(10); String lon = record.get(11); Map<String, Double> geoLocation = new HashMap<>(); geoLocation.put("lat", StringUtils.isEmpty(lat)? null:Double.parseDouble(lat)); geoLocation.put("lon",StringUtils.isEmpty(lon)?null:Double. parseDouble(lon)); c.setGeoLocation(geoLocation); return c; } }); In the preceding snippet, we called the map() method on JavaRDD to map each of the input line to the Crime object. Note that we created a simple JavaBean class called Crime that implements the Serializable interface and maps to the Elasticsearch document structure. Using CSVParser, we parsed each field into the Crime object. We mapped nested the geoLocation object by embedding Map in the Crime object. This map is populated with the lat and lon fields. This map() method returns another JavaRDD that contains the Crime objects, as shown in the following code: JavaEsSpark.saveToEs(dataSplits, "esh_spark/crimes"); Save JavaRDD<Crime> to Elasticsearch with the JavaEsSpark class provided by Elasticsearch. For all the ES-Hadoop integrations, such as Pig, Hive, Cascading, Apache Storm, and Spark, you can use all the standard ES-Hadoop configurations and techniques. This includes dynamic/multiresource writes with a pattern similar to esh_spark/{primaryType} and use JSON strings to directly import the data to Elasticsearch as well. To control the Elasticsearch document metadata from being indexed, you can use the saveToEsWithMeta() method of JavaEsSpark. You can pass an instance of JavaPairRDD that contains Tuple2<Metadata, Object>, where Metadata represents a map that has the key/value pairs of the document metadata fields, such as id, ttl, timestamp, and version. Using SparkSQL ES-Hadoop also bridges Elasticsearch with the SparkSQL module. SparkSQL 1.3+ versions provide the DataFrame abstraction that represent a collection of Row. 
We will not discuss the details of DataFrame here. ES-Hadoop lets you persist your DataFrame instance to Elasticsearch transparently. Let's see how we can do this with the following code: SQLContext sqlContext = new SQLContext(context); DataFrame df = sqlContext.createDataFrame(dataSplits, Crime.class); Create an SQLContext instance using the JavaSparkContext instance. Using the SqlContextSqlContext instance, you can create DataFrame by calling the createDataFrame() method and passing the existing JavaRDD<T> and Class<T>, where T is a JavaBean class that implements the Serializable interface. Note that the passing class instance is required to infer a schema for DataFrame. If you wish to use nonJavaBean-based RDD, you can create the schema manually. The article source code contains the implementations of both the approaches for your reference. Take a look at the following code: JavaEsSparkSQL.saveToEs(df, "esh_sparksql/crimes_reflection"); Once you have the DataFrame instance, you can save it to Elasticsearch with the JavaEsSparkSQL class, as shown in the preceding code. Reading data from Elasticsearch Here is the snippet of SparkEsReader that finds crimes related to theft: JavaRDD<Map<String, Object>> esRDD = JavaEsSpark.esRDD(context, "esh_spark/crimes", "{"query" : { "term" : { "primaryType" : "theft" } } }").values(); for(Map<String,Object> item: esRDD.collect()){ System.out.println(item); } We used the same JavaEsSpark class to create RDD with documents that match the Elasticsearch query. Using SparkSQL ES-Hadoop provides a org.elasticsearch.spark.sql data source provider to read the data from Elasticsearch using SparkSQL, as shown in the following code: Map<String, String> options = new HashMap<>(); options.put("pushdown","true"); options.put("es.nodes","localhost"); DataFrame df = sqlContext.read() .options(options) .format("org.elasticsearch.spark.sql") .load("esh_sparksql/crimes_reflection"); The preceding code snippet uses the org.elasticsearch.spark.sql data source to load data from Elasticsearch. You can set the pushdown option to true to push the query execution down to Elasticsearch. This greatly increases its efficiency as the query execution is collocated where the data resides, as shown in the following code: df.registerTempTable("crimes"); DataFrame theftCrimes = sqlContext.sql("SELECT * FROM crimes WHERE primaryType='THEFT'"); for(Row row: theftCrimes.javaRDD().collect()){ System.out.println(row); } We registered table with the data frame and executed the SQL query on SqlContext. Note that we need to collect the final results locally to print in a driver class. Summary In this article, we looked at the various Hadoop ecosystem technologies. We set up Pig with ES-Hadoop and developed the script to interact with Elasticsearch. You also learned how to use ES-Hadoop to integrate Elasticsearch with Spark and empower it with powerful SQL engine SparkSQL. Resources for Article: Further resources on this subject: Extending ElasticSearch with Scripting [Article] Elasticsearch Administration [Article] Downloading and Setting Up ElasticSearch [Article]