How-To Tutorials


Securing and Authenticating Web API

Packt
21 Oct 2015
9 min read
In this article by Rajesh Gunasundaram, author of ASP.NET Web API Security Essentials, we will cover how to secure a Web API using forms authentication and Windows authentication. You will also get to learn the advantages and disadvantages of using forms and Windows authentication in the Web API. In this article, we will cover the following topics: the working of forms authentication; implementing forms authentication in the Web API; discussing the advantages and disadvantages of using the integrated Windows authentication mechanism; configuring Windows authentication; enabling Windows authentication in Katana; and discussing Hawk authentication. (For more resources related to this topic, see here.)

The working of forms authentication

In forms authentication, the user credentials are submitted to the server using HTML forms. This can be used in the ASP.NET Web API only if it is consumed from a web application. Forms authentication is built into ASP.NET and uses the ASP.NET membership provider to manage user accounts. Forms authentication requires a browser client to pass the user credentials to the server. It sends the user credentials in the request and uses HTTP cookies for the authentication. Let's list out the process of forms authentication step by step:

1. The browser tries to access a restricted action that requires an authenticated request.
2. If the browser sends an unauthenticated request, then the server responds with an HTTP status 302 Found and triggers a URL redirection to the login page.
3. To send an authenticated request, the user enters the username and password and submits the form.
4. If the credentials are valid, the server responds with an HTTP 302 status code that makes the browser redirect to the originally requested URI, with the authentication cookie in the response.
5. Any request from the browser will now include the authentication cookie, and the server will grant access to any restricted resource.

The following image illustrates the workflow of forms authentication:

Fig 1 – Illustrates the workflow of forms authentication

Implementing forms authentication in the Web API

To send the credentials to the server, we need an HTML form to submit. Let's use the HTML form or view of an ASP.NET MVC application. The steps to implement forms authentication in an ASP.NET MVC application are as follows:

1. Create a New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Name the project Chapter06.FormsAuthentication and click OK.

Fig 2 – We have named the ASP.NET Web Application as Chapter06.FormsAuthentication

5. Select the MVC template in the New ASP.NET Project dialog.
6. Tick Web API under Add folders and core references and press OK, leaving Authentication set to Individual User Accounts.

Fig 3 – Select MVC template and check Web API in add folders and core references

7. In the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models { public class Contact { public int Id { get; set; } public string Name { get; set; } public string Email { get; set; } public string Mobile { get; set; } } }

8. Add a Web API controller named ContactsController with the following code snippet:

namespace Chapter06.FormsAuthentication.Api { public class ContactsController : ApiController { IEnumerable<Contact> contacts = new List<Contact> { new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" }, new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" }, new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" } }; [Authorize] // GET: api/Contacts public IEnumerable<Contact> Get() { return contacts; } } }

As you can see in the preceding code, we decorated the Get() action in ContactsController with the [Authorize] attribute. So, this Web API action can only be accessed by an authenticated request. An unauthenticated request to this action will make the browser redirect to the login page and enable the user to either register or log in. Once logged in, any request that tries to access this action will be allowed, as it is authenticated. This is because the browser automatically sends the session cookie along with the request, and forms authentication uses this cookie to authenticate the request. It is very important to secure the website using SSL, as forms authentication sends unencrypted credentials.

Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism

First, let's see the advantages of Windows authentication. Windows authentication is built into Internet Information Services (IIS). It doesn't send the user credentials along with the request. This authentication mechanism is best suited for intranet applications and doesn't need the user to enter their credentials. However, with all these advantages, there are a few disadvantages in the Windows authentication mechanism. It requires either Kerberos, which works based on tickets, or NTLM, a Microsoft security protocol, to be supported by the client. The client's PC must be under an Active Directory domain. Windows authentication is not suitable for internet applications, as the client may not necessarily be on the same domain.

Configuring Windows authentication

Let's implement Windows authentication in an ASP.NET MVC application, as follows:

1. Create a New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Web.
3. Choose ASP.NET Web Application from the middle panel.
4. Give the project name as Chapter06.WindowsAuthentication and click OK.

Fig 4 – We have named the ASP.NET Web Application as Chapter06.WindowsAuthentication

5. Change the Authentication mode to Windows Authentication.

Fig 5 – Select Windows Authentication in the Change Authentication window

6. Select the MVC template in the New ASP.NET Project dialog.
7. Tick Web API under Add folders and core references and click OK.

Fig 6 – Select MVC template and check Web API in add folders and core references

8. Under the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models { public class Contact { public int Id { get; set; } public string Name { get; set; } public string Email { get; set; } public string Mobile { get; set; } } }

9. Add a Web API controller named ContactsController with the following code:

namespace Chapter06.FormsAuthentication.Api { public class ContactsController : ApiController { IEnumerable<Contact> contacts = new List<Contact> { new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" }, new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" }, new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" } }; [Authorize] // GET: api/Contacts public IEnumerable<Contact> Get() { return contacts; } } }

The Get() action in ContactsController is decorated with the [Authorize] attribute. However, in Windows authentication, any request is considered an authenticated request if the client belongs to the same domain. So no explicit login process is required to send an authenticated request that calls the Get() action. Note that Windows authentication is configured in the Web.config file: <system.web> <authentication mode="Windows" /> </system.web>

Enabling Windows authentication in Katana

The following steps will create a console application and enable Windows authentication in Katana:

1. Create a New Project from the Start page in Visual Studio.
2. Select the Visual C# installed template named Windows Desktop.
3. Select Console Application from the middle panel.
4. Give the project name as Chapter06.WindowsAuthenticationKatana and click OK.

Fig 7 – We have named the Console Application as Chapter06.WindowsAuthenticationKatana

5. Install the NuGet package named Microsoft.Owin.SelfHost from the NuGet Package Manager.

Fig 8 – Install NuGet Package named Microsoft.Owin.SelfHost

6. Add a Startup class with the following code snippet:

namespace Chapter06.WindowsAuthenticationKatana { class Startup { public void Configuration(IAppBuilder app) { var listener = (HttpListener)app.Properties["System.Net.HttpListener"]; listener.AuthenticationSchemes = AuthenticationSchemes.IntegratedWindowsAuthentication; app.Run(context => { context.Response.ContentType = "text/plain"; return context.Response.WriteAsync("Hello Packt Readers!"); }); } } }

7. Add the following code to the Main function in Program.cs:

using (WebApp.Start<Startup>("http://localhost:8001")) { Console.WriteLine("Press any Key to quit Web App."); Console.ReadKey(); }

8. Now run the application and open http://localhost:8001/ in the browser:

Fig 8 – Open the Web App in a browser

If you capture the request using Fiddler, you will notice an Authorization Negotiate entry in the header of the request. Try calling http://localhost:8001/ in Fiddler and you will get a 401 Unauthorized response with WWW-Authenticate headers, which indicate that the server uses the Negotiate protocol, consuming either Kerberos or NTLM, as follows:

HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/8.0 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM X-Powered-By: ASP.NET Date: Tue, 01 Sep 2015 19:35:51 IST Content-Length: 6062 Proxy-Support: Session-Based-Authentication

Discussing Hawk authentication

Hawk authentication is a message authentication code-based HTTP authentication scheme that facilitates partial cryptographic verification of HTTP messages. Hawk authentication requires a symmetric key to be shared between the client and server. Instead of sending the username and password to the server in order to authenticate the request, Hawk authentication uses these credentials to generate a message authentication code, which is passed to the server in the request for authentication (see the illustrative sketch at the end of this article). Hawk authentication is mainly implemented in those scenarios where you need to pass the username and password over an unsecured layer and no SSL is implemented on the server. In such cases, Hawk authentication protects the username and password and passes the message authentication code instead. For example, if you are building a small product where you have control over both the server and the client, and implementing SSL is too expensive for such a small project, then Hawk is the best option to secure the communication between your server and client.

Summary

Voila! We just secured our Web API using forms- and Windows-based authentication. In this article, you learned about how forms authentication works and how it is implemented in the Web API. You also learned about configuring Windows authentication and got to know the advantages and disadvantages of using Windows authentication. Then you learned about implementing the Windows authentication mechanism in Katana. Finally, we had an introduction to Hawk authentication and the scenarios in which to use it.

Resources for Article:

Further resources on this subject: Working with ASP.NET Web API [article] Creating an Application using ASP.NET MVC, AngularJS and ServiceStack [article] Enhancements to ASP.NET [article]
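To make the Hawk authentication discussion above a little more concrete, the following is a minimal Python sketch of the general idea behind MAC-based request authentication: the client and server share a symmetric key, and the client sends an HMAC computed over attributes of the request instead of sending the password itself. The normalized string format, header layout, key id, and shared key shown here are simplified assumptions for illustration only and do not follow the exact Hawk specification.

import hashlib
import hmac
import time
import uuid

# Assumption: a key id and symmetric key previously shared between client and server.
KEY_ID = "demo-key-id"
SHARED_KEY = b"a-secret-shared-between-client-and-server"

def sign_request(method, path, host, port):
    """Build a simplified MAC-style Authorization header value for one request."""
    timestamp = str(int(time.time()))
    nonce = uuid.uuid4().hex[:8]
    # Simplified normalized string; the real Hawk scheme defines its own stricter format.
    normalized = "\n".join([timestamp, nonce, method.upper(), path, host, str(port)])
    mac = hmac.new(SHARED_KEY, normalized.encode("utf-8"), hashlib.sha256).hexdigest()
    return 'MAC id="%s", ts="%s", nonce="%s", mac="%s"' % (KEY_ID, timestamp, nonce, mac)

# The server recomputes the MAC from the same request attributes and compares the two values;
# the shared key itself never travels over the wire.
print(sign_request("GET", "/api/contacts", "localhost", 8001))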


How to build a game using Phaser

Mika Turunen
21 Oct 2015
9 min read
Let's take a look into writing a simple Breakout (Arkanoid for some of us) clone with Phaser. To keep it as simple as possible, I've created a separate GitHub repository for the code used in this post. I'm going to assume you have some experience in JavaScript. You should also give Phaser's official website a visit and see what the commotion is about.

Setup

To have everything in working order, download Node.js from https://nodejs.org/ and have it working on your command prompt, meaning commands like node and npm are recognized. If you are having difficulties, or just like to explore the different options for creating a no-hassle HTTP server for Phaser projects, you can always look at Phaser's official getting started guide on HTTP servers.

Project structure

Create a barkanoid directory on your local machine and extract or clone the files from the GitHub repository into the directory. You should see the following project structure:

barkanoid
|
|----js
|----|----barkanoid.js
|----assets
|----|----background.jpg
|----|----ball.png
|----|----paddle.png
|----|----tile0.png
|----|----tile1.png
|----|----tile2.png
|----|----tile3.png
|----|----tile4.png
|----|----tile5.png
|----index.html
|----package.json

The assets directory is for all game-related assets, such as graphics, sounds and the like. The js directory is for all the JavaScript files, and since we are keeping it as simple as possible for the sake of this post, it's only one .js file. index.html is the actual game canvas. package.json is the Node file that tells the Node package manager (npm) what to install when we use it.

Installing dependencies

There are a few dependencies that we first need to take care of, such as Phaser itself and the HTTP server we are going to serve our files from. Luckily for us, Node.js makes this super simple, and with the project from GitHub you can simply run the following command in the barkanoid directory:

npm install

It might take a while, depending on your Internet connection. All dependencies should now be installed.

Programming time

Phaser requires at least one HTML file to act as the starting canvas for our game, so let's go ahead and create it. Save index.html into the root of the barkanoid directory for easier access.

index.html: <!doctype html> <html> <head> <meta charset="UTF-8"/> <title>Barkanoid Example</title> <script src="/node_modules/phaser/dist/phaser.min.js"></script> <script src="/js/barkanoid.js"></script> </head> <body> <div id="barkanoid"></div> </body> </html>

Notice the HTML element's attribute id="barkanoid"; that HTML div element is the container where Phaser will inject the game canvas. It can be called anything really, but it's important to know what the id of the element is so we can actually tell Phaser about it. Let's continue with the js/barkanoid.js file. Create the Phaser game object and set it up with the HTML div element with the id "barkanoid":

// Create the game object itself var game = new Phaser.Game( 800, 600, // 800 x 600 resolution. Phaser.AUTO, // Allow Phaser to determine Canvas or WebGL "barkanoid", // The HTML element ID we will connect Phaser to. { // Functions (callbacks) for Phaser to call in preload: phaserPreload, // in different states of its execution create: phaserCreate, update: phaserUpdate } );

You can attach callbacks for Phaser's preload, create, update and render. For this project we only need preload, create and update. Preload function:

/** * Preload callback. Used to load all assets into Phaser. */ function phaserPreload() { // Loading the background as an image game.load.image("background", "/assets/background.jpg"); // Loading the tiles game.load.image("tile0", "/assets/tile0.png"); game.load.image("tile1", "/assets/tile1.png"); game.load.image("tile2", "/assets/tile2.png"); game.load.image("tile3", "/assets/tile3.png"); game.load.image("tile4", "/assets/tile4.png"); game.load.image("tile5", "/assets/tile5.png"); // Loading the paddle and the ball game.load.image("paddle", "/assets/paddle.png"); game.load.image("ball", "/assets/ball.png"); }

This is nothing too fancy. I am keeping it as simple as possible and just loading a set of images into Phaser with game.load.image, giving them simple aliases and the location of each file. The following is the phaserCreate function; don't get scared, it's actually quite simple even though a bit lengthy compared to the preload one. We'll walk through it in three steps.

/** * Create callback. Used to create all game related objects, set states and other pre-game running details. */ function phaserCreate() { game.physics.startSystem(Phaser.Physics.ARCADE); // All walls collide except the bottom game.physics.arcade.checkCollision.down = false; // Using the in-game name to fetch the loaded asset for the Background object background = game.add.tileSprite(0, 0, 800, 600, "background");

Here we are simply telling Phaser that Arcade-style physics are enabled, that we do not want to check for collisions at the bottom of the screen, and we create a simple background from the background image.

// Continuing from the first part ... // Creating a tile group tiles = game.add.group(); tiles.enableBody = true; tiles.physicsBodyType = Phaser.Physics.ARCADE; // Creating N tiles into the tile group for (var y = 0; y < 4; y++) { for (var x = 0; x < 15; x++) { // Randomizing the tile sprite we load for the tile var randomTileNumber = Math.floor(Math.random() * 6); var tile = tiles.create(120 + (x * 36), 100 + (y * 52), "tile" + randomTileNumber); tile.body.bounce.set(1); tile.body.immovable = true; } }

Next, we create a group for the tiles with game.add.group. A group can hold many different things, but here we are going to have a group of game objects for easier collision manipulation. The tile colors get randomized every time the game starts. We create four rows of tiles with 15 columns each.

// Continuing from the second part ... // Setup the player -- paddle paddle = game.add.sprite(game.world.centerX, 500, "paddle"); paddle.anchor.setTo(0.5, 0.5); game.physics.enable(paddle, Phaser.Physics.ARCADE); paddle.body.collideWorldBounds = true; paddle.body.bounce.set(1); paddle.body.immovable = true; // Create the ball ball = game.add.sprite(game.world.centerX, paddle.y - 16, "ball"); ball.anchor.set(0.5); ball.checkWorldBounds = true; game.physics.enable(ball, Phaser.Physics.ARCADE); ball.body.collideWorldBounds = true; ball.body.bounce.set(1); // When it goes out of bounds we'll call the function 'death' ball.events.onOutOfBounds.add(helpers.death, this); // Setup score text scoreText = game.add.text(32, 550, "score: 0", defaultTextOptions); livesText = game.add.text(680, 550, "lives: 3", defaultTextOptions); introText = game.add.text(game.world.centerX, 400, "- click to start -", boldTextOptions); introText.anchor.setTo(0.5, 0.5); game.input.onDown.add(helpers.release, this); }

This creates the player, the ball and some informative text elements. And last but not least, the common update function Phaser calls every update cycle. This is where you can handle updating different objects, their states and other rocket-sciency parts one might have in a game.

/** * Phaser engine's update loop that gets called every update cycle. */ function phaserUpdate() { paddle.x = game.input.x; // Making sure the player does not move out of bounds if (paddle.x < 24) { paddle.x = 24; } else if (paddle.x > game.width - 24) { paddle.x = game.width - 24; } if (ballOnPaddle) { // Setting the ball on the paddle when player has it ball.body.x = paddle.x; } else { // Check collisions, the function gets called when the N collides with X game.physics.arcade.collide(ball, paddle, helpers.ballCollideWithPaddle, null, this); game.physics.arcade.collide(ball, tiles, helpers.ballCollideWithTile, null, this); } }

You probably noticed the functions we are calling and objects we are using that were never declared anywhere, like defaultTextOptions and helpers.release. All the helper functions are defined after the callbacks for Phaser.

// Few game related variables that we'll leave undefined var ball, paddle, tiles, scoreText, livesText, introText, background; var ballOnPaddle = true; var lives = 3; var score = 0; var defaultTextOptions = { font: "20px Arial", align: "left", fill: "#ffffff" }; var boldTextOptions = { font: "40px Arial", fill: "#ffffff", align: "center" }; /** * Set of helper functions. */ var helpers = { /** * Releases ball from the paddle. */ release: function() { if (ballOnPaddle) { ballOnPaddle = false; ball.body.velocity.y = -300; ball.body.velocity.x = -75; introText.visible = false; } }, /** * Ball went out of bounds. */ death: function() { lives--; livesText.text = "lives: " + lives; if (lives === 0) { helpers.gameOver(); } else { ballOnPaddle = true; ball.reset(paddle.body.x + 16, paddle.y - 16); } }, /** * Game over, all lives lost. */ gameOver: function() { ball.body.velocity.setTo(0, 0); introText.text = "Game Over!"; introText.visible = true; }, /** * Callback for when ball collides with Tiles. */ ballCollideWithTile: function(ball, tile) { tile.kill(); score += 10; scoreText.text = "score: " + score; // Are there any tiles left? if (tiles.countLiving() <= 0) { // New level start score += 1000; scoreText.text = "score: " + score; introText.text = "- Next Level -"; // Attach ball to the players paddle ballOnPaddle = true; ball.body.velocity.set(0); ball.x = paddle.x + 16; ball.y = paddle.y - 16; // Tell tiles to revive tiles.callAll("revive"); } }, /** * Callback for when ball collides with the players paddle. */ ballCollideWithPaddle: function(ball, paddle) { var diff = 0; // Super simplistic bounce physics for the ball movement if (ball.x < paddle.x) { // Ball is on the left-hand side diff = paddle.x - ball.x; ball.body.velocity.x = (-10 * diff); } else if (ball.x > paddle.x) { // Ball is on the right-hand side diff = ball.x - paddle.x; ball.body.velocity.x = (10 * diff); } else { // Ball is perfectly in the middle // Add a little random X to stop it bouncing straight up! ball.body.velocity.x = 2 + Math.random() * 8; } } };

Most of the helper functions are pretty self-explanatory, and there's a decent amount of comments around them, so they should be easy to understand.

Time to play the game

After about 200 lines or so of code and setting everything up, you should be ready to run npm start in the barkanoid directory to start the game. Enjoy the Barkanoid game you just created. Play a round or two and start customizing it as much as you want. Have fun!
About the author

Mika Turunen is a software professional hailing from cold, frozen Finland. He spends a good part of his day playing with emerging web and cloud-related technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development and games in general. When he's not playing with technology, he is spending time with his two cats and growing his beard.


QlikView Tips and Tricks

Packt
20 Oct 2015
6 min read
In this article by Andrew Dove and Roger Stone, authors of the book QlikView Unlocked, we will cover the following key topics: a few coding tips; the surprising data sources; include files; and change logs. (For more resources related to this topic, see here.)

A few coding tips

There are many ways to improve things in QlikView. Some are techniques and others are simply useful things to know or do. Here are a few of our favourite ones.

Keep the coding style constant

There's actually more to this than just being a tidy developer. So, always code your function names in the same way—it doesn't matter which style you use (unless you have installation standards that require a particular style). For example, you could use MonthStart(), monthstart(), or MONTHSTART(). They're all equally valid, but for consistency, choose one and stick to it.

Use MUST_INCLUDE rather than INCLUDE

This feature wasn't documented at all until quite a late service release of v11.2; however, it's very useful. If you use INCLUDE and the file you're trying to include can't be found, QlikView will silently ignore it. The consequences of this are unpredictable, ranging from strange behaviour to an outright script failure. If you use MUST_INCLUDE, QlikView will complain that the included file is missing, and you can fix the problem before it causes other issues. Actually, it seems strange that INCLUDE doesn't do this, but Qlik must have its reasons. Nevertheless, always use MUST_INCLUDE to save yourself some time and effort.

Put version numbers in your code

QlikView doesn't have a versioning system as such, and we have yet to see one that works effectively with QlikView. So, this requires some effort on the part of the developer. Devise a versioning system and always place the version number in a variable that is displayed somewhere in the application. It doesn't matter if you don't update this number for every single change, but ensure that it's updated for every release to the user and ties in with your own release logs.

Do stringing in the script and not in screen objects

We would have put this in anyway, but its place in the article was assured by a recent experience on a user site. They wanted four lines of address and a postcode strung together in a single field, with each part separated by a comma and a space. However, any field could contain nulls; so, to avoid addresses such as ',,,,' or ', Somewhere ,,,', there had to be a check for null in every field as the fields were strung together. The table only contained about 350 rows, but it took 56 seconds to refresh on screen when the work was done in an expression in a straight table. Moving the expression to the script and presenting just the resulting single field on screen took only 0.14 seconds. (That's right; it's about a seventh of a second.) Plus, it didn't adversely affect script performance. We can't think of a better example of improving screen performance.

The surprising data sources

QlikView will read database tables, spreadsheets, XML files, and text files, but did you know that it can also take data from a web page? If you need some standard data from the Internet, there's no need to create your own version. Just grab it from a web page! How about ISO country codes? Here's an example. Open the script and click on Web files… below Data from Files to the right of the bottom section of the screen. This will open the File Wizard: Source dialogue, as in the following screenshot.

Enter the URL where the table of data resides. Then, click on Next and, in this case, select @2 under Tables, as shown in the following screenshot. Click on Finish and your script will look something similar to this:

LOAD F1, Country, A2, A3, Number FROM [http://www.airlineupdate.com/content_public/codes/misc_codes/icao_nat.htm] (html, codepage is 1252, embedded labels, table is @2);

Now you've got a great lookup table in about 30 seconds; it will take another few seconds to clean it up for your own purposes. One small caveat though—web pages can change address, content, and structure, so it's worth putting in some validation around this if you think there could be any volatility.

Include files

We have already said that you should use MUST_INCLUDE rather than INCLUDE, but we're always surprised that many developers never use include files at all. If the same code needs to be used in more than one place, it really should be in an include file. Suppose that you have several documents that use C:\QlikFiles\Finance\Budgets.xlsx and that the folder name is hard coded in all of them. As soon as the file is moved to another location, you will have several modifications to make, and it's easy to miss changing a document because you may not even realise it uses the file. The solution is simple, very effective, and guaranteed to save you many reload failures. Instead of coding the full folder name, create something similar to this:

LET vBudgetFolder='C:\QlikFiles\Finance\';

Put the line into an include file, for instance, FolderNames.inc. Then, code this into each script as follows:

$(MUST_INCLUDE=FolderNames.inc)

Finally, when you want to refer to your Budgets.xlsx spreadsheet, code this:

$(vBudgetFolder)Budgets.xlsx

Now, if the folder path has to change, you only need to change one line of code in the include file, and everything will work fine as long as you implement include files in all your documents. Note that this works just as well for folders containing QVD files and so on. You can also use this technique to include LOADs from QVDs or spreadsheets, because you should always aim to have just one version of the truth.

Change logs

Unfortunately, one of the things QlikView is not great at is version control. It can be really hard to see what has been done between versions of a document, and using the -prj folder feature can be extremely tedious and not necessarily helpful. So, this means that you, as the developer, need to maintain some discipline over version control. To do this, ensure that you have an area of comments that looks something similar to this right at the top of your script:

// Demo.qvw
//
// Roger Stone - One QV Ltd - 04-Jul-2015
//
// PURPOSE
// Sample code for QlikView Unlocked - Chapter 6
//
// CHANGE LOG
// Initial version 0.1
// - Pull in ISO table from Internet and local Excel data
//
// Version 0.2
// Remove unused fields and rename incoming ISO table fields to
// match local spreadsheet
//

Ensure that you update this every time you make a change. You could make this even more helpful by explaining why the change was made and not just what change was made. You should also comment the expressions in charts when they are changed.

Summary

In this article, we covered a few coding tips, the surprising data sources, include files, and change logs.

Resources for Article:

Further resources on this subject: Qlik Sense's Vision [Article] Securing QlikView Documents [Article] Common QlikView script errors [Article]


Nginx service

Packt
20 Oct 2015
15 min read
In this article by Clement Nedelcu, author of the book, Nginx HTTP Server - Third Edition, we discuss the stages after having successfully built and installed Nginx. The default location for the output files is /usr/local/nginx. (For more resources related to this topic, see here.) Daemons and services The next step is obviously to execute Nginx. However, before doing so, it's important to understand the nature of this application. There are two types of computer applications—those that require immediate user input, thus running in the foreground, and those that do not, thus running in the background. Nginx is of the latter type, often referred to as daemon. Daemon names usually come with a trailing d and a couple of examples can be mentioned here—httpd, the HTTP server daemon, is the name given to Apache under several Linux distributions; named, the name server daemon; or crond the task scheduler—although, as you will notice, this is not the case for Nginx. When started from the command line, a daemon immediately returns the prompt, and in most cases, does not even bother outputting data to the terminal. Consequently, when starting Nginx you will not see any text appear on the screen, and the prompt will return immediately. While this might seem startling, it is on the contrary a good sign. It means the daemon was started correctly and the configuration did not contain any errors. User and group It is of utmost importance to understand the process architecture of Nginx and particularly the user and groups its various processes run under. A very common source of troubles when setting up Nginx is invalid file access permissions—due to a user or group misconfiguration, you often end up getting 403 Forbidden HTTP errors because Nginx cannot access the requested files. There are two levels of processes with possibly different permission sets: The Nginx master process: This should be started as root. In most Unix-like systems, processes started with the root account are allowed to open TCP sockets on any port, whereas other users can only open listening sockets on a port above 1024. If you do not start Nginx as root, standard ports such as 80 or 443 will not be accessible. Note that the user directive that allows you to specify a different user and group for the worker processes will not be taken into consideration for the master process. The Nginx worker processes: These are automatically spawned by the master process under the account you specified in the configuration file with the user directive. The configuration setting takes precedence over the configuration switch you may have specified at compile time. If you did not specify any of those, the worker processes will be started as user nobody, and group nobody (or nogroup depending on your OS). Nginx command-line switches The Nginx binary accepts command-line arguments to perform various operations, among which is controlling the background processes. To get the full list of commands, you may invoke the help screen using the following commands: [alex@example.com ~]$ cd /usr/local/nginx/sbin [alex@example.com sbin]$ ./nginx -h The next few sections will describe the purpose of these switches. Some allow you to control the daemon, some let you perform various operations on the application configuration. Starting and stopping the daemon You can start Nginx by running the Nginx binary without any switches. 
If the daemon is already running, a message will show up indicating that a socket is already listening on the specified port: [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) […] [emerg]: still could not bind(). Beyond this point, you may control the daemon by stopping it, restarting it, or simply reloading its configuration. Controlling is done by sending signals to the process using the nginx -s command. Command Description nginx –s stop Stops the daemon immediately (using the TERM signal). nginx –s quit Stops the daemon gracefully (using the QUIT signal). nginx –s reopen Reopens the log files. nginx –s reload Reloads the configuration. Note that when starting the daemon, stopping it, or performing any of the preceding operations, the configuration file is first parsed and verified. If the configuration is invalid, whatever command you have submitted will fail, even when trying to stop the daemon. In other words, in some cases you will not be able to even stop Nginx if the configuration file is invalid. An alternate way to terminate the process, in desperate cases only, is to use the kill or killall commands with root privileges: [root@example.com ~]# killall nginx Testing the configuration As you can imagine, testing the validity of your configuration will become crucial if you constantly tweak your server setup . The slightest mistake in any of the configuration files can result in a loss of control over the service—you will then be unable to stop it via regular init control commands, and obviously, it will refuse to start again. Consequently, the following command will be useful to you in many occasions; it allows you to check the syntax, validity, and integrity of your configuration: [alex@example.com ~]$ /usr/local/nginx/sbin/nginx –t The –t switch stands for test configuration. Nginx will parse the configuration anew and let you know whether it is valid or not. A valid configuration file does not necessarily mean Nginx will start though, as there might be additional problems such as socket issues, invalid paths, or incorrect access permissions. Obviously, manipulating your configuration files while your server is in production is a dangerous thing to do and should be avoided when possible. The best practice, in this case, is to place your new configuration into a separate temporary file and run the test on that file. Nginx makes it possible by offering the –c switch: [alex@example.com sbin]$ ./nginx –t –c /home/alex/test.conf This command will parse /home/alex/test.conf and make sure it is a valid Nginx configuration file. When you are done, after making sure that your new file is valid, proceed to replacing your current configuration file and reload the server configuration: [alex@example.com sbin]$ cp -i /home/alex/test.conf /usr/local/nginx/conf/nginx.conf cp: erase 'nginx.conf' ? yes [alex@example.com sbin]$ ./nginx –s reload Other switches Another switch that might come in handy in many situations is –V. Not only does it tell you the current Nginx build version, but more importantly it also reminds you about the arguments that you used during the configuration step—in other words, the command switches that you passed to the configure script before compilation. [alex@example.com sbin]$ ./nginx -V nginx version: nginx/1.8.0 (Ubuntu) built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) TLS SNI support enabled configure arguments: --with-http_ssl_module In this case, Nginx was configured with the --with-http_ssl_module switch only. Why is this so important? 
Well, if you ever try to use a module that was not included with the configure script during the precompilation process, the directive enabling the module will result in a configuration error. Your first reaction will be to wonder where the syntax error comes from. Your second reaction will be to wonder if you even built the module in the first place! Running nginx –V will answer this question. Additionally, the –g option lets you specify additional configuration directives in case they were not included in the configuration file: [alex@example.com sbin]$ ./nginx –g "timer_resolution 200ms"; Adding Nginx as a system service In this section, we will create a script that will transform the Nginx daemon into an actual system service. This will result in mainly two outcomes: the daemon will be controllable using standard commands, and more importantly, it will automatically be launched on system startup and stopped on system shutdown. System V scripts Most Linux-based operating systems to date use a System-V style init daemon. In other words, their startup process is managed by a daemon called init, which functions in a way that is inherited from the old System V Unix-based operating system. This daemon functions on the principle of runlevels, which represent the state of the computer. Here is a table representing the various runlevels and their signification: Runlevel State 0 System is halted 1 Single-user mode (rescue mode) 2 Multiuser mode, without NFS support 3 Full multiuser mode 4 Not used 5 Graphical interface mode 6 System reboot You can manually initiate a runlevel transition: use the telinit 0 command to shut down your computer or telinit 6 to reboot it. For each runlevel transition, a set of services are executed. This is the key concept to understand here: when your computer is stopped, its runlevel is 0. When you turn it on, there will be a transition from runlevel 0 to the default computer startup runlevel. The default startup runlevel is defined by your own system configuration (in the /etc/inittab file) and the default value depends on the distribution you are using: Debian and Ubuntu use runlevel 2, Red Hat and Fedora use runlevel 3 or 5, CentOS and Gentoo use runlevel 3, and so on—the list is long. So, in summary, when you start your computer running CentOS, it operates a transition from runlevel 0 to runlevel 3. That transition consists of starting all services that are scheduled for runlevel 3. The question is how to schedule a service to be started at a specific runlevel. For each runlevel, there is a directory containing scripts to be executed. If you enter these directories (rc0.d, rc1.d, to rc6.d), you will not find actual files, but rather symbolic links referring to scripts located in the init.d directory. Service startup scripts will indeed be placed in init.d, and links will be created by tools placing them in the proper directories. About init scripts An init script, also known as service startup script or even sysv script, is a shell script respecting a certain standard. The script controls a daemon application by responding to commands such as start, stop, and others, which are triggered at two levels. First, when the computer starts, if the service is scheduled to be started for the system runlevel, the init daemon will run the script with the start argument. 
The other possibility for you is to manually execute the script by calling it from the shell: [root@example.com ~]# service httpd start Or if your system does not come with the service command: [root@example.com ~]# /etc/init.d/httpd start The script must accept at least the start, stop, restart, force-reload, and status commands, as they will be used by the system to respectively start up, shut down, restart, forcefully reload the service, or inquire its status. However, to enlarge your field of action as a system administrator, it is often interesting to provide further options, such as a reload argument to reload the service configuration or a try-restart argument to stop and start the service again. Note that since service httpd start and /etc/init.d/httpd start essentially do the same thing, with the exception that the second command will work on all operating systems, we will make no further mention of the service command and will exclusively use the /etc/init.d/ method. Init script for Debian-based distributions We will thus create a shell script to start and stop our Nginx daemon and also to restart and reloading it. The purpose here is not to discuss Linux shell script programming, so we will merely provide the source code of an existing init script, along with some comments to help you understand it. Due to differences in the format of the init scripts from one distribution to another, we will discover two separate scripts here. The first one is meant for Debian-based distributions such as Debian, Ubuntu, Knoppix, and so forth. First, create a file called nginx with the text editor of your choice, and save it in the /etc/init.d/ directory (on some systems, /etc/init.d/ is actually a symbolic link to /etc/rc.d/init.d/). In the file you just created, insert the script provided in the code bundle supplied with this book. Make sure that you change the paths to make them correspond to your actual setup. You will need root permissions to save the script into the init.d directory. The complete init script for Debian-based distributions can be found in the code bundle. Init script for Red Hat–based distributions Due to the system tools, shell programming functions, and specific formatting that it requires, the preceding script is only compatible with Debian-based distributions. If your server is operated by a Red Hat–based distribution such as CentOS, Fedora, and many more, you will need an entirely different script. The complete init script for Red Hat–based distributions can be found in the code bundle. Installing the script Placing the file in the init.d directory does not complete our work. There are additional steps that will be required to enable the service. First, make the script executable. So far, it is only a piece of text that the system refuses to run. Granting executable permissions on the script is done with the chmod command: [root@example.com ~]# chmod +x /etc/init.d/nginx Note that if you created the file as the root user, you will need to be logged in as root to change the file permissions. At this point, you should already be able to start the service using service nginx start or /etc/init.d/nginx start, as well as stopping, restarting, or reloading the service. The last step here will be to make it so the script is automatically started at the proper runlevels. Unfortunately, doing this entirely depends on what operating system you are using. 
We will cover the two most popular families—Debian, Ubuntu, or other Debian-based distributions and Red Hat/Fedora/CentOS, or other Red Hat–derived systems. Debian-based distributions For the Debian-based distribution, a simple command will enable the init script for the system runlevel: [root@example.com ~]# update-rc.d -f nginx defaults This command will create links in the default system runlevel folders. For the reboot and shutdown runlevels, the script will be executed with the stop argument; for all other runlevels, the script will be executed with start. You can now restart your system and see your Nginx service being launched during the boot sequence. Red Hat–based distributions For the Red Hat–based systems family, the command differs, but you get an additional tool to manage system startup. Adding the service can be done via the following command: [root@example.com ~]# chkconfig nginx on Once that is done, you can then verify the runlevels for the service: [root@example.com ~]# chkconfig --list nginx Nginx 0:off 1:off 2:on 3:off 4:on 5:on 6:off Another tool will be useful to you to manage system services namely, ntsysv. It lists all services scheduled to be executed on system startup and allows you to enable or disable them at will. The tool ntsysv requires root privileges to be executed. Note that prior to using ntsysv, you must first run the chkconfig nginx on command, otherwise Nginx will not appear in the list of services. Downloading the example code You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed to you directly. NGINX Plus Since mid-2013, NGINX, Inc., the company behind the Nginx project, also offers a paid subscription called NGINX Plus. The announcement came as a surprise for the open source community, but several companies quickly jumped on the bandwagon and reported amazing improvements in terms of performance and scalability after using NGINX Plus. NGINX, Inc., the high performance web company, today announced the availability of NGINX Plus, a fully-supported version of the popular NGINX open source software complete with advanced features and offered with professional services. The product is developed and supported by the core engineering team at Nginx Inc., and is available immediately on a subscription basis. As business requirements continue to evolve rapidly, such as the shift to mobile and the explosion of dynamic content on the Web, CIOs are continuously looking for opportunities to increase application performance and development agility, while reducing dependencies on their infrastructure. NGINX Plus provides a flexible, scalable, uniformly applicable solution that was purpose built for these modern, distributed application architectures. Considering the pricing plans ($1,500 per year per instance) and the additional features made available, this platform is indeed clearly aimed at large corporations looking to integrate Nginx into their global architecture seamlessly and effortlessly. Professional support from the Nginx team is included and discounts can be offered for multiple-instance subscriptions. This book covers the open source version of Nginx only and does not detail advanced functionality offered by NGINX Plus. For more information about the paid subscription, take a look at http://www.nginx.com. 
Summary From this point on, Nginx is installed on your server and automatically starts with the system. Your web server is functional, though it does not yet answer the most basic functionality: serving a website. The first step towards hosting a website will be to prepare a suitable configuration file. Resources for Article: Further resources on this subject: Getting Started with Nginx[article] Fine-tune the NGINX Configuration[article] Nginx proxy module [article]
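To tie together the -t, -c, and -s reload switches described earlier in this article, here is an illustrative Python sketch of the recommended workflow: test a candidate configuration file first, and only install and reload it if the test passes. The file paths are examples only, and this is just one possible way to script the steps shown above, not a tool shipped with Nginx.

import shutil
import subprocess

NGINX = "/usr/local/nginx/sbin/nginx"
CANDIDATE = "/home/alex/test.conf"            # example path to the new configuration
LIVE = "/usr/local/nginx/conf/nginx.conf"     # example path to the live configuration

# Step 1: parse and validate the candidate file without touching the running server.
test = subprocess.run([NGINX, "-t", "-c", CANDIDATE])

if test.returncode == 0:
    # Step 2: only replace the live configuration once the candidate is known to be valid.
    shutil.copy(CANDIDATE, LIVE)
    # Step 3: ask the running daemon to reload its configuration gracefully.
    subprocess.run([NGINX, "-s", "reload"], check=True)
else:
    print("Candidate configuration is invalid; the live configuration was left untouched.")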


Extracting Real-Time Wildfire Data from ArcGIS Server with the ArcGIS REST API

Packt
20 Oct 2015
6 min read
In this article by Eric Pimpler, the author of the book ArcGIS Blueprints, we look at how the ArcGIS platform, which contains a number of different products including ArcGIS Desktop, ArcGIS Pro, ArcGIS for Server, and ArcGIS Online, provides a robust environment for performing geographic analysis and mapping. Content produced by this platform can be integrated using the ArcGIS REST API and a programming language, such as Python. Many of the applications we build in this book use the ArcGIS REST API as a bridge to exchange information between software products. (For more resources related to this topic, see here.)

We're going to start by developing a simple ArcGIS Desktop custom script tool in ArcToolbox that connects to an ArcGIS Server map service to retrieve real-time wildfire information. The wildfire information will be retrieved from a United States Geological Survey (USGS) map service that provides real-time wildfire data. We'll use the ArcGIS REST API and the Python requests module to connect to the map service and request the data. The response from the map service will contain data that will be written to a feature class stored in a local geodatabase using the ArcPy data access module. This will all be accomplished inside a custom script tool attached to an ArcGIS Python toolbox. In this article we will cover the following topics: ArcGIS Desktop Python toolboxes; ArcGIS Server map and feature services; the Python requests module; the Python json module; the ArcGIS REST API; and the ArcPy data access module (ArcPy.da).

Design

Before we start building the application, we'll spend some time planning what we'll build. This is a fairly simple application, but it serves to illustrate how ArcGIS Desktop and ArcGIS Server can easily be integrated using the ArcGIS REST API. In this application, we'll build an ArcGIS Python toolbox that serves as a container for a single tool named USGSDownload. The USGSDownload tool will use the Python requests, json, and ArcPy data access modules to request real-time wildfire data from a USGS map service (a minimal sketch of this kind of REST request is shown at the end of this article). The response from the map service will contain information, including the location of the fire, the name of the fire, and some additional information, which will then be written to a local geodatabase. The communication between the ArcGIS Desktop Python toolbox and the ArcGIS Server map service is accomplished through the ArcGIS REST API and the Python language. Let's get started building the application.

Creating the ArcGIS Desktop Python toolbox

There are two ways to create toolboxes in ArcGIS: script tools in custom toolboxes and script tools in Python toolboxes. Python toolboxes encapsulate everything in one place: parameters, validation code, and source code. This is not the case with custom toolboxes, which are created using a wizard and a separate script that processes the business logic. A Python toolbox functions like any other toolbox in ArcToolbox, but it is created entirely in Python and has a file extension of .pyt. It is created programmatically as a class named Toolbox. In this article, you will learn how to create a Python toolbox and add a tool. You'll only create the basic structure of the toolbox and tool that will ultimately connect to an ArcGIS Server map service containing wildfire data. In a later section, you'll complete the functionality of the tool by adding code that connects to the map service, downloads the current data, and inserts it into a feature class.

Open ArcCatalog. You can create a Python toolbox in a folder by right-clicking on the folder and selecting New | Python Toolbox. In ArcCatalog, there is a folder named Toolboxes and inside it is a My Toolboxes folder, as seen in the following screenshot. Right-click on this folder and select New | Python Toolbox. The name of the toolbox is controlled by the file name. Name the toolbox InsertWildfires.pyt, as shown in the following screenshot.

The Python toolbox file (.pyt) can be edited in any text or code editor. By default, the code will open in Notepad. You can change this by setting the default editor for your script by going to Geoprocessing | Geoprocessing Options and the Editor section. You'll note in the Figure A: Geoprocessing options screenshot that I have set my editor to PyScripter, which is my preferred environment. You may want to change this to IDLE or whichever development environment you are currently using. For example, to find the path to the executable for the IDLE development environment, you can go to Start | All Programs | ArcGIS | Python 2.7 | IDLE, right-click on IDLE, and select Properties to display the properties window. Inside the Target text box, you should see a path to the executable, as seen in the following screenshot. Copy and paste the path into the Editor and Debugger sections inside the Geoprocessing Options dialog, as shown in the following screenshot:

Figure A: Geoprocessing options

Right-click on InsertWildfires.pyt and select Edit. This will open the development environment you defined earlier, as seen in the following screenshot. Your environment will vary depending on the editor that you have defined. Remember that you will not be changing the name of the class, which is Toolbox. However, you will rename the Tool class to reflect the name of the tool you want to create. Each tool will have various methods, including __init__(), which is the constructor for the tool, along with getParameterInfo(), isLicensed(), updateParameters(), updateMessages(), and execute(). You can use the __init__() method to set initialization properties, such as the tool's label and description. Find the class named Tool in your code, change the name of this tool to USGSDownload, and set the label and description properties:

class USGSDownload(object):
    def __init__(self):
        """Define the tool (tool name is the name of the class)."""
        self.label = "USGS Download"
        self.description = "Download from USGS ArcGIS Server instance"
        self.canRunInBackground = False

You can use the Tool class as a template for other tools you'd like to add to the toolbox by copying and pasting the class and its methods. We're not going to do that in this article, but you need to be aware of it.

Summary

Integrating ArcGIS Desktop and ArcGIS Server is easily accomplished using the ArcGIS REST API and the Python programming language. In this article we created an ArcGIS Python toolbox containing a tool that connects to an ArcGIS Server map service, which contains real-time wildfire information and is hosted by the USGS.

Resources for Article:

Further resources on this subject: ArcGIS – Advanced ArcObjects [article] Using the ArcPy Data Access Module with Feature Classes and Tables [article] Introduction to Mobile Web ArcGIS Development [article]
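As a preview of the kind of request the USGSDownload tool will eventually make, here is a minimal Python sketch of querying an ArcGIS Server map service layer through the REST API with the requests module. The service URL and the attribute field names below are placeholders for illustration, not the actual USGS endpoint used later in the book.

import requests

# Placeholder URL -- substitute the ArcGIS Server map service layer you want to query.
LAYER_URL = "https://services.example.com/arcgis/rest/services/Wildfire/MapServer/0"

params = {
    "where": "1=1",       # return every feature
    "outFields": "*",     # return every attribute field
    "f": "json"           # ask the REST API for a JSON response
}

response = requests.get(LAYER_URL + "/query", params=params)
data = response.json()

# Each returned feature carries an attributes dictionary and, for point layers, an x/y geometry
# that could then be written to a feature class with the ArcPy data access module.
for feature in data.get("features", []):
    attributes = feature["attributes"]
    geometry = feature.get("geometry", {})
    print(attributes, geometry.get("x"), geometry.get("y"))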


OAuth 2.0 – Gaining Consent

Packt
20 Oct 2015
3 min read
In this article by Charles Bihis, the author of the book, Mastering OAuth 2.0, discusses the topic of gaining consent in OAuth 2.0. OAuth 2.0 is a framework built around the concept of resources and permissions for protecting those resources. Central to this is the idea of gaining consent. Let's look at an example.   (For more resources related to this topic, see here.) How does it work? You have just downloaded the iPhone app GoodApp. After installing, GoodApp would like to suggest contacts for you to add by looking at your Facebook friends. Conceptually, the OAuth 2.0 workflow can be represented like this:   The following are the steps present in the OAuth 2.0 workflow: You ask GoodApp to suggest you contacts. GoodApp says, "Sure! But you'll have to authorize me first. Go here…" GoodApp sends you to Facebook to log in and authorize GoodApp. Facebook asks you directly for authorization to see if GoodApp can access your friend list on your behalf. You say yes. Facebook happily obliges, giving GoodApp your friend list. GoodApp then uses this information to tailor suggested contacts for you. The preceding image and workflow presents a rough idea for how this interaction looks like using the OAuth 2.0 model. However, of particular interest to us now are steps 3-5. In these steps, the service provider, Facebook, is asking you, the user, whether or not you allow the client application, GoodApp, to perform a particular action. This is known as user consent. User consent When a client application wants to perform a particular action relating to you or resources you own, it must first ask you for permission. In this case, the client application, GoodApp, wants to access your friend list on the service provider, Facebook. In order for Facebook to allow this, they must ask you directly. This is where the user consent screen comes in. It is simply a page that you are presented with in your application that describes the permissions that are being requested of you by the client application along with an option to either allow or reject the request. You may be familiar with these types of screens already if you've ever tried to access resources on one service from another service. For example, the following is an example of a user consent screen that is presented when you want to log into Pinterest using your Facebook credentials. Incorporating this into our flow chart, we get a new image: This flow chart includes the following steps: You ask GoodApp to suggest you contacts. GoodApp says, "Sure! But you'll have to authorize me first. Go here…" GoodApp sends you to Facebook. Here, Facebook asks you directly for authorization for GoodApp to access your friend list on your behalf. It does this by presenting the user consent form which you can either accept or deny. Let's assume you accept. Facebook happily obliges, giving GoodApp your friend list. GoodApp then uses this information to tailor suggested contacts for you. When you accept the terms on the user consent screen, you have allowed GoodApp access to your Facebook friend list on your behalf. This is a concept known as delegated authority, and it is all accomplished by gaining consent. Summary In this article, we discussed the idea of gaining consent in OAuth 2.0, and how it works with the help of an example and flow charts. Resources for Article: Further resources on this subject: Oracle API Management Implementation 12c [article] Find Friends on Facebook [article] Core Ephesoft Features [article]
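To ground the consent workflow described in this article in code, here is a small Python sketch of how a client application might construct the authorization request that sends the user to the service provider's consent screen (step 3 in the flow), using the standard authorization code grant. The endpoint, client ID, redirect URI, and scope name are placeholders and are not Facebook's or GoodApp's actual values.

from urllib.parse import urlencode
import secrets

# Placeholder values -- these would come from registering the client with the service provider.
AUTHORIZE_ENDPOINT = "https://provider.example.com/oauth2/authorize"
CLIENT_ID = "goodapp-client-id"
REDIRECT_URI = "https://goodapp.example.com/oauth/callback"

# 'state' protects the client against cross-site request forgery during the redirect.
state = secrets.token_urlsafe(16)

params = {
    "response_type": "code",    # authorization code grant
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "friends.read",    # the permission described on the user consent screen
    "state": state,
}

# The client redirects the user's browser to this URL; the consent screen itself is rendered
# by the service provider, and the user's accept/deny decision is returned to the redirect URI.
print(AUTHORIZE_ENDPOINT + "?" + urlencode(params))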

Application Patterns

Packt
20 Oct 2015
9 min read
In this article by Marcelo Reyna, author of the book Meteor Design Patterns, we will cover application-wide patterns that share server- and client- side code. With these patterns, your code will become more secure and easier to manage. You will learn the following topic: Filtering and paging collections (For more resources related to this topic, see here.) Filtering and paging collections So far, we have been publishing collections without thinking much about how many documents we are pushing to the client. The more documents we publish, the longer it will take the web page to load. To solve this issue, we are going to learn how to show only a set number of documents and allow the user to navigate through the documents in the collection by either filtering or paging through them. Filters and pagination are easy to build with Meteor's reactivity. Router gotchas Routers will always have two types of parameters that they can accept: query parameters, and normal parameters. Query parameters are the objects that you will commonly see in site URLs followed by a question mark (<url-path>?page=1), while normal parameters are the type that you define within the route URL (<url>/<normal-parameter>/named_route/<normal-parameter-2>). It is a common practice to set query parameters on things such as pagination to keep your routes from creating URL conflicts. A URL conflict happens when two routes look the same but have different parameters. A products route such as /products/:page collides with a product detail route such as /products/:product-id. While both the routes are differently expressed because of the differences in their normal parameter, you arrive at both the routes using the same URL. This means that the only way the router can tell them apart is by routing to them programmatically. So the user would have to know that the FlowRouter.go() command has to be run in the console to reach either one of the products pages instead of simply using the URL. This is why we are going to use query parameters to keep our filtering and pagination stateful. Stateful pagination Stateful pagination is simply giving the user the option to copy and paste the URL to a different client and see the exact same section of the collection. This is important to make the site easy to share. Now we are going to understand how to control our subscription reactively so that the user can navigate through the entire collection. First, we need to set up our router to accept a page number. Then we will take this number and use it on our subscriber to pull in the data that we need. To set up the router, we will use a FlowRouter query parameter (the parameter that places a question mark next to the URL). Let's set up our query parameter: # /products/client/products.coffee Template.created "products", -> @autorun => tags = Session.get "products.tags" filter = page: Number(FlowRouter.getQueryParam("page")) or 0 if tags and not _.isEmpty tags _.extend filter, tags:tags order = Session.get "global.order" if order and not _.isEmpty order _.extend filter, order:order @subscribe "products", filter Template.products.helpers ... pages: current: -> FlowRouter.getQueryParam("page") or 0 Template.products.events "click .next-page": -> FlowRouter.setQueryParams page: Number(FlowRouter.getQueryParam("page")) + 1 "click .previous-page": -> if Number(FlowRouter.getQueryParam("page")) - 1 < 0 page = 0 else page = Number(FlowRouter.getQueryParam("page")) - 1 FlowRouter.setQueryParams page: page What we are doing here is straightforward. 
First, we extend the filter object with a page key that gets the current value of the page query parameter, and if this value does not exist, then it is set to 0. getQueryParam is a reactive data source, the autorun function will resubscribe when the value changes. Then we will create a helper for our view so that we can see what page we are on and the two events that set the page query parameter. But wait. How do we know when the limit to pagination has been reached? This is where the tmeasday:publish-counts package is very useful. It uses a publisher's special function to count exactly how many documents are being published. Let's set up our publisher: # /products/server/products_pub.coffee Meteor.publish "products", (ops={}) -> limit = 10 product_options = skip:ops.page * limit limit:limit sort: name:1 if ops.tags and not _.isEmpty ops.tags @relations collection:Tags ... collection:ProductsTags ... collection:Products foreign_key:"product" options:product_options mappings:[ ... ] else Counts.publish this,"products", Products.find() noReady:true @relations collection:Products options:product_options mappings:[ ... ] if ops.order and not _.isEmpty ops.order ... @ready() To publish our counts, we used the Counts.publish function. This function takes in a few parameters: Counts.publish <always this>,<name of count>, <collection to count>, <parameters> Note that we used the noReady parameter to prevent the ready function from running prematurely. By doing this, we generate a counter that can be accessed on the client side by running Counts.get "products". Now you might be thinking, why not use Products.find().count() instead? In this particular scenario, this would be an excellent idea, but you absolutely have to use the Counts function to make the count reactive, so if any dependencies change, they will be accounted for. Let's modify our view and helpers to reflect our counter: # /products/client/products.coffee ... Template.products.helpers pages: current: -> FlowRouter.getQueryParam("page") or 0 is_last_page: -> current_page = Number(FlowRouter.getQueryParam("page")) or 0 max_allowed = 10 + current_page * 10 max_products = Counts.get "products" max_allowed > max_products //- /products/client/products.jade template(name="products") div#products.template ... section#featured_products div.container div.row br.visible-xs //- PAGINATION div.col-xs-4 button.btn.btn-block.btn-primary.previous-page i.fa.fa-chevron-left div.col-xs-4 button.btn.btn-block.btn-info {{pages.current}} div.col-xs-4 unless pages.is_last_page button.btn.btn-block.btn-primary.next-page i.fa.fa-chevron-right div.clearfix br //- PRODUCTS +momentum(plugin="fade-fast") ... Great! Users can now copy and paste the URL to obtain the same results they had before. This is exactly what we need to make sure our customers can share links. If we had kept our page variable confined to a Session or a ReactiveVar, it would have been impossible to share the state of the webapp. Filtering Filtering and searching, too, are critical aspects of any web app. Filtering works similar to pagination; the publisher takes additional variables that control the filter. We want to make sure that this is stateful, so we need to integrate this into our routes, and we need to program our publishers to react to this. Also, the filter needs to be compatible with the pager. 
Let's start by modifying the publisher: # /products/server/products_pub.coffee Meteor.publish "products", (ops={}) -> limit = 10 product_options = skip:ops.page * limit limit:limit sort: name:1 filter = {} if ops.search and not _.isEmpty ops.search _.extend filter, name: $regex: ops.search $options:"i" if ops.tags and not _.isEmpty ops.tags @relations collection:Tags mappings:[ ... collection:ProductsTags mappings:[ collection:Products filter:filter ... ] else Counts.publish this,"products", Products.find filter noReady:true @relations collection:Products filter:filter ... if ops.order and not _.isEmpty ops.order ... @ready() To build any filter, we have to make sure that the property that creates the filter exists and _.extend our filter object based on this. This makes our code easier to maintain. Notice that we can easily add the filter to every section that includes the Products collection. With this, we have ensured that the filter is always used even if tags have filtered the data. By adding the filter to the Counts.publish function, we have ensured that the publisher is compatible with pagination as well. Let's build our controller: # /products/client/products.coffee Template.created "products", -> @autorun => ops = page: Number(FlowRouter.getQueryParam("page")) or 0 search: FlowRouter.getQueryParam "search" ... @subscribe "products", ops Template.products.helpers ... pages: search: -> FlowRouter.getQueryParam "search" ... Template.products.events ... "change .search": (event) -> search = $(event.currentTarget).val() if _.isEmpty search search = null FlowRouter.setQueryParams search:search page:null First, we have renamed our filter object to ops to keep things consistent between the publisher and subscriber. Then we have attached a search key to the ops object that takes the value of the search query parameter. Notice that we can pass an undefined value for search, and our subscriber will not fail, since the publisher already checks whether the value exists or not and extends filters based on this. It is always better to verify variables on the server side to ensure that the client doesn't accidentally break things. Also, we need to make sure that we know the value of that parameter so that we can create a new search helper under the pages helper. Finally, we have built an event for the search bar. Notice that we are setting query parameters to null whenever they do not apply. This makes sure that they do not appear in our URL if we do not need them. To finish, we need to create the search bar: //- /products/client/products.jade template(name="products") div#products.template header#promoter ... div#content section#features ... section#featured_products div.container div.row //- SEARCH div.col-xs-12 div.form-group.has-feedback input.input-lg.search.form-control(type="text" placeholder="Search products" autocapitalize="off" autocorrect="off" autocomplete="off" value="{{pages.search}}") span(style="pointer-events:auto; cursor:pointer;").form-control-feedback.fa.fa-search.fa-2x ... Notice that our search input is somewhat cluttered with special attributes. All these attributes ensure that our input is not doing the things that we do not want it to for iOS Safari. It is important to keep up with nonstandard attributes such as these to ensure that the site is mobile-friendly. You can find an updated list of these attributes here at https://developer.apple.com/library/safari/documentation/AppleApplications/Reference/SafariHTMLRef/Articles/Attributes.html. 
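The pattern used throughout this section — read the page and search query parameters, build an options object, and derive skip and limit values for the publisher — is not specific to Meteor. The following is a rough Python sketch of the same logic (the names and the page size of 10 are assumptions mirroring the examples above), which may help if you want to reason about the pagination math on its own:

PAGE_SIZE = 10  # mirrors the limit used in the publisher above

def build_query_options(query_params):
    # Turn URL query parameters into a selector plus skip/limit options.
    page = int(query_params.get("page") or 0)
    search = (query_params.get("search") or "").strip()

    selector = {}
    if search:
        # Case-insensitive "contains" filter, analogous to the $regex filter above.
        selector["name"] = {"$regex": search, "$options": "i"}

    return {
        "selector": selector,
        "skip": page * PAGE_SIZE,  # documents to skip for this page
        "limit": PAGE_SIZE,        # documents to publish per page
    }

def is_last_page(page, total_count):
    # Same test as the is_last_page helper: the next page would start past the total.
    return PAGE_SIZE + page * PAGE_SIZE > total_count

print(build_query_options({"page": "2", "search": "lamp"}))
print(is_last_page(2, 25))  # True: 30 allowed slots already exceed 25 documents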
Summary
This article covered how to control the amount of data that we publish. We also learned a pattern for building pagination that works together with filters, along with code examples.

Resources for Article:
Further resources on this subject:
Building the next generation Web with Meteor [article]
Quick start - creating your first application [article]
Getting Started with Meteor [article]


Understanding Text Search and Hierarchies in SAP HANA

Packt
20 Oct 2015
9 min read
In this article by Vinay Singh, author of the book Real Time Analytics with SAP HANA, this article covers Full Text Search and hierarchies in SAP HANA, and how to create and use them in our data models. After completing this article, you should be able to: Create and use Full Text Search Create hierarchies—level and parent child hierarchies (For more resources related to this topic, see here.) Creating and using Full Text Search Before we proceed with the creation and use of Full Text Search, let's quickly go through the basic terms associated with it. They are as follows: Text Analysis: This is the process of analyzing unstructured text, extracting relevant information, and then transforming this information into structure information that can be leveraged in different ways. The scripts provide additional possibilities to analyze strings or large text columns by providing analysis rules for many industries in many languages in SAP HANA. Full Text Search: This capability of HANA helps to speed up search capabilities within large amounts of text data significantly. The primary function of Full Text Search is to optimize linguistic searches. Fuzzy Search: This functionality enables to find strings that match a pattern approximately (rather than exactly). It's a fault-tolerant search, meaning that a query returns records even if the search term contains additional or missing characters, or even spelling mistakes. It is an alternative to a non-fault tolerant SQL statement. The score() function: When using contains() in the where clause of a select statement, the score() function can be used to retrieve the score. This is a numeric value between 0.0 and 1.0. The score defines the similarity between the user input and the records returned by the search. A score of 0.0 means that there is no similarity. The higher the score, the more similar a record is to the search input. Some of the applied applications of fuzzy search could be: Fault-tolerant check for duplicate records. Its helps to prevent duplication entry in Systems by searching similar entries. Fault-tolerant search in text columns—for example, search documents on diode and find all documents that contain the term "triode". Fault-tolerant search in structure database content search for rhyming words, for example coffee Krispy biscuit and find toffee crisp biscuits (the standard example given by SAP). Let's see what are the use cases for text search: Combining structure and unstructured data Medicine and healthcare Patents Brand monitoring and the buying pattern of consumer Real-time analytics on a large volume of data Data from social media Finance data Sales optimization Monitoring and production planning The results of text analysis are stored in a table and therefore, can be leveraged in all the HANA- supported scenarios: Standard Analytics: Create analytical views and calculation views on top. For example, companies mentioned in news articles over time. Data mining, predictive: Using R, Predictive Analysis Library (PAL) functions. For example, clustering, time series analysis, and so on. Search-based applications: Create a search model and build a search UI with the HANA Info Access (InA) toolkit for HTML5. Text analysis results can be used to navigate and filter search results. For example, People finder, search UI for internal documents. The capabilities of HANA Full Text Search and text analysis are as follows: Native full text search Database text analysis The graphical modeling of search models Info Access toolkit for HTML5 UIs. 
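The score() function mentioned above returns a similarity value between 0.0 and 1.0, computed inside SAP HANA. The idea of a normalized similarity score is easy to demonstrate outside the database; the following short Python sketch uses difflib purely to illustrate the concept — it is not the algorithm HANA uses for fuzzy search.

from difflib import SequenceMatcher

def similarity(search_term, value):
    # Return a score between 0.0 (no similarity) and 1.0 (identical),
    # conceptually like score() in a fuzzy search.
    return SequenceMatcher(None, search_term.lower(), value.lower()).ratio()

# The higher the score, the closer the stored value is to the search input.
for value in ["West Main Street", "Westmain", "East Side", "Main County"]:
    print(value, "->", round(similarity("West", value), 2))

# A fuzzy threshold such as FUZZY(0.3) can be read as "keep only rows whose
# similarity score is at least 0.3".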
The benefits of full text search: Extract unstructured content with no additional cost Combine structure and unstructured information for unified information access Less data duplication and transfer Harness the benefit of InA (Info Access toolkit ) for an HTML5 application The following are the supported data types by fuzzy search: Short text Text VARCHAR NVARCHAR Date Data with full text index. Enabling search option Before we can use the search option in any attribute or analytical view, we will need to enable this functionality in the SAP HANA Studio Preferences as shown in the following screenshot: We are well prepared to move ahead with the creation and use of Full Text search. Let's do this step by step as follows: Create the table that we will use to perform the Full Text Search statements: Create Schema <DEMO>; // I am creating , it would be already present from our previous exercises. SET SCHEMA DEMO; // Set the schema name Create a Column Table including FUZZY SEARCH indexed columns. DROP TABLE DEMO.searchtbl_FUZZY; CREATE COLUMN TABLE DEMO.searchtbl_FUZZY ( CUST_NAME TEXT FUZZY SEARCH INDEX ON, CUST_COUNTY TEXT FUZZY SEARCH INDEX ON, CUST_DEPT TEXT FUZZY SEARCH INDEX ON, ); Prepare the fuzzy search logic (SQL logic): Search for customers in the countries that contain the 'MAIN' word: SELECT score() AS score, * FROM searchtbl_FUZZY WHERE CONTAINS(cust_county, 'MAIN'); Search for customers in the countries that contain the 'MAIN' word but with Fuzzy parameter 0.4 SELECT score() AS score, * FROM searchtbl_FUZZY WHERE CONTAINS(cust_county, 'West', FUZZY(0.3)); Perform a fuzzy search for a customer working in a department that includes the department word : SELECT highlighted(cust_dept), score() AS score, * FROM searchtbl_FUZZY WHERE CONTAINS(cust_dept, 'Department', FUZZY(0.5)); Fuzzy search for all the columns by looking for the customer word: SELECT score() AS score, * FROM searchtbl_FUZZY WHERE CONTAINS(*, 'Customer', FUZZY(0.5)); Creating hierarchies Hierarchies are created to maintain data in a structured format, such as maintaining customer or employee data based on their roles and splitting the data based on geographies. Hierarchical data is very useful for organizational purposes during decision making. Two types of hierarchies can be created in SAP HANA: The level hierarchy Parent-child hierarchy The hierarchies are initially created in the attribute view and later can be combined in the analytic view or calculation view for consumption in a report as per business requirements. Let's create both types of hierarchies in attribute views. Creating level hierarchy Each level represents a position in the hierarchy. For example, a time dimension might have a hierarchy that represents data at the month, quarter, and year levels. Each level above the base level contains aggregate values for the levels below it. Create a new attribute view (for your own practice, I would suggest you to create a new one). You can also use an existing one. Use the SNWD_PD EPM sample tables. In output view, mark the following as output: In the semantic node of the view, create new hierarchy as shown in the following screenshot and fill the details: Save and Activate the view. Now the hierarchy is ready to be used in an analytical view. Add a client and node key again as output to your attribute view that you just created, that is AT_LEVEL_HIERARCY_DEMO, as we will use these two fields in Create an analytical view. It should look like the following screenshot. 
Add the attribute view created in the preceding step and the SNWD_SO_I table to the data foundation: Join client to client and product guide to node key:  Save and activate. Go to MS Excel | All Programs | Microsoft Office | Microsoft Excel 2010 then go to Data tab | From Other Sources | From Data Connection Wizard. You will get a new popup for Data Connection Wizard | Other/Advanced | SAP HANA MDX Provider: You will be asked to provide the connection details, fill the details, and test the connection (these are the same details that you used while adding the system to SAP HANA Studio). Data Connection Wizard will now ask you to choose the analytical view (choose the one that you just created in the preceding step): The preceding steps will take you to an excel sheet and you will see data as per the choices that you chose in the Pivot table field list: Create parent-child hierarchy The parent-child hierarchy is a simple, two-level hierarchy where the child element has an attribute containing the parent element. These two columns define the hierarchical relationships among the members of the dimension. The first column, called the member key column, identifies each dimension member. The other column, called the parent column, identifies the parent of each dimension member. The parent attribute determines the name of each level in the parent-child hierarchy and determines whether the data for parent members should be displayed  Let's create a parent-child hierarchy using the following steps: Create an attribute view. Create a table that has the parent-child information: The following is the sample code and the insert statement: CREATE COLUMN TABLE "DEMO"."CCTR_HIE"( "CC_CHILD" NVARCHAR(4), "CC_PARENT" NVARCHAR(4)); insert into "DEMO"."CCTR_HIE" values('','') insert into "DEMO"."CCTR_HIE" values('C11','c1'); insert into "DEMO"."CCTR_HIE" values('C12','c1'); insert into "DEMO"."CCTR_HIE" values('C13','c1'); insert into "DEMO"."CCTR_HIE" values('C14','c2'); insert into "DEMO"."CCTR_HIE" values('C21','c2'); insert into "DEMO"."CCTR_HIE" values('C22','c2'); insert into "DEMO"."CCTR_HIE" values('C31','c3'); insert into "DEMO"."CCTR_HIE" values('C1','c'); insert into "DEMO"."CCTR_HIE" values('C2','c'); insert into "DEMO"."CCTR_HIE" values('C3','c'); We will put the preceding table into our data foundation of attribute view as follows: Make CC_CHILD as the key attribute. Now let's create new hierarchy as shown in the following screenshot: Save and activate the hierarchy. Create a new analytical view and add the HIE_PARENT_CHILD_DEMO view and the CCTR_COST table in data foundation. Join CCTR to CCTR_CILD with many is to one relationship. Make sure that in the semantic node, COST is set as a measure. Save and Activate the analytical view. Preview the data. As per the business need, we can use one of the two hierarchies along with attribute view or analytical view. Summary In this article, we took a deep dive into Full Text Search, fuzzy logic, and hierarchies concepts. We learned how to create and use text search and fuzzy logic. The parent-child and level hierarchies were discussed in detail with a hands-on approach on both. Resources for Article: Further resources on this subject: Sabermetrics with Apache Spark [article] Meeting SAP Lumira [article] Achieving High-Availability on AWS Cloud [article]


Gamification with Moodle LMS

Packt
19 Oct 2015
11 min read
 In this article by Natalie Denmeade, author of the book, Gamification with Moodle describes how teachers can use Gamification design in their course development within the Moodle Learning Management System (LMS) to increase the motivation and engagement of learners. (For more resources related to this topic, see here.) Gamification is a design process that re-frames goals to be more appealing and achievable by using game design principles. The goal of this process is it to keep learners engaged and motivated in a way that is not always present in traditional courses. When implemented in elegant solutions, learners may be unaware of the subtle game elements being used. A gamification strategy can be considered successful if learners are more engaged, feel challenged and confident to keep progressing, which has implications for the way teachers consider their course evaluation processes. It is important to note that Gamification in education is more about how the person feels at certain points in their learning journey than about the end product which may or may not look like a game. Gamification and Moodle After following the tutorials in this book, teachers will gain the basic skills to get started applying Gamification design techniques in their Moodle courses. They can take learners on a journey of risk, choice, surprise, delight, and transformation. Taking an activity and reframing it to be more appealing and achievable sounds like the job description of any teacher or coach! Therefore, many teachers are already doing this! Understanding games and play better can help teachers be more effective in using a wider range of game elements to aid retention and completions in their courses. In this book you will find hints and tips on how to apply proven strategies to online course development, including the research into a growth mindset from Carol Dweck in her book Mindset. You will see how the use of game elements in Foursquare (badges), Twitter (likes), and Linkedin (progress bar), can also be applied to Moodle course design. In addition, you will use the core features available in Moodle which were designed to encourage learner participation as they collaborate, tag, share, vote, network, and generate learning content for each other. Finally, explore new features and plug-ins which offer dozens of ways that teachers can use game elements in Moodle such as, badges, labels, rubrics, group assignments, custom grading scales, forums, and conditional activities. A benefit of using Moodle as a Gamification LMS is it was developed on social constructivist principles. As these are learner-centric principles this means it is easy to use common Moodle features to apply gamification through the implementation of game components, mechanics and dynamics. These have been described by Kevin Werbach (in the Coursera MOOC on Gamification) as: Game Dynamics are the grammar: (the hidden elements) Constraints, emotions, narrative, progression, relationships Game Mechanics are the verbs: The action is driven forward by challenges, chance, competition/cooperation, feedback, resource acquisition, rewards, transactions, turns, win states Game Components are the nouns: Achievements, avatars, badges, boss fights, collections, combat, content, unlocking, gifting, leaderboards, levels, points, quests, teams, virtual goods Most of these game elements are not new ideas to teachers. It could be argued that school is already gamified through the use of grades and feedback. 
In fact it would be impossible to find a classroom that is not using some game elements. This book will help you identify which elements will be most effective in your current context. Teachers are encouraged to start with a few and gradually expanding their repertoire. As with professional game design, just using game elements will not ensure learners are motivated and engaged. The measure of success of a Gamification strategy is that learners continue to build resilience and autonomy in their own learning. When implemented well, the potential benefits of using a Gamification design process in Moodle are to: Provide manageable set of subtasks and tasks by hiding and revealing content Make assessment criteria visible, predictable, and in plain English using marking guidelines and rubrics Increase ownership of learning paths through choice and activity restrictions Build individual and group identity through work place simulations and role play Offer freedom to fail and try again without negative repercussions Increase enjoyment of both teacher and learners When teachers follow the step by step guide provided in this book they will create a basic Moodle course that acts as a flexible framework ready for learning content. This approach is ideal for busy teachers who want to respond to the changing needs and situations in the classroom. The dynamic approach keeps Teachers in control of adding and changing content without involving a technology support team. Onboarding tips By using focussed examples, the book describes how to use Moodle to implement an activity loop that identifies a desired behaviour and wraps motivations and feedback around that action. For example, a desired action may be for each learner to update their Moodle profile information with their interests and an avatar. Various motivational strategies could be put in place to prompt (or force) the learners to complete this task, including: Ask learners to share their avatars, with a link to their profile in a forum with ratings. Everyone else is doing it and they will feel left out if they don't get a like or a comment (creating a social norm). They might get rated as having the best avatar. Update the forum type so that learners can't see other avatars until they make a post. Add a theme (for example, Lego inspired avatars) so that creating an avatar is a chance to be creative and play. Choosing how they represent themselves in an online space is an opportunity for autonomy. Set the conditional release so learners cannot see the next activity until this activity is marked as complete (for example, post at least 3 comments on other avatars). The value in this process is that learners have started building connections between new classmates. This activity loop is designed to appeal to diverse motivations and achieve multiple goals: Encourages learners to create an online persona and choose their level of anonymity Invite learners to look at each other’s profiles and speed up the process of getting to know each other Introduce learners to the idea of forum posting and rating in a low-risk (non-assessable) way Take the workload off the Teacher to assess each activity directly Enforce compliance through software options which saves admin time and creates an expectation of work standards for learners Feedback options Games celebrate small and large successes and so should Moodle courses. There are a number of ways to do this in Moodle, including simply automating feedback with a Label, which is revealed once a milestone is reached. 
These milestones could be an activity completion, topic completion, or a level has been reached in the course total. Feedback can be provided through symbols of the achievement. Learners of all ages are highly motivated by this. Nearly all human cultures use symbols, icons, medals and badges to indicate status and achievements such as a black belt in Karate, Victoria Cross and Order of Australia Medals, OBE, sporting trophies, Gold Logies, feathers and tattoos. Symbols of achievement can be achieved through the use of open badges. Moodle offers a simple way to issue badges in line with Open Badges Industry (OBI) standards. The learner can take full ownership of this badge when they export it to their online backpack. Higher education institutes are finding evidence that open badges are a highly effective way to increase motivation for mature learners. Kaplan University found the implementation of badges resulted in increased student engagement by 17 percent. As well as improving learner reactions to complete harder tasks, grades increased up to 9 percent. Class attendance and discussion board posts increased over the non-badged counterparts. Using open badges as a motivation strategy enables feedback to be regularly provided along the way from peers, automated reporting and the teacher. For advanced Moodlers, the book describes how rubrics can be used for "levelling up" and how the Moodle gradebook can be configured as an exponential point scoring system to indicate progress. Social game elements Implementing social game elements is a powerful way to increase motivation and participation. A Gamification experiment with thousands of MOOC participants measured participation of learners in three groups of "plain, game and social". Students in the game condition had a 22.5 percent higher test score in the final test compared to students in the plain condition. Students in the social condition showed an even stronger increase of almost 40 percent compared to students in the plain condition. (See A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification Krause et al, 2014). Moodle has a number of components that can be used to encourage collaborative learning. Just as the online gaming world has created spaces where players communicate outside of the game in forums, wikis and You Tube channels as well as having people make cheat guides about the games and are happy to share their knowledge with beginners. In Moodle we can imitate these collaborative spaces gamers use to teach each other and make the most of the natural leaders and influencers in the class. Moodle activities can be used to encourage communication between learners and allow delegation and skill-sharing. For example, the teacher may quickly explain and train the most experienced in the group how to perform a certain task and then showcase their work to others as an example. The learner could create blog posts which become an online version of an exercise book. The learner chooses the sharing level so classmates only, or the whole world, can view what is shared and leave comments. The process of delegating instruction through the connection of leader/learners to lagger/learners, in a particular area, allows finish lines to be at different points. Rather spending the last few weeks marking every learner’s individual work, the Teacher can now focus their attention on the few people who have lagged behind and need support to meet the deadlines. 
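The exponential point scoring mentioned under the feedback options is easiest to picture with a small worked example. This is not Moodle configuration or code — just arithmetic, with assumed numbers, showing how level thresholds can be spaced so that each level costs progressively more points:

BASE_POINTS = 100  # assumed points needed to reach level 1
GROWTH = 1.5       # assumed growth factor: each level costs 50% more

def points_needed(level):
    # Total points a learner must accumulate to reach the given level.
    return round(BASE_POINTS * GROWTH ** (level - 1))

def level_for(points):
    # Map an accumulated point total to the highest level reached.
    level = 0
    while points >= points_needed(level + 1):
        level += 1
    return level

for lvl in range(1, 6):
    print("Level", lvl, "requires", points_needed(lvl), "points")
print("A learner with 400 points is at level", level_for(400))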
It's worth taking the time to learn how to configure a Moodle course. This provides the ability to set up a system that is scalable and adaptable to each learner. The options in Moodle can be used to allow learners to create their own paths within the boundaries set by a teacher. Therefore, rather than creating personalised learning paths for every student, set up a suite of tools for learners to create their own learning paths. Learning how to configure Moodle activities will reduce administration tasks through automatic reports, assessments and conditional release of activities. The Moodle activities will automatically create data on learner participation and competence to assist in identifying struggling learners. The inbuilt reports available in Moodle LMS help Teachers to get to know their learners faster. In addition, the reports also create evidence for formative assessment which saves hours of marking time. Through the release from repetitive tasks, teachers can spend more time on the creative and rewarding aspects of teaching. Rather than wait for a game design company to create an awesome educational game for a subject area, get started by using the same techniques in your classroom. This creative process is rewarding for both teachers and learners because it can be constantly adapted for their unique needs. Summary Moodle provides a flexible Gamification platform because teachers are directly in control of modifying and adding a sequence of activities, without having to go through an administrator. Although it may not look as good as a video game (made with an extensive budget) learners will appreciate the effort and personalisation. The Gamification framework does require some preparation. However, once implemented it picks up a momentum of its own and the teacher has a reduced workload in the long run. Purchase the book and enjoy a journey into Gamification in education with Moodle! Resources for Article: Further resources on this subject: Virtually Everything for Everyone [article] Moodle for Online Communities [article] State of Play of BuddyPress Themes [article]


SQL Server with PowerShell

Packt
19 Oct 2015
8 min read
In this article by Donabel Santos, author of the book, SQL Server 2014 with Powershell v5 Cookbook explains scripts and snippets of code that accomplish basic SQL Server tasks using PowerShell. She discusses simple tasks such as Listing SQL Server Instances and Discovering SQL Server Services to make you comfortable working with SQL Server programmatically. However, even if ever you explore how to create some common database objects using PowerShell, keep in mind that PowerShell will not always be the best tool for the task. There will be tasks that are best completed using T-SQL. It is still good to know what is possible in PowerShell and how to do them, so you know that you have alternatives depending on your requirements or situation. For the recipes, we are going to use PowerShell ISE quite a lot. If you prefer running the script from the PowerShell console rather run running the commands from the ISE, you can save the scripts in a .ps1 file and run it from the PowerShell console. (For more resources related to this topic, see here.) Listing SQL Server Instances In this recipe, we will list all SQL Server Instances in the local network. Getting ready Log in to the server that has your SQL Server development instance as an administrator. How to do it... Let's look at the steps to list your SQL Server instances: Open PowerShell ISE as administrator. Let's use the Start-Service cmdlet to start the SQL Browser service: Import-Module SQLPS -DisableNameChecking #out of the box, the SQLBrowser is disabled. To enable: Set-Service SQLBrowser -StartupType Automatic #sql browser must be installed and running for us #to discover SQL Server instances Start-Service "SQLBrowser" Next, you need to create a ManagedComputer object to get access to instances. Type the following script and run: $instanceName = "localhost" $managedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer $instanceName #list server instances $managedComputer.ServerInstances Your result should look similar to the one shown in the following screenshot: Notice that $managedComputer.ServerInstances gives you not only instance names, but also additional properties such as ServerProtocols, Urn, State, and so on. Confirm that these are the same instances you see from SQL Server Management Studio. Open SQL Server Management Studio. Go to Connect | Database Engine. In the Server Name dropdown, click on Browse for More. Select the Network Servers tab and check the instances listed. Your screen should look similar to this: How it works... All services in a Windows operating system are exposed and accessible using Windows Management Instrumentation (WMI). WMI is Microsoft's framework for listing, setting, and configuring any Microsoft-related resource. This framework follows Web-based Enterprise Management (WBEM). The DISTRIBUTED MANAGEMENT TASK FORCE, INC. (http://www.dmtf.org/standards/wbem) defines WBEM as follows: A set of management and Internet standard technologies developed to unify the management of distributed computing environments. WBEM provides the ability for the industry to deliver a well-integrated set of standard-based management tools, facilitating the exchange of data across otherwise disparate technologies and platforms. 
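WMI is one way to see these services from the operating system's side. As a side note that is not part of this recipe, SQL Server also reports its own services through the sys.dm_server_services dynamic management view, so you can cross-check the list from any client that can run T-SQL. The following is a rough Python sketch using pyodbc; the ODBC driver name and connection details are assumptions you would adjust for your environment:

import pyodbc

# Assumed connection details -- adjust the driver, server, and authentication
# to match your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;"
    "Trusted_Connection=yes;"
)

# sys.dm_server_services lists the SQL Server, SQL Server Agent, and Full-Text
# services belonging to the instance you are connected to.
rows = conn.cursor().execute(
    "SELECT servicename, status_desc, startup_type_desc "
    "FROM sys.dm_server_services"
).fetchall()

for servicename, status, startup in rows:
    print(servicename, "-", status, "-", startup)

conn.close()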
In order to access SQL Server WMI-related objects, you can create a WMI ManagedComputer instance: $managedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer $instanceName The ManagedComputer object has access to a ServerInstance property, which in turn lists all available instances in the local network. These instances however are only identifiable if the SQL Server Browser service is running. The SQL Server Browser is a Windows Service that can provide information on installed instances in a box. You need to start this service if you want to list the SQL Server-related services. There's more... The Services instance of the ManagedComputer object can also provide similar information, but you will have to filter for the server type SqlServer: #list server instances $managedComputer.Services | Where-Object Type –eq "SqlServer" | Select-Object Name, State, Type, StartMode, ProcessId Your result should look like this: Instead of creating a WMI instance by using the New-Object method, you can also use the Get-WmiObject cmdlet when creating your variable. Get-WmiObject, however, will not expose exactly the same properties exposed by the Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer object. To list instances using Get-WmiObject, you will need to discover what namespace is available in your environment: $hostName = "localhost" $namespace = Get-WMIObject -ComputerName $hostName -Namespace rootMicrosoftSQLServer -Class "__NAMESPACE" | Where-Object Name -like "ComputerManagement*" #see matching namespace objects $namespace #see namespace names $namespace | Select-Object -ExpandProperty "__NAMESPACE" $namespace | Select-Object -ExpandProperty "Name" If you are using PowerShell v2, you will have to change the Where-Object cmdlet usage to use the curly braces {} and the $_ variable: Where-Object {$_.Name -like "ComputerManagement*" } For SQL Server 2014, the namespace value is: ROOTMicrosoftSQLServerComputerManagement12 This value can be derived from $namespace.__NAMESPACE and $namespace.Name. Once you have the namespace, you can use this with Get-WmiObject to retrieve the instances. We can use the SqlServiceType property to filter. According to MSDN (http://msdn.microsoft.com/en-us/library/ms179591.aspx), these are the values of SqlServiceType: SqlServiceType Description 1 SQL Server Service 2 SQL Server Agent Service 3 Full-Text Search Engine Service 4 Integration Services Service 5 Analysis Services Service 6 Reporting Services Service 7 SQL Browser Service Thus, to retrieve the SQL Server instances, we need to provide the full namespace ROOTMicrosoftSQLServerComputerManagement12. We also need to filter for SQL Server Service type, or SQLServiceType = 1. The code is as follows: Get-WmiObject -ComputerName $hostName -Namespace "$($namespace.__NAMESPACE)$($namespace.Name)" -Class SqlService | Where-Object SQLServiceType -eq 1 | Select-Object ServiceName, DisplayName, SQLServiceType | Format-Table –AutoSize Your result should look similar to the following screenshot: Yet another way to list all the SQL Server instances in the local network is by using the System.Data.Sql.SQLSourceEnumerator class, instead of ManagedComputer. 
This class has a static method called Instance.GetDataSources that will list all SQL Server instances: [System.Data.Sql.SqlDataSourceEnumerator]: :Instance.GetDataSources() | Format-Table -AutoSize When you execute, your result should look similar to the following: If you have multiple SQL Server versions, you can use the following code to display your instances: #list services using WMI foreach ($path in $namespace) { Write-Verbose "SQL Services in:$($path.__NAMESPACE)$($path.Name)" Get-WmiObject -ComputerName $hostName ` -Namespace "$($path.__NAMESPACE)$($path.Name)" ` -Class SqlService | Where-Object SQLServiceType -eq 1 | Select-Object ServiceName, DisplayName, SQLServiceType | Format-Table –AutoSize } Discovering SQL Server Services In this recipe, we will enumerate all SQL Server Services and list their statuses. Getting ready Check which SQL Server services are installed in your instance. Go to Start | Run and type services.msc. You should see a screen similar to this: How to do it... Let's assume you are running this script on the server box: Open PowerShell ISE as administrator. Add the following code and execute: Import-Module SQLPS -DisableNameChecking #you can replace localhost with your instance name $instanceName = "localhost" $managedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer $instanceName #list services $managedComputer.Services | Select-Object Name, Type, ServiceState, DisplayName | Format-Table -AutoSize Your result will look similar to the one shown in the following screenshot: Items listed in your screen will vary depending on the features installed and running in your instance Confirm that these are the services that exist in your server. Check your services window. How it works... Services that are installed on a system can be queried using WMI. Specific services for SQL Server are exposed through SMO's WMI ManagedComputer object. Some of the exposed properties are as follows: ClientProtocols ConnectionSettings ServerAliases ServerInstances Services There's more... An alternative way to get SQL Server-related services is by using Get-WMIObject. We will need to pass in the host name as well as the SQL Server WMI Provider for the ComputerManagement namespace. For SQL Server 2014, this value is ROOTMicrosoftSQLServerComputerManagement12. The script to retrieve the services is provided here. Note that we are dynamically composing the WMI namespace. The code is as follows: $hostName = "localhost" $namespace = Get-WMIObject -ComputerName $hostName -NameSpace rootMicrosoftSQLServer -Class "__NAMESPACE" | Where-Object Name -like "ComputerManagement*" Get-WmiObject -ComputerName $hostname -Namespace "$($namespace.__NAMESPACE)$($namespace.Name)" -Class SqlService | Select-Object ServiceName If you have multiple SQL Server versions installed and want to see just the most recent version's services, you can limit to the latest namespace by adding Select-Object –Last 1: $namespace = Get-WMIObject -ComputerName $hostName -NameSpace rootMicrosoftSQLServer -Class "__NAMESPACE" | Where-Object Name -like "ComputerManagement*" | Select-Object –Last 1 Yet another alternative but less accurate way of listing possible SQL Server related services is the following snippet of code: #alterative - but less accurate Get-Service *SQL* This uses the Get-Service cmdlet and filters base on the service name. This is less accurate because this grabs all processes that have SQL in the name, but may not necessarily be related to SQL Server. 
For example, if you have MySQL installed, it will get picked up even though it is not a SQL Server service. Conversely, this will not pick up SQL Server-related services that do not have SQL in the name, such as ReportServer.

Summary
You will find that many of these tasks can be accomplished using PowerShell and SQL Server Management Objects (SMO). SMO is a library that exposes SQL Server classes and allows programmatic manipulation and automation of many database tasks. For some recipes, we will also explore alternative ways of accomplishing the same tasks using different native PowerShell cmdlets. Now that we have the gist of SQL Server 2014 with PowerShell, let's build a full-fledged e-commerce project with SQL Server 2014 with PowerShell v5 Cookbook.

Resources for Article:
Further resources on this subject:
Exploring Windows PowerShell 5.0 [article]
Working with PowerShell [article]
Installing/upgrading PowerShell [article]

Dynamic Path Planning of a Robot

Packt
19 Oct 2015
8 min read
In this article by Richard Grimmett, the author of the book Raspberry Pi Robotic Blueprints, we will see how to do dynamic path planning. Dynamic path planning simply means that you don't have knowledge of the entire world, with all its possible barriers, before you encounter them. Your robot will have to decide how to proceed while it is in motion. This can be a complex topic, but there are some basics that you can start to understand and apply as you ask your robot to move around in its environment. Let's first address the problem of where you want to go and how to execute a path without barriers, and then add in the barriers.

(For more resources related to this topic, see here.)

Basic path planning
In order to talk about dynamic path planning—planning a path where you don't know what barriers you might encounter—you'll need a framework to understand where your robot is, as well as to determine the location of the goal. One common framework is an x-y grid. Here is a drawing of such a grid:

There are three key points to remember, as follows:
The lower left point is a fixed reference position. The directions x and y are also fixed, and all the other positions will be measured with respect to this position and these directions.
Another important point is the starting location of your robot. Your robot will keep track of its location using its x coordinate (its position with respect to the fixed reference position in the x direction) and its y coordinate (its position with respect to the fixed reference position in the y direction). It will use the compass to keep track of these directions.
The third important point is the position of the goal, also given in x and y coordinates with respect to the fixed reference position.

If you know the starting location and angle of your robot, then you can plan an optimum (shortest distance) path to this goal. To do this, you can use the goal location and robot location and some fairly simple math to calculate the distance and angle from the robot to the goal. To calculate the distance, use the following equation:

distance = sqrt((x_goal - x_robot)^2 + (y_goal - y_robot)^2)

You can use the preceding equation to tell your robot how far to travel to the goal. The following equation will tell your robot the angle at which it needs to travel:

angle = atan2(y_goal - y_robot, x_goal - x_robot)

The following is a graphical representation of the two pieces of information that we just saw:

Now that you have a goal, angle, and distance, you can program your robot to move. To do this, you will write a program to do the path planning and call the movement functions that you created earlier in this article. You will need, however, to know the distance that your robot travels in a set amount of time, so that you can tell your robot in time units, not distance units, how far to travel. You'll also need to be able to translate the distance that might be covered by your robot in a turn; however, this distance may be so small as to be of no importance.

If you know the angle and distance, then you can move your robot to the goal. The following are the steps that you will program:
Calculate the distance in units that your robot will need to travel to reach the goal. Convert this to the number of steps to achieve this distance.
Calculate the angle that your robot will need to travel to reach the goal. You'll use the compass and your robot turn functions to achieve this angle.
Now call the step functions the proper number of times to move your robot the correct distance.
This is it.
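Before looking at the full program, here is a short Python sketch of just those two calculations. The function names are illustrative, but the math is exactly the distance and angle formulas given above; the step length is an assumed value you would measure for your own robot.

import math

def distance_to_goal(x_robot, y_robot, x_goal, y_goal):
    # Straight-line distance from the robot's position to the goal.
    return math.hypot(x_goal - x_robot, y_goal - y_robot)

def angle_to_goal(x_robot, y_robot, x_goal, y_goal):
    # Heading from the robot to the goal, in degrees measured from the x axis.
    return math.degrees(math.atan2(y_goal - y_robot, x_goal - x_robot))

# Example: robot at (0, 0), goal at (3, 4).
d = distance_to_goal(0, 0, 3, 4)  # 5.0 grid units
a = angle_to_goal(0, 0, 3, 4)     # about 53.1 degrees

STEP_LENGTH = 0.25  # assumed distance covered by one step, in grid units
steps = round(d / STEP_LENGTH)
print("Turn to", round(a, 1), "degrees, then take", steps, "steps")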
Now, we will use some very simple python code that executes this using functions to move the robot forward and turn it. In this case, it makes sense to create a file called robotLib.py with all of the functions that do the actual settings to step the biped robot forward and turn the robot. You'll then import these functions using the from robotLib import * statement and your python program can call these functions. This makes the path planning python program smaller and more manageable. You'll do the same thing with the compass program using the command: from compass import *. For more information on how to import the functions from one python file to another, see http://www.tutorialspoint.com/python/python_modules.htm. The following is a listing of the program: In this program, the user enters the goal location, and the robot decides the shortest direction to the desired angle by reading the angle. To make it simple, the robot is placed in the grid heading in the direction of an angle of 0. If the goal angle is less than 180 degrees, the robot will turn right. If it is greater than 180 degrees, the robot will turn left. The robot turns until the desired angle and its measured angle are within a few degrees. Then the robot takes the number of steps in order to reach the goal. Avoiding Obstacles Planning paths without obstacles is, as has been shown, quite easy. However, it becomes a bit more challenging when your robot needs to walk around the obstacles. Let's look at the case where there is an obstacle in the path that you calculated previously. It might look as follows: You can still use the same path planning algorithm to find the starting angle; however, you will now need to use your sonar sensor to detect the obstacle. When your sonar sensor detects the obstacle, you will need to stop and recalculate a path to avoid the barrier, then recalculate the desired path to the goal. One very simple way to do this is when your robot senses a barrier, turn right at 90 degrees, go a fixed distance, and then recalculate the optimum path. When you turn back to move toward the target, you will move along the optimum path if you sense no barrier. However, if your robot encounters the obstacle again, it will repeat the process until it reaches the goal. In this case, using these rules, the robot will travel the following path: To sense the barrier, you will use the library calls to the sensor. You're going to add more accuracy with this robot using the compass to determine your angle. You will do this by importing the compass capability using from compass import *. You will also be using the time library and time.sleep command to add a delay between the different statements in the code. You will need to change your track.py library so that the commands don't have a fixed ending time, as follows: Here is the first part of this code, two functions that provide the capability to turn to a known angle using the compass, and a function to calculate the distance and angle to turn the tracked vehicle to that angle: The second part of this code shows the main loop. The user enters the robot's current position and the desired end position in x and y coordinates. The code that calculates the angle and distance starts the robot on its way. 
If a barrier is sensed, the unit turns 90 degrees, goes for two distance units, and then recalculates the path to the end goal, as shown in the following screenshot:

Now, this algorithm is quite simple; however, there are others that have much more complex responses to barriers. You can also see that by adding sonar sensors to the sides, your robot could sense when the barrier has ended. You could also provide more complex decision processes about which way to turn to avoid an object. Again, there are many different path-finding algorithms. See http://www.academia.edu/837604/A_Simple_Local_Path_Planning_Algorithm_for_Autonomous_Mobile_Robots for an example. These more complex algorithms can be explored using the basic functionality that you have built in this article.

Summary
We have seen how to add path planning to your tracked robot's capability. Your tracked robot can now not only move from point A to point B, but also avoid the barriers that might be in the way.

Resources for Article:
Further resources on this subject:
Debugging Applications with PDB and Log Files [article]
Develop a Digital Clock [article]
Color and motion finding [article]


Getting started with Cocos2d-x

Packt
19 Oct 2015
11 min read
 In this article written by Akihiro Matsuura, author of the book Cocos2d-x Cookbook, we're going to install Cocos2d-x and set up the development environment. The following topics will be covered in this article: Installing Cocos2d-x Using Cocos command Building the project by Xcode Building the project by Eclipse Cocos2d-x is written in C++, so it can build on any platform. Cocos2d-x is open source written in C++, so we can feel free to read the game framework. Cocos2d-x is not a black box, and this proves to be a big advantage for us when we use it. Cocos2d-x version 3, which supports C++11, was only recently released. It also supports 3D and has an improved rendering performance. (For more resources related to this topic, see here.) Installing Cocos2d-x Getting ready To follow this recipe, you need to download the zip file from the official site of Cocos2d-x (http://www.cocos2d-x.org/download). In this article we've used version 3.4 which was the latest stable version that was available. How to do it... Unzip your file to any folder. This time, we will install the user's home directory. For example, if the user name is syuhari, then the install path is /Users/syuhari/cocos2d-x-3.4. We call it COCOS_ROOT. The following steps will guide you through the process of setting up Cocos2d-x: Open the terminal Change the directory in terminal to COCOS_ROOT, using the following comand: $ cd ~/cocos2d-x-v3.4 Run setup.py, using the following command: $ ./setup.py The terminal will ask you for NDK_ROOT. Enter into NDK_ROOT path. The terminal will will then ask you for ANDROID_SDK_ROOT. Enter the ANDROID_SDK_ROOT path. Finally, the terminal will ask you for ANT_ROOT. Enter the ANT_ROOT path. After the execution of the setup.py command, you need to execute the following command to add the system variables: $ source ~/.bash_profile Open the .bash_profile file, and you will find that setup.py shows how to set each path in your system. You can view the .bash_profile file using the cat command: $ cat ~/.bash_profile We now verify whether Cocos2d-x can be installed: Open the terminal and run the cocos command without parameters. $ cocos If you can see a window like the following screenshot, you have successfully completed the Cocos2d-x install process. How it works... Let's take a look at what we did throughout the above recipe. You can install Cocos2d-x by just unzipping it. You know setup.py is only setting up the cocos command and the path for Android build in the environment. Installing Cocos2d-x is very easy and simple. If you want to install a different version of Cocos2d-x, you can do that too. To do so, you need to follow the same steps that are given in this recipe, but which will be for a different version. There's more... Setting up the Android environment  is a bit tough. If you started to develop at Cocos2d-x soon, you can turn after the settings part of Android. And you would do it when you run on Android. In this case, you don't have to install Android SDK, NDK, and Apache. Also, when you run setup.py, you only press Enter without entering a path for each question. Using the cocos command The next step is using the cocos command. It is a cross-platform tool with which you can create a new project, build it, run it, and deploy it. The cocos command works for all Cocos2d-x supported platforms. And you don't need to use an IDE if you don't want to. In this recipe, we take a look at this command and explain how to use it. How to do it... 
Using the cocos command
The next step is using the cocos command. It is a cross-platform tool with which you can create a new project, build it, run it, and deploy it. The cocos command works for all platforms supported by Cocos2d-x, and you don't need to use an IDE if you don't want to. In this recipe, we take a look at this command and explain how to use it.

How to do it...
You can display the cocos command's help by executing it with the --help parameter, as follows:
$ cocos --help

We then move on to generating our new project. Firstly, we create a new Cocos2d-x project with the cocos new command, as shown here:
$ cocos new MyGame -p com.example.mygame -l cpp -d ~/Documents/
The result of this command is shown in the following screenshot. Behind the new parameter is the project name. The other parameters denote the following:

MyGame is the name of your project.
-p is the package name for Android. This is the application ID in the Google Play store, so you should use the reverse-domain naming convention to keep it unique.
-l is the programming language used for the project. You should use "cpp" because we will use C++.
-d is the location in which to generate the new project. This time, we generate it in the user's Documents directory.

You can look up these parameters using the following command:
$ cocos new --help
Congratulations, you have generated your new project. The next step is to build and run it using the cocos command.

Compiling the project
If you want to build and run for iOS, you need to execute the following command:
$ cocos run -s ~/Documents/MyGame -p ios
The parameters are explained as follows:

-s is the directory of the project. This can be an absolute or a relative path.
-p denotes which platform to run on. If you want to run on Android, you use -p android. The available options are ios, android, win32, mac, and linux.

You can run cocos run --help for more detailed information. The result of this command is shown in the following screenshot.
You can now build and run iOS applications of Cocos2d-x. However, you will have to wait a long time if this is your first time building an iOS application, because the entire Cocos2d-x library has to be built the first time (or after a clean build).

How it works...
The cocos command can create a new project and build it. You should use the cocos command when you want to create a new project. Of course, you can also build by using Xcode or Eclipse, which can be easier when you develop and debug.

There's more...
The cocos run command has other parameters, which are the following:

--portrait will set the project as portrait. This option has no argument.
--ios-bundleid will set the bundle ID for the iOS project. However, it is not difficult to set it later.

The cocos command also includes some other commands, which are as follows:

The compile command: This command is used to build a project. The following pattern shows the useful parameters. You can see all parameters and options if you execute the cocos compile [-h] command.
cocos compile [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE]
The deploy command: This command only takes effect when the target platform is android. It will re-install the specified project to the Android device or simulator.
cocos deploy [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE]

The run command executes the compile and deploy commands and then runs the application.
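Because cocos is a plain command-line tool, the new/run cycle can also be scripted. The following is a minimal sketch (not from the book) that simply shells out to the exact commands shown above; the project name, package ID, and output directory reuse the MyGame example values.

# new_and_run.py - a minimal sketch that wraps the cocos commands shown above.
# Not part of the book's recipe; it just shells out to the same CLI calls.
import os
import subprocess

HOME_DOCS = os.path.expanduser('~/Documents')
PROJECT = 'MyGame'
PACKAGE = 'com.example.mygame'
PROJECT_DIR = os.path.join(HOME_DOCS, PROJECT)

def run(cmd):
    # Echo the command before running it, so the output mirrors the recipe.
    print('$ ' + ' '.join(cmd))
    subprocess.check_call(cmd)

# Create the project with the same flags used in the recipe above.
run(['cocos', 'new', PROJECT, '-p', PACKAGE, '-l', 'cpp', '-d', HOME_DOCS])

# Build and run it for iOS (use 'android', 'mac', and so on for other platforms).
run(['cocos', 'run', '-s', PROJECT_DIR, '-p', 'ios'])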
Building the project by Xcode

Getting ready
Before building the project with Xcode, you need Xcode and an iOS developer account to test on a physical device. However, you can also test on the iOS simulator. If you have not installed Xcode yet, you can get it from the Mac App Store. Once you have installed it, get it activated.

How to do it...
Open your project in Xcode. You can open it by double-clicking on the file placed at ~/Documents/MyGame/proj.ios_mac/MyGame.xcodeproj.
Build and run with Xcode. You should select the iOS simulator or the real device on which you want to run your project.

How it works...
If this is your first time building, it will take a long time, but carry on with confidence. You can develop your game faster if you develop and debug it using Xcode rather than Eclipse.

Building the project by Eclipse

Getting ready
You must finish the first recipe before you begin this step. If you have not done so yet, you will also need to install Eclipse.

How to do it...
Setting up NDK_ROOT:
Open the Eclipse preferences.
Open C/C++ | Build | Environment.
Click on Add and set a new variable whose name is NDK_ROOT and whose value is your NDK_ROOT path.

Importing your project into Eclipse:
Open the File menu and click on Import.
Go to Android | Existing Android Code into Workspace.
Click on Next.
Import the project into Eclipse from ~/Documents/MyGame/proj.android.

Importing the Cocos2d-x library into Eclipse:
Perform the same import steps as above, this time importing the cocos2d library project at ~/Documents/MyGame/cocos2d/cocos/platform/android/java.

Build and run:
Click on the Run icon. The first time, Eclipse asks you to select a way to run your application; select Android Application and click on OK, as shown in the following screenshot. If you have connected an Android device to your Mac, you can run your game on the real device or on an emulator. The following screenshot shows it running on a Nexus 5.

If you add .cpp files to your project, you have to modify the Android.mk file at ~/Documents/MyGame/proj.android/jni/Android.mk. This file is needed by the NDK build, and this fix is required whenever you add files. The original Android.mk looks as follows:
LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp
If you added a TitleScene.cpp file, you have to modify it as shown in the following code:
LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp ../../Classes/TitleScene.cpp
The preceding example shows the case where you add the TitleScene.cpp file. If you add other files as well, you need to list all of them.

How it works...
You will get a lot of errors when importing your project into Eclipse, but don't panic. The errors soon disappear after you import the Cocos2d-x library. Because we set the NDK path, Eclipse can compile the C++ code. After you modify the C++ code, run your project in Eclipse; Eclipse automatically compiles the C++ code and the Java code, and then runs the application.
It is tedious to fix Android.mk every time you add C++ files. The following is the original Android.mk:
LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp
LOCAL_C_INCLUDES := $(LOCAL_PATH)/../../Classes
The following is a customized Android.mk that adds C++ files automatically:
CPP_FILES := $(shell find $(LOCAL_PATH)/../../Classes -name *.cpp)
LOCAL_SRC_FILES := hellocpp/main.cpp
LOCAL_SRC_FILES += $(CPP_FILES:$(LOCAL_PATH)/%=%)
LOCAL_C_INCLUDES := $(shell find $(LOCAL_PATH)/../../Classes -type d)
The first line collects all the C++ files under the Classes directory into the CPP_FILES variable. The second and third lines add those C++ files to the LOCAL_SRC_FILES variable, and the last line adds every directory under Classes to the LOCAL_C_INCLUDES variable. By doing so, the C++ files are compiled automatically by the NDK build. If you need to compile files with an extension other than .cpp, you still need to add them manually.
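If you want to double-check which files the customized Android.mk will pick up before kicking off an NDK build, a small script can mirror its two find commands. This is an optional helper, not part of the book's recipe; the script name is made up, and the path reuses the MyGame example project above.

# preview_android_mk.py - optional helper, not part of the original recipe.
# Mirrors the two $(shell find ...) calls in the customized Android.mk so you
# can see which sources and include directories the NDK build will pick up.
import os

CLASSES_DIR = os.path.expanduser('~/Documents/MyGame/Classes')

cpp_files = []
include_dirs = []
for root, dirs, files in os.walk(CLASSES_DIR):
    include_dirs.append(root)  # equivalent to: find ... -type d
    cpp_files += [os.path.join(root, f) for f in files if f.endswith('.cpp')]  # find ... -name *.cpp

print('LOCAL_SRC_FILES would include:')
for path in cpp_files:
    print('  ' + path)

print('LOCAL_C_INCLUDES would include:')
for path in include_dirs:
    print('  ' + path)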
There's more...
If you want to build the C++ code manually with the NDK, you can use the following command:
$ ./build_native.py
This script is located at ~/Documents/MyGame/proj.android. It uses ANDROID_SDK_ROOT and NDK_ROOT internally. If you want to see its options, run ./build_native.py --help.

Summary
Cocos2d-x is an open source, cross-platform game engine, which is free and mature. It can publish games for mobile devices and desktops, including iPhone, iPad, Android, Kindle, Windows, and Mac. The book Cocos2d-x Cookbook focuses on using version 3.4, which was the latest version of Cocos2d-x available at the time of writing. We focus on iOS and Android development, and we'll be using a Mac because we need one to develop iOS applications.

Resources for Article:
Further resources on this subject:
Creating games with Cocos2d-x is easy and 100 percent free [article]
Dragging a CCNode in Cocos2D-Swift [article]
Cocos2d-x: Installation [article]

An Overview of Oozie

Packt
19 Oct 2015
5 min read
In this article by Jagat Singh, the author of the book Apache Oozie Essentials, we will see a basic overview of Oozie and its concepts in brief. (For more resources related to this topic, see here.)

Concepts
Oozie is a workflow scheduler system to run Apache Hadoop jobs. Oozie workflow jobs are Directed Acyclic Graph (DAG) (https://en.wikipedia.org/wiki/Directed_acyclic_graph) representations of actions. Actions tell what to do in the job. Oozie supports running jobs of various types, such as Java, MapReduce, Pig, Hive, Sqoop, Spark, and DistCp. The output of one action can be consumed by the next action to create a chained sequence. Oozie has a client-server architecture: we install the server, which stores the jobs, and use the client to submit our jobs to the server. Let's get an idea of a few basic concepts of Oozie.

Workflow
Workflow tells Oozie 'what' to do. It is a collection of actions arranged in the required dependency graph. So, as part of the workflow definition, we write some actions and call them in a certain order. There are various types of tasks that we can do as part of a workflow: for example, a Hadoop filesystem action, Pig action, Hive action, MapReduce action, Spark action, and so on.

Coordinator
Coordinator tells Oozie 'when' to do it. Coordinators let us run inter-dependent workflows as data pipelines based on some starting criteria. Most Oozie jobs are triggered at a given scheduled time interval, or when an input dataset is present for triggering the job. The following definitions are important for coordinators:

Nominal time: The scheduled time at which the job should execute. For example, we process press releases every day at 8:00 PM.
Actual time: The real time when the job ran. In some cases, if the input data does not arrive, the job might start late. This type of data-dependent job triggering is indicated by a done-flag (more on this later). The done-flag gives the signal to start the job execution.

The general skeleton template of a coordinator is shown in the following figure.

Bundles
Bundles tell Oozie which things to do together as a group. For example, a set of coordinators that can be run together to satisfy a given business requirement can be combined as a bundle.

Book case study
One of the main use cases of Hadoop is ETL data processing. Suppose we work for a large consulting company and have won a project to set up a Big Data cluster inside a customer's data center. At a high level, the requirement is to set up an environment that will satisfy the following flow:

We get data from various sources into Hadoop (file-based loads and Sqoop-based loads).
We preprocess it with various scripts (Pig, Hive, MapReduce).
We insert that data into Hive tables for use by analysts and data scientists.
Data scientists write machine learning models (Spark).

We will be using Oozie as our processing scheduling system to do all of the above. In our architecture, we have one landing server, which sits outside as the front door of the cluster. All source systems send us files via scp, and we regularly (nightly, to keep things simple) push them to HDFS using the hadoop fs -copyFromLocal command. This script is cron-driven; its business logic is very simple: run every night at 8:00 PM and move all the files it sees on the landing server into HDFS.

The Oozie flow works as follows:

Oozie picks up each file and cleans it using a Pig script, changing the delimiters from commas (,) to pipes (|). We will write the same code using both Pig and MapReduce.
We then push those processed files into a Hive table.
For a different source system, a database-backed MySQL table, we do a nightly Sqoop import when the database load is light. We extract all the records generated on the previous business day and insert that output into Hive tables as well.
Analysts and data scientists write their Hive scripts and Spark machine learning models on top of those Hive tables.

We will use Oozie to schedule all of these regular tasks.
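Oozie itself describes workflows in XML, but the underlying idea is simply a DAG of actions. The following Python sketch is purely illustrative (it is not Oozie code, and the action names are made up to mirror the case study): it models the pipeline as actions with dependencies and prints one valid execution order, which is exactly what Oozie's engine derives from a workflow definition.

# Purely illustrative sketch (not Oozie code): the case-study pipeline as a
# DAG of actions, with a topological sort giving one valid execution order.
from collections import deque

# action -> list of actions it depends on (hypothetical names)
dependencies = {
    'copy_files_to_hdfs': [],
    'sqoop_import_mysql': [],
    'pig_clean_files': ['copy_files_to_hdfs'],
    'load_hive_tables': ['pig_clean_files', 'sqoop_import_mysql'],
    'spark_ml_models': ['load_hive_tables'],
}

def topological_order(deps):
    # Count unmet dependencies per action, then repeatedly release actions
    # whose dependencies have all completed.
    remaining = {action: len(parents) for action, parents in deps.items()}
    ready = deque(action for action, count in remaining.items() if count == 0)
    order = []
    while ready:
        action = ready.popleft()
        order.append(action)
        for other, parents in deps.items():
            if action in parents:
                remaining[other] -= 1
                if remaining[other] == 0:
                    ready.append(other)
    return order

print(topological_order(dependencies))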
Node types
A workflow is composed of nodes; the logical DAG of nodes represents the 'what' part of the work done by Oozie. Each node does its specified work and, on success, moves to one node or, on failure, moves to another node. For example, on success it goes to the OK node and on failure it goes to the Kill node. Nodes in an Oozie workflow are of the following types.

Control flow nodes
These nodes are responsible for defining the start, the end, and the control flow of what to do inside the workflow. These can be any of the following:

Start node
End node
Kill node
Decision node
Fork and Join nodes

Action nodes
Action nodes represent the actual processing tasks, which are executed when called. These are of various types: for example, the Pig action, Hive action, and MapReduce action.

Summary
In this article, we looked at the concepts of Oozie in brief. We also learned about the types of nodes in Oozie.

Resources for Article:
Further resources on this subject:
Introduction to Hadoop [article]
Hadoop and HDInsight in a Heartbeat [article]
Cloudera Hadoop and HP Vertica [article]

Part2. ChatOps with Slack and AWS CLI

Yohei Yoshimuta
18 Oct 2015
5 min read
In part 1 of this series, we installed the AWS CLI tool, created an AWS EC2 instance, listed it, terminated it, and downloaded AWS S3 content via the AWS CLI instead of the UI. Now that we know the AWS CLI is very useful and supports an extensive API, let's use it more for daily development and operations work through Slack, which is a very popular chat tool. ChatOps, the term used in the title, means doing operations via a chat tool. Before explaining the full process, I assume you are already using Slack. Let's see how we can control EC2 instances from Slack.

Integrate Slack with Hubot
First, you need to set up a Hubot project with the Slack adapter on your machine. I assume your machine runs Mac OS X, but the setup is not very different on other platforms.

# Install redis
$ brew install redis
# Install node
$ brew install node
# Install npm packages
npm install -g hubot coffee-script yo generator-hubot
# Make your project directory
mkdir -p /path/to/hubot
cd /path/to/hubot
# Generate your project skeleton
yo hubot
? Owner: yoheimuta <yoheimuta@gmail.com>
? Bot name: hubot
? Description: A simple helpful robot for your Company
? Bot adapter: (campfire) slack
? Bot adapter: slack

Then, you have to register a Hubot integration on the Slack configuration page. This page issues an API token that is necessary for the integration to work.

# Run the hubot
$ HUBOT_SLACK_TOKEN=YOUR-API-TOKEN ./bin/hubot --adapter slack

If everything works as expected, Slack should respond with PONG when you type hubot-name ping in the Slack UI.

Install the hubot-aws module
OK, you are now ready to introduce hubot-aws, which turns Slack into your team's AWS CLI environment.

# Install and add hubot-aws to your package.json file:
$ npm install --save hubot-aws
# Add hubot-aws to your external-scripts.json:
$ vi external-scripts.json
# Set AWS credentials if your machine has no ~/.aws/credentials
$ export HUBOT_AWS_ACCESS_KEY_ID="ACCESS_KEY"
$ export HUBOT_AWS_SECRET_ACCESS_KEY="SECRET_ACCESS_KEY"
# Set an AWS region, regardless of whether your machine has ~/.aws/config
$ export HUBOT_AWS_REGION="us-west-2"
# Set a DEBUG flag until you use it in production
$ export HUBOT_AWS_DEBUG="1"
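Under the hood, hubot-aws talks to the same EC2 API that the aws CLI does. Before wiring the ec2 run, ls, and terminate chat commands used in the next sections into Slack, it can be reassuring to issue the equivalent calls yourself. The following is a rough sketch using Python and boto3, which is an extra dependency and not part of this article's Node.js stack; it is not hubot-aws code. It reuses the region, AMI ID, and key name from the example configuration, but omits the subnet and security group settings from app.cson for brevity.

# A rough boto3 sketch of what the hubot ec2 run / ls / terminate commands ask
# EC2 to do. boto3 is an extra dependency, and this is not hubot-aws code.
import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')

# hubot ec2 run: start one instance (UserData would carry the provisioning
# script; the network settings from app.cson are omitted here for brevity).
run_response = ec2.run_instances(
    ImageId='ami-936d9d93',     # the AMI from the app.cson example
    InstanceType='t2.micro',
    KeyName='my-key',
    MinCount=1,
    MaxCount=1,
)
instance_id = run_response['Instances'][0]['InstanceId']
print('started', instance_id)

# hubot ec2 ls: list the instances in the region.
for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'], instance['State']['Name'])

# hubot ec2 terminate --instance_id=***: terminate it again.
ec2.terminate_instances(InstanceIds=[instance_id])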
Run an EC2 instance
We are going to run an EC2 instance provisioned with MongoDB, Node.js, and Let's Chat (a self-hosted chat app for small teams). In order to run it, we first need to create the config files.

$ mkdir aws_config
$ cd aws_config/

# Prepare option parameters
$ vi app.cson
$ cat app.cson
MinCount: 1
MaxCount: 1
ImageId: "ami-936d9d93"
KeyName: "my-key"
InstanceType: "t2.micro"
Placement:
  AvailabilityZone: "us-west-2"
NetworkInterfaces: [
  {
    Groups: [ "sg-***" ]
    SubnetId: "subnet-***"
    DeviceIndex: 0
    AssociatePublicIpAddress: true
  }
]

# Prepare a provisioning shell script
$ vi initfile
$ cat initfile
#!/bin/bash
# Install mongodb
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
sudo echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | sudo tee -a /etc/apt/sources.list.d/10gen.list
# Install puppet
sudo wget -P /tmp https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i /tmp/puppetlabs-release-precise.deb
sudo apt-get -y update
sudo apt-get -y install puppet 2>&1 | tee /tmp/initfile.log
# Install a puppet module
sudo puppet module install jay-letschat 2>&1 | tee /tmp/initfile1.log
# Create a puppet manifest
sudo sh -c "cat >> /etc/puppet/manifests/letschat.pp <<'EOF';
class { 'letschat::db':
  user => 'lcadmin',
  pass => 'unsafepassword',
  bind_ip => '0.0.0.0',
  database_name => 'letschat',
  database_port => '27017',
} ->
class { 'letschat::app':
  dbuser => 'lcadmin',
  dbpass => 'unsafepassword',
  dbname => 'letschat',
  dbhost => 'localhost',
  dbport => '27017',
  deploy_dir => '/etc/letschat',
  http_enabled => true,
  lc_bind_address => '0.0.0.0',
  http_port => '5000',
  ssl_enabled => false,
  cookie => 'secret',
  authproviders => 'local',
  registration => true,
}
EOF"
# Apply the puppet manifest
sudo puppet apply /etc/puppet/manifests/letschat.pp 2>&1 | tee /tmp/initfile2.log

$ export HUBOT_AWS_EC2_RUN_CONFIG="aws_config/app.cson"
$ export HUBOT_AWS_EC2_RUN_USERDATA_PATH="aws_config/initfile"
$ HUBOT_SLACK_TOKEN=YOUR-API-TOKEN ./bin/hubot --adapter slack

Let's type hubot ec2 run --dry-run to validate the config, and then hubot ec2 run to start running an EC2 instance. Enter public-ipaddr:5000 into a browser to enjoy the Let's Chat app once the instance has finished initializing (it may take about 5 minutes). You can find public-ipaddr in the AWS UI or via hubot ec2 ls --instance_id=*** (this command is described below).

List running EC2 instances
You have now created an EC2 instance. The details of the running EC2 instances are displayed whenever you or your colleagues type hubot ec2 ls. It's cool.

Terminate an EC2 instance
Well, it's time to terminate the EC2 instance to save money. Type hubot ec2 terminate --instance_id=***. That's all.

Conclusion
ChatOps is very useful, especially for your team's routine ops work. For example, our team has used it to run a temporary app instance to test a development feature before deploying it to production, to terminate a problematic EC2 instance that was logging errors, and to manage Auto Scaling settings that are still relatively difficult to automate completely with tools like Terraform. hubot-aws works well for dev and ops engineers who want to use the AWS CLI and share operations with their colleagues, but have no time to automate everything completely.

About the author
Yohei Yoshimuta is a software engineer with a proven record of delivering high-quality software in both the game and advertising industries. He has extensive experience in building products from scratch in both small and large teams. His primary focuses are Perl, Go, and AWS technologies. You can reach him at @yoheimuta on GitHub and Twitter.


Mono to Micro-Services: Splitting that fat application

Xavier Bruhiere
16 Oct 2015
7 min read
As articles everywhere state, we're living in a fast-paced digital age. Project complexity, or business growth, challenges existing development patterns. That's why many developers are evolving from the monolithic application toward micro-services. Facebook is moving away from its big blue app. Soundcloud is embracing microservices. Yet this can be a daunting process, so what is it for?

Scale. It is better to plug in new components than to dig into an ocean of code.
Split a complex problem into smaller ones, which are easier to solve and maintain.
Distribute work through independent teams.
Friendliness to open technologies. Isolating a service into a container makes it straightforward to distribute and use. It also allows different, loosely coupled stacks to communicate.

Once upon a time, there was a fat code block called Intuition, my algorithmic trading platform. In this post, we will engineer a simplified version, divided into well-defined components.

Code components
First, we're going to write the business logic, following the single responsibility principle and one of my favorite code mantras: prefer composition over inheritance. The point is to identify the key components of the problem and code a specific solution for each of them. It will articulate our application around the collaboration of clear abstractions.

As an illustration, let's start with the RandomAlgo class. Python tends to be the go-to language for data analysis and rapid prototyping, and it is a great fit for our purpose.

import random


class RandomAlgo(object):
    """ Represent the algorithm flow.
    Heavily inspired from quantopian.com and processing.org """

    def initialize(self, params):
        """ Called once to prepare the algo. """
        self.threshold = params.get('threshold', 0.5)
        # As we will see later, we return here the data channels we're interested in
        return ['quotes']

    def event(self, data):
        """ This method is called every time a new batch of data is ready.
        :param data: {'sid': 'GOOG', 'quote': '345'}
        """
        # randomly choose to invest or not
        if random.random() > self.threshold:
            print('buying {0} of {1}'.format(data['quote'], data['sid']))

This implementation focuses on a single thing: detecting buy signals. But once you get such a signal, how do you invest your portfolio? This is the responsibility of a new component.

class Portfolio(object):

    def __init__(self, amount):
        """ Starting amount of cash we have. """
        self.cash = amount

    def optimize(self, data):
        """ We have a buy signal on this data. Tell us how much cash we should bet. """
        # We're still baby traders and we randomly choose what fraction of our available cash to invest
        to_invest = random.random() * self.cash
        self.cash = self.cash - to_invest
        return to_invest

Then we can improve our previous algorithm's event method, taking advantage of composition.

def initialize(self, params):
    # ...
    self.portfolio = Portfolio(params.get('starting_cash', 10000))

def event(self, data):
    # ...
    print('buying {0} of {1}'.format(self.portfolio.optimize(data), data['sid']))

Here are two simple components that produce readable and efficient code. Now we can develop more sophisticated portfolio optimizations without touching the algorithm's internals. This is also a huge gain early in a project, when we're not sure how things will evolve. Developers should only focus on this core logic. In the next section, we're going to unfold a separate part of the system. The communication layer will solve one question: how do we produce and consume events?
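Before wiring in any messaging, it can help to exercise these two classes together in isolation. This quick check is not from the original article; it assumes you have folded the portfolio-aware initialize() and event() shown above into RandomAlgo.

# A quick standalone check of the composition above (not from the original
# article). It assumes the portfolio-aware initialize() and event() have been
# merged into RandomAlgo.
algo = RandomAlgo()
channels = algo.initialize({'threshold': 0.5, 'starting_cash': 10000})
print('subscribed channels:', channels)

# Feed one hand-written quote event; roughly half the time it prints a buy.
algo.event({'sid': 'GOOG', 'quote': '345'})
print('cash left:', algo.portfolio.cash)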
Inter-component messaging
Let's state the problem: we want each algorithm to receive interesting events and publish its own data. This is the kind of challenge the Internet of Things (IoT) is tackling. We will find, empirically, that our modular approach allows us to pick the right tool, even within a priori unrelated fields. The code below leverages MQTT to bring M2M messaging to the application. Notice that we're diversifying our stack with Node.js; indeed, it's one of the most convenient languages for dealing with event-oriented systems (JavaScript, in general, is gaining some traction in the IoT space).

var mqtt = require('mqtt');

// connect to the broker, responsible for routing messages
// (thanks mosquitto)
var conn = mqtt.connect('mqtt://test.mosquitto.org');

conn.on('connect', function () {
  // we're up! Time to initialize the algorithm
  // and subscribe to interesting messages
});

// triggered on the topic we're listening to
conn.on('message', function (topic, message) {
  console.log('received data:', message.toString());
  // Here, pass it to the algo for processing
});

That's neat! But we still need to connect this messaging layer with the actual Python algorithm. The RPC (Remote Procedure Call) protocol comes in handy for the task, especially with zerorpc. Here is the full implementation with more explanations.

// command-line interfaces made easy
var program = require('commander');
// the MQTT client for Node.js and the browser
var mqtt = require('mqtt');
// a communication layer for distributed systems
var zerorpc = require('zerorpc');
// import project properties
var pkg = require('./package.json');

// define the cli
program
  .version(pkg.version)
  .description(pkg.description)
  .option('-m, --mqtt [url]', 'mqtt broker address', 'mqtt://test.mosquitto.org')
  .option('-r, --rpc [url]', 'rpc server address', 'tcp://127.0.0.1:4242')
  .parse(process.argv);

// connect to the mqtt broker
var conn = mqtt.connect(program.mqtt);
// connect to the rpc peer, the actual python algorithm
var algo = new zerorpc.Client();
algo.connect(program.rpc);

conn.on('connect', function () {
  // connections are ready, initialize the algorithm
  var conf = { starting_cash: 50000 };
  algo.invoke('initialize', conf, function(err, channels, more) {
    // the method returns an array of data channels the algorithm needs
    for (var i = 0; i < channels.length; i++) {
      console.log('subscribing to channel', channels[i]);
      conn.subscribe(channels[i]);
    }
  });
});

conn.on('message', function (topic, message) {
  console.log('received data:', message.toString());
  // make the algorithm process the incoming data
  algo.invoke('event', JSON.parse(message.toString()), function(err, res, more) {
    console.log('algo output:', res);
    // we're done
    algo.close();
    conn.end();
  });
});

The code above calls our algorithm's methods. Here is how to expose them over RPC.

import click, zerorpc

# ... algo code ...

@click.command()
@click.option('--addr', default='tcp://127.0.0.1:4242', help='address to bind rpc server')
def serve(addr):
    server = zerorpc.Server(RandomAlgo())
    server.bind(addr)
    click.echo(click.style('serving on {} ...'.format(addr), bold=True, fg='cyan'))
    # listen and serve
    server.run()

if __name__ == '__main__':
    serve()
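The run-through below publishes a test quote with the mqtt command-line client. If you would rather generate test events from Python, the paho-mqtt package (an extra dependency, not part of the article's original stack) can publish to the same broker and topic; the payload here follows the {'sid': ..., 'quote': ...} format the algorithm expects.

# An optional way to publish test quotes from Python instead of the mqtt CLI.
# paho-mqtt is an extra dependency (pip install paho-mqtt), not part of the
# article's original Node.js/zerorpc stack.
import json
import paho.mqtt.publish as publish

payload = json.dumps({'sid': 'GOOG', 'quote': '3.45'})
publish.single('quotes', payload=payload, hostname='test.mosquitto.org')
print('published', payload, 'on topic quotes')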
At this point, we are ready to run the app. Let's fire up three terminals, install the requirements, and make the machines trade.

sudo apt-get install curl libpython-dev libzmq-dev
# Install pip
curl https://bootstrap.pypa.io/get-pip.py | python
# Algorithm requirements
pip install zerorpc click
# Messaging requirements
npm init
npm install --save commander mqtt zerorpc

# Activate the backend
python ma.py --addr tcp://127.0.0.1:4242
# Drive the algorithm and serve the messaging system
node app.js --rpc tcp://127.0.0.1:4242
# Publish a message
node_modules/.bin/mqtt pub -t 'quotes' -h 'test.mosquitto.org' -m '{"sid": "GOOG", "quote": "3.45"}'

In this state, our implementation is over-engineered, but we have designed a sustainable architecture to wire up small components. And from here we can extend the system. One can focus on algorithms without worrying about the event plumbing. The corollary: switching to a new messaging technology won't affect the way we develop algorithms. We can even swap algorithms by changing the RPC address. A service discovery component could expose which backends are available and how to reach them. A project like octoblu adds device authentication, data sharing, and more. We could implement data sources that connect to live markets or databases, compute indicators like moving averages, and publish them to algorithms.

Conclusion
Given our API definition, a contributor can hack on any component without breaking the project as a whole. In a fast-paced environment, with constant iterations, this architecture can make or break products. This is especially true in the rising container world. Assuming we package each component into specialized containers, we smooth the way to a scalable infrastructure that we can test, distribute, deploy, and grow. Not sure where to start when it comes to containers and microservices? Visit our Docker page!

About the Author
Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker, and Leap Motion. In his spare time he enjoys playing tennis, the violin, and the guitar. You can reach him at @XavierBruhiere.