
How-To Tutorials


Application Patterns

Packt
20 Oct 2015
9 min read
In this article by Marcelo Reyna, author of the book Meteor Design Patterns, we will cover application-wide patterns that share server- and client- side code. With these patterns, your code will become more secure and easier to manage. You will learn the following topic: Filtering and paging collections (For more resources related to this topic, see here.) Filtering and paging collections So far, we have been publishing collections without thinking much about how many documents we are pushing to the client. The more documents we publish, the longer it will take the web page to load. To solve this issue, we are going to learn how to show only a set number of documents and allow the user to navigate through the documents in the collection by either filtering or paging through them. Filters and pagination are easy to build with Meteor's reactivity. Router gotchas Routers will always have two types of parameters that they can accept: query parameters, and normal parameters. Query parameters are the objects that you will commonly see in site URLs followed by a question mark (<url-path>?page=1), while normal parameters are the type that you define within the route URL (<url>/<normal-parameter>/named_route/<normal-parameter-2>). It is a common practice to set query parameters on things such as pagination to keep your routes from creating URL conflicts. A URL conflict happens when two routes look the same but have different parameters. A products route such as /products/:page collides with a product detail route such as /products/:product-id. While both the routes are differently expressed because of the differences in their normal parameter, you arrive at both the routes using the same URL. This means that the only way the router can tell them apart is by routing to them programmatically. So the user would have to know that the FlowRouter.go() command has to be run in the console to reach either one of the products pages instead of simply using the URL. This is why we are going to use query parameters to keep our filtering and pagination stateful. Stateful pagination Stateful pagination is simply giving the user the option to copy and paste the URL to a different client and see the exact same section of the collection. This is important to make the site easy to share. Now we are going to understand how to control our subscription reactively so that the user can navigate through the entire collection. First, we need to set up our router to accept a page number. Then we will take this number and use it on our subscriber to pull in the data that we need. To set up the router, we will use a FlowRouter query parameter (the parameter that places a question mark next to the URL). Let's set up our query parameter: # /products/client/products.coffee Template.created "products", -> @autorun => tags = Session.get "products.tags" filter = page: Number(FlowRouter.getQueryParam("page")) or 0 if tags and not _.isEmpty tags _.extend filter, tags:tags order = Session.get "global.order" if order and not _.isEmpty order _.extend filter, order:order @subscribe "products", filter Template.products.helpers ... pages: current: -> FlowRouter.getQueryParam("page") or 0 Template.products.events "click .next-page": -> FlowRouter.setQueryParams page: Number(FlowRouter.getQueryParam("page")) + 1 "click .previous-page": -> if Number(FlowRouter.getQueryParam("page")) - 1 < 0 page = 0 else page = Number(FlowRouter.getQueryParam("page")) - 1 FlowRouter.setQueryParams page: page What we are doing here is straightforward. 
First, we extend the filter object with a page key that gets the current value of the page query parameter, and if this value does not exist, then it is set to 0. getQueryParam is a reactive data source, the autorun function will resubscribe when the value changes. Then we will create a helper for our view so that we can see what page we are on and the two events that set the page query parameter. But wait. How do we know when the limit to pagination has been reached? This is where the tmeasday:publish-counts package is very useful. It uses a publisher's special function to count exactly how many documents are being published. Let's set up our publisher: # /products/server/products_pub.coffee Meteor.publish "products", (ops={}) -> limit = 10 product_options = skip:ops.page * limit limit:limit sort: name:1 if ops.tags and not _.isEmpty ops.tags @relations collection:Tags ... collection:ProductsTags ... collection:Products foreign_key:"product" options:product_options mappings:[ ... ] else Counts.publish this,"products", Products.find() noReady:true @relations collection:Products options:product_options mappings:[ ... ] if ops.order and not _.isEmpty ops.order ... @ready() To publish our counts, we used the Counts.publish function. This function takes in a few parameters: Counts.publish <always this>,<name of count>, <collection to count>, <parameters> Note that we used the noReady parameter to prevent the ready function from running prematurely. By doing this, we generate a counter that can be accessed on the client side by running Counts.get "products". Now you might be thinking, why not use Products.find().count() instead? In this particular scenario, this would be an excellent idea, but you absolutely have to use the Counts function to make the count reactive, so if any dependencies change, they will be accounted for. Let's modify our view and helpers to reflect our counter: # /products/client/products.coffee ... Template.products.helpers pages: current: -> FlowRouter.getQueryParam("page") or 0 is_last_page: -> current_page = Number(FlowRouter.getQueryParam("page")) or 0 max_allowed = 10 + current_page * 10 max_products = Counts.get "products" max_allowed > max_products //- /products/client/products.jade template(name="products") div#products.template ... section#featured_products div.container div.row br.visible-xs //- PAGINATION div.col-xs-4 button.btn.btn-block.btn-primary.previous-page i.fa.fa-chevron-left div.col-xs-4 button.btn.btn-block.btn-info {{pages.current}} div.col-xs-4 unless pages.is_last_page button.btn.btn-block.btn-primary.next-page i.fa.fa-chevron-right div.clearfix br //- PRODUCTS +momentum(plugin="fade-fast") ... Great! Users can now copy and paste the URL to obtain the same results they had before. This is exactly what we need to make sure our customers can share links. If we had kept our page variable confined to a Session or a ReactiveVar, it would have been impossible to share the state of the webapp. Filtering Filtering and searching, too, are critical aspects of any web app. Filtering works similar to pagination; the publisher takes additional variables that control the filter. We want to make sure that this is stateful, so we need to integrate this into our routes, and we need to program our publishers to react to this. Also, the filter needs to be compatible with the pager. 
Let's start by modifying the publisher: # /products/server/products_pub.coffee Meteor.publish "products", (ops={}) -> limit = 10 product_options = skip:ops.page * limit limit:limit sort: name:1 filter = {} if ops.search and not _.isEmpty ops.search _.extend filter, name: $regex: ops.search $options:"i" if ops.tags and not _.isEmpty ops.tags @relations collection:Tags mappings:[ ... collection:ProductsTags mappings:[ collection:Products filter:filter ... ] else Counts.publish this,"products", Products.find filter noReady:true @relations collection:Products filter:filter ... if ops.order and not _.isEmpty ops.order ... @ready() To build any filter, we have to make sure that the property that creates the filter exists and _.extend our filter object based on this. This makes our code easier to maintain. Notice that we can easily add the filter to every section that includes the Products collection. With this, we have ensured that the filter is always used even if tags have filtered the data. By adding the filter to the Counts.publish function, we have ensured that the publisher is compatible with pagination as well. Let's build our controller: # /products/client/products.coffee Template.created "products", -> @autorun => ops = page: Number(FlowRouter.getQueryParam("page")) or 0 search: FlowRouter.getQueryParam "search" ... @subscribe "products", ops Template.products.helpers ... pages: search: -> FlowRouter.getQueryParam "search" ... Template.products.events ... "change .search": (event) -> search = $(event.currentTarget).val() if _.isEmpty search search = null FlowRouter.setQueryParams search:search page:null First, we have renamed our filter object to ops to keep things consistent between the publisher and subscriber. Then we have attached a search key to the ops object that takes the value of the search query parameter. Notice that we can pass an undefined value for search, and our subscriber will not fail, since the publisher already checks whether the value exists or not and extends filters based on this. It is always better to verify variables on the server side to ensure that the client doesn't accidentally break things. Also, we need to make sure that we know the value of that parameter so that we can create a new search helper under the pages helper. Finally, we have built an event for the search bar. Notice that we are setting query parameters to null whenever they do not apply. This makes sure that they do not appear in our URL if we do not need them. To finish, we need to create the search bar: //- /products/client/products.jade template(name="products") div#products.template header#promoter ... div#content section#features ... section#featured_products div.container div.row //- SEARCH div.col-xs-12 div.form-group.has-feedback input.input-lg.search.form-control(type="text" placeholder="Search products" autocapitalize="off" autocorrect="off" autocomplete="off" value="{{pages.search}}") span(style="pointer-events:auto; cursor:pointer;").form-control-feedback.fa.fa-search.fa-2x ... Notice that our search input is somewhat cluttered with special attributes. All these attributes ensure that our input is not doing the things that we do not want it to for iOS Safari. It is important to keep up with nonstandard attributes such as these to ensure that the site is mobile-friendly. You can find an updated list of these attributes here at https://developer.apple.com/library/safari/documentation/AppleApplications/Reference/SafariHTMLRef/Articles/Attributes.html. 
Summary

This article covered how to control the amount of data we publish, and walked through a pattern for building pagination that works together with filters, with code examples along the way.

Resources for Article:

Further resources on this subject:
Building the next generation Web with Meteor [article]
Quick start - creating your first application [article]
Getting Started with Meteor [article]


QlikView Tips and Tricks

Packt
20 Oct 2015
6 min read
In this article by Andrew Dove and Roger Stone, author of the book QlikView Unlocked, we will cover the following key topics: A few coding tips The surprising data sources Include files Change logs (For more resources related to this topic, see here.) A few coding tips There are many ways to improve things in QlikView. Some are techniques and others are simply useful things to know or do. Here are a few of our favourite ones. Keep the coding style constant There's actually more to this than just being a tidy developer. So, always code your function names in the same way—it doesn't matter which style you use (unless you have installation standards that require a particular style). For example, you could use MonthStart(), monthstart(), or MONTHSTART(). They're all equally valid, but for consistency, choose one and stick to it. Use MUST_INCLUDE rather than INCLUDE This feature wasn't documented at all until quite a late service release of v11.2; however, it's very useful. If you use INCLUDE and the file you're trying to include can't be found, QlikView will silently ignore it. The consequences of this are unpredictable, ranging from strange behaviour to an outright script failure. If you use MUST_INCLUDE, QlikView will complain that the included file is missing, and you can fix the problem before it causes other issues. Actually, it seems strange that INCLUDE doesn't do this, but Qlik must have its reasons. Nevertheless, always use MUST_INCLUDE to save yourself some time and effort. Put version numbers in your code QlikView doesn't have a versioning system as such, and we have yet to see one that works effectively with QlikView. So, this requires some effort on the part of the developer. Devise a versioning system and always place the version number in a variable that is displayed somewhere in the application. Updating this number every time you make a change doesn't matter, but ensure that it's updated for every release to the user and ties in with your own release logs. Do stringing in the script and not in screen objects We would have put this in anyway, but its place in the article was assured by a recent experience on a user site. They wanted four lines of address and a postcode strung together in a single field, with each part separated by a comma and a space. However, any field could contain nulls; so, to avoid addresses such as ',,,,' or ', Somewhere ,,,', there had be a check for null in every field as the fields were strung together. The table only contained about 350 rows, but it took 56 seconds to refresh on screen when the work was done in an expression in a straight table. Moving the expression to the script and presenting just the resulting single field on screen took only 0.14 seconds. (That's right; it's about a seventh of a second). Plus, it didn't adversely affect script performance. We can't think of a better example of improving screen performance. The surprising data sources QlikView will read database tables, spreadsheets, XML files, and text files, but did you know that it can also take data from a web page? If you need some standard data from the Internet, there's no need to create your own version. Just grab it from a web page! How about ISO Country Codes? Here's an example. Open the script and click on Web files… below Data from Filesto the right of the bottom section of the screen. This will open the File Wizard: Source dialogue, as in the following screenshot. 
Enter the URL where the table of data resides: Then, click on Next and in this case, select @2 under Tables, as shown in the following screenshot: Click on Finish and your script will look something similar to this: LOAD F1, Country, A2, A3, Number FROM [http://www.airlineupdate.com/content_public/codes/misc_codes/icao _nat.htm] (html, codepage is 1252, embedded labels, table is @2); Now, you've got a great lookup table in about 30 seconds; it will take another few seconds to clean it up for your own purposes. One small caveat though—web pages can change address, content, and structure, so it's worth putting in some validation around this if you think there could be any volatility. Include files We have already said that you should use MUST_INCLUDE rather than INCLUDE, but we're always surprised that many developers never use include files at all. If the same code needs to be used in more than one place, it really should be in an include file. Suppose that you have several documents that use C:QlikFilesFinanceBudgets.xlsx and that the folder name is hard coded in all of them. As soon as the file is moved to another location, you will have several modifications to make, and it's easy to miss changing a document because you may not even realise it uses the file. The solution is simple, very effective, and guaranteed to save you many reload failures. Instead of coding the full folder name, create something similar to this: LET vBudgetFolder='C:QlikFilesFinance'; Put the line into an include file, for instance, FolderNames.inc. Then, code this into each script as follows: $(MUST_INCLUDE=FolderNames.inc) Finally, when you want to refer to your Budgets.xlsx spreadsheet, code this: $(vBudgetFolder)Budgets.xlsx Now, if the folder path has to change, you only need to change one line of code in the include file, and everything will work fine as long as you implement include files in all your documents. Note that this works just as well for folders containing QVD files and so on. You can also use this technique to include LOAD from QVDs or spreadsheets because you should always aim to have just one version of the truth. Change logs Unfortunately, one of the things QlikView is not great at is version control. It can be really hard to see what has been done between versions of a document, and using the -prj folder feature can be extremely tedious and not necessarily helpful. So, this means that you, as the developer, need to maintain some discipline over version control. To do this, ensure that you have an area of comments that looks something similar to this right at the top of your script: // Demo.qvw // // Roger Stone - One QV Ltd - 04-Jul-2015 // // PURPOSE // Sample code for QlikView Unlocked - Chapter 6 // // CHANGE LOG // Initial version 0.1 // - Pull in ISO table from Internet and local Excel data // // Version 0.2 // Remove unused fields and rename incoming ISO table fields to // match local spreadsheet // Ensure that you update this every time you make a change. You could make this even more helpful by explaining why the change was made and not just what change was made. You should also comment the expressions in charts when they are changed. Summary In this article, we covered few coding tips, the surprising data sources, include files, and change logs. Resources for Article: Further resources on this subject: Qlik Sense's Vision [Article] Securing QlikView Documents [Article] Common QlikView script errors [Article]


Nginx service

Packt
20 Oct 2015
15 min read
In this article by Clement Nedelcu, author of the book, Nginx HTTP Server - Third Edition, we discuss the stages after having successfully built and installed Nginx. The default location for the output files is /usr/local/nginx. (For more resources related to this topic, see here.) Daemons and services The next step is obviously to execute Nginx. However, before doing so, it's important to understand the nature of this application. There are two types of computer applications—those that require immediate user input, thus running in the foreground, and those that do not, thus running in the background. Nginx is of the latter type, often referred to as daemon. Daemon names usually come with a trailing d and a couple of examples can be mentioned here—httpd, the HTTP server daemon, is the name given to Apache under several Linux distributions; named, the name server daemon; or crond the task scheduler—although, as you will notice, this is not the case for Nginx. When started from the command line, a daemon immediately returns the prompt, and in most cases, does not even bother outputting data to the terminal. Consequently, when starting Nginx you will not see any text appear on the screen, and the prompt will return immediately. While this might seem startling, it is on the contrary a good sign. It means the daemon was started correctly and the configuration did not contain any errors. User and group It is of utmost importance to understand the process architecture of Nginx and particularly the user and groups its various processes run under. A very common source of troubles when setting up Nginx is invalid file access permissions—due to a user or group misconfiguration, you often end up getting 403 Forbidden HTTP errors because Nginx cannot access the requested files. There are two levels of processes with possibly different permission sets: The Nginx master process: This should be started as root. In most Unix-like systems, processes started with the root account are allowed to open TCP sockets on any port, whereas other users can only open listening sockets on a port above 1024. If you do not start Nginx as root, standard ports such as 80 or 443 will not be accessible. Note that the user directive that allows you to specify a different user and group for the worker processes will not be taken into consideration for the master process. The Nginx worker processes: These are automatically spawned by the master process under the account you specified in the configuration file with the user directive. The configuration setting takes precedence over the configuration switch you may have specified at compile time. If you did not specify any of those, the worker processes will be started as user nobody, and group nobody (or nogroup depending on your OS). Nginx command-line switches The Nginx binary accepts command-line arguments to perform various operations, among which is controlling the background processes. To get the full list of commands, you may invoke the help screen using the following commands: [alex@example.com ~]$ cd /usr/local/nginx/sbin [alex@example.com sbin]$ ./nginx -h The next few sections will describe the purpose of these switches. Some allow you to control the daemon, some let you perform various operations on the application configuration. Starting and stopping the daemon You can start Nginx by running the Nginx binary without any switches. 
If the daemon is already running, a message will show up indicating that a socket is already listening on the specified port:

[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[…]
[emerg]: still could not bind().

Beyond this point, you may control the daemon by stopping it, restarting it, or simply reloading its configuration. Controlling is done by sending signals to the process using the nginx -s command.

Command           Description
nginx -s stop     Stops the daemon immediately (using the TERM signal).
nginx -s quit     Stops the daemon gracefully (using the QUIT signal).
nginx -s reopen   Reopens the log files.
nginx -s reload   Reloads the configuration.

Note that when starting the daemon, stopping it, or performing any of the preceding operations, the configuration file is first parsed and verified. If the configuration is invalid, whatever command you have submitted will fail, even when trying to stop the daemon. In other words, in some cases you will not be able to even stop Nginx if the configuration file is invalid. An alternate way to terminate the process, in desperate cases only, is to use the kill or killall commands with root privileges:

[root@example.com ~]# killall nginx

Testing the configuration

As you can imagine, testing the validity of your configuration will become crucial if you constantly tweak your server setup. The slightest mistake in any of the configuration files can result in a loss of control over the service—you will then be unable to stop it via regular init control commands, and obviously, it will refuse to start again. Consequently, the following command will be useful to you on many occasions; it allows you to check the syntax, validity, and integrity of your configuration:

[alex@example.com ~]$ /usr/local/nginx/sbin/nginx -t

The -t switch stands for test configuration. Nginx will parse the configuration anew and let you know whether it is valid or not. A valid configuration file does not necessarily mean Nginx will start though, as there might be additional problems such as socket issues, invalid paths, or incorrect access permissions.

Obviously, manipulating your configuration files while your server is in production is a dangerous thing to do and should be avoided when possible. The best practice, in this case, is to place your new configuration into a separate temporary file and run the test on that file. Nginx makes it possible by offering the -c switch:

[alex@example.com sbin]$ ./nginx -t -c /home/alex/test.conf

This command will parse /home/alex/test.conf and make sure it is a valid Nginx configuration file. When you are done, after making sure that your new file is valid, proceed to replacing your current configuration file and reload the server configuration:

[alex@example.com sbin]$ cp -i /home/alex/test.conf /usr/local/nginx/conf/nginx.conf
cp: erase 'nginx.conf' ? yes
[alex@example.com sbin]$ ./nginx -s reload

Other switches

Another switch that might come in handy in many situations is -V. Not only does it tell you the current Nginx build version, but more importantly it also reminds you about the arguments that you used during the configuration step—in other words, the command switches that you passed to the configure script before compilation.

[alex@example.com sbin]$ ./nginx -V
nginx version: nginx/1.8.0 (Ubuntu)
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04)
TLS SNI support enabled
configure arguments: --with-http_ssl_module

In this case, Nginx was configured with the --with-http_ssl_module switch only. Why is this so important?
Well, if you ever try to use a module that was not included with the configure script during the precompilation process, the directive enabling the module will result in a configuration error. Your first reaction will be to wonder where the syntax error comes from. Your second reaction will be to wonder if you even built the module in the first place! Running nginx -V will answer this question.

Additionally, the -g option lets you specify additional configuration directives in case they were not included in the configuration file:

[alex@example.com sbin]$ ./nginx -g "timer_resolution 200ms";

Adding Nginx as a system service

In this section, we will create a script that will transform the Nginx daemon into an actual system service. This will result in mainly two outcomes: the daemon will be controllable using standard commands, and more importantly, it will automatically be launched on system startup and stopped on system shutdown.

System V scripts

Most Linux-based operating systems to date use a System-V style init daemon. In other words, their startup process is managed by a daemon called init, which functions in a way that is inherited from the old System V Unix-based operating system. This daemon functions on the principle of runlevels, which represent the state of the computer. Here is a table representing the various runlevels and their signification:

Runlevel   State
0          System is halted
1          Single-user mode (rescue mode)
2          Multiuser mode, without NFS support
3          Full multiuser mode
4          Not used
5          Graphical interface mode
6          System reboot

You can manually initiate a runlevel transition: use the telinit 0 command to shut down your computer or telinit 6 to reboot it. For each runlevel transition, a set of services are executed. This is the key concept to understand here: when your computer is stopped, its runlevel is 0. When you turn it on, there will be a transition from runlevel 0 to the default computer startup runlevel. The default startup runlevel is defined by your own system configuration (in the /etc/inittab file) and the default value depends on the distribution you are using: Debian and Ubuntu use runlevel 2, Red Hat and Fedora use runlevel 3 or 5, CentOS and Gentoo use runlevel 3, and so on—the list is long.

So, in summary, when you start your computer running CentOS, it operates a transition from runlevel 0 to runlevel 3. That transition consists of starting all services that are scheduled for runlevel 3. The question is how to schedule a service to be started at a specific runlevel. For each runlevel, there is a directory containing scripts to be executed. If you enter these directories (rc0.d, rc1.d, to rc6.d), you will not find actual files, but rather symbolic links referring to scripts located in the init.d directory. Service startup scripts will indeed be placed in init.d, and links will be created by tools placing them in the proper directories.

About init scripts

An init script, also known as service startup script or even sysv script, is a shell script respecting a certain standard. The script controls a daemon application by responding to commands such as start, stop, and others, which are triggered at two levels. First, when the computer starts, if the service is scheduled to be started for the system runlevel, the init daemon will run the script with the start argument.
The other possibility for you is to manually execute the script by calling it from the shell: [root@example.com ~]# service httpd start Or if your system does not come with the service command: [root@example.com ~]# /etc/init.d/httpd start The script must accept at least the start, stop, restart, force-reload, and status commands, as they will be used by the system to respectively start up, shut down, restart, forcefully reload the service, or inquire its status. However, to enlarge your field of action as a system administrator, it is often interesting to provide further options, such as a reload argument to reload the service configuration or a try-restart argument to stop and start the service again. Note that since service httpd start and /etc/init.d/httpd start essentially do the same thing, with the exception that the second command will work on all operating systems, we will make no further mention of the service command and will exclusively use the /etc/init.d/ method. Init script for Debian-based distributions We will thus create a shell script to start and stop our Nginx daemon and also to restart and reloading it. The purpose here is not to discuss Linux shell script programming, so we will merely provide the source code of an existing init script, along with some comments to help you understand it. Due to differences in the format of the init scripts from one distribution to another, we will discover two separate scripts here. The first one is meant for Debian-based distributions such as Debian, Ubuntu, Knoppix, and so forth. First, create a file called nginx with the text editor of your choice, and save it in the /etc/init.d/ directory (on some systems, /etc/init.d/ is actually a symbolic link to /etc/rc.d/init.d/). In the file you just created, insert the script provided in the code bundle supplied with this book. Make sure that you change the paths to make them correspond to your actual setup. You will need root permissions to save the script into the init.d directory. The complete init script for Debian-based distributions can be found in the code bundle. Init script for Red Hat–based distributions Due to the system tools, shell programming functions, and specific formatting that it requires, the preceding script is only compatible with Debian-based distributions. If your server is operated by a Red Hat–based distribution such as CentOS, Fedora, and many more, you will need an entirely different script. The complete init script for Red Hat–based distributions can be found in the code bundle. Installing the script Placing the file in the init.d directory does not complete our work. There are additional steps that will be required to enable the service. First, make the script executable. So far, it is only a piece of text that the system refuses to run. Granting executable permissions on the script is done with the chmod command: [root@example.com ~]# chmod +x /etc/init.d/nginx Note that if you created the file as the root user, you will need to be logged in as root to change the file permissions. At this point, you should already be able to start the service using service nginx start or /etc/init.d/nginx start, as well as stopping, restarting, or reloading the service. The last step here will be to make it so the script is automatically started at the proper runlevels. Unfortunately, doing this entirely depends on what operating system you are using. 
We will cover the two most popular families—Debian, Ubuntu, or other Debian-based distributions and Red Hat/Fedora/CentOS, or other Red Hat–derived systems. Debian-based distributions For the Debian-based distribution, a simple command will enable the init script for the system runlevel: [root@example.com ~]# update-rc.d -f nginx defaults This command will create links in the default system runlevel folders. For the reboot and shutdown runlevels, the script will be executed with the stop argument; for all other runlevels, the script will be executed with start. You can now restart your system and see your Nginx service being launched during the boot sequence. Red Hat–based distributions For the Red Hat–based systems family, the command differs, but you get an additional tool to manage system startup. Adding the service can be done via the following command: [root@example.com ~]# chkconfig nginx on Once that is done, you can then verify the runlevels for the service: [root@example.com ~]# chkconfig --list nginx Nginx 0:off 1:off 2:on 3:off 4:on 5:on 6:off Another tool will be useful to you to manage system services namely, ntsysv. It lists all services scheduled to be executed on system startup and allows you to enable or disable them at will. The tool ntsysv requires root privileges to be executed. Note that prior to using ntsysv, you must first run the chkconfig nginx on command, otherwise Nginx will not appear in the list of services. Downloading the example code You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed to you directly. NGINX Plus Since mid-2013, NGINX, Inc., the company behind the Nginx project, also offers a paid subscription called NGINX Plus. The announcement came as a surprise for the open source community, but several companies quickly jumped on the bandwagon and reported amazing improvements in terms of performance and scalability after using NGINX Plus. NGINX, Inc., the high performance web company, today announced the availability of NGINX Plus, a fully-supported version of the popular NGINX open source software complete with advanced features and offered with professional services. The product is developed and supported by the core engineering team at Nginx Inc., and is available immediately on a subscription basis. As business requirements continue to evolve rapidly, such as the shift to mobile and the explosion of dynamic content on the Web, CIOs are continuously looking for opportunities to increase application performance and development agility, while reducing dependencies on their infrastructure. NGINX Plus provides a flexible, scalable, uniformly applicable solution that was purpose built for these modern, distributed application architectures. Considering the pricing plans ($1,500 per year per instance) and the additional features made available, this platform is indeed clearly aimed at large corporations looking to integrate Nginx into their global architecture seamlessly and effortlessly. Professional support from the Nginx team is included and discounts can be offered for multiple-instance subscriptions. This book covers the open source version of Nginx only and does not detail advanced functionality offered by NGINX Plus. For more information about the paid subscription, take a look at http://www.nginx.com. 
Summary

From this point on, Nginx is installed on your server and starts automatically with the system. Your web server is functional, though it does not yet fulfill its most basic purpose: serving a website. The first step towards hosting a website will be to prepare a suitable configuration file.

Resources for Article:

Further resources on this subject:
Getting Started with Nginx [article]
Fine-tune the NGINX Configuration [article]
Nginx proxy module [article]


Extracting Real-Time Wildfire Data from ArcGIS Server with the ArcGIS REST API

Packt
20 Oct 2015
6 min read
In this article by Eric Pimpler, the author of the book ArcGIS Blueprints, we look at how the ArcGIS platform, which contains a number of different products including ArcGIS Desktop, ArcGIS Pro, ArcGIS for Server, and ArcGIS Online, provides a robust environment for performing geographic analysis and mapping. Content produced by this platform can be integrated using the ArcGIS REST API and a programming language, such as Python. Many of the applications we build in this book use the ArcGIS REST API as a bridge to exchange information between software products. (For more resources related to this topic, see here.)

We're going to start by developing a simple ArcGIS Desktop custom script tool in ArcToolbox that connects to an ArcGIS Server map service to retrieve real-time wildfire information. The wildfire information will be retrieved from a United States Geological Survey (USGS) map service that provides real-time wildfire data. We'll use the ArcGIS REST API and the Python requests module to connect to the map service and request the data. The response from the map service will contain data that will be written to a feature class stored in a local geodatabase using the ArcPy data access module. This will all be accomplished inside a custom script tool attached to an ArcGIS Python toolbox.

In this article we will cover the following topics:
ArcGIS Desktop Python toolboxes
ArcGIS Server map and feature services
The Python requests module
The Python json module
The ArcGIS REST API
The ArcPy data access module (ArcPy.da)

Design

Before we start building the application, we'll spend some time planning what we'll build. This is a fairly simple application, but it serves to illustrate how ArcGIS Desktop and ArcGIS Server can easily be integrated using the ArcGIS REST API. In this application, we'll build an ArcGIS Python toolbox that serves as a container for a single tool named USGSDownload. The USGSDownload tool will use the Python requests, json, and ArcPy data access modules to request real-time wildfire data from a USGS map service. The response from the map service will contain information, including the location of the fire, the name of the fire, and some additional attributes, which will then be written to a local geodatabase. The communication between the ArcGIS Desktop Python toolbox and the ArcGIS Server map service is accomplished through the ArcGIS REST API and the Python language. Let's get started building the application.
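Before stepping through the toolbox creation in ArcCatalog, it may help to see the overall data flow the finished tool will implement: query the map service over the ArcGIS REST API with requests, parse the JSON response, and write the features to a local geodatabase with arcpy.da. The following is only a rough sketch, not the book's final code; the service URL, the FIRE_NAME field, and the output feature class path are placeholder assumptions used for illustration.

import requests
import arcpy

# Placeholder endpoint -- the actual USGS wildfire map service URL is introduced in the book.
SERVICE_URL = "http://example.com/arcgis/rest/services/Wildfire/MapServer/0/query"

# Hypothetical point feature class in a local file geodatabase.
OUTPUT_FC = r"C:\Data\Wildfires.gdb\CurrentFires"

def download_wildfires():
    # Standard ArcGIS REST API query parameters: return every feature as JSON.
    params = {"where": "1=1", "outFields": "*", "returnGeometry": "true", "f": "json"}
    response = requests.get(SERVICE_URL, params=params)
    features = response.json().get("features", [])

    # Write each returned point into the local geodatabase with the ArcPy data access module.
    with arcpy.da.InsertCursor(OUTPUT_FC, ["SHAPE@XY", "FIRE_NAME"]) as cursor:
        for feature in features:
            geometry = feature["geometry"]
            attributes = feature["attributes"]
            # FIRE_NAME is an assumed field name; the real service may expose different attributes.
            cursor.insertRow(((geometry["x"], geometry["y"]), attributes.get("FIRE_NAME")))

This is the logic that will eventually live inside the tool's execute() method, once the toolbox structure described in the next section is in place.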
Creating the ArcGIS Desktop Python toolbox

There are two ways to create toolboxes in ArcGIS: script tools in custom toolboxes and script tools in Python toolboxes. Python toolboxes encapsulate everything in one place: parameters, validation code, and source code. This is not the case with custom toolboxes, which are created using a wizard and a separate script that processes the business logic. A Python toolbox functions like any other toolbox in ArcToolbox, but it is created entirely in Python and has a file extension of .pyt. It is created programmatically as a class named Toolbox. In this article, you will learn how to create a Python toolbox and add a tool. You'll only create the basic structure of the toolbox and tool that will ultimately connect to an ArcGIS Server map service containing wildfire data. In a later section, you'll complete the functionality of the tool by adding code that connects to the map service, downloads the current data, and inserts it into a feature class.

Open ArcCatalog. You can create a Python toolbox in a folder by right-clicking on the folder and selecting New | Python Toolbox. In ArcCatalog, there is a folder named Toolboxes and inside it is a My Toolboxes folder, as seen in the following screenshot. Right-click on this folder and select New | Python Toolbox. The name of the toolbox is controlled by the file name. Name the toolbox InsertWildfires.pyt, as shown in the following screenshot:

The Python toolbox file (.pyt) can be edited in any text or code editor. By default, the code will open in Notepad. You can change this by setting the default editor for your script by going to Geoprocessing | Geoprocessing Options and the Editor section. You'll note in the Figure A: Geoprocessing options screenshot that I have set my editor to PyScripter, which is my preferred environment. You may want to change this to IDLE or whichever development environment you are currently using. For example, to find the path to the executable for the IDLE development environment, you can go to Start | All Programs | ArcGIS | Python 2.7 | IDLE, right-click on IDLE, and select Properties to display the properties window. Inside the Target text box, you should see a path to the executable, as seen in the following screenshot. Copy and paste the path into the Editor and Debugger sections inside the Geoprocessing Options dialog, as shown in the following screenshot:

Figure A: Geoprocessing options

Right-click on InsertWildfires.pyt and select Edit. This will open the development environment you defined earlier, as seen in the following screenshot. Your environment will vary depending on the editor that you have defined. Remember that you will not be changing the name of the class, which is Toolbox. However, you will rename the Tool class to reflect the name of the tool you want to create. Each tool will have various methods, including __init__(), which is the constructor for the tool, along with getParameterInfo(), isLicensed(), updateParameters(), updateMessages(), and execute(). You can use the __init__() method to set initialization properties, such as the tool's label and description. Find the class named Tool in your code, change the name of this tool to USGSDownload, and set the label and description properties:

class USGSDownload(object):
    def __init__(self):
        """Define the tool (tool name is the name of the class)."""
        self.label = "USGS Download"
        self.description = "Download from USGS ArcGIS Server instance"
        self.canRunInBackground = False

You can use the Tool class as a template for other tools you'd like to add to the toolbox by copying and pasting the class and its methods. We're not going to do that in this article, but you need to be aware of this. (A consolidated skeleton of the full .pyt file, with stubs for the remaining methods, appears at the end of this article.)

Summary

Integrating ArcGIS Desktop and ArcGIS Server is easily accomplished using the ArcGIS REST API and the Python programming language. In this article we created an ArcGIS Python toolbox containing a tool that connects to an ArcGIS Server map service, which contains real-time wildfire information and is hosted by the USGS.

Resources for Article:

Further resources on this subject:
ArcGIS – Advanced ArcObjects [article]
Using the ArcPy Data Access Module with Feature Classes and Tables [article]
Introduction to Mobile Web ArcGIS Development [article]
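For reference, the complete InsertWildfires.pyt skeleton described in this article could be stubbed out roughly as follows. This is a sketch rather than the book's finished code: the output feature class parameter in getParameterInfo() is an illustrative assumption, and the body of execute() is completed in a later section of the book.

import arcpy

class Toolbox(object):
    def __init__(self):
        """Define the toolbox (the toolbox name is the name of the .pyt file)."""
        self.label = "Insert Wildfires"
        self.alias = "insertwildfires"
        # List of tool classes associated with this toolbox.
        self.tools = [USGSDownload]

class USGSDownload(object):
    def __init__(self):
        """Define the tool (tool name is the name of the class)."""
        self.label = "USGS Download"
        self.description = "Download from USGS ArcGIS Server instance"
        self.canRunInBackground = False

    def getParameterInfo(self):
        # Illustrative only: a single output feature class parameter.
        out_fc = arcpy.Parameter(
            displayName="Output Feature Class",
            name="out_fc",
            datatype="DEFeatureClass",
            parameterType="Required",
            direction="Input")
        return [out_fc]

    def isLicensed(self):
        return True

    def updateParameters(self, parameters):
        return

    def updateMessages(self, parameters):
        return

    def execute(self, parameters, messages):
        # The map service request and arcpy.da insert logic is added in a later section.
        return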


SQL Server with PowerShell

Packt
19 Oct 2015
8 min read
In this article by Donabel Santos, author of the book SQL Server 2014 with PowerShell v5 Cookbook, we look at scripts and snippets of code that accomplish basic SQL Server tasks using PowerShell. She discusses simple tasks such as listing SQL Server instances and discovering SQL Server services to make you comfortable working with SQL Server programmatically. However, even as you explore how to create some common database objects using PowerShell, keep in mind that PowerShell will not always be the best tool for the task. There will be tasks that are best completed using T-SQL. It is still good to know what is possible in PowerShell and how to do it, so you know that you have alternatives depending on your requirements or situation. For the recipes, we are going to use PowerShell ISE quite a lot. If you prefer running the script from the PowerShell console rather than running the commands from the ISE, you can save the scripts in a .ps1 file and run it from the PowerShell console. (For more resources related to this topic, see here.)

Listing SQL Server Instances

In this recipe, we will list all SQL Server instances in the local network.

Getting ready

Log in to the server that has your SQL Server development instance as an administrator.

How to do it...

Let's look at the steps to list your SQL Server instances:

Open PowerShell ISE as administrator.

Let's use the Start-Service cmdlet to start the SQL Browser service:

Import-Module SQLPS -DisableNameChecking
#out of the box, the SQLBrowser is disabled. To enable:
Set-Service SQLBrowser -StartupType Automatic
#sql browser must be installed and running for us
#to discover SQL Server instances
Start-Service "SQLBrowser"

Next, you need to create a ManagedComputer object to get access to instances. Type the following script and run it:

$instanceName = "localhost"
$managedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer $instanceName
#list server instances
$managedComputer.ServerInstances

Your result should look similar to the one shown in the following screenshot. Notice that $managedComputer.ServerInstances gives you not only instance names, but also additional properties such as ServerProtocols, Urn, State, and so on.

Confirm that these are the same instances you see from SQL Server Management Studio: open SQL Server Management Studio, go to Connect | Database Engine, and in the Server Name dropdown, click on Browse for More. Select the Network Servers tab and check the instances listed. Your screen should look similar to this:

How it works...

All services in a Windows operating system are exposed and accessible using Windows Management Instrumentation (WMI). WMI is Microsoft's framework for listing, setting, and configuring any Microsoft-related resource. This framework follows Web-Based Enterprise Management (WBEM). The Distributed Management Task Force, Inc. (http://www.dmtf.org/standards/wbem) defines WBEM as follows:

A set of management and Internet standard technologies developed to unify the management of distributed computing environments. WBEM provides the ability for the industry to deliver a well-integrated set of standard-based management tools, facilitating the exchange of data across otherwise disparate technologies and platforms.
In order to access SQL Server WMI-related objects, you can create a WMI ManagedComputer instance:

$managedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer $instanceName

The ManagedComputer object has access to a ServerInstance property, which in turn lists all available instances in the local network. These instances however are only identifiable if the SQL Server Browser service is running. The SQL Server Browser is a Windows Service that can provide information on installed instances in a box. You need to start this service if you want to list the SQL Server-related services.

There's more...

The Services instance of the ManagedComputer object can also provide similar information, but you will have to filter for the server type SqlServer:

#list server instances
$managedComputer.Services |
Where-Object Type -eq "SqlServer" |
Select-Object Name, State, Type, StartMode, ProcessId

Your result should look like this:

Instead of creating a WMI instance by using the New-Object method, you can also use the Get-WmiObject cmdlet when creating your variable. Get-WmiObject, however, will not expose exactly the same properties exposed by the Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer object. To list instances using Get-WmiObject, you will need to discover what namespace is available in your environment:

$hostName = "localhost"
$namespace = Get-WMIObject -ComputerName $hostName -Namespace root\Microsoft\SQLServer -Class "__NAMESPACE" |
Where-Object Name -like "ComputerManagement*"
#see matching namespace objects
$namespace
#see namespace names
$namespace | Select-Object -ExpandProperty "__NAMESPACE"
$namespace | Select-Object -ExpandProperty "Name"

If you are using PowerShell v2, you will have to change the Where-Object cmdlet usage to use the curly braces {} and the $_ variable:

Where-Object {$_.Name -like "ComputerManagement*" }

For SQL Server 2014, the namespace value is:

ROOT\Microsoft\SQLServer\ComputerManagement12

This value can be derived from $namespace.__NAMESPACE and $namespace.Name. Once you have the namespace, you can use this with Get-WmiObject to retrieve the instances. We can use the SqlServiceType property to filter. According to MSDN (http://msdn.microsoft.com/en-us/library/ms179591.aspx), these are the values of SqlServiceType:

SqlServiceType  Description
1               SQL Server Service
2               SQL Server Agent Service
3               Full-Text Search Engine Service
4               Integration Services Service
5               Analysis Services Service
6               Reporting Services Service
7               SQL Browser Service

Thus, to retrieve the SQL Server instances, we need to provide the full namespace ROOT\Microsoft\SQLServer\ComputerManagement12. We also need to filter for SQL Server Service type, or SQLServiceType = 1. The code is as follows:

Get-WmiObject -ComputerName $hostName -Namespace "$($namespace.__NAMESPACE)\$($namespace.Name)" -Class SqlService |
Where-Object SQLServiceType -eq 1 |
Select-Object ServiceName, DisplayName, SQLServiceType |
Format-Table -AutoSize

Your result should look similar to the following screenshot:
This class has a static method called Instance.GetDataSources that will list all SQL Server instances:

[System.Data.Sql.SqlDataSourceEnumerator]::Instance.GetDataSources() |
Format-Table -AutoSize

When you execute, your result should look similar to the following:

If you have multiple SQL Server versions, you can use the following code to display your instances:

#list services using WMI
foreach ($path in $namespace)
{
   Write-Verbose "SQL Services in:$($path.__NAMESPACE)\$($path.Name)"
   Get-WmiObject -ComputerName $hostName `
   -Namespace "$($path.__NAMESPACE)\$($path.Name)" `
   -Class SqlService |
   Where-Object SQLServiceType -eq 1 |
   Select-Object ServiceName, DisplayName, SQLServiceType |
   Format-Table -AutoSize
}

Discovering SQL Server Services

In this recipe, we will enumerate all SQL Server services and list their statuses.

Getting ready

Check which SQL Server services are installed in your instance. Go to Start | Run and type services.msc. You should see a screen similar to this:

How to do it...

Let's assume you are running this script on the server box:

Open PowerShell ISE as administrator.

Add the following code and execute:

Import-Module SQLPS -DisableNameChecking
#you can replace localhost with your instance name
$instanceName = "localhost"
$managedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer $instanceName
#list services
$managedComputer.Services |
Select-Object Name, Type, ServiceState, DisplayName |
Format-Table -AutoSize

Your result will look similar to the one shown in the following screenshot. Items listed on your screen will vary depending on the features installed and running in your instance.

Confirm that these are the services that exist in your server. Check your services window.

How it works...

Services that are installed on a system can be queried using WMI. Specific services for SQL Server are exposed through SMO's WMI ManagedComputer object. Some of the exposed properties are as follows:
ClientProtocols
ConnectionSettings
ServerAliases
ServerInstances
Services
For example, if you have MySQL installed, it will get picked up as a process. Conversely, this will not pick up SQL Server-related services that do not have SQL in the name, such as ReportServer.

Summary

You will find that many of the scripts can be accomplished using PowerShell and SQL Management Objects (SMO). SMO is a library that exposes SQL Server classes that allow programmatic manipulation and automation of many database tasks. For some, we will also explore alternative ways of accomplishing the same tasks using different native PowerShell cmdlets. Now that we have a gist of SQL Server 2014 with PowerShell, let's build a full-fledged e-commerce project with SQL Server 2014 with PowerShell v5 Cookbook.

Resources for Article:

Further resources on this subject:
Exploring Windows PowerShell 5.0 [article]
Working with PowerShell [article]
Installing/upgrading PowerShell [article]


Gamification with Moodle LMS

Packt
19 Oct 2015
11 min read
 In this article by Natalie Denmeade, author of the book, Gamification with Moodle describes how teachers can use Gamification design in their course development within the Moodle Learning Management System (LMS) to increase the motivation and engagement of learners. (For more resources related to this topic, see here.) Gamification is a design process that re-frames goals to be more appealing and achievable by using game design principles. The goal of this process is it to keep learners engaged and motivated in a way that is not always present in traditional courses. When implemented in elegant solutions, learners may be unaware of the subtle game elements being used. A gamification strategy can be considered successful if learners are more engaged, feel challenged and confident to keep progressing, which has implications for the way teachers consider their course evaluation processes. It is important to note that Gamification in education is more about how the person feels at certain points in their learning journey than about the end product which may or may not look like a game. Gamification and Moodle After following the tutorials in this book, teachers will gain the basic skills to get started applying Gamification design techniques in their Moodle courses. They can take learners on a journey of risk, choice, surprise, delight, and transformation. Taking an activity and reframing it to be more appealing and achievable sounds like the job description of any teacher or coach! Therefore, many teachers are already doing this! Understanding games and play better can help teachers be more effective in using a wider range of game elements to aid retention and completions in their courses. In this book you will find hints and tips on how to apply proven strategies to online course development, including the research into a growth mindset from Carol Dweck in her book Mindset. You will see how the use of game elements in Foursquare (badges), Twitter (likes), and Linkedin (progress bar), can also be applied to Moodle course design. In addition, you will use the core features available in Moodle which were designed to encourage learner participation as they collaborate, tag, share, vote, network, and generate learning content for each other. Finally, explore new features and plug-ins which offer dozens of ways that teachers can use game elements in Moodle such as, badges, labels, rubrics, group assignments, custom grading scales, forums, and conditional activities. A benefit of using Moodle as a Gamification LMS is it was developed on social constructivist principles. As these are learner-centric principles this means it is easy to use common Moodle features to apply gamification through the implementation of game components, mechanics and dynamics. These have been described by Kevin Werbach (in the Coursera MOOC on Gamification) as: Game Dynamics are the grammar: (the hidden elements) Constraints, emotions, narrative, progression, relationships Game Mechanics are the verbs: The action is driven forward by challenges, chance, competition/cooperation, feedback, resource acquisition, rewards, transactions, turns, win states Game Components are the nouns: Achievements, avatars, badges, boss fights, collections, combat, content, unlocking, gifting, leaderboards, levels, points, quests, teams, virtual goods Most of these game elements are not new ideas to teachers. It could be argued that school is already gamified through the use of grades and feedback. 
In fact it would be impossible to find a classroom that is not using some game elements. This book will help you identify which elements will be most effective in your current context. Teachers are encouraged to start with a few and gradually expanding their repertoire. As with professional game design, just using game elements will not ensure learners are motivated and engaged. The measure of success of a Gamification strategy is that learners continue to build resilience and autonomy in their own learning. When implemented well, the potential benefits of using a Gamification design process in Moodle are to: Provide manageable set of subtasks and tasks by hiding and revealing content Make assessment criteria visible, predictable, and in plain English using marking guidelines and rubrics Increase ownership of learning paths through choice and activity restrictions Build individual and group identity through work place simulations and role play Offer freedom to fail and try again without negative repercussions Increase enjoyment of both teacher and learners When teachers follow the step by step guide provided in this book they will create a basic Moodle course that acts as a flexible framework ready for learning content. This approach is ideal for busy teachers who want to respond to the changing needs and situations in the classroom. The dynamic approach keeps Teachers in control of adding and changing content without involving a technology support team. Onboarding tips By using focussed examples, the book describes how to use Moodle to implement an activity loop that identifies a desired behaviour and wraps motivations and feedback around that action. For example, a desired action may be for each learner to update their Moodle profile information with their interests and an avatar. Various motivational strategies could be put in place to prompt (or force) the learners to complete this task, including: Ask learners to share their avatars, with a link to their profile in a forum with ratings. Everyone else is doing it and they will feel left out if they don't get a like or a comment (creating a social norm). They might get rated as having the best avatar. Update the forum type so that learners can't see other avatars until they make a post. Add a theme (for example, Lego inspired avatars) so that creating an avatar is a chance to be creative and play. Choosing how they represent themselves in an online space is an opportunity for autonomy. Set the conditional release so learners cannot see the next activity until this activity is marked as complete (for example, post at least 3 comments on other avatars). The value in this process is that learners have started building connections between new classmates. This activity loop is designed to appeal to diverse motivations and achieve multiple goals: Encourages learners to create an online persona and choose their level of anonymity Invite learners to look at each other’s profiles and speed up the process of getting to know each other Introduce learners to the idea of forum posting and rating in a low-risk (non-assessable) way Take the workload off the Teacher to assess each activity directly Enforce compliance through software options which saves admin time and creates an expectation of work standards for learners Feedback options Games celebrate small and large successes and so should Moodle courses. There are a number of ways to do this in Moodle, including simply automating feedback with a Label, which is revealed once a milestone is reached. 
These milestones could be an activity completion, topic completion, or a level has been reached in the course total. Feedback can be provided through symbols of the achievement. Learners of all ages are highly motivated by this. Nearly all human cultures use symbols, icons, medals and badges to indicate status and achievements such as a black belt in Karate, Victoria Cross and Order of Australia Medals, OBE, sporting trophies, Gold Logies, feathers and tattoos. Symbols of achievement can be achieved through the use of open badges. Moodle offers a simple way to issue badges in line with Open Badges Industry (OBI) standards. The learner can take full ownership of this badge when they export it to their online backpack. Higher education institutes are finding evidence that open badges are a highly effective way to increase motivation for mature learners. Kaplan University found the implementation of badges resulted in increased student engagement by 17 percent. As well as improving learner reactions to complete harder tasks, grades increased up to 9 percent. Class attendance and discussion board posts increased over the non-badged counterparts. Using open badges as a motivation strategy enables feedback to be regularly provided along the way from peers, automated reporting and the teacher. For advanced Moodlers, the book describes how rubrics can be used for "levelling up" and how the Moodle gradebook can be configured as an exponential point scoring system to indicate progress. Social game elements Implementing social game elements is a powerful way to increase motivation and participation. A Gamification experiment with thousands of MOOC participants measured participation of learners in three groups of "plain, game and social". Students in the game condition had a 22.5 percent higher test score in the final test compared to students in the plain condition. Students in the social condition showed an even stronger increase of almost 40 percent compared to students in the plain condition. (See A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification Krause et al, 2014). Moodle has a number of components that can be used to encourage collaborative learning. Just as the online gaming world has created spaces where players communicate outside of the game in forums, wikis and You Tube channels as well as having people make cheat guides about the games and are happy to share their knowledge with beginners. In Moodle we can imitate these collaborative spaces gamers use to teach each other and make the most of the natural leaders and influencers in the class. Moodle activities can be used to encourage communication between learners and allow delegation and skill-sharing. For example, the teacher may quickly explain and train the most experienced in the group how to perform a certain task and then showcase their work to others as an example. The learner could create blog posts which become an online version of an exercise book. The learner chooses the sharing level so classmates only, or the whole world, can view what is shared and leave comments. The process of delegating instruction through the connection of leader/learners to lagger/learners, in a particular area, allows finish lines to be at different points. Rather spending the last few weeks marking every learner’s individual work, the Teacher can now focus their attention on the few people who have lagged behind and need support to meet the deadlines. 
It's worth taking the time to learn how to configure a Moodle course. This provides the ability to set up a system that is scalable and adaptable to each learner. The options in Moodle can be used to allow learners to create their own paths within the boundaries set by a teacher. Therefore, rather than creating personalised learning paths for every student, set up a suite of tools for learners to create their own learning paths. Learning how to configure Moodle activities will reduce administration tasks through automatic reports, assessments and conditional release of activities. The Moodle activities will automatically create data on learner participation and competence to assist in identifying struggling learners. The inbuilt reports available in Moodle LMS help Teachers to get to know their learners faster. In addition, the reports also create evidence for formative assessment which saves hours of marking time. Through the release from repetitive tasks, teachers can spend more time on the creative and rewarding aspects of teaching. Rather than wait for a game design company to create an awesome educational game for a subject area, get started by using the same techniques in your classroom. This creative process is rewarding for both teachers and learners because it can be constantly adapted for their unique needs. Summary Moodle provides a flexible Gamification platform because teachers are directly in control of modifying and adding a sequence of activities, without having to go through an administrator. Although it may not look as good as a video game (made with an extensive budget) learners will appreciate the effort and personalisation. The Gamification framework does require some preparation. However, once implemented it picks up a momentum of its own and the teacher has a reduced workload in the long run. Purchase the book and enjoy a journey into Gamification in education with Moodle! Resources for Article: Further resources on this subject: Virtually Everything for Everyone [article] Moodle for Online Communities [article] State of Play of BuddyPress Themes [article]

An Overview of Oozie

Packt
19 Oct 2015
5 min read
In this article by Jagat Singh, the author of the book Apache Oozie Essentials, we will see a basic overview of Oozie and its concepts in brief. (For more resources related to this topic, see here.) Concepts Oozie is a workflow scheduler system to run Apache Hadoop jobs. Oozie workflow jobs are Directed Acyclic Graph (DAG) (https://en.wikipedia.org/wiki/Directed_acyclic_graph) representations of actions. Actions define what to do in the job. Oozie supports running jobs of various types such as Java, MapReduce, Pig, Hive, Sqoop, Spark, and DistCp. The output of one action can be consumed by the next action to create a chained sequence. Oozie has a client-server architecture: we install the server, which stores the jobs, and we use the client to submit our jobs to the server. Let's get an idea of a few basic concepts of Oozie. Workflow A workflow tells Oozie 'what' to do. It is a collection of actions arranged in the required dependency graph. So, as part of a workflow definition, we write some actions and call them in a certain order. There are various types of tasks that we can run as part of a workflow, for example, Hadoop filesystem actions, Pig actions, Hive actions, MapReduce actions, Spark actions, and so on. Coordinator A coordinator tells Oozie 'when' to do it. Coordinators let us run inter-dependent workflows as data pipelines based on some starting criteria. Most Oozie jobs are triggered at a given scheduled time interval or when an input dataset is present to trigger the job. The following are important definitions related to coordinators: Nominal time: The scheduled time at which the job should execute. For example, we process press releases every day at 8:00 PM. Actual time: The real time when the job ran. In some cases, if the input data does not arrive, the job might start late. This type of data-dependent job triggering is indicated by a done-flag (more on this later). The done-flag gives the signal to start the job execution. The general skeleton template of a coordinator is shown in the following figure: Bundles Bundles tell Oozie which things to do together as a group. For example, a set of coordinators that can be run together to satisfy a given business requirement can be combined as a bundle. Book case study One of the main use cases of Hadoop is ETL data processing. Suppose that we work for a large consulting company and have won a project to set up a Big Data cluster inside a customer data center. At a high level, the requirements are to set up an environment that will satisfy the following flow: We get data from various sources into Hadoop (file-based loads, Sqoop-based loads) We preprocess it with various scripts (Pig, Hive, MapReduce) Insert that data into Hive tables for use by analysts and data scientists Data scientists write machine learning models (Spark) We will be using Oozie as our processing scheduling system to do all of the above. In our architecture, we have one landing server, which sits outside as the front door of the cluster. All source systems send files to us via scp, and we regularly (for example, nightly, to keep it simple) push them to HDFS using the hadoop fs -copyFromLocal command. This script is cron driven. It has very simple business logic: run every night at 8:00 PM and move all the files it sees on the landing server into HDFS. Oozie then works as follows: Oozie picks up each file and cleans it using a Pig script to replace all the delimiters from commas (,) to pipes (|). We will write the same code using Pig and MapReduce. We then push those processed files into a Hive table. 
For a different source system, which is a database-backed MySQL table, we do a nightly Sqoop import when the load on the database is light. So we extract all the records that were generated on the previous business day. We insert that output into Hive tables as well. Analysts and data scientists write their magical Hive scripts and Spark machine learning models on those Hive tables. We will use Oozie to schedule all of these regular tasks. Node types A workflow is composed of nodes; the logical DAG of nodes represents the 'what' part of the work done by Oozie. Each node does its specified work and, on success, moves to one node or, on failure, moves to another node. For example, on success it goes to the OK node, and on failure it goes to the Kill node. Nodes in the Oozie workflow are of the following types. Control flow nodes These nodes are responsible for defining the start, end, and control flow of what to do inside the workflow. These can be the following: Start node End node Kill node Decision node Fork and Join node Action nodes Action nodes represent the actual processing tasks, which are executed when called. These are of various types, for example, Pig actions, Hive actions, and MapReduce actions. Summary In this article, we looked at the concepts of Oozie in brief. We also learnt about the types of nodes in Oozie. Resources for Article: Further resources on this subject: Introduction to Hadoop [article] Hadoop and HDInsight in a Heartbeat [article] Cloudera Hadoop and HP Vertica [article]

Getting started with Cocos2d-x

Packt
19 Oct 2015
11 min read
In this article written by Akihiro Matsuura, author of the book Cocos2d-x Cookbook, we're going to install Cocos2d-x and set up the development environment. The following topics will be covered in this article: Installing Cocos2d-x Using the cocos command Building the project by Xcode Building the project by Eclipse Cocos2d-x is written in C++, so it can build on any platform. Cocos2d-x is open source, so we are free to read the code of the game framework. Cocos2d-x is not a black box, and this proves to be a big advantage for us when we use it. Cocos2d-x version 3, which supports C++11, was only recently released. It also supports 3D and has improved rendering performance. (For more resources related to this topic, see here.) Installing Cocos2d-x Getting ready To follow this recipe, you need to download the zip file from the official site of Cocos2d-x (http://www.cocos2d-x.org/download). In this article we've used version 3.4, which was the latest stable version available. How to do it... Unzip your file to any folder. This time, we will install it in the user's home directory. For example, if the user name is syuhari, then the install path is /Users/syuhari/cocos2d-x-3.4. We call it COCOS_ROOT. The following steps will guide you through the process of setting up Cocos2d-x: Open the terminal. Change the directory in the terminal to COCOS_ROOT, using the following command: $ cd ~/cocos2d-x-3.4 Run setup.py, using the following command: $ ./setup.py The terminal will ask you for NDK_ROOT. Enter the NDK_ROOT path. The terminal will then ask you for ANDROID_SDK_ROOT. Enter the ANDROID_SDK_ROOT path. Finally, the terminal will ask you for ANT_ROOT. Enter the ANT_ROOT path. After the execution of the setup.py command, you need to execute the following command to add the system variables: $ source ~/.bash_profile Open the .bash_profile file, and you will find that setup.py shows how to set each path in your system. You can view the .bash_profile file using the cat command: $ cat ~/.bash_profile We now verify that Cocos2d-x is installed correctly: Open the terminal and run the cocos command without parameters. $ cocos If you can see a window like the following screenshot, you have successfully completed the Cocos2d-x install process. How it works... Let's take a look at what we did throughout the above recipe. You can install Cocos2d-x by just unzipping it; setup.py only sets up the cocos command and the paths for the Android build in the environment. Installing Cocos2d-x is very easy and simple. If you want to install a different version of Cocos2d-x, you can do that too. To do so, you need to follow the same steps given in this recipe, but for a different version. There's more... Setting up the Android environment is a bit tough. If you want to start developing with Cocos2d-x right away, you can skip the Android settings for now and come back to them when you need to run on Android. In that case, you don't have to install the Android SDK, NDK, and Apache Ant, and when you run setup.py, you can simply press Enter without entering a path for each question. Using the cocos command The next step is using the cocos command. It is a cross-platform tool with which you can create a new project, build it, run it, and deploy it. The cocos command works for all Cocos2d-x supported platforms. And you don't need to use an IDE if you don't want to. In this recipe, we take a look at this command and explain how to use it. How to do it... 
You can view the cocos command help by executing it with the --help parameter, as follows: $ cocos --help We then move on to generating our new project: Firstly, we create a new Cocos2d-x project with the cocos new command, as shown here: $ cocos new MyGame -p com.example.mygame -l cpp -d ~/Documents/ The result of this command is shown in the following screenshot: The argument after the new parameter is the project name. The other parameters denote the following: MyGame is the name of your project. -p is the package name for Android. This is the application ID in the Google Play store, so you should use a reverse domain name to keep it unique. -l is the programming language used for the project. You should use "cpp" because we will use C++. -d is the location in which to generate the new project. This time, we generate it in the user's documents directory. You can look up these options using the following command: $ cocos new --help Congratulations, you can now generate your new project. The next step is to build and run it using the cocos command. Compiling the project If you want to build and run for iOS, you need to execute the following command: $ cocos run -s ~/Documents/MyGame -p ios The parameters that are mentioned are explained as follows: -s is the directory of the project. This could be an absolute path or a relative path. -p denotes which platform to run on. If you want to run on Android, you use -p android. The available options are ios, android, win32, mac, and linux. You can run cocos run --help for more detailed information. The result of this command is shown in the following screenshot: You can now build and run iOS applications of Cocos2d-x. However, you have to wait for a long time if this is your first time building an iOS application; building the Cocos2d-x library takes a long time on a clean or first build. How it works... The cocos command can create a new project and build it. You should use the cocos command if you want to create a new project. Of course, you can also build by using Xcode or Eclipse, which makes developing and debugging easier. There's more... The cocos run command has other parameters. They are the following: --portrait will set the project as portrait. This parameter has no argument. --ios-bundleid will set the bundle ID for the iOS project. However, it is not difficult to set it later. The cocos command also includes some other commands, which are as follows: The compile command: This command is used to build a project. The following patterns are useful parameters. You can see all parameters and options if you execute the cocos compile [-h] command. cocos compile [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE] The deploy command: This command only takes effect when the target platform is android. It will re-install the specified project to the Android device or simulator. cocos deploy [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE] The run command chains the compile and deploy commands and then runs the application. Building the project by Xcode Getting ready Before building the project by Xcode, you require Xcode with an iOS developer account to test it on a physical device. However, you can also test it on an iOS simulator. If you did not install Xcode, you can get it from the Mac App Store. Once you have installed it, activate it. How to do it... Open your project from Xcode. You can open your project by double-clicking on the file placed at ~/Documents/MyGame/proj.ios_mac/MyGame.xcodeproj. 
Build and Run by Xcode: You should select the iOS simulator or real device on which you want to run your project. How it works... If this is your first time building, it will take a long time, but be patient; this only happens on the first build. You can develop your game faster if you develop and debug it using Xcode rather than Eclipse. Building the project by Eclipse Getting ready You must finish the first recipe before you begin this step. If you have not finished it yet, you will need to install Eclipse. How to do it... Setting up NDK_ROOT: Open the preferences of Eclipse. Open C++ | Build | Environment. Click on Add and set the new variable: the name is NDK_ROOT and the value is the NDK_ROOT path. Importing your project into Eclipse: Open the File menu and click on Import. Go to Android | Existing Android Code into Workspace. Click on Next. Import the project into Eclipse at ~/Documents/MyGame/proj.android. Importing the Cocos2d-x library into Eclipse: Perform the same steps from Step 3 to Step 4. Import the cocos2d lib project at ~/Documents/MyGame/cocos2d/cocos/platform/android/java. Build and Run: Click on the Run icon. The first time, Eclipse asks you to select a way to run your application. Select Android Application and click on OK, as shown in the following screenshot: If you have connected an Android device to your Mac, you can run your game on the real device or on an emulator. The following screenshot shows it running on a Nexus 5. If you added .cpp files to your project, you have to modify the Android.mk file at ~/Documents/MyGame/proj.android/jni/Android.mk. This file is needed for the NDK build, and this fix is required whenever you add files. The original Android.mk would look as follows: LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp If you added the TitleScene.cpp file, you have to modify it as shown in the following code: LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp ../../Classes/TitleScene.cpp The preceding example shows what happens when you add the TitleScene.cpp file. If you are also adding other files, you need to add all of them in the same way. How it works... You get lots of errors when importing your project into Eclipse, but don't panic. After importing the cocos2d-x library, the errors soon disappear. Setting the NDK path allows Eclipse to compile C++. After you modify C++ code, run your project in Eclipse. Eclipse automatically compiles the C++ code and the Java code, and then runs the project. It is a tedious task to fix Android.mk again every time you add C++ files. The following code is the original Android.mk: LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp LOCAL_C_INCLUDES := $(LOCAL_PATH)/../../Classes The following code is a customized Android.mk that adds C++ files automatically: CPP_FILES := $(shell find $(LOCAL_PATH)/../../Classes -name *.cpp) LOCAL_SRC_FILES := hellocpp/main.cpp LOCAL_SRC_FILES += $(CPP_FILES:$(LOCAL_PATH)/%=%) LOCAL_C_INCLUDES := $(shell find $(LOCAL_PATH)/../../Classes -type d) The first line of the code gathers the C++ files under the Classes directory into the CPP_FILES variable. The second and third lines add those C++ files into the LOCAL_SRC_FILES variable. By doing so, C++ files will be automatically compiled by the NDK. If you need to compile a file with an extension other than .cpp, you will need to add it manually. There's more... 
If you want to manually build the C++ code with the NDK, you can use the following command: $ ./build_native.py This script is located at ~/Documents/MyGame/proj.android. It uses ANDROID_SDK_ROOT and NDK_ROOT internally. If you want to see its options, run ./build_native.py --help. Summary Cocos2d-x is an open source, cross-platform game engine, which is free and mature. It can publish games for mobile devices and desktops, including iPhone, iPad, Android, Kindle, Windows, and Mac. The book Cocos2d-x Cookbook focuses on using version 3.4, which is the latest version of Cocos2d-x that was available at the time of writing. We focus on iOS and Android development, and we'll be using Mac because we need it to develop iOS applications. Resources for Article: Further resources on this subject: CREATING GAMES WITH COCOS2D-X IS EASY AND 100 PERCENT FREE [Article] Dragging a CCNode in Cocos2D-Swift [Article] COCOS2D-X: INSTALLATION [Article]

Dynamic Path Planning of a Robot

Packt
19 Oct 2015
8 min read
In this article by Richard Grimmett, the author of the book Raspberry Pi Robotic Blueprints, we will see how to do dynamic path planning. Dynamic path planning simply means that you don't have knowledge of the entire world, with all the possible barriers, before you encounter them. Your robot will have to decide how to proceed while it is in motion. This can be a complex topic, but there are some basics that you can start to understand and apply as you ask your robot to move around in its environment. Let's first address the problem of where you want to go and how to execute a path without barriers, and then add in the barriers. (For more resources related to this topic, see here.) Basic path planning In order to talk about dynamic path planning (planning a path where you don't know what barriers you might encounter), you'll need a framework to understand where your robot is as well as to determine the location of the goal. One common framework is an x-y grid. Here is a drawing of such a grid: There are three key points to remember, as follows: The lower left point is a fixed reference position. The directions x and y are also fixed, and all the other positions will be measured with respect to this position and these directions. Another important point is the starting location of your robot. Your robot will then keep track of its location using its x coordinate, or position with respect to the fixed reference position in the x direction, and its y coordinate, or position with respect to the fixed reference position in the y direction. It will use the compass to keep track of these directions. The third important point is the position of the goal, also given in x and y coordinates with respect to the fixed reference position. If you know the starting location and angle of your robot, then you can plan an optimum (shortest distance) path to this goal. To do this, you can use the goal location and robot location and some fairly simple math to calculate the distance and angle from the robot to the goal. To calculate the distance, use the following equation: distance = sqrt((x_goal - x_robot)^2 + (y_goal - y_robot)^2). You can use the preceding equation to tell your robot how far to travel to the goal. The following equation will tell your robot the angle at which it needs to travel: angle = atan2(y_goal - y_robot, x_goal - x_robot). The following is a graphical representation of the two pieces of information that we just saw: Now that you have a goal, angle, and distance, you can program your robot to move. To do this, you will write a program to do the path planning and call the movement functions that you created earlier in this article. You will need, however, to know the distance that your robot travels in a set amount of time so that you can tell your robot, in time units rather than distance units, how far to travel. You'll also need to be able to translate the distance that might be covered by your robot in a turn; however, this distance may be so small as to be of no importance. If you know the angle and distance, then you can move your robot to the goal. The following are the steps that you will program: Calculate the distance in units that your robot will need to travel to reach the goal. Convert this to the number of steps to achieve this distance. Calculate the angle that your robot will need to travel to reach the goal. You'll use the compass and your robot turn functions to achieve this angle. Now call the step functions the proper number of times to move your robot the correct distance. This is it. 
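The two equations above were shown as figures in the original article; the following is a small Python sketch of the same math (plain Euclidean distance and an atan2 heading, not the author's own listing), which you could reuse in your path-planning program:

import math

def distance_to_goal(x_robot, y_robot, x_goal, y_goal):
    # Straight-line (Euclidean) distance from the robot to the goal
    return math.sqrt((x_goal - x_robot) ** 2 + (y_goal - y_robot) ** 2)

def angle_to_goal(x_robot, y_robot, x_goal, y_goal):
    # Heading from the robot to the goal, in degrees, measured from the x axis
    return math.degrees(math.atan2(y_goal - y_robot, x_goal - x_robot))

# Example: robot at (0, 0), goal at (3, 4)
print(distance_to_goal(0, 0, 3, 4))  # 5.0
print(angle_to_goal(0, 0, 3, 4))     # about 53.13 degrees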
Now, we will use some very simple python code that executes this using functions to move the robot forward and turn it. In this case, it makes sense to create a file called robotLib.py with all of the functions that do the actual settings to step the biped robot forward and turn the robot. You'll then import these functions using the from robotLib import * statement and your python program can call these functions. This makes the path planning python program smaller and more manageable. You'll do the same thing with the compass program using the command: from compass import *. For more information on how to import the functions from one python file to another, see http://www.tutorialspoint.com/python/python_modules.htm. The following is a listing of the program: In this program, the user enters the goal location, and the robot decides the shortest direction to the desired angle by reading the angle. To make it simple, the robot is placed in the grid heading in the direction of an angle of 0. If the goal angle is less than 180 degrees, the robot will turn right. If it is greater than 180 degrees, the robot will turn left. The robot turns until the desired angle and its measured angle are within a few degrees. Then the robot takes the number of steps in order to reach the goal. Avoiding Obstacles Planning paths without obstacles is, as has been shown, quite easy. However, it becomes a bit more challenging when your robot needs to walk around the obstacles. Let's look at the case where there is an obstacle in the path that you calculated previously. It might look as follows: You can still use the same path planning algorithm to find the starting angle; however, you will now need to use your sonar sensor to detect the obstacle. When your sonar sensor detects the obstacle, you will need to stop and recalculate a path to avoid the barrier, then recalculate the desired path to the goal. One very simple way to do this is when your robot senses a barrier, turn right at 90 degrees, go a fixed distance, and then recalculate the optimum path. When you turn back to move toward the target, you will move along the optimum path if you sense no barrier. However, if your robot encounters the obstacle again, it will repeat the process until it reaches the goal. In this case, using these rules, the robot will travel the following path: To sense the barrier, you will use the library calls to the sensor. You're going to add more accuracy with this robot using the compass to determine your angle. You will do this by importing the compass capability using from compass import *. You will also be using the time library and time.sleep command to add a delay between the different statements in the code. You will need to change your track.py library so that the commands don't have a fixed ending time, as follows: Here is the first part of this code, two functions that provide the capability to turn to a known angle using the compass, and a function to calculate the distance and angle to turn the tracked vehicle to that angle: The second part of this code shows the main loop. The user enters the robot's current position and the desired end position in x and y coordinates. The code that calculates the angle and distance starts the robot on its way. 
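The code listings referenced in this section appear only as screenshots in the original article. The following Python sketch reconstructs just the basic go-to-goal flow described above (barrier handling aside) under stated assumptions: the functions imported from robotLib and compass (step_forward, turn_left, turn_right, read_heading) and the STEP_DISTANCE value are hypothetical placeholders for whatever your own robotLib.py and compass.py provide, not the author's actual code:

from math import atan2, degrees, sqrt
from robotLib import step_forward, turn_left, turn_right   # assumed helper names
from compass import read_heading                           # assumed helper name

STEP_DISTANCE = 0.1  # grid units covered by one step (assumed calibration value)

def go_to_goal(x_robot, y_robot, x_goal, y_goal):
    # Robot starts facing along the x axis (heading of 0 degrees)
    goal_angle = degrees(atan2(y_goal - y_robot, x_goal - x_robot)) % 360
    distance = sqrt((x_goal - x_robot) ** 2 + (y_goal - y_robot) ** 2)

    # Turn right for goal angles under 180 degrees, left otherwise,
    # until the compass reading is within a few degrees of the goal angle
    while abs(read_heading() - goal_angle) > 3:
        if goal_angle < 180:
            turn_right()
        else:
            turn_left()

    # Take enough steps to cover the calculated distance
    for _ in range(int(distance / STEP_DISTANCE)):
        step_forward()

if __name__ == "__main__":
    x = float(input("Goal x: "))
    y = float(input("Goal y: "))
    go_to_goal(0.0, 0.0, x, y)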
If a barrier is sensed, the unit turns at 90 degrees, goes for two distance units, and then recalculates the path to the end goal, as shown in the following screenshot: Now, this algorithm is quite simple; however, there are others that have much more complex responses to the barriers. You can also see that by adding the sonar sensors to the sides, your robot could actually sense when the barrier has ended. You could also provide more complex decision processes about which way to turn to avoid an object. Again, there are many different path finding algorithms. See http://www.academia.edu/837604/A_Simple_Local_Path_Planning_Algorithm_for_Autonomous_Mobile_Robots for an example of this. These more complex algorithms can be explored using the basic functionality that you have built in this article. Summary We have seen how to add path planning to your tracked robot's capability. Your tracked robot can now not only move from point A to point B, but can also avoid the barriers that might be in the way. Resources for Article: Further resources on this subject: Debugging Applications with PDB and Log Files[article] Develop a Digital Clock[article] Color and motion finding [article]

Part2. ChatOps with Slack and AWS CLI

Yohei Yoshimuta
18 Oct 2015
5 min read
In part 1 of this series, we installed the AWS CLI tool, created an AWS EC2 instance, listed one, terminated one, and downloaded AWS S3 content via the AWS CLI instead of the UI. Now that we know the AWS CLI is very useful and that it supports an extensive API, let's use it more for daily development and operations work through Slack, which is a very popular chat tool. ChatOps, which is used in the title, means doing operations via a chat tool. So, before explaining the full process, I assume you are using Slack. Let's see how we can control EC2 instances using Slack. Integrate Slack with Hubot At first, you need to set up a Hubot project with the Slack adapter on your machine. I assume that your machine is running Mac OS X, but the setup is not very different on other platforms. # Install redis $ brew install redis # Install node $ brew install node # Install npm packages npm install -g hubot coffee-script yo generator-hubot # Make your project directory mkdir -p /path/to/hubot cd /path/to/hubot # Generate your project skeleton yo hubot ? Owner: yoheimuta <yoheimuta@gmail.com> ? Bot name: hubot ? Description: A simple helpful robot for your Company ? Bot adapter: (campfire) slack ? Bot adapter: slack Then, you have to register an integration with Hubot on a Slack configuration page. This page issues an API token that is necessary for the integration to work. # Run the hubot $ HUBOT_SLACK_TOKEN=YOUR-API-TOKEN ./bin/hubot --adapter slack If all works as expected, Slack should respond PONG when you type hubot-name ping in the Slack UI. Install the hubot-aws module OK, you are ready to introduce hubot-aws, which turns Slack into your team's AWS CLI environment. # Install and add hubot-aws to your package.json file: $ npm install --save hubot-aws # Add hubot-aws to your external-scripts.json: $ vi external-scripts.json # Set AWS credentials if your machine has no ~/.aws/credentials $ export HUBOT_AWS_ACCESS_KEY_ID="ACCESS_KEY" $ export HUBOT_AWS_SECRET_ACCESS_KEY="SECRET_ACCESS_KEY" # Set an AWS region no matter whether your machine has ~/.aws/config or not $ export HUBOT_AWS_REGION="us-west-2" # Set a DEBUG flag until you use it in production $ export HUBOT_AWS_DEBUG="1" Run an EC2 instance We are going to run an EC2 instance provisioned with MongoDB, Node.js, and Let's Chat, a self-hosted chat app for small teams. In order to run it, we first need to create config files. 
$ mkdir aws_config $ cd aws_config/ # Prepare option parameters $ vi app.cson $ cat app.cson MinCount: 1 MaxCount: 1 ImageId: "ami-936d9d93" KeyName: "my-key" InstanceType: "t2.micro" Placement: AvailabilityZone: "us-west-2" NetworkInterfaces: [ { Groups: [ "sg-***" ] SubnetId : "subnet-***" DeviceIndex : 0 AssociatePublicIpAddress : true } ] # Prepare a provisioning shell script $ vi initfile $ cat initfile #!/bin/bash # Install mongodb sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10 sudo echo "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" | sudo tee -a /etc/apt/sources.list.d/10gen.list # Install puppet sudo wget -P /tmp https://apt.puppetlabs.com/puppetlabs-release-precise.deb sudo dpkg -i /tmp/puppetlabs-release-precise.deb sudo apt-get -y update sudo apt-get -y install puppet 2>&1 | tee /tmp/initfile.log # Install a puppet module sudo puppet module install jay-letschat 2>&1 | tee /tmp/initfile1.log # Create a puppet manifest sudo sh -c "cat >> /etc/puppet/manifests/letschat.pp <<'EOF'; class { 'letschat::db': user => 'lcadmin', pass => 'unsafepassword', bind_ip => '0.0.0.0', database_name => 'letschat', database_port => '27017', } -> class { 'letschat::app': dbuser => 'lcadmin', dbpass => 'unsafepassword', dbname => 'letschat', dbhost => 'localhost', dbport => '27017', deploy_dir => '/etc/letschat', http_enabled => true, lc_bind_address => '0.0.0.0', http_port => '5000', ssl_enabled => false, cookie => 'secret', authproviders => 'local', registration => true, } EOF" # Apply a puppet manifest sudo puppet apply /etc/puppet/manifests/letschat.pp 2>&1 | tee /tmp/initfile2.log $ export HUBOT_AWS_EC2_RUN_CONFIG="aws_config/app.cson" $ export HUBOT_AWS_EC2_RUN_USERDATA_PATH="aws_config/initfile" $ HUBOT_SLACK_TOKEN=YOUR-API-TOKEN ./bin/hubot --adapter slack Let's type hubot ec2 run --dry-run to validate the config and then hubot ec2 run to start running an EC2 instance. Enter public-ipaddr:5000 into a browser to enjoy the Let's Chat app after the instance has initialized (it may take about 5 minutes). You can find public-ipaddr from the AWS UI or from hubot ec2 ls --instance_id=*** (this command is described below). List running EC2 instances You have now created an EC2 instance. The details of running EC2 instances are displayed whenever you or your colleagues type hubot ec2 ls. It's cool. Terminate an EC2 instance Well, it's time to terminate an EC2 instance to save money. Type hubot ec2 terminate --instance_id=***. That's all. Conclusion ChatOps is very useful, especially for your team's routine ops work. For our team, examples include running a temporary app instance to test a development feature before deploying it in production, terminating a problematic EC2 instance that keeps logging errors, and creating Auto Scaling settings that are still relatively difficult to automate completely with tools such as Terraform. Hubot-aws works for dev and ops engineers who want to use the AWS CLI and share ops with their colleagues, but have no time to automate everything. About the author Yohei Yoshimuta is a software engineer with a proven record of delivering high quality software in both the game and advertising industries. He has extensive experience in building products from scratch in small and large teams. His primary focuses are Perl, Go and AWS technologies. You can reach him at @yoheimuta on GitHub and Twitter.

Understanding CRM Extendibility Architecture

Packt
16 Oct 2015
22 min read
In this article by Mahender Pal, the author of the book Microsoft Dynamics CRM 2015 Application Design, we will see how Microsoft Dynamics CRM provides different components that can be highly extended to map our custom business requirements. Although CRM provides a rich set of features that help us execute different business operations without any modification, we can still extend its behavior and capabilities with the supported customizations. (For more resources related to this topic, see here.) The following is the extendibility architecture of CRM 2015, where we can see how different components interact with each other and which components can be extended with the help of CRM APIs: Extendibility Architecture Let's discuss these components one by one and the possible extendibility options for them. CRM databases During installation of CRM, two databases, organization and configuration, are created. The organization database is created with the name organization_MSCRM and the configuration database is created with the name MSCRM_CONFIG. The organization database contains the complete organization-related data stored in different entities. For every entity in CRM, there is a corresponding table with the name Entityname+Base. Although it is technically possible, any direct data modification in these tables is not supported. Any changes to CRM data should be done by using the CRM APIs only. Adding indexes to the CRM database is supported; you can refer to https://msdn.microsoft.com/en-us/library/gg328350.aspx for more details on supported customizations. Apart from tables, CRM also creates a special view for every entity with the name Filtered+Entityname. These filtered views provide data based on the user security role; so, for example, if you are a salesperson, you will only get data based on the salesperson role while querying filtered views. We use filtered views for writing custom reports for CRM. You can refer to https://technet.microsoft.com/en-us/library/dn531182.aspx for more details on filtered views. The entity relationship diagram for CRM 2015 can be downloaded from https://msdn.microsoft.com/en-us/library/jj602918.aspx. The Platform Layer The platform layer works as middleware between the CRM UI and the database; it is responsible for executing inbuilt and custom business logic and moving data back and forth. When we browse a CRM application, the platform layer presents the data that is available based on the current user's security roles. The custom components that we develop are deployed on top of the platform layer. Process A process is a way of implementing automation in CRM. We can set up processes using the process designer and also develop custom assemblies to enhance the capability of the workflow designer and include custom steps. CRM web services CRM provides Windows Communication Foundation (WCF) based web services, which help us interact with organization data and metadata; so whenever we want to create or modify an entity's data or want to customize a CRM component's metadata, we need to utilize these web services. We can also develop our own custom web services with the help of CRM web services if required. We will be discussing CRM web services in more detail in a later topic. Plugins Plugins are another way of extending the CRM capability. These are .NET assemblies that help us implement our custom business logic in the CRM platform. They help us execute our business logic before or after the main platform operation; a minimal plugin skeleton is sketched below. 
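As a minimal sketch of what such a plugin looks like (the follow-up task logic is only an illustrative assumption, not an example from the book), a synchronous post-create plugin implements the IPlugin interface and obtains the organization service from the service provider:

using System;
using Microsoft.Xrm.Sdk;

// Sketch of a post-operation plugin: when an account is created,
// it creates a follow-up task related to that account.
public class PostAccountCreatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
        {
            Entity target = (Entity)context.InputParameters["Target"];

            // Illustrative business logic: create a follow-up task for the new account
            Entity task = new Entity("task");
            task["subject"] = "Follow up on new account";
            task["regardingobjectid"] = new EntityReference("account", target.Id);
            service.Create(task);
        }
    }
}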
We can also run our plugin on a transaction that is similar to a SQL transaction, which means if any operation failed, all the changes under transaction will rollback. We can setup asynchronous and synchronous plugins. Reporting CRM provides rich reporting capabilities. We have many out of box reports for every module such as sales, marketing, and service. We can also create new reports and customize existing reports in Visual Studio. While working with reports, we always utilize an entity-specific filtered view so that data can be exposed based on the user security role. We should never use a CRM table while writing reports. Custom reports can be developed using out of box report wizard or using Visual Studio. The report wizard helps us create reports by following a couple of screens where we can select an entity and filter the criteria for our report with different rendering and formatting options. We can create two types of reports in Visual Studio SSRS and FetchXML. Custom SSRS reports are supported on CRM on premise deployment whereas CRM online only FetchXML. You can refer to https://technet.microsoft.com/en-us/library/dn531183.aspx for more details on report development. Client extensions We can also extend the CRM application from the Web and Outlook client. We can also develop custom utility tools for it. Sitemap and Command bar editor add-ons are example of such applications. We can modify different CRM components such as entity structure, web resources, business rules, different type of web resources, and other components. CRM web services can be utilized to map custom requirements. We can make navigational changes from CRM clients by modifying Sitemap and Command Bar definition. Integrated extensions We can also develop custom extensions in terms of custom utility and middle layer to interact with CRM using APIs. It can be a portal application or any .NET or non .NET utility. CRM SDK comes with many tools that help us to develop these integrated applications. We will be discussing more on custom integration with CRM in a later topic. Introduction to Microsoft Dynamics CRM SDK Microsoft Dynamics CRM SDK contains resources that help us develop code for CRM. It includes different CRM APIs and helpful resources such as sample codes (both server side and client side) and a list of tools to facilitate CRM development. It provides a complete documentation of the APIs, methods, and their uses, so if you are a CRM developer, technical consultant, or solution architect, the first thing you need to make sure is to download the latest CRM SDK. You can download the latest version of CRM SDK from http://www.microsoft.com/en-us/download/details.aspx?id=44567. The following table talks about the different resources that come with CRM SDK: Name Descriptions Bin This folder contains all the assemblies of CRM. Resources This folder contains different resources such as data import maps, default entity ribbon XML definition, and images icons of CRM applications. SampleCode This folder contains all the server side and client side sample code that can help you get started with the CRM development. This folder also contains sample PowerShell commands. Schemas This folder contains XML schemas for CRM entities, command bars, and sitemap. These schemas can be imported in Visual Studio while editing the customization of the .xml file manually. Solutions This folder contains the CRM 2015 solution compatibility chart and one portal solution. 
Templates This folder contains the visual studio templates that can be used to develop components for a unified service desk and the CRM package deployment. Tools This folder contains tools that are shipped with CRM SDK such as the metadata browser that can used to get CRM entity metadata, plugin registration tool, web resource utility, and others. Walkthroughts This folder contains console and web portal applications. CrmSdk2015 This is the .chm help file. EntityMetadata This file contains entity metadata information. Message-entity support for plugins This is a very important file that will help you understand events available for entities to write custom business logic (plug-ins) Learning about CRM assemblies CRM SDK ships with different assemblies under the bin folder that we can use to write CRM application extension. We can utilize them to interact with CRM metadata and organization data. The following table provides details about the most common CRM assemblies: Name Details Microsoft.Xrm.Sdk.Deployment This assembly is used to work with the CRM organization. We can create, update, and delete organization assembly methods. Microsoft.Xrm.Sdk This is very important assembly as it contains the core methods and their details, this assembly is used for every CRM extension. This assembly contains different namespaces for different functionality, for example Query, which contains different classes to query CRM DB; Metadata, which help us interact with the metadata of the CRM application; Discovery, which help us interact with the discover service (we will be discussing the discovery services in a later topic); Messages, which provide classes for all CURD operation requests and responses with metadata classes. Microsoft.Xrm.Sdk.Workflow This assembly helps us extend the CRM workflows' capability. It contains methods and types which are required for writing custom workflow activity. This assembly contains the activities namespace, which is used by the CRM workflow designer. Microsoft.Crm.Sdk.Proxy This assembly contains all noncore requests and response messages. Microsoft.Xrm.Tooling This is a new assembly added in SDK. This assembly helps to write Windows client applications for CRM Microsoft.Xrm.Portal This assembly provides methods for portal development, which includes security management, cache management, and content management. Microsoft.Xrm.Client This is another assembly that is used in the CRM client application to communicate with CRM from the application. It contains connection classes that we can use to setup the connection using different CRM authentication methods. We will be working with these APIs in later topics. Understanding CRM web services Microsoft Dynamics CRM provides web service support, which can be used to work with CRM data or metadata. CRM web services are mentioned here. The deployment service The deployment service helps us work with organizations. Using this web service, we can create a new organization, delete, or update existing organizations. The discovery service Discovery services help us identify correct web service endpoints based on the user. Let's take an example where we have multiple CRM organizations, and we want to get a list of the organization where current users have access, so we can utilize discovery service to find out unique organization ID, endpoint URL and other details. We will be working with discovery service in a later topic. The organization service The organization service is used to work with CRM organization data and metadata. 
It has the CRUD method and other request and response messages. For example, if we want to create or modify any existing entity record, we can use organization service methods. The organization data service The organization data service is a RESTful service that we can use to get data from CRM. We can use this service's CRUD methods to work with data, but we can't use this service to work with CRM metadata. To work with CRM web services, we can use the following two programming models: Late bound Early bound Early bound In early bound classes, we use proxy classes which are generated by CrmSvcUtil.exe. This utility is included in CRM SDK under the SDKBin path. This utility generates classes for every entity available in the CRM system. In this programming model, a schema name is used to refer to an entity and its attributes. This provides intelligence support, so we don't need to remember the entity and attributes name; as soon as we type the first letter of the entity name, it will display all the entities with that name. We can use the following syntax to generate proxy class for CRM on premise: CrmSvcUtil.exe /url:http://<ServerName>/<organizationName>/XRMServices/2011/ Organization.svc /out:proxyfilename.cs /username:<username> /password:<password> /domain:<domainName> /namespace:<outputNamespace> /serviceContextName:<serviceContextName> The following is the code to generate proxy for CRM online: CrmSvcUtil.exe /url:https://orgname.api.crm.dynamics.com/XRMServices/2011/ Organization.svc /out:proxyfilename.cs /username:"myname@myorg.onmicrosoft.com" /password:"myp@ssword! Organization service URLs can be obtained by navigating to Settings | Customization | Developer Resources. We are using CRM online for our demo. In case of CRM online, the organization service URL is dependent on the region where your organization is hosted. You can refer to https://msdn.microsoft.com/en-us/library/gg328127.aspx to get details about different CRM online regions. We can follow these steps to generate the proxy class for CRM online: Navigate to Developer Command Prompt under Visual Studio Tools in your development machine where visual studio is installed. Go to the Bin folder under CRM SDK and paste the preceding command: CrmSvcUtil.exe /url:https://ORGName.api.crm5.dynamics.com/XRMServices/2011/ Organization.svc /out:Xrm.cs /username:"user@ORGName.onmicrosoft.com" /password:"password" CrmSVCUtil Once this file is generated, we can add this file to our visual studio solution. Late bound In the late bound programming model, we use the generic Entity object to refer to entities, which means that we can also refer an entity which is not part of the CRM yet. In this programming mode, we need to use logical names to refer to an entity and its attribute. No intelligence support is available during code development in case of late bound. The following is an example of using the Entity class: Entity AccountObj = new Entity("account"); Using Client APIs for a CRM connection CRM client API helps us connect with CRM easily from .NET applications. It simplifies the developer's task to setup connection with CRM using a simplified connection string. We can use this connection string to create a organization service object. The following is the setup to console applications for our demo: Connect to Visual Studio and go to File | New | Project. 
Select Visual C# | Console Application and fill CRMConnectiondemo under the Name textbox as shown in the following screenshot: Console app Make sure you have installed the .NET 4.5.2 and .NET 4.5.2 developer packs before creating sample applications. Right-click on References and add the following CRM SDK: Microsoft.Xrm.SDK Microsoft.Xrm.Client We also need to add the following .NET assemblies System.Runtime.Serialization System.Configuration Make sure to add the App.config file if not available under project. We need to right-click on Project Name | Add Item and add Application Configuration File as shown here: app.configfile We need to add a connection string to our app.config file; we are using CRM online for our demo application, so we will be using following connection string: <?xml version="1.0" encoding="UTF-8"?> <configuration> <connectionStrings> <add name="OrganizationService" connectionString="Url=https://CRMOnlineServerURL; Username=User@ORGNAME.onmicrosoft.com; Password=Password;" /> </connectionStrings> </configuration> Right-click on Project, select Add Existing File, and browse our file that we generated earlier to add to our console application. Now we can add two classes in our application—one for early bound and another for late bound and let's name them Earlybound.cs and Latebound.cs You can refer to https://msdn.microsoft.com/en-us/library/jj602970.aspx to connection string for other deployment type, if not using CRM online After adding the preceding classes, our solution structure should look like this: Working with organization web services Whenever we need to interact with CRM SDK, we need to use the CRM web services. Most of the time, we will be working with the Organization service to create and modify data. Organization services contains the following methods to interact with metadata and organization data, we will add these methods to our corresponding Earlybound.cs and Latebound.cs files in our console application. Create This method is used to create system or custom entity records. We can use this method when we want to create entity records using CRM SDK, for example, if we need to develop one utility for data import, we can use this method or we want to create lead record in dynamics from a custom website. This methods takes an entity object as a parameter and returns GUID of the record created. The following is an example of creating an account record with early and late bound. 
With different data types, we are setting some of the basic account entity fields in our code: Early bound: private void CreateAccount() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Account accountObject = new Account { Name = "HIMBAP Early Bound Example", Address1_City = "Delhi", CustomerTypeCode = new OptionSetValue(3), DoNotEMail = false, Revenue = new Money(5000), NumberOfEmployees = 50, LastUsedInCampaign = new DateTime(2015, 3, 2) }; crmService.Create(accountObject); } } Late bound: private void Create() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountObj = new Entity("account"); //setting string value accountObj["name"] = "HIMBAP"; accountObj["address1_city"] = "Delhi"; accountObj["accountnumber"] = "101"; //setting optionsetvalue accountObj["customertypecode"] = new OptionSetValue(3); //setting boolean accountObj["donotemail"] = false; //setting money accountObj["revenue"] = new Money(5000); //setting entity reference/lookup accountObj["primarycontactid"] = new EntityReference("contact", new Guid("F6954457- 6005-E511-80F4-C4346BADC5F4")); //setting integer accountObj["numberofemployees"] = 50; //Date Time accountObj["lastusedincampaign"] = new DateTime(2015, 05, 13); Guid AccountID = crmService.Create(accountObj); } } We can also use the create method to create primary and related entity in a single call, for example in the following call, we are creating an account and the related contact record in a single call: private void CreateRecordwithRelatedEntity() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountEntity = new Entity("account"); accountEntity["name"] = "HIMBAP Technology"; Entity relatedContact = new Entity("contact"); relatedContact["firstname"] = "Vikram"; relatedContact["lastname"] = "Singh"; EntityCollection Related = new EntityCollection(); Related.Entities.Add(relatedContact); Relationship accountcontactRel = new Relationship("contact_customer_accounts"); accountEntity.RelatedEntities.Add(accountcontactRel, Related); crmService.Create(accountEntity); } } In the preceding code, first we created account entity objects, and then we created an object of related contact entity and added it to entity collection. After that, we added a related entity collection to the primary entity with the entity relationship name; in this case, it is contact_customer_accounts. After that, we passed our account entity object to create a method to create an account and the related contact records. When we will run this code, it will create the account as shown here: relatedrecord Update This method is used to update existing record properties, for example, we might want to change the account city or any other address information. This methods takes the entity object as the parameter, but we need to make sure to update the primary key field to update any record. 
The following are the examples of updating the account city and setting the state property: Early bound: private void Update() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Account accountUpdate = new Account { AccountId = new Guid("85A882EE-A500- E511-80F9-C4346BAC0E7C"), Address1_City = "Lad Bharol", Address1_StateOrProvince = "Himachal Pradesh" }; crmService.Update(accountUpdate); } } Late bound: private void Update() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountUpdate = new Entity("account"); accountUpdate["accountid"] = new Guid("85A882EE-A500- E511-80F9-C4346BAC0E7C"); accountUpdate["address1_city"] = " Lad Bharol"; accountUpdate["address1_stateorprovince"] = "Himachal Pradesh"; crmService.Update(accountUpdate); } } Similarly, to create method, we can also use the update method to update the primary entity and the related entity in a single call as follows: private void Updateprimaryentitywithrelatedentity() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity accountToUpdate = new Entity("account"); accountToUpdate["name"] = "HIMBAP Technology"; accountToUpdate["websiteurl"] = "www.himbap.com"; accountToUpdate["accountid"] = new Guid("29FC3E74- B30B-E511-80FC-C4346BAD26CC");//replace it with actual account id Entity relatedContact = new Entity("contact"); relatedContact["firstname"] = "Vikram"; relatedContact["lastname"] = "Singh"; relatedContact["jobtitle"] = "Sr Consultant"; relatedContact["contactid"] = new Guid("2AFC3E74- B30B-E511-80FC-C4346BAD26CC");//replace it with actual contact id EntityCollection Related = new EntityCollection(); Related.Entities.Add(relatedContact); Relationship accountcontactRel = new Relationship("contact_customer_accounts"); accountToUpdate.RelatedEntities.Add (accountcontactRel, Related); crmService.Update(accountToUpdate); } } Retrieve This method is used to get data from the CRM based on the primary field, which means that this will only return one record at a time. This method has the following three parameter: Entity: This is needed to pass the logical name of the entity as fist parameter ID: This is needed to pass the primary ID of the record that we want to query Columnset: This is needed to specify the fields list that we want to fetch The following are examples of using the retrieve method Early bound: private void Retrieve() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Account retrievedAccount = (Account)crmService.Retrieve (Account.EntityLogicalName, new Guid("7D5E187C-9344-4267- 9EAC-DD32A0AB1A30"), new ColumnSet(new string[] { "name" })); //replace with actual account id } } Late bound: private void Retrieve() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { Entity retrievedAccount = (Entity)crmService.Retrieve("account", new Guid("7D5E187C- 9344-4267-9EAC-DD32A0AB1A30"), new ColumnSet(new string[] { "name"})); } RetrieveMultiple The RetrieveMultiple method provides options to define our query object where we can define criteria to fetch records from primary and related entities. This method takes the query object as a parameter and returns the entity collection as a response. 
The following are examples of using RetrieveMultiple with early and late bound:

Late bound:

private void RetrieveMultiple()
{
    using (OrganizationService crmService = new OrganizationService("OrganizationService"))
    {
        QueryExpression query = new QueryExpression
        {
            EntityName = "account",
            ColumnSet = new ColumnSet("name", "accountnumber"),
            Criteria =
            {
                FilterOperator = LogicalOperator.Or,
                Conditions =
                {
                    new ConditionExpression
                    {
                        AttributeName = "address1_city",
                        Operator = ConditionOperator.Equal,
                        Values = { "Delhi" }
                    },
                    new ConditionExpression
                    {
                        AttributeName = "accountnumber",
                        Operator = ConditionOperator.NotNull
                    }
                }
            }
        };
        EntityCollection entityCollection = crmService.RetrieveMultiple(query);
        foreach (Entity result in entityCollection.Entities)
        {
            if (result.Contains("name"))
            {
                Console.WriteLine("name ->" + result.GetAttributeValue<string>("name").ToString());
            }
        }
    }
}

Early bound:

private void RetrieveMultiple()
{
    using (OrganizationService crmService = new OrganizationService("OrganizationService"))
    {
        QueryExpression RetrieveAccountsQuery = new QueryExpression
        {
            EntityName = Account.EntityLogicalName,
            ColumnSet = new ColumnSet("name", "accountnumber"),
            Criteria = new FilterExpression
            {
                Conditions =
                {
                    new ConditionExpression
                    {
                        AttributeName = "address1_city",
                        Operator = ConditionOperator.Equal,
                        Values = { "Delhi" }
                    }
                }
            }
        };
        EntityCollection entityCollection = crmService.RetrieveMultiple(RetrieveAccountsQuery);
        foreach (Entity result in entityCollection.Entities)
        {
            if (result.Contains("name"))
            {
                Console.WriteLine("name ->" + result.GetAttributeValue<string>("name").ToString());
            }
        }
    }
}

Delete

This method is used to delete entity records from the CRM database. It takes the entity's logical name and the record's primary ID as parameters:

private void Delete()
{
    using (OrganizationService crmService = new OrganizationService("OrganizationService"))
    {
        crmService.Delete("account", new Guid("85A882EE-A500-E511-80F9-C4346BAC0E7C"));
    }
}

Associate

This method is used to set up a link between two related entity records. It has the following parameters:

Entity Name: This is the logical name of the primary entity
Entity Id: This is the ID (GUID) of the primary entity record
Relationship: This is the name of the relationship between the two entities
Related Entities: This is the collection of entity references to associate

The following is an example of using this method with early bound:

private void Associate()
{
    using (OrganizationService crmService = new OrganizationService("OrganizationService"))
    {
        EntityReferenceCollection referenceEntities = new EntityReferenceCollection();
        referenceEntities.Add(new EntityReference("account", new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC")));
        // Create an object that defines the relationship between the contact and account (we want to set up the primary contact)
        Relationship relationship = new Relationship("account_primary_contact");
        // Associate the contact with the accounts.
        crmService.Associate("contact", new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC"), relationship, referenceEntities);
    }
}

Disassociate

This method is the reverse of Associate. It is used to remove a link between two entity records, and it takes the same set of parameters as the Associate method.
The following is an example of a disassociate account and contact record: private void Disassociate() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { EntityReferenceCollection referenceEntities = new EntityReferenceCollection(); referenceEntities.Add(new EntityReference("account", new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC "))); // Create an object that defines the relationship between the contact and account. Relationship relationship = new Relationship("account_primary_contact"); //Disassociate the records. crmService.Disassociate("contact", new Guid("15FC3E74- B30B-E511-80FC-C4346BAD26CC "), relationship, referenceEntities); } } Execute Apart from the common method that we discussed, the execute method helps to execute requests that is not available as a direct method. This method takes a request as a parameter and returns the response as a result. All the common methods that we used previously can also be used as a request with the execute method. The following is an example of working with metadata and creating a custom event entity using the execute method: private void Usingmetadata() { using (OrganizationService crmService = new OrganizationService("OrganizationService")) { CreateEntityRequest createRequest = new CreateEntityRequest { Entity = new EntityMetadata { SchemaName = "him_event", DisplayName = new Label("Event", 1033), DisplayCollectionName = new Label("Events", 1033), Description = new Label("Custom entity demo", 1033), OwnershipType = OwnershipTypes.UserOwned, IsActivity = false, }, PrimaryAttribute = new StringAttributeMetadata { SchemaName = "him_eventname", RequiredLevel = new AttributeRequiredLevelManagedProperty(AttributeRequiredLevel.None), MaxLength = 100, FormatName = StringFormatName.Text, DisplayName = new Label("Event Name", 1033), Description = new Label("Primary attribute demo", 1033) } }; crmService.Execute(createRequest); } } In the preceding code, we have utilized the CreateEntityRequest class, which is used to create a custom entity. After executing the preceding code, we can check out the entity under the default solution by navigating to Settings | Customizations | Customize the System. You can refer to https://msdn.microsoft.com/en-us/library/gg309553.aspx to see other requests that we can use with the execute method. Testing the console application After adding the preceding methods, we can test our console application by writing a simple test method where we can call our CRUD methods, for example, in the following example, we have added method in our Earlybound.cs. public void EarlyboundTesting() { Console.WriteLine("Creating Account Record....."); CreateAccount(); Console.WriteLine("Updating Account Record....."); Update(); Console.WriteLine("Retriving Account Record....."); Retrieve(); Console.WriteLine("Deleting Account Record....."); Delete(); } After that we can call this method in Main method of Program.cs file like below: static void Main(string[] args) { Earlybound obj = new Earlybound(); Console.WriteLine("Testing Early bound"); obj.EarlyboundTesting(); } Press F5 to run your console application. Summary In this article, you learned about the Microsoft Dynamics CRM 2015 SDK feature. We discussed various options that are available in CRM SDK. You learned about the different CRM APIs and their uses. You learned about different programming models in CRM to work with CRM SDK using different methods of CRM web services, and we created a sample console application. 
Resources for Article: Further resources on this subject: Attracting Leads and Building Your List [article] PostgreSQL in Action [article] Auto updating child records in Process Builder [article]

Mono to Micro-Services: Splitting that fat application

Xavier Bruhiere
16 Oct 2015
7 min read
As articles everywhere state, we're living in a fast-paced digital age. Project complexity, or business growth, challenges existing development patterns. That's why many developers are evolving from the monolithic application toward micro-services. Facebook is moving away from its big blue app. Soundcloud is embracing microservices. Yet this can be a daunting process, so what is it for?

Scale. It is easier to plug in new components than to dig into an ocean of code.
Split a complex problem into smaller ones, which are easier to solve and maintain.
Distribute work through independent teams.
Openness to new technologies. Isolating a service into a container makes it straightforward to distribute and use. It also allows different, loosely coupled stacks to communicate.

Once upon a time, there was a fat code block called Intuition, my algorithmic trading platform. In this post, we will engineer a simplified version, divided into well-defined components.

Code Components

First, we're going to write the business logic, following the single responsibility principle, and one of my favorite code mantras:

Prefer composition over inheritance

The point is to identify the key components of the problem, and code a specific solution for each of them. It will articulate our application around the collaboration of clear abstractions. As an illustration, start with the RandomAlgo class. Python tends to be the go-to language for data analysis and rapid prototyping, which makes it a great fit for our purpose.

import random

class RandomAlgo(object):
    """ Represent the algorithm flow.
    Heavily inspired from quantopian.com and processing.org """

    def initialize(self, params):
        """ Called once to prepare the algo. """
        self.threshold = params.get('threshold', 0.5)
        # As we will see later, we return here data channels we're interested in
        return ['quotes']

    def event(self, data):
        """ This method is called every time a new batch of data is ready.
        :param data: {'sid': 'GOOG', 'quote': '345'} """
        # randomly choose to invest or not
        if random.random() > self.threshold:
            print('buying {0} of {1}'.format(data['quote'], data['sid']))

This implementation focuses on a single thing: detecting buy signals. But once you get such a signal, how do you invest your portfolio? This is the responsibility of a new component.

class Portfolio(object):

    def __init__(self, amount):
        """ Starting amount of cash we have. """
        self.cash = amount

    def optimize(self, data):
        """ We have a buy signal on this data. Tell us how much cash we should bet. """
        # We're still baby traders and we randomly choose what fraction of our cash available to invest
        to_invest = random.random() * self.cash
        self.cash = self.cash - to_invest
        return to_invest

Then we can improve our previous algorithm's event method, taking advantage of composition.

def initialize(self, params):
    # ...
    self.portfolio = Portfolio(params.get('starting_cash', 10000))

def event(self, data):
    # ...
    print('buying {0} of {1}'.format(self.portfolio.optimize(data), data['sid']))

Here are two simple components that produce readable and efficient code. Now we can develop more sophisticated portfolio optimizations without touching the algorithm internals—one such variation is sketched just below. This is also a huge gain early in a project, when we're not sure how things will evolve. Developers should only focus on this core logic.
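The following is a minimal, hedged sketch of an alternative sizing component. The class name FixedFractionPortfolio and its fraction parameter are inventions for this example—they are not part of the original Intuition code—but the interface (optimize() and cash) matches the Portfolio class shown above, so the algorithm can use it unchanged.

class FixedFractionPortfolio(object):
    """ Hypothetical drop-in replacement for Portfolio:
    always bets a fixed fraction of the remaining cash. """

    def __init__(self, amount, fraction=0.1):
        self.cash = amount
        self.fraction = fraction

    def optimize(self, data):
        # deterministic sizing instead of a random bet
        to_invest = self.fraction * self.cash
        self.cash = self.cash - to_invest
        return to_invest

Swapping it in only touches the algorithm's initialize method:

def initialize(self, params):
    # ...
    self.portfolio = FixedFractionPortfolio(params.get('starting_cash', 10000))

In the next section, we're going to unfold a separate part of the system. The communication layer will solve one question: how do we produce and consume events?

Inter-components messaging

Let's state the problem.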
We want each algorithm to receive interesting events and publish its own data. The kind of challenge Internet of Things (IoT) is tackling. We will find empirically that our modular approach allows us to pick the right tool, even within a-priori unrelated fields. The code below leverages MQTT to bring M2M messaging to the application. Notice we're diversifying our stack with node.js. Indeed it's one of the most convenient languages to deal with event-oriented systems (Javascript, in general, is gaining some traction in the IoT space). var mqtt = require('mqtt'); // connect to the broker, responsible to route messages // (thanks mosquitto) var conn = mqtt.connect('mqtt://test.mosquitto.org'); conn.on('connect', function () { // we're up ! Time to initialize the algorithm // and subscribe to interesting messages }); // triggered on topic we're listening to conn.on('message', function (topic, message) { console.log('received data:', message.toString()); // Here, pass it to the algo for processing }); That's neat! But we still need to connect this messaging layer with the actual python algorithm. RPC (Remote Procedure Call) protocol comes in handy for the task, especially with zerorpc. Here is the full implementation with more explanations. // command-line interfaces made easy var program = require('commander'); // the MQTT client for Node.js and the browser var mqtt = require('mqtt'); // a communication layer for distributed systems var zerorpc = require('zerorpc'); // import project properties var pkg = require('./package.json') // define the cli program .version(pkg.version) .description(pkg.description) .option('-m, --mqtt [url]', 'mqtt broker address', 'mqtt://test.mosquitto.org') .option('-r, --rpc [url]', 'rpc server address', 'tcp://127.0.0.1:4242') .parse(process.argv); // connect to mqtt broker var conn = mqtt.connect(program.mqtt); // connect to rpc peer, the actual python algorithm var algo = new zerorpc.Client() algo.connect(program.rpc); conn.on('connect', function () { // connections are ready, initialize the algorithm var conf = { cash: 50000 }; algo.invoke('initialize', conf, function(err, channels, more) { // the method returns an array of data channels the algorithm needs for (var i = 0; i < channels.length; i++) { console.log('subscribing to channel', channels[i]); conn.subscribe(channels[i]); } }); }); conn.on('message', function (topic, message) { console.log('received data:', message.toString()); // make the algorithm to process the incoming data algo.invoke('event', JSON.parse(message.toString()), function(err, res, more) { console.log('algo output:', res); // we're done algo.close(); conn.end(); }); }); The code above calls our algorithm's methods. Here is how to expose them over RPC. import click, zerorpc # ... algo code ... @click.command() @click.option('--addr', default='tcp://127.0.0.1:4242', help='address to bind rpc server') def serve(addr): server = zerorpc.Server(RandomAlgo()) server.bind(addr) click.echo(click.style('serving on {} ...'.format(addr), bold=True, fg='cyan')) # listen and serve server.run() if__name__ == '__main__': serve() At this point we are ready to run the app. Let's fire up 3 terminals, install requirements, and make the machines to trade. 
sudo apt-get install curl libpython-dev libzmq-dev # Install pip curl https://bootstrap.pypa.io/get-pip.py | python # Algorithm requirements pip install zerorpc click # Messaging requirements npm init npm install --save commander mqtt zerorpc # Activate backend python ma.py --addr tcp://127.0.0.1:4242 # Manipulate algorithm and serve messaging system node app.js --rpc tcp://127.0.0.1:4242 # Publish messages node_modules/.bin/mqtt pub -t 'quotes' -h 'test.mosquitto.org' -m '{"goog": 3.45}' In this state, our implementation is over-engineered. But we designed a sustainable architecture to wire up small components. And from here we can extend the system. One can focus on algorithms without worrying about events plumbing. The corollary: switching to a new messaging technology won't affect the way we develop algorithms. We can even swipe algorithms by changing the rpc address. A service discovery component could expose which backends are available and how to reach them. A project like octoblu adds devices authentification, data sharing, and more. We could implement data sources that connect to live market or databases, compute indicators like moving averages and publish them to algorithms. Conclusion Given our API definition, a contributor can hack on any component without breaking the project as a whole. In a fast pace environment, with constant iterations, this architecture can make or break products. This is especially true in the raising container world. Assuming we package each component into specialized containers, we smooth the way to a scalable infrastructure that we can test, distribute, deploy and grow. Not sure where to start when it comes to containers and microservices? Visit our Docker page!  About the Author Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Occulus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.

Finding useful information

Packt
16 Oct 2015
22 min read
In this article written by Benjamin Cane, author of the book Red Hat Enterprise Linux Troubleshooting Guide the author goes on to explain how before starting to explore troubleshooting commands, we should first cover locations of useful information. Useful information is a bit of an ubiquitous term, pretty much every file, directory, or command can provide useful information. What he really plans to cover are places where it is possible to find information for almost any issue. (For more resources related to this topic, see here.) Log files Log files are often the first place to start looking for troubleshooting information. Whenever a service or server is experiencing an issue, checking the log files for errors can often answer many questions quickly. The default location By default, RHEL and most Linux distributions keep their log files in /var/log/, which is actually part of the Filesystem Hierarchy Standard (FHS) maintained by the Linux Foundation. However, while /var/log/ might be the default location not all log files are located there(http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard). While /var/log/httpd/ is the default location for Apache logs, this location can be changed with Apache's configuration files. This is especially common when Apache was installed outside of the standard RHEL package. Like Apache, most services allow for custom log locations. It is not uncommon to find custom directories or file systems outside of /var/log created specifically for log files. Common log files The following table is a short list of common log files and a description of what you can find within them. Do keep in mind that this list is specific to Red Hat Enterprise Linux 7, and while other Linux distributions might follow similar conventions, they are not guaranteed. Log file Description /var/log/messages By default, this log file contains all syslog messages (except e-mail) of INFO or higher priority. /var/log/secure This log file contains authentication related message items such as: SSH logins User creations Sudo violations and privilege escalation /var/log/cron This log file contains a history of crond executions as well as start and end times of cron.daily, cron.weekly, and other executions. /var/log/maillog This log file is the default log location of mail events. If using postfix, this is the default location for all postfix-related messages. /var/log/httpd/ This log directory is the default location for Apache logs. While this is the default location, it is not a guaranteed location for all Apache logs. /var/log/mysql.log This log file is the default log file for mysqld. Much like the httpd logs, this is default and can be changed easily. /var/log/sa/ This directory contains the results of the sa commands that run every 10 minutes by default. For many issues, one of the first log files to review is the /var/log/messages log. On RHEL systems, this log file receives all system logs of INFO priority or higher. In general, this means that any significant event sent to syslog would be captured in this log file. The following is a sample of some of the log messages that can be found in /var/log/messages: Dec 24 18:03:51 localhost systemd: Starting Network Manager Script Dispatcher Service... 
Dec 24 18:03:51 localhost dbus-daemon: dbus[620]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 24 18:03:51 localhost dbus[620]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 24 18:03:51 localhost systemd: Started Network Manager Script Dispatcher Service. Dec 24 18:06:06 localhost kernel: e1000: enp0s3 NIC Link is Down Dec 24 18:06:06 localhost kernel: e1000: enp0s8 NIC Link is Down Dec 24 18:06:06 localhost NetworkManager[750]: <info> (enp0s3): link disconnected (deferring action for 4 seconds) Dec 24 18:06:06 localhost NetworkManager[750]: <info> (enp0s8): link disconnected (deferring action for 4 seconds) Dec 24 18:06:10 localhost NetworkManager[750]: <info> (enp0s3): link disconnected (calling deferred action) Dec 24 18:06:10 localhost NetworkManager[750]: <info> (enp0s8): link disconnected (calling deferred action) Dec 24 18:06:12 localhost kernel: e1000: enp0s3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX Dec 24 18:06:12 localhost kernel: e1000: enp0s8 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX Dec 24 18:06:12 localhost NetworkManager[750]: <info> (enp0s3): link connected Dec 24 18:06:12 localhost NetworkManager[750]: <info> (enp0s8): link connected Dec 24 18:06:39 localhost kernel: atkbd serio0: Spurious NAK on isa0060/serio0. Some program might be trying to access hardware directly. Dec 24 18:07:10 localhost systemd: Starting Session 53 of user root. Dec 24 18:07:10 localhost systemd: Started Session 53 of user root. Dec 24 18:07:10 localhost systemd-logind: New session 53 of user root. As we can see, there are more than a few log messages within this sample that could be useful while troubleshooting issues. Finding logs that are not in the default location Many times log files are not in /var/log/, which can be either because someone modified the log location to some place apart from the default, or simply because the service in question defaults to another location. In general, there are three ways to find log files not in /var/log/. Checking syslog configuration If you know a service is using syslog for its logging, the best place to check to find which log file its messages are being written to is the rsyslog configuration files. The rsyslog service has two locations for configuration. The first is the /etc/rsyslog.d directory. The /etc/rsyslog.d directory is an include directory for custom rsyslog configurations. The second is the /etc/rsyslog.conf configuration file. This is the main configuration file for rsyslog and contains many of the default syslog configurations. The following is a sample of the default contents of /etc/rsyslog.conf: #### RULES #### # Log all kernel messages to the console. # Logging much else clutters up the screen. #kern.* /dev/console # Log anything (except mail) of level info or higher. # Don't log private authentication messages! *.info;mail.none;authpriv.none;cron.none /var/log/messages # The authpriv file has restricted access. authpriv.* /var/log/secure # Log all the mail messages in one place. mail.* -/var/log/maillog # Log cron stuff cron.* /var/log/cron By reviewing the contents of this file, it is fairly easy to identify which log files contain the information required, if not, at least, the possible location of syslog managed log files. Checking the application's configuration Not every application utilizes syslog; for those that don't, one of the easiest ways to find the application's log file is to read the application's configuration files. 
A quick and useful method for finding log file locations from configuration files is to use the grep command to search the file for the word log: $ grep log /etc/samba/smb.conf # files are rotated when they reach the size specified with "max log size". # log files split per-machine: log file = /var/log/samba/log.%m # maximum size of 50KB per log file, then rotate: max log size = 50 The grep command is a very useful command that can be used to search files or directories for specific strings or patterns. The simplest command can be seen in the preceding snippet where the grep command is used to search the /etc/samba/smb.conf file for any instance of the pattern "log". After reviewing the output of the preceding grep command, we can see that the configured log location for samba is /var/log/samba/log.%m. It is important to note that %m, in this example, is actually replaced with a "machine name" when creating the file. This is actually a variable within the samba configuration file. These variables are unique to each application but this method for making dynamic configuration values is a common practice. Other examples The following are examples of using the grep command to search for the word "log" in the Apache and MySQL configuration files: $ grep log /etc/httpd/conf/httpd.conf # ErrorLog: The location of the error log file. # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. ErrorLog "logs/error_log" $ grep log /etc/my.cnf # log_bin log-error=/var/log/mysqld.log In both instances, this method was able to identify the configuration parameter for the service's log file. With the previous three examples, it is easy to see how effective searching through configuration files can be. Using the find command The find command, is another useful method for finding log files. The find command is used to search a directory structure for specified files. A quick way of finding log files is to simply use the find command to search for any files that end in ".log": # find /opt/appxyz/ -type f -name "*.log" /opt/appxyz/logs/daily/7-1-15/alert.log /opt/appxyz/logs/daily/7-2-15/alert.log /opt/appxyz/logs/daily/7-3-15/alert.log /opt/appxyz/logs/daily/7-4-15/alert.log /opt/appxyz/logs/daily/7-5-15/alert.log The preceding is generally considered a last resort solution, and is mostly used when the previous methods do not produce results. When executing the find command, it is considered a best practice to be very specific about which directory to search. When being executed against very large directories, the performance of the server can be degraded. Configuration files As discussed previously, configuration files for an application or service can be excellent sources of information. While configuration files won't provide you with specific errors such as log files, they can provide you with critical information (for example, enabled/disabled features, output directories, and log file locations). Default system configuration directory In general, system, and service configuration files are located within the /etc/ directory on most Linux distributions. However, this does not mean that every configuration file is located within the /etc/ directory. In fact, it is not uncommon for applications to include a configuration directory within the application's home directory. So how do you know when to look in the /etc/ versus an application directory for configuration files? 
A general rule of thumb is, if the package is part of the RHEL distribution, it is safe to assume that the configuration is within the /etc/ directory. Anything else may or may not be present in the /etc/ directory. For these situations, you simply have to look for them. Finding configuration files In most scenarios, it is possible to find system configuration files within the /etc/ directory with a simple directory listing using the ls command: $ ls -la /etc/ | grep my -rw-r--r--. 1 root root 570 Nov 17 2014 my.cnf drwxr-xr-x. 2 root root 64 Jan 9 2015 my.cnf.d The preceding code snippet uses ls to perform a directory listing and redirects that output to grep in order to search the output for the string "my". We can see from the output that there is a my.cnf configuration file and a my.cnf.d configuration directory. The MySQL processes use these for its configuration. We were able to find these by assuming that anything related to MySQL would have the string "my" in it. Using the rpm command If the configuration files were deployed as part of a RPM package, it is possible to use the rpm command to identify configuration files. To do this, simply execute the rpm command with the –q (query) flag, and the –c (configfiles) flag, followed by the name of the package: $ rpm -q -c httpd /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.modules.d/00-base.conf /etc/httpd/conf.modules.d/00-dav.conf /etc/httpd/conf.modules.d/00-lua.conf /etc/httpd/conf.modules.d/00-mpm.conf /etc/httpd/conf.modules.d/00-proxy.conf /etc/httpd/conf.modules.d/00-systemd.conf /etc/httpd/conf.modules.d/01-cgi.conf /etc/httpd/conf/httpd.conf /etc/httpd/conf/magic /etc/logrotate.d/httpd /etc/sysconfig/htcacheclean /etc/sysconfig/httpd The rpm command is used to manage RPM packages and is a very useful command when troubleshooting. We will cover this command further as we explore commands for troubleshooting. Using the find command Much like finding log files, to find configuration files on a system, it is possible to utilize the find command. When searching for log files, the find command was used to search for all files where the name ends in ".log". In the following example, the find command is being used to search for all files where the name begins with "http". This find command should return at least a few results, which will provide configuration files related to the HTTPD (Apache) service: # find /etc -type f -name "http*" /etc/httpd/conf/httpd.conf /etc/sysconfig/httpd /etc/logrotate.d/httpd The preceding example searches the /etc directory; however, this could also be used to search any application home directory for user configuration files. Similar to searching for log files, using the find command to search for configuration files is generally considered a last resort step and should not be the first method used. The proc filesystem An extremely useful source of information is the proc filesystem. This is a special filesystem that is maintained by the Linux kernel. The proc filesystem can be used to find useful information about running processes, as well as other system information. 
For example, if we wanted to identify the filesystems supported by a system, we could simply read the /proc/filesystems file: $ cat /proc/filesystems nodev sysfs nodev rootfs nodev bdev nodev proc nodev cgroup nodev cpuset nodev tmpfs nodev devtmpfs nodev debugfs nodev securityfs nodev sockfs nodev pipefs nodev anon_inodefs nodev configfs nodev devpts nodev ramfs nodev hugetlbfs nodev autofs nodev pstore nodev mqueue nodev selinuxfs xfs nodev rpc_pipefs nodev nfsd This filesystem is extremely useful and contains quite a bit of information about a running system. The proc filesystem will be used throughout the troubleshooting steps. It is used in various ways while troubleshooting everything from specific processes, to read-only filesystems. Troubleshooting commands This section will cover frequently used troubleshooting commands that can be used to gather information from the system or a running service. While it is not feasible to cover every possible command, the commands used do cover fundamental troubleshooting steps for Linux systems. Command-line basics The troubleshooting steps used are primarily command-line based. While it is possible to perform many of these things from a graphical desktop environment, the more advanced items are command-line specific. As such, the reader has at least a basic understanding of Linux. To be more specific, we assumes that the reader has logged into a server via SSH and is familiar with basic commands such as cd, cp, mv, rm, and ls. For those who might not have much familiarity, I wanted to quickly cover some basic command-line usage that will be required. Command flags Many readers are probably familiar with the following command: $ ls -la total 588 drwx------. 5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c Most should recognize that this is the ls command and it is used to perform a directory listing. What might not be familiar is what exactly the –la part of the command is or does. To understand this better, let's look at the ls command by itself: $ ls app.c application app.py bomber.py index.html lookbusy-1.4 lookbusy-1.4.tar.gz lotsofiles The previous execution of the ls command looks very different from the previous. The reason for this is because the latter is the default output for ls. The –la portion of the command is what is commonly referred to as command flags or options. The command flags allow a user to change the default behavior of the command providing it with specific options. In fact, the –la flags are two separate options, –l and –a; they can even be specified separately: $ ls -l -a total 588 drwx------. 5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c We can see from the preceding snippet that the output of ls –la is exactly the same as ls –l –a. For common commands, such as the ls command, it does not matter if the flags are grouped or separated, they will be parsed in the same way. Will show both grouped and ungrouped. If grouping or ungrouping is performed for any specific reason it will be called out; otherwise, the grouping or ungrouping used for visual appeal and memorization. In addition to grouping and ungrouping, we will also show flags in their long format. In the previous examples, we showed the flag -a, this is known as a short flag. This same option can also be provided in the long format --all: $ ls -l --all total 588 drwx------. 
5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c The –a and the --all flags are essentially the same option; it can simply be represented in both short and long form. One important thing to remember is that not every short flag has a long form and vice versa. Each command has its own syntax, some commands only support the short form, others only support the long form, but many support both. In most cases, the long and short flags will both be documented within the commands man page. Piping command output Another common command-line practice that will be used several times is piping output. Specifically, examples such as the following: $ ls -l --all | grep app -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c -rwxrwxr-x. 1 vagrant vagrant 29390 May 18 00:47 application -rw-rw-r--. 1 vagrant vagrant 1198 Jun 10 17:03 app.py In the preceding example, the output of the ls -l --all command is piped to the grep command. By placing | or the pipe character between the two commands, the output of the first command is "piped" to the input for the second command. The example preceding the ls command will be executed; with that, the grep command will then search that output for any instance of the pattern "app". Piping output to grep will actually be used quite often, as it is a simple way to trim the output into a maintainable size. Many times the examples will also contain multiple levels of piping: $ ls -la | grep app | awk '{print $4,$9}' vagrant app.c vagrant application vagrant app.py In the preceding code the output of ls -la is piped to the input of grep; however, this time, the output of grep is also piped to the input of awk. While many commands can be piped to, not every command supports this. In general, commands that accept user input from files or command-line also accept piped input. As with the flags, a command's man page can be used to identify whether the command accepts piped input or not. Gathering general information When managing the same servers for a long time, you start to remember key information about those servers. Such as the amount of physical memory, the size and layout of their filesystems, and what processes should be running. However, when you are not familiar with the server in question it is always a good idea to gather this type of information. The commands in this section are commands that can be used to gather this type of general information. w – show who is logged on and what they are doing Early in my systems administration career, I had a mentor who used to tell me I always run w when I log into a server. This simple tip has actually been very useful over and over again in my career. The w command is simple; when executed it will output information such as system uptime, load average, and who is logged in: # w 04:07:37 up 14:26, 2 users, load average: 0.00, 0.01, 0.05 USER TTY LOGIN@ IDLE JCPU PCPU WHAT root tty1 Wed13 11:24m 0.13s 0.13s -bash root pts/0 20:47 1.00s 0.21s 0.19s -bash This information can be extremely useful when working with unfamiliar systems. The output can be useful even when you are familiar with the system. With this command, you can see: When this system was last rebooted:04:07:37 up 14:26:This information can be extremely useful; whether it is an alert for a service like Apache being down, or a user calling in because they were locked out of the system. 
When these issues are caused by an unexpected reboot, the reported issue does not often include this information. By running the w command, it is easy to see the time elapsed since the last reboot. The load average of the system:load average: 0.00, 0.01, 0.05:The load average is a very important measurement of system health. To summarize it, the load average is the average number of processes in a wait state over a period of time. The three numbers in the output of w represent different times.The numbers are ordered from left to right as 1 minute, 5 minutes, and 15 minutes. Who is logged in and what they are running: USER TTY LOGIN@ IDLE JCPU PCPU WHAT root tty1 Wed13 11:24m 0.13s 0.13s -bash The final piece of information that the w command provides is users that are currently logged in and what command they are executing. This is essentially the same output as the who command, which includes the user logged in, when they logged in, how long they have been idle, and what command their shell is running. The last item in that list is extremely important. Oftentimes, when working with big teams, it is common for more than one person to respond to an issue or ticket. By running the w command immediately after login, you will see what other users are doing, preventing you from overriding any troubleshooting or corrective steps the other person has taken. rpm – RPM package manager The rpm command is used to manage Red Hat package manager (RPM). With this command, you can install and remove RPM packages, as well as search for packages that are already installed. We saw earlier how the rpm command can be used to look for configuration files. The following are several additional ways we can use the rpm command to find critical information. Listing all packages installed Often when troubleshooting services, a critical step is identifying the version of the service and how it was installed. To list all RPM packages installed on a system, simply execute the rpm command with -q (query) and -a (all): # rpm -q -a kpatch-0.0-1.el7.noarch virt-what-1.13-5.el7.x86_64 filesystem-3.2-18.el7.x86_64 gssproxy-0.3.0-9.el7.x86_64 hicolor-icon-theme-0.12-7.el7.noarch The rpm command is a very diverse command with many flags. In the preceding example the -q and -a flags are used. The -q flag tells the rpm command that the action being taken is a query; you can think of this as being put into a "search mode". The -a or --all flag tells the rpm command to list all packages. A useful feature is to add the --last flag to the preceding command, as this causes the rpm command to list the packages by install time with the latest being first. Listing all files deployed by a package Another useful rpm function is to show all of the files deployed by a specific package: # rpm -q --filesbypkg kpatch-0.0-1.el7.noarch kpatch /usr/bin/kpatch kpatch /usr/lib/systemd/system/kpatch.service In the preceding example, we again use the -q flag to specify that we are running a query, along with the --filesbypkg flag. The --filesbypkg flag will cause the rpm command to list all of the files deployed by the specified package. This example can be very useful when trying to identify a service's configuration file location. Using package verification In this third example, we are going to use an extremely useful feature of rpm, verify. The rpm command has the ability to verify whether or not the files deployed by a specified package have been altered from their original contents. 
To do this, we will use the -V (verify) flag: # rpm -V httpd S.5....T. c /etc/httpd/conf/httpd.conf In the preceding example, we simply run the rpm command with the -V flag followed by a package name. As the -q flag is used for querying, the -V flag is for verifying. With this command, we can see that only the /etc/httpd/conf/httpd.conf file was listed; this is because rpm will only output files that have been altered. In the first column of this output, we can see which verification checks the file failed. While this column is a bit cryptic at first, the rpm man page has a useful table (as shown in the following list) explaining what each character means: S: This means that the file size differs M: This means that the mode differs (includes permissions and file type) 5: This means that the digest (formerly MD5 sum) differs D: This means indicates the device major/minor number mismatch L: This means indicates the readLink(2) path mismatch U: This means that the user ownership differs G: This means that the group ownership differs T: This means that mTime differs P: This means that caPabilities differs Using this list we can see that the httpd.conf's file size, MD5 sum, and mtime (Modify Time) are not what was deployed by httpd.rpm. This means that it is highly likely that the httpd.conf file has been modified after installation. While the rpm command might not seem like a troubleshooting command at first, the preceding examples show just how powerful of a troubleshooting tool it can be. With these examples, it is simple to identify important files and whether or not those files have been modified from the deployed version. Summary Overall we learned that log files, configuration files, and the /proc filesystem are key sources of information during troubleshooting. We also covered the basic use of many fundamental troubleshooting commands. You also might have noticed that quite a few commands are also used in day-to-day life for nontroubleshooting purposes. While these commands might not explain the issue themselves, they can help gather information about the issue, which leads to a more accurate and quick resolution. Familiarity with these fundamental commands is critical to your success during troubleshooting. Resources for Article: Further resources on this subject: Linux Shell Scripting[article] Embedded Linux and Its Elements[article] Installing Red Hat CloudForms on Red Hat OpenStack [article]

Mobile Phone Forensics: A First Step into Android Forensics

Packt
15 Oct 2015
7 min read
In this article by Michael Spreitzenbarth and Johann Uhrmann, authors of the book Mastering Python Forensics, we will see that even though the forensic analysis of standard computer hardware—such as hard disks—has developed into a stable discipline with a lot of reference work, the techniques used to analyze non-standard hardware or transient evidence are still debatable. Despite their increasing role in digital investigations, smartphones still count as non-standard hardware because of their heterogeneity. In all investigations, it is necessary to follow the basic forensic principles. The two main principles are as follows:

Greatest care must be taken to ensure that the evidence is manipulated or changed as little as possible.
The course of a digital investigation must be understandable and open to scrutiny. At best, the results of the investigation must be reproducible by independent investigators.

The first principle is a challenge in the smartphone setting, as most smartphones employ specific operating systems and hardware protection methods that prevent unrestricted access to the data on the system. The preservation of data from hard disks is, in most cases, a simple and well-known procedure. An investigator removes the hard disk from the computer or notebook, connects it to their workstation with the help of a write blocker (for example, Tableau's TK35), and starts analyzing it with well-known and certified software solutions. When this is compared to the smartphone world, it becomes clear that there is no such standard procedure. Nearly every smartphone builds its storage in its own way, and investigators need their own way to get the storage dump of each smartphone. While it is more difficult to get the data from a smartphone, one can also get much more data, given its diversity. Smartphones store, besides the usual data (for example, pictures and documents), data such as GPS coordinates or the position of the mobile cell that the smartphone was connected to before it was switched off. Considering the resulting opportunities, it turns out that it is worth the extra expense for an investigator. In the following sections, we will show you how to start such an analysis on an Android phone.

Circumventing the screen lock

It is always a good point to start the analysis by circumventing the screen lock. As you may know, it is very easy to get around the pattern lock on an Android smartphone. In this brief article, we will describe the following two ways to get around it:

On a rooted smartphone
With the help of the JTAG interface

Some background information

The pattern lock is entered by the user by joining the points on a 3×3 matrix in their chosen order. Since Android 2.3.3, this pattern involves a minimum of 4 points (on older Android versions, the minimum was 3 points) and each point can only be used once. The points of the matrix are registered in a numbered order starting from 0 in the upper-left corner and ending with 8 in the bottom-right corner. So the pattern of the lock screen in the following figure would be 0 – 3 – 6 – 7 – 8:

Android stores this pattern in a special file called gesture.key in the /data/system/ directory. As storing the pattern in plain text is not safe, Android only stores an unsalted SHA1-hashsum of this pattern (refer to the following code snippet).
Accordingly, our pattern is stored as the following hashsum: c8c0b24a15dc8bbfd411427973574695230458f0 The code snippet is as follows: private static byte[] patternToHash(List pattern) { if (pattern == null) { return null; } final int patternSize = pattern.size(); byte[] res = new byte[patternSize]; for (int i = 0; i &lt; patternSize; i++) { LockPatternView.Cell cell = pattern.get(i); res[i] = (byte) (cell.getRow() * 3 + cell.getColumn()); } try { MessageDigest md = MessageDigest.getInstance("SHA-1"); byte[] hash = md.digest(res); return hash; } catch (NoSuchAlgorithmException nsa) { return res; } } Due to the fact that the pattern has a finite and a very small number of possible combinations, the use of an unsalted hash isn't very safe. It is possible to generate a dictionary (rainbow table) with all possible hashes and compare the stored hash with this dictionary within a few seconds. To ensure some more safety, Android stores the gesture.key file in a restricted area of the filesystem, which cannot be accessed by a normal user. If you have to get around it, there are two ways available to choose from. On a rooted smartphone If you are dealing with a rooted smartphone and the USB debugging is enabled, cracking the pattern lock is quite easy. You just have to dump the /data/system/gesture.key file and compare the bytes of this file with your dictionary. These steps can be automated with the help of the following Python script: #!/usr/bin/python # import hashlib, sqlite3 from binascii import hexlify SQLITE_DB = "GestureRainbowTable.db" def crack(backup_dir): # dumping the system file containing the hash print "Dumping gesture.key ..." saltdb = subprocess.Popen(['adb', 'pull', '/data/system/gesture.key', backup_dir], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE) gesturehash = open(backup_dir + "/gesture.key", "rb").readline() lookuphash = hexlify(gesturehash).decode() print "HASH: \033[0;32m" + lookuphash + "\033[m" conn = sqlite3.connect(SQLITE_DB) cur = conn.cursor() cur.execute("SELECT pattern FROM RainbowTable WHERE hash = ?", (lookuphash,)) gesture = cur.fetchone()[0] return gesture if __name__ == '__main__': # check if device is connected and adb is running as root if subprocess.Popen(['adb', 'get-state'], stdout=subprocess.PIPE).communicate(0)[0].split("\n")[0] == "unknown": print "no device connected - exiting..." sys.exit(2) # starting to create the output directory and the crack file used for hashcat backup_dir = sys.argv[1] try: os.stat(backup_dir) except: os.mkdir(backup_dir) gesture = crack(backup_dir) print "Screenlock Gesture: \033[0;32m" + gesture + "\033[m"" With the help of the JTAG Interface If you are dealing with a stock or at least an unrooted smartphone, the whole process is a bit more complicated. First of all, you need special hardware such as a Riff-Box and an JIG adapter or some soldering skills. After you have gained a physical dump of the complete memory chip, the chase for the pattern lock can start. In our experiment, we used HTC Wildfire with a custom ROM and Android 2.3.3 flashed on it and the pattern lock that we just saw in this article. After looking at the dump, we noticed that every chunk has an exact size of 2048 bytes. Since we know that our gesture.key file has 20 bytes of data (SHA1-hashsum), we searched for this combination of bytes and noticed that there is one chunk starting with these 20 bytes, followed by 2012 bytes of zeroes and 16 random bytes (this seems to be some meta file system information). 
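To make that search concrete, here is a minimal, hedged Python sketch of such a scan. The dump filename is a placeholder, the 2048-byte chunk layout is simply the one observed above on this particular phone, and the brute-force step deliberately ignores Android's adjacency rules for patterns—iterating over every ordered selection of 4 to 9 distinct points yields a superset that still contains every valid pattern. False positives are possible, so any hit should be treated as a candidate only.

#!/usr/bin/python
import hashlib
from binascii import hexlify
from itertools import permutations

CHUNK_SIZE = 2048

def find_gesture_hash(dump_path):
    """ Scan a raw dump for a chunk matching the observed layout:
    20 byte SHA1 hash, 2012 zero bytes, 16 bytes of metadata. """
    with open(dump_path, "rb") as dump:
        while True:
            chunk = dump.read(CHUNK_SIZE)
            if len(chunk) < CHUNK_SIZE:
                return None
            candidate = chunk[:20]
            # skip completely empty chunks, keep ones with the zero run
            if candidate != b"\x00" * 20 and chunk[20:2032] == b"\x00" * 2012:
                return candidate

def crack_hash(digest):
    """ Brute force the unsalted SHA1 of all ordered point selections. """
    for length in range(4, 10):
        for pattern in permutations(range(9), length):
            if hashlib.sha1(bytearray(pattern)).digest() == digest:
                return pattern
    return None

if __name__ == "__main__":
    digest = find_gesture_hash("wildfire_dump.bin")  # placeholder path
    if digest:
        print("candidate hash: " + hexlify(digest).decode())
        pattern = crack_hash(digest)
        if pattern:
            print("pattern: " + " - ".join(str(p) for p in pattern))

A recovered pattern is only confirmed once it actually unlocks the device.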
We did a similar search on several other smartphone dumps (mainly older Android versions on the same phone) and it turned out that this method is the easiest way to get the pattern lock without being root. In other words, we do the following: Try to get a physical dump of the complete memory Search for a chunk with the size of 2048 bytes, starting with 20 bytes random, followed by 2012 bytes zeroes and 16 bytes random Extract the 20 bytes random at the beginning of the chunk Compare these 20 bytes with your dictionary Type in the corresponding pattern on the smartphone Summary In this article, we demonstrated how to get around the screen lock on an Android smartphone if the user has chosen to use a pattern to protect the smartphone against unauthorized access. Circumventing this protection can help the investigators during the analysis of a smartphone in question, as they are now able to navigate through the apps and smartphone UI in order to search for interesting sources or to prove the findings. This step is only the first part of an investigation, if you would like to get more details about further steps and investigations on the different systems, the book Mastering Python Forensics could be a great place to start. Resources for Article: Further resources on this subject: Evidence Acquisition and Analysis from iCloud [article] BackTrack Forensics [article] Introduction to Mobile Forensics [article]

The Internet of Things

Packt
15 Oct 2015
13 min read
In this article by Charles Hamilton, author of the book BeagleBone Black Cookbook, we will cover location-based recipes and how to hook up GPS. (For more resources related to this topic, see here.) Introduction The Internet of Things is a large basketful of things. In fact, it is so large that no one can see the edges of it yet. It is an evolving and quickly expanding repo of products, concepts, fledgling business ventures, prototypes, middleware, ersatz systems, and hardware. Some define IoT as connecting things that are not normally connected, thus making them a bit more useful than they were as unconnected devices. We will not show you how to turn off the lights in your house using the BBB, or how to auto raise the garage door when you turn onto your street while driving. There are a bunch of tutorials that do that already. Instead, we will take a look at some of the recipes that provide some fundamental elements to build IoT-centric prototypes or use cases. Location-based recipes – hooking up GPS A common question in the IoT realm is where is that darn thing? That Internet of Things thing? Being able to track and pinpoint the location of a device is one of the more typical features of many IoT use cases. So, we will first take a look at a recipe on how to for use everyone's favorite location tech: GPS. Then, we will explore one of the newer innovations spun out of Bluetooth 4.0, a way to capture more precise location-based data rather than GPS using beacons. The UART background In the galaxy of embedded systems, developers use dozens of different serial protocols. On the more common side, consumers are familiar with things such as USB and Ethernet. Then, there are protocols familiar to engineers, such as SPI and I2C, which we have already explored in this book. For this recipe, we will use yet another flavor of serial, UART, an asynchronous or clock-less protocol. This comes in handy in a variety of scenarios to connect IoT-centric devices. Universal asynchronous receiver/transmitter (UART) is a common circuit block used to manage serial data and hardware. As UART does not require a clock signal, it uses fewer wires and pins. In fact, UART uses only two serial wires: RX to receive packets and TX to transmit them. The framework for this recipe comes from AdaFruit's tutorial for the RPi. However, the differences between these two boards are nontrivial, so this recipe needs quite a few more ingredients than the RPi version. Getting ready You will need the following components for this recipe: GPS PCB: You can probably find cheaper versions, but we will use AdaFruit's well regarded and ubiquitous PCB (http://www.adafruit.com/product/746 at around USD $40.00). Antenna: Again, Adafruit's suggested SMA to the uFL adapter antenna is the simplest and cheap at USD $3.00 (https://www.adafruit.com/product/851). 5V power: Powering via the 5V DC in lieu of simply connecting via the mini USB is advisable. The GPS modules consume a good bit of power, a fact apparent to all of us, given how the GPS functionality is a well-known drain on our smartphones. Internet connectivity, either via WiFi or Ethernet Breadboard 4x jumper wires How to do it… For the GPS setup, the steps are as follows: Insert the PCB pins into the breadboard and wire the pins according to the following fritzing diagram: P9_11 (blue wire): This denotes RX on BBB and TX on GPS PCB. At first, it may seem confusing to not wire TX to TX, and so on. 
However, once you understand each pin's function, the logic is clear: a transmit (TX) pin pairs with a pin that can receive data (RX), and a receive pin pairs with a pin that transmits data.
P9_13 (green wire): This specifies TX on the BBB and RX on the GPS PCB.
P9_1: This indicates GND.
P9_3: This denotes 3.3V.

Carefully attach the antenna to the board's uFL connector. Now, power your BBB. Here's where it gets a bit tricky: when your BBB starts up, you will immediately see the Fix LED on the GPS board begin to flash quickly, about once per second. We will come back and check the integrity of the module's satellite connection in a later step.

In order to gain access to the UART pins on the BBB, we have to enable them with a Device Tree overlay. Until recently, this was a multistep process. However, now that the BeagleBone universal I/O package comes preloaded on current versions of the firmware, enabling the pins (in this case, UART4) is a snap.

Let's begin by logging in as root with the following command:
$ sudo -i

Then, run the relevant universal I/O command and check whether the overlay went to the right place, as shown in the following code:
# config-pin overlay BB-UART4
# cat /sys/devices/bone_capemgr.*/slots

Now, reboot your BBB, then check whether the device is present in the device list using the following command:
$ ls -l /dev/ttyO*
crw-rw---- 1 root tty 247, 0 Mar 1 20:46 /dev/ttyO0
crw-rw---T 1 root dialout 247, 4 Jul 13 02:12 /dev/ttyO4

Finally, check whether it is loading properly with the following command:
$ dmesg

This is how the output looks:
[ 188.335168] bone-capemgr bone_capemgr.9: part_number 'BB-UART4', version 'N/A'
[ 188.335235] bone-capemgr bone_capemgr.9: slot #7: generic override
[ 188.335250] bone-capemgr bone_capemgr.9: bone: Using override eeprom data at slot 7
[ 188.335266] bone-capemgr bone_capemgr.9: slot #7: 'Override Board Name,00A0,Override Manuf,BB-UART4'
[ 188.335355] bone-capemgr bone_capemgr.9: slot #7: Requesting part number/version based 'BB-UART4-00A0.dtbo
[ 188.335370] bone-capemgr bone_capemgr.9: slot #7: Requesting firmware 'BB-UART4-00A0.dtbo' for board-name 'Override Board Name', version '00A0'
[ 188.335400] bone-capemgr bone_capemgr.9: slot #7: dtbo 'BB-UART4-00A0.dtbo' loaded; converting to live tree
[ 188.335673] bone-capemgr bone_capemgr.9: slot #7: #2 overlays
[ 188.343353] 481a8000.serial: ttyO4 at MMIO 0x481a8000 (irq = 45) is a OMAP UART4
[ 188.343792] bone-capemgr bone_capemgr.9: slot #7: Applied #2 overlays.

Tips to get a GPS fix

Your best bet for getting the GPS module connected is to take it outdoors. However, as that is rarely a practical option while you are developing a project, putting it against, or even just outside, a window will often suffice. If it is cloudy, and if you don't have a reasonably clear view of the sky from your module's antenna, do not expect a quick connection. Be patient. When a fix is made, the flashing LED will cycle very slowly, at about 15-second intervals. Even if the GPS module does not have a fix, be aware that it will still send data. This can be confusing, because you may run some of the following commands, think that your connection is fine, and yet keep getting junk (blank) data. To reiterate: the flashing LED has to have slowed down to 15-second intervals to confirm that you have a fix.
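If you would rather not keep an eye on the LED, the serial port itself tells the same story. The following is a minimal sketch, not part of the original recipe, saved as, say, fix_check.py. It assumes the pyserial library is available (on the BBB's Debian image, that would typically be sudo apt-get install python-serial) and that UART4 has been enabled as described above. It reads the raw stream from /dev/ttyO4 and reports whether the module's $GPGGA sentences claim a satellite fix; the fix-quality field of GPGGA is 0 when there is no fix.

import serial  # provided by the pyserial package

# Open the UART that the GPS module is wired to; 9600 baud is the module's default rate.
port = serial.Serial("/dev/ttyO4", baudrate=9600, timeout=2)

try:
    while True:
        line = port.readline().strip()
        if not line.startswith("$GPGGA"):
            continue
        fields = line.split(",")
        # GPGGA field 6 is the fix quality: 0 = no fix, 1 = GPS fix, 2 = DGPS fix.
        quality = fields[6] if len(fields) > 6 else "0"
        if quality not in ("", "0"):
            print "Fix acquired:", line
        else:
            print "No fix yet; the module is alive but still searching..."
except KeyboardInterrupt:
    port.close()

Run it with sudo python fix_check.py (or add your user to the dialout group that owns /dev/ttyO4) and stop it with Ctrl + c. If you prefer to stay on the command line, the next steps do the same kind of eyeballing with standard tools.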
Although the output is not pretty, the following command is a useful first step in making sure that your devices are hooked up, because it will show the raw NMEA data coming out of the GPS:
$ cat /dev/ttyO4

NMEA is the GPS language protocol standard maintained by the National Marine Electronics Association. Verify that your wiring is correct and that the module is generating data properly (irrespective of a satellite fix) as follows:
$ sudo screen /dev/ttyO4 9600

The output should begin immediately and look something similar to this:
$GPGGA,163356.000,4044.0318,N,07400.1854,W,1,5,2.65,4.0,M,-34.2,M,,*67
$GPGSA,A,3,13,06,10,26,02,,,,,,,,2.82,2.65,0.95*04
$GPRMC,163356.000,A,4044.0318,N,07400.1854,W,2.05,68.70,031214,,,A*46
$GPVTG,68.70,T,,M,2.05,N,3.81,K,A*09
$GPGGA,163357.000,4044.0322,N,07400.1853,W,1,5,2.65,3.7,M,-34.2,M,,*68
$GPGSA,A,3,13,06,10,26,02,,,,,,,,

Now quit the program using one of the following methods: press Ctrl + a, type :quit (with the colon) into the highlighted box at the bottom, and press Enter; or press Ctrl + a, then k, then y.

Installing the GPS toolset

Perform the following steps to install the GPS toolset. The next set of ingredients in the recipe consists of installing and testing a common toolset for parsing GPS data on Linux. As always, before installing something new, it is good practice to update your repos with the following command:
$ sudo apt-get update

Install the tools, including gpsd, a service daemon that monitors your GPS receiver. The package exposes all the data on location, course, and velocity on TCP port 2947 of your BBB and efficiently parses the NMEA text pouring out of the GPS receiver, as shown in the following command:
$ sudo apt-get install gpsd gpsd-clients python-gps

In the preceding command, gpsd-clients installs some test clients, and python-gps installs the required Python library so that Python scripts can communicate with gpsd. After the installation, you may find it useful to review the package's well-written and informative manual. It provides not only the details of what you just installed, but also useful GPS-related context.

If the planets, or communication satellites, are aligned, you can run this command from the newly installed toolset and begin displaying the GPS data:
$ sudo gpsmon /dev/ttyO4

You should see a terminal GUI that looks similar to the following screenshot. To quit, press Ctrl + c, or type q and then press the return (Enter) key.

Now, we will test the other principal tool that you just installed with the following command:
$ sudo cgps -s

The output includes the current date and time in UTC, the latitude and longitude, and the approximate altitude.
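Both gpsmon and cgps are simply clients of the gpsd daemon, and nothing stops you from writing your own. The short sketch below is not part of the original recipe: it talks to gpsd directly over the TCP port 2947 mentioned above, and it assumes a gpsd release that speaks the JSON protocol (the ?WATCH command) and that the daemon is running locally.

import json
import socket

# Connect to the local gpsd daemon on its standard TCP port.
sock = socket.create_connection(("127.0.0.1", 2947))
# Ask gpsd to start streaming reports as JSON objects, one per line.
sock.sendall('?WATCH={"enable":true,"json":true}\n')

stream = sock.makefile()
try:
    while True:
        raw = stream.readline()
        if not raw:
            break
        try:
            report = json.loads(raw)
        except ValueError:
            continue  # skip anything that is not a complete JSON line
        # TPV (time-position-velocity) reports carry the actual fix data.
        # mode: 1 = no fix, 2 = 2D fix, 3 = 3D fix.
        if report.get("class") == "TPV" and "lat" in report:
            print "lat: %.6f  lon: %.6f  mode: %s" % (
                report["lat"], report["lon"], report.get("mode"))
finally:
    sock.close()

This is essentially what cgps does under the hood; the GPStest1.py script later in this recipe wraps the same idea in the python-gps library instead of raw sockets.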
Troubleshooting: Part 1

You may run into problems here. Commonly, on a first-time setup, cgps may time out and close by itself, leading you to believe that there is a problem with your setup. If so, the next steps can lead you back onto the path to GPS nirvana.

We will begin by stopping all running instances of gpsd, as shown in the following code:
$ sudo killall gpsd

Now, let's get rid of any sockets that the gpsd commands may have left behind with the following command:
$ sudo rm /var/run/gpsd.sock

There is a systemd bug that we will typically need to address. Here is how we fix it:

Open the systemd gpsd service using the following command:
$ sudo nano /lib/systemd/system/gpsd.service

Paste the following script into the editor window:
[Unit]
Description=GPS (Global Positioning System) Daemon
Requires=gpsd.socket
[Service]
ExecStart=/usr/sbin/gpsd -n -N /dev/ttyO4
[Install]
Also=gpsd.socket

Then, restart the systemd service as follows:
$ sudo service gpsd start

You should now be able to run either of the following commands again:
$ sudo gpsmon /dev/ttyO4

Alternatively, you can run the following command:
$ sudo cgps -s

Troubleshooting: Part 2

Sometimes, the preceding fixes don't fix it. Here are several more suggestions for troubleshooting purposes.

Set up a control socket for GPS with the following command:
$ sudo gpsd -N -D3 -F /var/run/gpsd.sock

The command-line flags, or options, work as follows:
-N: This tells gpsd to post the GPS data immediately. Although this is useful for testing purposes, it also increases power consumption, so leave it off if your use case is battery-powered.
-D3: This raises the debug level so that gpsd reports more about what it is doing.
-F: This creates a control socket for device addition and removal. The option requires a valid pathname on your local filesystem, which is why our command is appended with /var/run/gpsd.sock.

We may also need to install a package that lets us examine any port conflict that could be occurring, as shown in the following code:
$ sudo apt-get install lsof

This utility opens and displays the system files, including disk files, named pipes, network sockets, and devices opened by all processes. There are multiple uses for the tool. However, we only want to determine whether the GPS module is speaking correctly to port 2947 and whether there are any conflicts. So, we will run the following command:
$ sudo lsof -i :2947

This is how the output should look:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd 1 root 24u IPv4 6907 0t0 TCP localhost:gpsd (LISTEN)
gpsd 5960 nobody 4u IPv4 6907 0t0 TCP localhost:gpsd (LISTEN)

You may also want to check whether any other instances of gpsd are running and then kill them with the following code:
$ ps aux | grep gps
$ sudo killall gpsd
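One more sanity check you can apply when the stream itself looks suspect: every NMEA sentence ends with an asterisk and a two-digit hexadecimal checksum (the *67, *04, and so on in the output captured earlier), computed by XOR-ing every character between the leading $ and the *. The following sketch is not part of the original recipe; it simply verifies a sentence you paste into it, which can help you decide whether you are looking at a wiring or baud-rate problem rather than a gpsd problem.

def nmea_checksum_ok(sentence):
    """Return True if the NMEA sentence's checksum matches its payload."""
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    payload, _, given = sentence[1:].partition("*")
    calculated = 0
    for char in payload:
        calculated ^= ord(char)  # NMEA checksums are a simple XOR of the payload bytes
    return calculated == int(given[:2], 16)

# Verify one of the $GPGGA sentences captured earlier in this recipe.
print nmea_checksum_ok("$GPGGA,163356.000,4044.0318,N,07400.1854,W,1,5,2.65,4.0,M,-34.2,M,,*67")

If the sentence was transcribed intact, this should report True; a stream full of failing checksums usually points to electrical noise or a mismatched baud rate rather than a software issue.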
For a final bit of cooking with the GPS board, we want to run a Python script and display the data in tidy, parsed output. The code was originally written for the RPi, but it is usable on the BBB as well. Go get it using the following command:
$ git clone https://github.com/HudsonWerks/gps-tests.git

Now, browse to the new directory that we just created and take a look at the file that we will use with the following command:
$ cd gps-tests
$ sudo nano GPStest1.py

Alternatively, open a new Python file and paste the code into it yourself:
$ sudo nano GPStest1.py

The script requires a number of Python libraries, as shown in the following code:
import os
from gps import *
from time import *
import time
import threading

Keep in mind that getting a fix, and then obtaining good GPS data, can take several moments before the system settles into a comfortable flow, as shown in the following code:
#It may take a second or two to get good data
#print gpsd.fix.latitude,', ',gpsd.fix.longitude,' Time: ',gpsd.utc

If you find the output overwhelming, you can always modify the print commands to simplify the display as follows:
print
print ' GPS reading'
print '----------------------------------------'
print 'latitude ' , gpsd.fix.latitude
print 'longitude ' , gpsd.fix.longitude
print 'time utc ' , gpsd.utc,' + ', gpsd.fix.time
print 'altitude (m)' , gpsd.fix.altitude
print 'eps ' , gpsd.fix.eps
print 'epx ' , gpsd.fix.epx
print 'epv ' , gpsd.fix.epv
print 'ept ' , gpsd.fix.ept
print 'speed (m/s) ' , gpsd.fix.speed
print 'climb ' , gpsd.fix.climb
print 'track ' , gpsd.fix.track
print 'mode ' , gpsd.fix.mode
print
print 'sats ' , gpsd.satellites
time.sleep(5)

Note that these are excerpts; the full script also sets up the gpsd session object that they reference. Now, close the script and run the following command:
$ python GPStest1.py

In a few seconds, nicely formatted GPS data should start appearing in your terminal window.

There's more...

Sparkfun's tutorial on GPS is definitely worth a read: https://learn.sparkfun.com/tutorials/gps-basics/all
For further gpsd troubleshooting, refer to http://www.catb.org/gpsd/troubleshooting.html

Summary

In this article, we discussed in detail how to hook up GPS, one of the major features of IoT.

Resources for Article:

Further resources on this subject:
Learning BeagleBone Python Programming [Article]
Learning BeagleBone [Article]
Protecting GPG Keys in BeagleBone [Article]