
Synchronizing Tests

Packt
04 Nov 2015
9 min read
In this article by Unmesh Gundecha, author of Selenium Testing Tools Cookbook, Second Edition, you will cover the following topics:

- Synchronizing a test with an implicit wait
- Synchronizing a test with an explicit wait
- Synchronizing a test with custom-expected conditions

While building automated scripts for a complex web application using Selenium WebDriver, we need to ensure that the test flow is maintained for reliable test automation. When tests are run, the application may not always respond with the same speed. For example, it might take a few seconds for a progress bar to reach 100 percent, a status message to appear, a button to become enabled, or a window or pop-up message to open. You can handle these anticipated timing problems by synchronizing your test to ensure that Selenium WebDriver waits until your application is ready before performing the next step. There are several options that you can use to synchronize your tests. In this article, we will see various features of Selenium WebDriver for implementing synchronization in tests.

Synchronizing a test with an implicit wait

Selenium WebDriver provides an implicit wait for synchronizing tests. When an implicit wait is set and WebDriver cannot find an element in the Document Object Model (DOM), it keeps retrying the search until the configured time elapses; if the element is still not found, it throws a NoSuchElementException. In other words, an implicit wait polls the DOM for a certain amount of time when trying to find an element or elements that are not immediately available. The default setting is 0. Once set, the implicit wait applies for the life of the WebDriver instance. In this recipe, we will briefly explore the use of an implicit wait; however, it is recommended to avoid or minimize its use.

How to do it...

Let's create a test on a demo AJAX-enabled application as follows:

```java
@Test
public void testWithImplicitWait() {
    // Go to the demo AJAX application
    WebDriver driver = new FirefoxDriver();
    driver.get("http://dl.dropbox.com/u/55228056/AjaxDemo.html");

    // Set the implicit wait timeout to 10 seconds
    driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

    try {
        // Get the link for Page 4 and click on it
        WebElement page4button = driver.findElement(By.linkText("Page 4"));
        page4button.click();

        // Get an element with the id page4 and verify its text
        WebElement message = driver.findElement(By.id("page4"));
        assertTrue(message.getText().contains("Nunc nibh tortor"));
    } catch (NoSuchElementException e) {
        fail("Element not found!!");
        e.printStackTrace();
    } finally {
        driver.quit();
    }
}
```

How it works...

Selenium WebDriver provides the Timeouts interface for configuring the implicit wait. The Timeouts interface provides an implicitlyWait() method, which accepts the time the driver should wait when searching for an element. In this example, the test will wait up to 10 seconds for an element to appear in the DOM:

```java
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
```

Until the end of the test, or until the implicit wait is set back to 0, every call to the findElement() method will wait up to 10 seconds for the element to appear.
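Because the setting lives on the WebDriver instance, a common pattern (not shown in the recipe) is to switch the implicit wait off before code that deliberately checks for an absent element, and restore it afterwards; the progress-bar locator below is only a placeholder:

```java
// temporarily disable the implicit wait before checking that an element is absent
driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);
boolean stillLoading = !driver.findElements(By.id("progress-bar")).isEmpty();

// restore the 10-second timeout used in the recipe
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
```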
Using an implicit wait may slow down tests when an application responds normally, because WebDriver waits for each element to appear in the DOM, which increases the overall execution time. Minimize or avoid using an implicit wait. Use an explicit wait instead, which provides more control than an implicit wait.

See also

- Synchronizing a test with an explicit wait
- Synchronizing a test with custom-expected conditions

Synchronizing a test with an explicit wait

Selenium WebDriver provides an explicit wait for synchronizing tests, which is a better approach than an implicit wait. Unlike an implicit wait, you can use predefined or custom conditions to wait on before proceeding further in the code. Selenium WebDriver provides the WebDriverWait and ExpectedConditions classes for implementing an explicit wait.

The ExpectedConditions class provides a set of predefined conditions to wait for before proceeding further in the code. The following table shows some common conditions that we frequently come across when automating web browsers, supported by the ExpectedConditions class:

| Predefined condition | Selenium method |
| --- | --- |
| An element is visible and enabled | elementToBeClickable(By locator) |
| An element is selected | elementToBeSelected(WebElement element) |
| Presence of an element | presenceOfElementLocated(By locator) |
| Specific text present in an element | textToBePresentInElement(By locator, java.lang.String text) |
| Element value | textToBePresentInElementValue(By locator, java.lang.String text) |
| Title | titleContains(java.lang.String title) |

For more conditions, visit http://seleniumhq.github.io/selenium/docs/api/java/index.html.

In this recipe, we will explore some of these conditions with the WebDriverWait class.

How to do it...

Let's implement a test that uses the ExpectedConditions.titleContains() method to implement an explicit wait as follows:

```java
@Test
public void testExplicitWaitTitleContains() {
    // Go to the Google home page
    WebDriver driver = new FirefoxDriver();
    driver.get("http://www.google.com");

    // Enter a term to search and submit
    WebElement query = driver.findElement(By.name("q"));
    query.sendKeys("selenium");
    query.click();

    // Create a wait using WebDriverWait.
    // This will wait up to 10 seconds for the title to be updated with the search term.
    // If the title is updated within the time limit, the test moves to the next step
    // instead of waiting the full 10 seconds.
    WebDriverWait wait = new WebDriverWait(driver, 10);
    wait.until(ExpectedConditions.titleContains("selenium"));

    // Verify the title
    assertTrue(driver.getTitle().toLowerCase().startsWith("selenium"));
    driver.quit();
}
```

How it works...

We can define an explicit wait for a set of common conditions using the ExpectedConditions class. First, we need to create an instance of the WebDriverWait class by passing the driver instance and the timeout for the wait as follows:

```java
WebDriverWait wait = new WebDriverWait(driver, 10);
```

Next, the ExpectedCondition is passed to the wait.until() method as follows:

```java
wait.until(ExpectedConditions.titleContains("selenium"));
```

The WebDriverWait object will evaluate the ExpectedCondition every 500 milliseconds until it returns successfully.
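If the default 500-millisecond polling interval does not suit a particular page, WebDriverWait also accepts a third constructor argument that sets the interval in milliseconds; the values below are only illustrative:

```java
// same wait as before, but polling once per second instead of every 500 ms
WebDriverWait wait = new WebDriverWait(driver, 10, 1000);
wait.until(ExpectedConditions.titleContains("selenium"));
```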
See also

- Synchronizing a test with an implicit wait
- Synchronizing a test with custom-expected conditions

Synchronizing a test with custom-expected conditions

With the explicit wait mechanism, we can also build custom-expected conditions alongside the common conditions provided by the ExpectedConditions class. This comes in handy when a wait cannot be handled with a common condition supported by the ExpectedConditions class. In this recipe, we will explore how to create a custom condition.

How to do it...

We will create a test that waits until an element appears on the page, using the ExpectedCondition class as follows:

```java
@Test
public void testExplicitWait() {
    WebDriver driver = new FirefoxDriver();
    driver.get("http://dl.dropbox.com/u/55228056/AjaxDemo.html");

    try {
        WebElement page4button = driver.findElement(By.linkText("Page 4"));
        page4button.click();

        WebElement message = new WebDriverWait(driver, 5)
            .until(new ExpectedCondition<WebElement>() {
                public WebElement apply(WebDriver d) {
                    return d.findElement(By.id("page4"));
                }});

        assertTrue(message.getText().contains("Nunc nibh tortor"));
    } catch (NoSuchElementException e) {
        fail("Element not found!!");
        e.printStackTrace();
    } finally {
        driver.quit();
    }
}
```

How it works...

Selenium WebDriver lets us implement the ExpectedCondition interface, together with the WebDriverWait class, to create a custom wait condition as needed by a test. In this example, we created a custom condition that returns a WebElement object once the inner findElement() method locates the element within the specified timeout:

```java
WebElement message = new WebDriverWait(driver, 5)
    .until(new ExpectedCondition<WebElement>() {
        @Override
        public WebElement apply(WebDriver d) {
            return d.findElement(By.id("page4"));
        }});
```

There's more...

A custom wait can be created in various ways. In the following section, we will explore some common examples of implementing a custom wait.

Waiting for an element's attribute value to update

Based on the events and actions performed, the value of an element's attribute might change at runtime. For example, a disabled textbox gets enabled based on the user's rights. A custom wait can be created on an attribute value of the element. In the following example, the ExpectedCondition waits for a Boolean return value based on the attribute value of an element:

```java
new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() {
    public Boolean apply(WebDriver d) {
        return d.findElement(By.id("userName"))
                .getAttribute("readonly").contains("true");
    }});
```

Waiting for an element's visibility

Developers hide or display elements based on the sequence of actions, user rights, and so on. A specific element might exist in the DOM but be hidden from the user; when the user performs a certain action, it appears on the page. A custom wait condition can be created based on the element's visibility as follows:

```java
new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() {
    public Boolean apply(WebDriver d) {
        return d.findElement(By.id("page4")).isDisplayed();
    }});
```
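For this particular visibility check a hand-rolled condition is not strictly required; assuming the predefined helpers fit your case, ExpectedConditions offers an equivalent that also returns the element once it becomes visible:

```java
// shorter alternative to the custom visibility condition above
WebElement page4 = new WebDriverWait(driver, 10)
        .until(ExpectedConditions.visibilityOfElementLocated(By.id("page4")));
```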
Waiting for DOM events

The web application may be using a JavaScript framework such as jQuery for AJAX and content manipulation. For example, jQuery might be used to load a big JSON file from the server asynchronously. While jQuery is reading and processing this file, a test can check its status using the active attribute. A custom wait can be implemented by executing JavaScript code and checking the return value as follows:

```java
new WebDriverWait(driver, 10).until(new ExpectedCondition<Boolean>() {
    public Boolean apply(WebDriver d) {
        JavascriptExecutor js = (JavascriptExecutor) d;
        return (Boolean) js.executeScript("return jQuery.active == 0");
    }});
```

See also

- Synchronizing a test with an implicit wait
- Synchronizing a test with an explicit wait

Summary

In this article, you learned how Selenium WebDriver helps in maintaining reliable automated tests. You learned how to synchronize a test using the implicit and explicit wait methods, and how to synchronize a test with custom-expected conditions.

Further resources on this subject: Javascript Execution with Selenium, Learning Selenium Testing Tools with Python, Cross-Browser Tests Using Selenium WebDriver.

RESTServices with Finagle and Finch

Packt
03 Nov 2015
9 min read
In this article by Jos Dirksen, the author of RESTful Web Services with Scala, we'll only be talking about Finch. Note, though, that most of the concepts provided by Finch are based on the underlying Finagle ideas. Finch just provides a very nice REST-based set of functions to make working with Finagle easy and intuitive.

Finagle and Finch are two different frameworks that work closely together. Finagle is an RPC framework, created by Twitter, which you can use to easily create different types of services. On the website (https://github.com/twitter/finagle), the team behind Finagle explains it like this:

Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle's code is protocol agnostic, simplifying the implementation of new protocols.

So, while Finagle provides the plumbing required to create highly scalable services, it doesn't provide direct support for specific protocols. This is where Finch comes in. Finch (https://github.com/finagle/finch) provides an HTTP REST layer on top of Finagle. On their website, you can find a nice quote that summarizes what Finch aims to do:

Finch is a thin layer of purely functional basic blocks atop of Finagle for building composable REST APIs. Its mission is to provide the developers simple and robust REST API primitives being as close as possible to the bare metal Finagle API.

Your first Finagle and Finch REST service

Let's start by building a minimal Finch REST service. The first thing we need to do is make sure we have the correct dependencies. To use Finch, all you have to do is add the following dependency to your SBT file:

```scala
"com.github.finagle" %% "finch-core" % "0.7.0"
```

With this dependency added, we can start coding our very first Finch service. The next code fragment shows a minimal Finch service, which just responds with a Hello, Finch! message:

```scala
package org.restwithscala.chapter2.gettingstarted

import io.finch.route._
import com.twitter.finagle.Httpx

object HelloFinch extends App {
  Httpx.serve(":8080", (Get / "hello" /> "Hello, Finch!").toService)
  println("Press <enter> to exit.")
  Console.in.read.toChar
}
```

When this service receives a GET request on the URL path hello, it will respond with a Hello, Finch! message. Finch does this by creating a service (using the toService function) from a route (more on what a route is will be explained in the next section) and using the Httpx.serve function to host the created service. When you run this example, you'll see output as follows:

```
[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.gettingstarted.HelloFinch
Jun 26, 2015 9:38:00 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.
```

At this point, we have an HTTP server running on port 8080. When we make a call to http://localhost:8080/hello, this server will respond with the Hello, Finch! message. To test this service, you can make an HTTP request in Postman. If you don't want to use a GUI to make the requests, you can also use the following curl command: curl 'http://localhost:8080/hello'
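For reference, a complete SBT build definition for this example could look roughly like the following; the project name and Scala version are my assumptions and are not taken from the article:

```scala
// build.sbt -- minimal sketch for the HelloFinch example above
name := "hello-finch"

scalaVersion := "2.11.6"

libraryDependencies += "com.github.finagle" %% "finch-core" % "0.7.0"
```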
HTTP verb and URL matching

An important part of every REST framework is the ability to easily match HTTP verbs and the various path segments of the URL. In this section, we'll look at the tools Finch provides us with. Let's look at the code required to do this (the full source code for this example can be found at https://github.com/josdirksen/rest-with-scala/blob/master/chapter-02/src/main/scala/org/restwithscala/chapter2/steps/FinchStep1.scala):

```scala
package org.restwithscala.chapter2.steps

import com.twitter.finagle.Httpx
import io.finch.request._
import io.finch.route._
import io.finch.{Endpoint => _}

object FinchStep1 extends App {

  // handle a single post using a RequestReader
  val taskCreateAPI = Post / "tasks" /> (
    for {
      bodyContent <- body
    } yield s"created task with: $bodyContent")

  // Use matchers and extractors to determine which route to call
  // For more examples see the source file.
  val taskAPI = Get / "tasks" /> "Get a list of all the tasks" |
    Get / "tasks" / long /> (id => s"Get a single task with id: $id") |
    Put / "tasks" / long /> (id => s"Update an existing task with id $id to ") |
    Delete / "tasks" / long /> (id => s"Delete an existing task with $id")

  // simple server that combines the two routes and creates a service
  val server = Httpx.serve(":8080", (taskAPI :+: taskCreateAPI).toService)

  println("Press <enter> to exit.")
  Console.in.read.toChar

  server.close()
}
```

In this code fragment, we created a number of Router instances that process the requests we send from Postman. Let's start by looking at one of the routes of the taskAPI router: Get / "tasks" / long /> (id => s"Get a single task with id: $id"). The following list explains the various parts of the route:

- Get: While writing routers, usually the first thing you do is determine which HTTP verb you want to match. In this case, this route will only match the GET verb. Besides the Get matcher, Finch also provides the following matchers: Post, Patch, Delete, Head, Options, Put, Connect, and Trace.
- "tasks": The next part of the route is a matcher that matches a URL path segment. In this case, we match the following URL: http://localhost:8080/tasks. Finch uses an implicit conversion to convert this String object to a Finch Matcher object. Finch also has two wildcard matchers: * and **. The * matcher allows any value for a single path segment, and the ** matcher allows any value for multiple path segments.
- long: The next part in the route is called an extractor. With an extractor, you turn part of the URL into a value, which you can use to create the response (for example, retrieve an object from the database using the extracted ID). The long extractor, as the name implies, converts the matching path segment to a Long value. Finch also provides int, string, and Boolean extractors.
- long => B: The last part of the route is used to create the response message. Finch provides different ways of creating the response, which we'll show in the other parts of this article. In this case, we need to provide Finch with a function that transforms the Long value we extracted and returns a value Finch can convert to a response (more on this later). In this example, we just return a String.
If you've looked closely at the source code, you've probably noticed that Finch uses custom operators to combine the various parts of a route. Let's look a bit closer at those. With Finch, we get the following operators (also called combinators in Finch terms):

- / or andThen: With this combinator, you sequentially combine various matchers and extractors together. Whenever the first part matches, the next one is called. For instance: Get / "path" / long.
- | or orElse: This combinator allows you to combine two routers (or parts thereof) together as long as they are of the same type. So, we could use (Get | Post) to create a matcher that matches both the GET and POST HTTP verbs. In the code sample, we've also used this to combine all the routes that return a simple String into the taskAPI router.
- /> or map: With this combinator, we pass the request and any extracted values from the path to a function for further processing. The result of the function is returned as the HTTP response. As you'll see in the rest of the article, there are different ways of processing the HTTP request and creating a response.
- :+:: The final combinator allows you to combine two routers of different types. In the example, we have two routers: taskAPI, which returns a simple String, and taskCreateAPI, which uses a RequestReader (through the body function) to create the response. We can't combine these with | since the results are created using two different approaches, so we use the :+: combinator.

We just return simple Strings whenever we get a request. In the next section, we'll look at how you can use RequestReader to convert the incoming HTTP requests to case classes and use those to create an HTTP response. When you run this service, you'll see output as follows:

```
[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.steps.FinchStep1
Jun 26, 2015 10:19:11 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.
```

Once the server is started, you can once again use Postman (or any other REST client) to make requests to this service (example requests can be found at https://github.com/josdirksen/rest-with-scala/tree/master/common). And once again, you don't have to use a GUI to make the requests. You can test the service with curl as follows:

```
# Create task
curl 'http://localhost:8080/tasks' -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Update task
curl 'http://localhost:8080/tasks/1' -X PUT -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Get all tasks
curl 'http://localhost:8080/tasks'

# Get single task
curl 'http://localhost:8080/tasks/1'
```

Summary

This article only showed a couple of the features Finch provides, but it should give you a good head start toward working with Finch.

Further resources on this subject: RESTful Java Web Services Design, Creating a RESTful API, Scalability, Limitations, and Effects.

Big Data Analytics

Packt
03 Nov 2015
10 min read
In this article, Dmitry Anoshin, the author of Learning Hunk, will talk about Hadoop: how to extract Hunk to a VM, set up a connection with Hadoop, and create dashboards.

We are living in a century of information technology. There are a lot of electronic devices around us that generate a lot of data. For example, you can surf the Internet, visit a couple of news portals, order new Airmax on a web store, write a couple of messages to your friend, and chat on Facebook. Every action produces data; multiply these actions by the number of people who have access to the Internet, or who just use a mobile phone, and we get really big data. Of course, you have a question: how big is it? It now starts at terabytes or even petabytes. The volume is not the only issue; we also struggle with the variety of data. As a result, it is not enough to analyze only structured data. We should dive deep into unstructured data, such as machine data generated by various machines. World-famous enterprises try to collect this extremely big data in order to monetize it and find business insights. Big data offers us new opportunities; for example, we can enrich customer data via social networks using the APIs of Facebook or Twitter. We can build customer profiles and try to predict customer wishes in order to sell our products or improve customer experience. It is easy to say, but difficult to do. However, organizations try to overcome these challenges and use big data stores, such as Hadoop.

The big problem

Hadoop is a distributed file system and a framework for computation. It is relatively easy to get data into Hadoop; there are plenty of tools to load data in different formats. However, it is extremely difficult to get value out of the data you put into Hadoop. Let's look at the path from data to value. First, we have to start with the collection of data. Then, we spend a lot of time preparing it, making sure this data is available for analysis, and being able to ask questions of it.

Unfortunately, the questions that you asked may not be good, or the answers that you got may not be clear, and you have to repeat this cycle over again. Maybe you have to transform and format your data. In other words, it is a long and challenging process. What you actually want is something to collect data into and spend some time preparing it, after which you can ask questions and get answers from the data repeatedly. You can then spend a lot of time asking multiple questions and iterating on those questions to refine the answers that you are looking for.

The elegant solution

What if we could take Splunk and put it on top of all the data stored in Hadoop? That is exactly what the Splunk company did, and it is how the new product got the name Hunk.

Let's discuss some of the solution goals the Hunk inventors were thinking about when they were planning Hunk:

- Splunk can take data from Hadoop via the Splunk Hadoop Connect app. However, it is a bad idea to copy massive data from Hadoop to Splunk. It is much better to process data in place, because Hadoop provides both storage and computation, so why not take advantage of both?
- Splunk has the extremely powerful Splunk Processing Language (SPL), with a wide range of analytic functions, which is one of Splunk's advantages. This is why it is a good idea to keep SPL in the new product.
- Splunk has true schema on the fly. The data that we store in Hadoop changes constantly, so Hunk should be able to build the schema on the fly, independent of the format of the data.
- It's a very good idea to have the ability to preview results. As you know, while a search is running, you are able to get incremental results. This can dramatically reduce waiting time; for example, we don't need to wait until the MapReduce job is finished. We can look at the incremental result and, in the case of a wrong result, restart the search query.
- The deployment of Hadoop is not easy; Splunk tries to make the installation and configuration of Hunk easy for us.

Getting up Hunk

In order to start exploring the Hadoop data, we have to install Hunk on top of our Hadoop cluster. Hunk is easy to install and configure. Let's learn how to deploy Hunk version 6.2.1 on top of an existing CDH cluster. It's assumed that your VM is up and running.

Extracting Hunk to VM

To extract Hunk to the VM, perform the following steps:

1. Open the console application.
2. Run ls -la to see the list of files in your home directory:

```
[cloudera@quickstart ~]$ cd ~
[cloudera@quickstart ~]$ ls -la | grep hunk
-rw-r--r--   1 root     root     113913609 Mar 23 04:09 hunk-6.2.1-249325-Linux-x86_64.tgz
```

3. Unpack the archive:

```
cd /opt
sudo tar xvzf /home/cloudera/hunk-6.2.1-249325-Linux-x86_64.tgz -C /opt
```

Setting up Hunk variables and configuration files

Perform the following steps to set up the Hunk variables and configuration files:

1. It's time to set the SPLUNK_HOME environment variable. This variable is already added to the profile; it is mentioned here only to stress that it must be set:

```
export SPLUNK_HOME=/opt/hunk
```

2. Use the default splunk-launch.conf. This is the basic properties file used by the Hunk service. We don't have to change anything special here, so let's use the default settings:

```
sudo cp /opt/hunk/etc/splunk-launch.conf.default /opt/hunk/etc/splunk-launch.conf
```

Running Hunk for the first time

Perform the following steps to run Hunk:

1. Run Hunk:

```
sudo /opt/hunk/bin/splunk start --accept-license
```

2. Here is the sample output from the first run:

```
This appears to be your first time running this version of Splunk.
Copying '/opt/hunk/etc/openldap/ldap.conf.default' to '/opt/hunk/etc/openldap/ldap.conf'.
Generating RSA private key, 1024 bit long modulus
Some output lines were deleted to reduce amount of log text
Waiting for web server at http://127.0.0.1:8000 to be available.... Done
If you get stuck, we're here to help. Look for answers here: http://docs.splunk.com
The Splunk web interface is at http://vm-cluster-node1.localdomain:8000
```
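If you want to confirm that the daemon came up cleanly before opening the web interface, the same CLI can report its state; this check is my addition rather than a step from the article:

```
# optional sanity check, not part of the original walkthrough
sudo /opt/hunk/bin/splunk status
```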
Setting up a data provider and virtual index for the CDR data

We need to accomplish two tasks: provide a technical connector to the underlying data storage, and create a virtual index for the data on this storage. Log in to http://quickstart.cloudera:8000. The system will ask you to change the default admin user password; in this example it is set to admin.

Setting up a connection to Hadoop

Right now, we are ready to set up the integration between Hadoop and Hunk. First, we need to specify the way Hunk connects to the current Hadoop installation. We are using the most recent way: YARN with MR2. Then, we have to point virtual indexes to the data stored in Hadoop. To do this, perform the following steps:

1. Click on Explore Data.
2. Click on Create a provider.
3. Fill in the form to create the data provider:

| Property name | Value |
| --- | --- |
| Name | hadoop-hunk-provider |
| Java home | /usr/java/jdk1.7.0_67-cloudera |
| Hadoop home | /usr/lib/hadoop |
| Hadoop version | Hadoop 2.x (YARN) |
| Filesystem | hdfs://quickstart.cloudera:8020 |
| Resource Manager Address | quickstart.cloudera:8032 |
| Resource Scheduler Address | quickstart.cloudera:8030 |
| HDFS Working Directory | /user/hunk |
| Job Queue | default |

You don't have to modify any other properties. The HDFS working directory has been created for you in advance; you can create it using the following command:

```
sudo -u hdfs hadoop fs -mkdir -p /user/hunk
```

Let's discuss briefly what we have done:

- We told Hunk where the Hadoop home and Java are. Hunk uses Hadoop streaming internally, so it needs to know how to call Java and Hadoop streaming. You can inspect the jobs submitted by Hunk (discussed later) and see lines such as: /opt/hunk/bin/jars/sudobash /usr/bin/hadoop jar "/opt/hunk/bin/jars/SplunkMR-s6.0-hy2.0.jar" "com.splunk.mr.SplunkMR"; this MapReduce JAR is submitted by Hunk.
- We also need to tell Hunk where the YARN Resource Manager and Scheduler are located. These services allow us to ask for cluster resources and run jobs.
- The job queue can be useful in a production environment, where you could have several queues for cluster resource distribution. We set the queue name to default, since we are not discussing cluster utilization and load balancing here.
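Before moving on to the virtual index, it can also help to confirm from the shell that the working directory and the sample CDR data are where Hunk expects them. This quick check is my addition; the second path is the one used for the virtual index later in this article:

```
sudo -u hdfs hadoop fs -ls /user/hunk
sudo -u hdfs hadoop fs -ls /masterdata/stream/milano_cdr
```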
Setting up a virtual index for the data stored in Hadoop

Now it's time to create the virtual index. As example data, we are going to add the dataset with the Avro files to the virtual index.

1. Click on Explore Data and then click on Create a virtual index.
2. You'll get a message telling you that there are no indexes; just click on New Virtual Index. A virtual index is metadata: it tells Hunk where the data is located and which provider should be used to read the data.

| Property name | Value |
| --- | --- |
| Name | milano_cdr_aggregated_10_min_activity |
| Path to data in HDFS | /masterdata/stream/milano_cdr |

Accessing data through the virtual index

To access data through the virtual index, perform the following steps:

1. Click on Explore Data and select a provider and virtual index.
2. Select part-m-00000.avro by clicking on it. The Next button will be activated after you pick a file.
3. Preview the data in the Preview Data tab. You should see how Hunk automatically detects the timestamp in our CDR data. Pay attention to the Time column and the field named time_interval in the Event column; the time_interval field keeps the time of the record, and Hunk should automatically use it as the time field.
4. Save the source type by clicking on Save As and then Next.
5. On the Entering Context Settings page, select search in the App context drop-down box. Then, navigate to Sharing context | All apps and click on Next.
6. The last step allows you to review what we've done. Click on Finish to complete the wizard.

Creating a dashboard

Now it's time to see how dashboards work. Let's find the regions where visitors face problems (status = 500) while using our online store: index="digital_analytics" status=500 | iplocation clientip | geostats latfield=lat longfield=lon count by Country

You should see a map showing the proportion of errors for each country. Now let's save it as a dashboard: click on Save as and select Dashboard panel from the drop-down menu. Name it Web Operations. You should get a new dashboard with a single panel and our report on it.

We have several previously created reports. Let's add them to the newly created dashboard as separate panels:

1. Click on Edit and then Edit panels.
2. Select Add new panel and then New from report, and add one of our previous reports.

Summary

In this article, you learned how to extract Hunk to a VM, how to set up the Hunk variables and configuration files, and how to run Hunk. You also learned how to set up a data provider and a virtual index for the CDR data; setting up a connection to Hadoop and a virtual index for the data stored in Hadoop were covered in detail. Finally, you learned how to create a dashboard.

Further resources on this subject: Identifying Big Data Evidence in Hadoop, Big Data, Understanding Hadoop Backup and Recovery Needs.

Working with Xamarin.Android

Packt
03 Nov 2015
10 min read
In this article, written by Matthew Leibowitz, author of the book Xamarin Mobile Development for Android Cookbook, we will learn how to support all Android versions in a project.

Supporting all Android versions

As the Android operating system evolves, many new features are added and older devices are often left behind.

How to do it...

In order to bring the new features of the later versions of Android to the older versions, all we need to do is add a small package.

An Android app has three platform versions to be set. The first is the set of API features that are available to code against; we set this to always be the latest in the Target Framework dropdown of the project options. The next version to set (via Minimum Android version) is the lowest version of the OS that the app can be installed on; when using the support libraries, we can usually target versions down to version 2.3. Lastly, the Target Android version dropdown specifies how the app should behave when installed on a later version of the OS; typically, this should always be the latest so that the app will always function as the user expects.

If we want to add support for the new UI paradigm that uses fragments and action bars, we need to install two of the Android support packages:

1. Create or open a project in Xamarin Studio.
2. Right-click on the project folder in the Solution Explorer list.
3. Select Add and then Add Packages….
4. In the Add Packages dialog that is displayed, search for Xamarin.Android.Support.
5. Select both Xamarin Support Library v4 and Xamarin Support Library v7 AppCompat.
6. Click on Add Package.

There are several support library packages, each adding other types of forward compatibility, but these two are the most commonly used.

Once the packages are installed, our activities can inherit from the AppCompatActivity type instead of the usual Activity type:

```csharp
public class MyActivity : AppCompatActivity
{
}
```

We specify that the activity theme be one of the AppCompat derivatives using the Theme property in the [Activity] attribute:

```csharp
[Activity(..., Theme = "@style/Theme.AppCompat", ...)]
```

If we need to access the ActionBar instance, it is available via the SupportActionBar property on the activity:

```csharp
SupportActionBar.Title = "Xamarin Cookbook";
```

By simply using the action bar, all the options menu items are added as action items. However, all of them are added under the action bar overflow menu. The XML for action bar items is exactly the same as for the options menu:

```xml
<menu ... >
  <item
    android:id="@+id/action_refresh"
    android:icon="@drawable/ic_action_refresh"
    android:title="@string/action_refresh"/>
</menu>
```

To get the menu items out of the overflow and onto the actual action bar, we can customize which items are displayed and how they are displayed.

To add action items with images to the actual action bar, as well as more complex items, all that is needed is the showAsAction attribute in the XML:

```xml
<menu ... >
  <item ... app:showAsAction="ifRoom"/>
</menu>
```

Sometimes, we may wish to only display the icon initially and then, when the user taps the icon, expand the item to display the action view:

```xml
<menu ... >
  <item ... app:showAsAction="ifRoom|collapseActionView"/>
</menu>
```

If we wish to add custom views, such as a search box, to the action bar, we make use of the actionViewClass attribute:

```xml
<menu ... >
  <item ...
    app:actionViewClass="android.support.v7.widget.SearchView"/>
</menu>
```

If the view is in a layout resource file, we use the actionLayout attribute:

```xml
<menu ... >
  <item ... app:actionLayout="@layout/action_rating"/>
</menu>
```
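The recipe shows the menu XML but not the code that loads it. For completeness, a typical inflation override looks roughly like this; Resource.Menu.main is my placeholder, so use the name of your own menu resource file:

```csharp
public override bool OnCreateOptionsMenu(IMenu menu)
{
    // Resource.Menu.main is a placeholder for your menu XML resource
    MenuInflater.Inflate(Resource.Menu.main, menu);
    return base.OnCreateOptionsMenu(menu);
}
```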
How it works...

As Android is developed, new features are added and designs change. We want to always provide the latest features to our users, but some users either haven't upgraded or can't upgrade to the latest version of Android. Xamarin.Android provides three version numbers to specify which types can be used and how they can be used.

The target framework version specifies which types are available for consumption as well as which toolset to use during compilation. This should be the latest, as we always want to use the latest tools. However, this will make some types and members available to apps even if they aren't actually available on the Android version that the user is running. For example, it will make the ActionBar type available to apps running on Android version 2.3; if the user were to run such an app, it would probably crash. In these instances, we can set the minimum Android version to a version that supports these types and members, but this will then reduce the number of devices that we can install our app on. This is why we use the support libraries; they allow the types to be used on most versions of Android.

Setting the minimum Android version for an app will prevent the app from being installed on devices with earlier versions of the OS.

The support libraries

By including the Android Support Libraries in our app, we can make use of the new features but still support the old versions. Types from the Android Support Library are available to almost all versions of Android currently in use. The Android Support Libraries provide us with a type that we know we can use everywhere, and that base type manages the features to ensure that they function as expected. For example, we can use the ActionBar type on most versions of Android because the support library makes it available through the AppCompatActivity type.

Because the AppCompatActivity type is an adaptive extension of the traditional Activity type, we have to use a different theme. This theme adjusts so that the new look and feel of the UI is carried all the way back to the old Android versions. When using the AppCompatActivity type, the activity theme must be one of the AppCompat theme variations.

There are a few differences in use when using the support library. With native support for the action bar, the activity has a property named ActionBar; in the support library, the property is named SupportActionBar. This is just a property name change; the functionality is the same.

Sometimes, features have to be added to existing types that are not in the support libraries. In these cases, static methods are provided. The native support for custom views in menu items includes a method named SetActionView():

```csharp
menuItem.SetActionView(someView);
```

This method does not exist on the IMenuItem type for the older versions of Android, so we make use of the static method on the MenuItemCompat type:

```csharp
MenuItemCompat.SetActionView(menuItem, someView);
```

The action bar

When adding an action bar on older Android versions, it is important to inherit from the AppCompatActivity type. This type includes all the logic required for including an action bar in the app. It also provides many different methods and properties for accessing and configuring the action bar.
In newer versions of Android, all these features are included in the Activity type. Although the functionality is the same, we do have to access the various pieces using the support members when using the support libraries. An example is using the SupportActionBar property instead of the ActionBar property; if we use the ActionBar property, the app will crash on devices that don't natively support it.

In order to render the action bar, the activity needs to use a theme that contains a style for the action bar, or one that inherits from such a theme. For the older versions of Android, we can use the AppCompat themes, such as Theme.AppCompat.

The toolbar

With the release of Android version 5.0, Google introduced a new style of action bar. The new Toolbar type performs the same function as the action bar but can be placed anywhere on the screen. The action bar is always placed at the top of the screen, but a toolbar is not restricted to that location and can even be placed inside other layouts. To make use of the Toolbar type, we can either use the native type or the type found in the support libraries. Like any Android View, we can add the Toolbar type to the layout:

```xml
<android.support.v7.widget.Toolbar
  android:id="@+id/my_toolbar"
  android:layout_width="match_parent"
  android:layout_height="?attr/actionBarSize"
  android:background="?attr/colorPrimary"
  android:elevation="4dp"
  android:theme="@style/ThemeOverlay.AppCompat.ActionBar"
  app:popupTheme="@style/ThemeOverlay.AppCompat.Light"/>
```

The difference is in how the activity is set up. First, as we are not going to use the default ActionBar property, we can use the Theme.AppCompat.NoActionBar theme. Then, we have to let the activity know which view is used as the Toolbar type:

```csharp
var toolbar = FindViewById<Toolbar>(Resource.Id.my_toolbar);
SetSupportActionBar(toolbar);
```

The action bar items

Action item buttons are just traditional options menu items, but they are optionally always visible on the action bar. The underlying logic to handle item selections is the same as that for the traditional options menu; no change is required to the existing code inside the OnOptionsItemSelected() method. The value of the showAsAction attribute can be ifRoom, never, or always. This value can optionally be combined, using a pipe, with withText and/or collapseActionView.
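As a reference point, a minimal selection handler for the action_refresh item defined earlier might look like the following sketch; the refresh logic itself is left as a placeholder:

```csharp
public override bool OnOptionsItemSelected(IMenuItem item)
{
    if (item.ItemId == Resource.Id.action_refresh)
    {
        // placeholder: trigger your app's refresh logic here
        return true;
    }
    return base.OnOptionsItemSelected(item);
}
```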
There's more...

Besides using the Android Support Libraries to handle different versions, there is another way to handle different versions at runtime. Android provides the version number of the current operating system through the Build.VERSION type. This type has a property, SdkInt, which we use to detect the current version; it represents the API level of the current version. Each version of Android has a series of updates and new features; for example, Android 4 has had numerous updates since its initial release, with new features added each time. Sometimes, the support library cannot cover all the cases, and we have to write specific code for particular versions:

```csharp
int apiLevel = (int)Build.VERSION.SdkInt;
if (Build.VERSION.SdkInt >= BuildVersionCodes.IceCreamSandwich)
{
    // Android version 4.0 and above
}
else
{
    // Android versions below version 4.0
}
```

Although this can be done, it introduces spaghetti code and should be avoided. In addition to requiring different code, the app may behave differently on different versions, even where the support library could have handled the difference, and we will have to manage these differences ourselves each time a new version of Android is released.

Summary

In this article, we learned that as the technology grows, new features are added to Android and older devices are often left behind. Using the steps given here, we can bring the new features of the later versions of Android to the older versions simply by adding the support packages.

Further resources on this subject: Creating the POI ListView Layout, Code Sharing Between iOS and Android, Heads up to MvvmCross.

Setting Up the Citrix Components

Packt
03 Nov 2015
4 min read
In this article by Sunny Jha, the author of the book Mastering XenApp, we are going to implement the Citrix XenApp infrastructure components, which work together to deliver applications. The components we will be implementing are as follows:

- Setting up Citrix License Server
- Setting up Delivery Controller
- Setting up Director
- Setting up StoreFront
- Setting up Studio

Once you complete this article, you will understand how to install the Citrix XenApp infrastructure components for the effective delivery of applications.

Setting up the Citrix infrastructure components

You may be aware that Citrix reintroduced Citrix XenApp with version 7.5, based on the new FMA architecture, which replaced IMA. In this article, we will set up the different Citrix components so that they can deliver applications. As this is a proof of concept, I will be setting up almost all the Citrix components on a single Microsoft Windows 2012 R2 machine. In a production environment, it is recommended that components such as the License Server, Delivery Controller, and StoreFront be installed on separate servers to avoid a single point of failure and to get better performance.

The components that we will be setting up in this article are:

- Delivery Controller: This Citrix component acts as the broker; its main function is to assign users to a server based on the published application they select.
- License Server: This assigns licenses to the Citrix components, as every Citrix product requires a license in order to work.
- Studio: This acts as the control panel for Citrix XenApp 7.6 delivery. Inside Studio, the administrator makes all the configuration changes.
- Director: This component is used for monitoring and troubleshooting; it is a web-based application.
- StoreFront: This is the frontend of the Citrix infrastructure, through which users connect to their applications, either via Receiver or via the web.

Installing the Citrix components

In order to start the installation, we need the Citrix XenApp 7.6 DVD or ISO image. You can download it from the Citrix website; all you need is a MyCitrix account. Follow these steps:

1. Mount the disc/ISO you have downloaded.
2. When you double-click on the mounted disc, it will bring up a screen where you have to choose between XenApp (Deliver applications) and XenDesktop (Deliver applications and desktops).
3. Once you have made the selection, it will show you the next options related to the product. Here, we need to select XenApp and choose Delivery Controller from the options.
4. The next screen will show you the License Agreement. Go through it, accept the terms, and click on Next.
5. As described earlier, this is a proof of concept, so we will install all the components on a single server; however, it is recommended to put each component on a different server for better performance. Select all the components and click on Next.
6. The next screen will show you the features that can be installed. As we have already installed the SQL server, we don't have to select SQL Express, but we will choose Install Windows Remote Assistance. Click on Next.
7. The next screen will show you the firewall ports that need to be allowed for communication; these can also be adjusted by Citrix. Click on Next.
8. The next screen will show you the summary of your selections.
9. Here, you can review your selections and click on Install to install the components.
10. After you click on Install, it will go through the installation procedure; once the installation is complete, click on Next.

By following these steps, we completed the installation of the Citrix components: Delivery Controller, Studio, Director, and StoreFront. We also adjusted the firewall ports as per the Citrix XenApp requirements.

Summary

In this article, you learned about setting up the Citrix infrastructure components and how to install the Citrix License Server, Delivery Controller, Studio, Director, and StoreFront.

Further resources on this subject: Getting Started – Understanding Citrix XenDesktop and its Architecture, High Availability, Protection, and Recovery using Microsoft Azure, A Virtual Machine for a Virtual World.

Moving Spatial Data From One Format to Another

Packt
03 Nov 2015
29 min read
In this article by Michael Diener, author of the Python Geospatial Analysis Cookbook, we will cover the following topics:

- Converting a Shapefile to a PostGIS table using ogr2ogr
- Batch importing a folder of Shapefiles into PostGIS using ogr2ogr
- Batch exporting a list of tables from PostGIS to Shapefiles
- Converting an OpenStreetMap (OSM) XML to a Shapefile
- Converting a Shapefile (vector) to a GeoTiff (raster)
- Converting a GeoTiff (raster) to a Shapefile (vector) using GDAL

Introduction

Geospatial data comes in hundreds of formats, and massaging this data from one format to another is a simple task. The ability to convert data types, such as rasters or vectors, belongs to the data wrangling tasks involved in geospatial analysis. (The original article includes a drawing by Michael Diener showing an example of a raster and a vector dataset side by side.)

The best practice is to run analysis functions or models on data stored in a common format, such as a PostgreSQL PostGIS database or a set of Shapefiles, in a common coordinate system. Running analysis on input data stored in multiple formats is also possible, but you can expect to find the devil in the details of your results if something goes wrong or your results are not what you expect. This article looks at some common data formats and demonstrates how to move data from one format to another with the most common tools.

Converting a Shapefile to a PostGIS table using ogr2ogr

The simplest way to transform data from one format to another is to directly use the ogr2ogr tool that comes with the installation of GDAL. This powerful tool can convert over 200 geospatial formats. In this solution, we will execute the ogr2ogr utility from within a Python script to perform generic vector data conversions. The Python code is, therefore, used to execute this command-line tool and pass around the variables needed to create your own scripts for data imports or exports. The use of this tool is also recommended if you are not really interested in coding too much and simply want to get the job done to move your data. A pure Python solution is, of course, possible, but it is definitely targeted more at developers (or Python purists).

Getting ready

To run this script, you will need the GDAL utility application installed on your system. Windows users can visit OSGeo4W (http://trac.osgeo.org/osgeo4w), download the 32-bit or 64-bit Windows installer, and proceed as follows:

1. Simply double-click on the installer to start it.
2. Navigate to the bottommost option, Advanced Installation, and click on Next.
3. Click on Next to download from the Internet (this is the first default option).
4. Click on Next to accept the default installation path, or change it to your liking.
5. Click on Next to accept the location of locally saved downloads (default).
6. Click on Next to accept the direct connection (default).
7. Click on Next to select a default download site.
8. You should now see the package menu. Click on + to open the command-line utilities and select the gdal package (the GDAL/OGR library and command-line tools) for installation.
9. Click on Next to start downloading it, and then install it.

For Ubuntu Linux users, installation is a simple one-line command:

```
$ sudo apt-get install gdal-bin
```

This will get you up and running so that you can execute ogr2ogr directly from your terminal. Next, set up your PostgreSQL database with the PostGIS extension.
First, we will create a new user to manage our new database and tables:

```
$ sudo su
$ createuser -U postgres -P pluto
```

Enter a password for the new role, enter it again to confirm it, and then enter the password for the postgres user, since you are creating the new user with the help of the postgres user. The -P option prompts you to give the new user, called pluto, a password. For the following examples, our password is stars; I would recommend a much more secure password for your production database.

Setting up your PostgreSQL database with the PostGIS extension on Windows is the same as setting it up on Ubuntu Linux. Navigate to the C:\Program Files\PostgreSQL\9.3\bin folder, then execute this command and follow the on-screen instructions as mentioned previously:

```
createuser.exe -U postgres -P pluto
```

To create the database, we will use the command-line createdb command as the postgres user to create a database named py_geoan_cb, assigning the pluto user as the database owner. Here is the command to do this:

```
$ sudo su
$ createdb -O pluto -U postgres py_geoan_cb
```

Windows users can visit C:\Program Files\PostgreSQL\9.3\bin and execute the createdb.exe command:

```
createdb.exe -O pluto -U postgres py_geoan_cb
```

Next, create the PostGIS extension for our newly created database:

```
psql -U postgres -d py_geoan_cb -c "CREATE EXTENSION postgis;"
```

Windows users can also execute psql from within the C:\Program Files\PostgreSQL\9.3\bin folder:

```
psql.exe -U postgres -d py_geoan_cb -c "CREATE EXTENSION postgis;"
```

Lastly, create a schema called geodata to store the new spatial table. It is common to store spatial data in a schema outside the PostgreSQL default schema, public. Create the schema as follows.

For Ubuntu Linux users:

```
sudo -u postgres psql -d py_geoan_cb -c "CREATE SCHEMA geodata AUTHORIZATION pluto;"
```

For Windows users:

```
psql.exe -U postgres -d py_geoan_cb -c "CREATE SCHEMA geodata AUTHORIZATION pluto;"
```

How to do it...

Now let's get to the actual importing of our Shapefile into a PostGIS database, which will automatically create a new table from our Shapefile:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import subprocess

# database options
db_schema = "SCHEMA=geodata"
overwrite_option = "OVERWRITE=YES"
geom_type = "MULTILINESTRING"
output_format = "PostgreSQL"

# database connection string
db_connection = """PG:host=localhost port=5432
  user=pluto dbname=py_geoan_cb password=stars"""

# input shapefile
input_shp = "../geodata/bikeways.shp"

# call ogr2ogr from python
subprocess.call(["ogr2ogr", "-lco", db_schema, "-lco", overwrite_option,
                 "-nlt", geom_type, "-f", output_format, db_connection,
                 input_shp])
```

Now we can call our script from the command line:

```
$ python ch03-01_shp2pg.py
```
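One detail worth noting: subprocess.call() returns the ogr2ogr exit code but will not stop the script if the import fails. If you prefer the script to fail loudly, a small variation (my suggestion, not part of the original recipe) is to use check_call() with the same argument list:

```python
import subprocess

# reuses the variables defined in the script above
cmd = ["ogr2ogr", "-lco", db_schema, "-lco", overwrite_option,
       "-nlt", geom_type, "-f", output_format, db_connection, input_shp]
try:
    subprocess.check_call(cmd)
except subprocess.CalledProcessError as err:
    print("ogr2ogr failed with exit code {0}".format(err.returncode))
```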
How it works...

We begin by importing the standard Python module subprocess, which will call the ogr2ogr command-line tool. Next, we set a range of variables that are used as input arguments and various options for ogr2ogr.

Starting with SCHEMA=geodata, we set a non-default database schema as the destination of our new table. It is best practice to store your spatial data tables in a separate schema outside the public schema, which is the default. This practice will make backups and restores much easier and keep your database better organized.

Next, we create an overwrite_option variable set to YES so that we can overwrite any table with the same name when the new table is created. This is helpful when you want to completely replace the table with new data; otherwise, it is recommended to use the -append option. We also specify the geometry type, because ogr2ogr does not always guess the correct geometry type of our Shapefile, so setting this value saves you any worry.

Setting our output_format variable to the PostgreSQL keyword tells ogr2ogr that we want to output the data into a PostgreSQL database. This is followed by the db_connection variable, which specifies our database connection information. Do not forget that the database must already exist, along with the geodata schema, otherwise we will get an error. The last variable, input_shp, is the full path to our Shapefile, including the .shp file ending.

Now we call the subprocess module, which in turn calls the ogr2ogr command-line tool and passes along the options required to run it. We pass this function a list of arguments, the first object in the list being the ogr2ogr command-line tool name. After this, we pass each option one after another to complete the call. subprocess can be used to call any command-line tool directly. It takes a list of parameters separated by spaces. This passing of parameters is quite fussy, so make sure you follow along closely and don't add any extra spaces or commas.

Last but not least, we execute our script from the command line to actually import our Shapefile by calling the Python interpreter and passing it the script. Then, head over to the pgAdmin PostgreSQL database viewer and see if it worked, or even better, open up Quantum GIS (www.qgis.org) and take a look at the newly created table.

See also

If you would like to see the full list of options available with the ogr2ogr command, simply enter the following in the command line:

```
$ ogr2ogr --help
```

You will see the full list of options that are available. Also, visit http://gdal.org/ogr2ogr.html to read the documentation.

Batch importing a folder of Shapefiles into PostGIS using ogr2ogr

We would like to extend our last script to loop over a folder full of Shapefiles and import them into PostGIS. Most importing tasks involve more than one file to import, so this makes it a very practical task.

How to do it...

The following steps will batch import a folder of Shapefiles into PostGIS using ogr2ogr. Our script will reuse the previous code in the form of a function so that we can batch process a list of Shapefiles to import into the PostgreSQL PostGIS database.
We will create our list of Shapefiles from a single folder for the sake of simplicity:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import subprocess
import os
import ogr

def discover_geom_name(ogr_type):
    """
    :param ogr_type: ogr GetGeomType()
    :return: string geometry type name
    """
    return {ogr.wkbUnknown            : "UNKNOWN",
            ogr.wkbPoint              : "POINT",
            ogr.wkbLineString         : "LINESTRING",
            ogr.wkbPolygon            : "POLYGON",
            ogr.wkbMultiPoint         : "MULTIPOINT",
            ogr.wkbMultiLineString    : "MULTILINESTRING",
            ogr.wkbMultiPolygon       : "MULTIPOLYGON",
            ogr.wkbGeometryCollection : "GEOMETRYCOLLECTION",
            ogr.wkbNone               : "NONE",
            ogr.wkbLinearRing         : "LINEARRING"}.get(ogr_type)

def run_shp2pg(input_shp):
    """
    input_shp is the full path to a shapefile, including the file ending
    usage:  run_shp2pg('/home/geodata/myshape.shp')
    """
    db_schema = "SCHEMA=geodata"
    db_connection = """PG:host=localhost port=5432
                    user=pluto dbname=py_geoan_cb password=stars"""
    output_format = "PostgreSQL"
    overwrite_option = "OVERWRITE=YES"
    shp_dataset = shp_driver.Open(input_shp)
    layer = shp_dataset.GetLayer(0)
    geometry_type = layer.GetLayerDefn().GetGeomType()
    geometry_name = discover_geom_name(geometry_type)
    print(geometry_name)
    subprocess.call(["ogr2ogr", "-lco", db_schema, "-lco", overwrite_option,
                     "-nlt", geometry_name, "-skipfailures",
                     "-f", output_format, db_connection, input_shp])

# directory full of shapefiles
shapefile_dir = os.path.realpath('../geodata')

# define the ogr spatial driver type
shp_driver = ogr.GetDriverByName('ESRI Shapefile')

# empty list to hold the names of all shapefiles in the directory
shapefile_list = []

for shp_file in os.listdir(shapefile_dir):
    if shp_file.endswith(".shp"):
        # append the folder path to the file name to output "../geodata/myshape.shp"
        full_shapefile_path = os.path.join(shapefile_dir, shp_file)
        shapefile_list.append(full_shapefile_path)

# loop over the list of Shapefiles, running our import function for each
for each_shapefile in shapefile_list:
    print("importing Shapefile: " + each_shapefile)
    run_shp2pg(each_shapefile)

Now, we can simply run our new script from the command line once again:

$ python ch03-02_batch_shp2pg.py

How it works...

Here, we reuse our code from the previous script but have converted it into a python function called run_shp2pg(input_shp), which takes exactly one argument: the full path to the Shapefile we want to import. The input argument must include the .shp file ending. We have a helper function that will get the geometry type as a string by reading in the Shapefile feature layer and outputting the geometry type so that the ogr2ogr command knows what to expect. This does not always work and some errors can occur. The -skipfailures option will plow over any errors that are thrown during insert and will still populate our tables.

To begin with, we need to define the folder that contains all the Shapefiles to be imported. Next up, we'll create an empty list object called shapefile_list that will hold the list of all the Shapefiles we want to import. The first for loop is used to get the list of all the files in the specified directory, using the standard python os.listdir() function. We do not want all the files in this folder; we only want files ending with .shp, hence the if statement will evaluate to True only for Shapefiles.
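As an aside, the discover-and-filter step could be written more compactly with the standard glob module; this is a minimal equivalent sketch, not what the recipe itself uses:

import glob
import os

shapefile_dir = os.path.realpath('../geodata')
# glob expands the wildcard and returns the full paths directly
shapefile_list = glob.glob(os.path.join(shapefile_dir, "*.shp"))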
Returning to the recipe's loop: once a .shp file is found, we append the folder path to the file name to create a single string that holds the path plus the Shapefile name; this is our variable called full_shapefile_path. The final part of this is to add each new file, with its attached path, to our shapefile_list list object. So, we now have our final list to loop through. It is time to loop through each Shapefile in our new list and run our run_shp2pg(input_shp) function for each one, importing it into our PostgreSQL PostGIS database.

See also

If you have a lot of Shapefiles, and by this I mean a hundred or more, performance becomes a consideration, and you may want to spread the work across several machines with free resources.

Batch exporting a list of tables from PostGIS to Shapefiles

We will now change direction and take a look at how we can batch export a list of tables from our PostGIS database into a folder of Shapefiles. We'll again use the ogr2ogr command-line tool from within a python script so that you can include it in your application programming workflow. Near the end, you can also see how this all works in one single command line.

How to do it...

The script will fire the ogr2ogr command and loop over a list of tables, exporting each in Shapefile format into a new folder. So, let's take a look at how to do this using the following code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import subprocess
import os

# folder to hold output Shapefiles
destination_dir = os.path.realpath('../geodata/temp')

# list of PostGIS tables
postgis_tables_list = ["bikeways", "highest_mountains"]

# database connection parameters
db_connection = """PG:host=localhost port=5432 user=pluto
        dbname=py_geoan_cb password=stars active_schema=geodata"""

output_format = "ESRI Shapefile"

# check if the destination directory exists
if not os.path.isdir(destination_dir):
    os.mkdir(destination_dir)
    for table in postgis_tables_list:
        subprocess.call(["ogr2ogr", "-f", output_format, destination_dir,
                         db_connection, table])
        print("running ogr2ogr on table: " + table)
else:
    print("oh no, your destination directory " + destination_dir +
          " already exists; please remove it, then run again")

# the command-line call without using python looks like this
# ogr2ogr -f "ESRI Shapefile" mydatadump
# PG:"host=myhost user=myloginname dbname=mydbname password=mypassword" neighborhood parcels

Now, we'll call our script from the command line as follows:

$ python ch03-03_batch_postgis2shp.py

How it works...

Beginning with a simple import of our subprocess and os modules, we immediately define our destination directory where we want to store the exported Shapefiles. This variable is followed by the list of table names that we want to export. This list can only include tables located in the same PostgreSQL schema. The schema is defined as the active_schema so that ogr2ogr knows where to find the tables to be exported. Once again, we'll define the output format as ESRI Shapefile. Now, we'll check whether the destination folder exists. If it does not, we'll create it and then loop through the list of tables stored in our postgis_tables_list variable, calling ogr2ogr for each one. If the destination folder already exists, an error message is printed on the screen.
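After the script has run, a quick sanity check that each expected Shapefile was actually written only takes a few lines; this is a minimal sketch reusing the variables defined in the recipe:

import os

for table in postgis_tables_list:
    # ogr2ogr names each output Shapefile after its source table
    expected = os.path.join(destination_dir, table + ".shp")
    print(expected, os.path.exists(expected))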
There's more...

If you are programming an application, then executing the ogr2ogr command from inside your script is definitely quick and easy. On the other hand, for a one-off job, simply executing the command-line tool is what you want when you export your list of Shapefiles. A one-line example of calling ogr2ogr to batch export PostGIS tables to Shapefiles is shown here, if you simply want to execute this once and not in a scripting environment:

ogr2ogr -f "ESRI Shapefile" /home/ch03/geodata/temp PG:"host=localhost user=pluto dbname=py_geoan_cb password=stars" bikeways highest_mountains

The list of tables you want to export is given at the end of the command, separated by spaces. The destination location of the exported Shapefiles is ../geodata/temp. Note that this /temp directory must exist.

Converting an OpenStreetMap (OSM) XML to a Shapefile

OpenStreetMap (OSM) has a wealth of free data, but to use it with most other applications, we need to convert it to another format, such as a Shapefile or a PostgreSQL PostGIS database. This recipe will use the ogr2ogr tool to do the conversion for us within a python script. The benefit derived here is simplicity.

Getting ready

To get started, you will need to download the OSM data at http://www.openstreetmap.org/export#map=17/37.80721/-122.47305 and save the file (.osm) to your /ch03/geodata directory. The download button is located on the bar on the left-hand side and, when pressed, should immediately start the download. The area we are testing is from San Francisco, just before the Golden Gate Bridge.

If you choose to download another area from OSM, feel free to, but make sure that you take a small area, similar to the preceding example link. If you select a larger area, the OSM web tool will give you a warning and disable the download button. The reason is simple: if the dataset is very large, it will most likely be better suited to another tool, such as osm2pgsql (http://wiki.openstreetmap.org/wiki/Osm2pgsql), for your conversion. If you need to get OSM data for a large area and want to export to Shapefile, it would be advisable to use another tool, such as osm2pgsql, which will first import your data to a PostgreSQL database; you can then export from the PostGIS database to Shapefile using the pgsql2shp tool.

A python tool to import OSM data into a PostGIS database is also available and is called imposm (located at http://imposm.org/). Version 2 of it is written in python, and version 3 is written in the Go programming language, if you want to give it a try.
How to do it...

Using the subprocess module, we will execute ogr2ogr to convert the OSM data that we downloaded into a new Shapefile:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# convert / import an osm xml .osm file into a Shapefile
import subprocess
import os
import shutil

# specify output format
output_format = "ESRI Shapefile"

# complete path to the input OSM xml file .osm
input_osm = '../geodata/OSM_san_francisco_westbluff.osm'

# names of the command-line tools to call; Windows users can point
# these at the full paths to the executables if they are not on the PATH
ogr2ogr = "ogr2ogr"
ogr_info = "ogrinfo"
# ogr2ogr = r"c:/OSGeo4W/bin/ogr2ogr.exe"
# ogr_info = r"c:/OSGeo4W/bin/ogrinfo.exe"

# view what geometry types are available in our OSM file
subprocess.call([ogr_info, input_osm])

destination_dir = os.path.realpath('../geodata/temp')

if os.path.isdir(destination_dir):
    # remove the output folder if it already exists
    shutil.rmtree(destination_dir)
    print("removing existing directory : " + destination_dir)

# create a new output folder
os.mkdir(destination_dir)
print("creating new directory : " + destination_dir)

# list of geometry types to convert to Shapefile
geom_types = ["lines", "points", "multilinestrings", "multipolygons"]

# create a new Shapefile for each geometry type
for g_type in geom_types:
    subprocess.call([ogr2ogr,
           "-skipfailures", "-f", output_format,
           destination_dir, input_osm,
           "layer", g_type,
           "--config", "OSM_USE_CUSTOM_INDEXING", "NO"])
    print("done creating " + g_type)

# if you would like to export to SPATIALITE from .osm
# subprocess.call([ogr2ogr, "-skipfailures", "-f",
#         "SQLITE", "-dsco", "SPATIALITE=YES",
#         "my2.sqlite", input_osm])

Now we can call our script from the command line:

$ python ch03-04_osm2shp.py

Go and have a look at your ../geodata/temp folder to see the newly created Shapefiles, and try to open them up in Quantum GIS, a free GIS application (www.qgis.org).

How it works...

This script should be clear as we are using a subprocess module call to fire our ogr2ogr command-line tool. We specify our OSM dataset as an input file, including the full path to the file. No output Shapefile name is supplied, because ogr2ogr will output a set of Shapefiles, one for each geometry type, according to the geometry types it finds inside the OSM file. We only need to specify the name of the folder where we want ogr2ogr to export the Shapefiles to; the script removes any existing output folder first and then creates a fresh one.

Windows users: if you do not have the ogr2ogr tool mapped to your environment variables, you can simply uncomment the two commented-out lines near the top of the preceding code and replace the paths shown with the paths to the executables on your machine.

The first subprocess call prints to the screen the geometry types found inside the OSM file. This is helpful, in most cases, to identify what is available. Shapefiles can only support one geometry type per file, and this is why ogr2ogr outputs a folder full of Shapefiles, each one representing a separate geometry type. Lastly, we call subprocess to execute ogr2ogr, passing in the output "ESRI Shapefile" file type, the output folder, and the name of the OSM dataset.
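To confirm the result without leaving Python, you can also list the output folder; a minimal sketch, assuming the script above has just been run:

import os

destination_dir = os.path.realpath('../geodata/temp')
# expect a set of files (.shp plus its sidecar files) per geometry type
print(os.listdir(destination_dir))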
Converting a Shapefile (vector) to a GeoTiff (raster)

Moving data from format to format also includes moving from vector to raster or the other way around. In this recipe, we move from a vector (Shapefile) to a raster (GeoTiff) with the python gdal and ogr modules.

Getting ready

We need to be inside our virtual environment again, so fire it up so that we can access our gdal and ogr python modules. As usual, enter your python virtual environment with the workon pygeoan_cb command:

$ source venvs/pygeoan_cb/bin/activate

How to do it...

Let's dive in and convert our golf course polygon Shapefile into a GeoTiff; here is the code to do this:

Import the ogr and gdal libraries, and then define the output pixel size along with the value that will be assigned to null:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from osgeo import ogr
from osgeo import gdal

# set pixel size
pixel_size = 1
no_data_value = -9999

Set up the input Shapefile that we want to convert, alongside the new GeoTiff raster that will be created when the script is executed:

# Shapefile input name
# input projection must be in a cartesian system in meters
# input wgs 84 or EPSG: 4326 will NOT work!!!
input_shp = r'../geodata/ply_golfcourse-strasslach3857.shp'

# TIF Raster file to be created
output_raster = r'../geodata/ply_golfcourse-strasslach.tif'

Now we need to create the input Shapefile object, get the layer information, and finally set the extent values:

# Open the data source, get the layer object,
# and assign the extent coordinates
open_shp = ogr.Open(input_shp)
shp_layer = open_shp.GetLayer()
x_min, x_max, y_min, y_max = shp_layer.GetExtent()

Here, we calculate the raster resolution, converting distance to a number of pixels:

# calculate raster resolution
x_res = int((x_max - x_min) / pixel_size)
y_res = int((y_max - y_min) / pixel_size)

Our new raster type is a GeoTiff, so we must explicitly tell GDAL this when getting the driver. The driver is then able to create a new GeoTiff by passing in the file name of the new raster we want to create, the x direction resolution, followed by the y direction resolution, and then our number of bands, which is, in this case, 1. Lastly, we'll set the raster type to GDT_Byte:

# set the image type for export
image_type = 'GTiff'
driver = gdal.GetDriverByName(image_type)

new_raster = driver.Create(output_raster, x_res, y_res, 1, gdal.GDT_Byte)
new_raster.SetGeoTransform((x_min, pixel_size, 0, y_max, 0, -pixel_size))

Now we can access the new raster band and assign the no data values and the inner data values for the new raster. All the inner values will receive a value of 255, as set in the burn_values argument:

# get the raster band we want to export to
raster_band = new_raster.GetRasterBand(1)

# assign the no data value to empty cells
raster_band.SetNoDataValue(no_data_value)

# run vector to raster on the new raster with the input Shapefile
gdal.RasterizeLayer(new_raster, [1], shp_layer, burn_values=[255])

Here we go! Let's run the script to see what our new raster looks like:

$ python ch03-05_shp2raster.py

You can inspect the resulting raster by opening it in QGIS (http://www.qgis.org).

How it works...

There are several steps involved in this code, so please follow along as some points could lead to trouble if you are not sure what values to input. We start with the import of the gdal and ogr modules, respectively, since they will do the work for us by inputting a Shapefile (vector) and outputting a GeoTiff (raster). The pixel_size variable is very important since it will determine the size of the new raster we create. In this example, we only have two polygons, so we set pixel_size = 1 to keep a fine border. If you have many polygons stretching across the globe in one Shapefile, it is wiser to set this value to 25 or more. Otherwise, you could end up with a 10 GB raster and your machine will run all night long!
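A cheap guard against that scenario is to estimate the output size before calling driver.Create(); this is a minimal sketch reusing the recipe's extent and pixel_size values:

# rough size estimate: one byte per pixel for a GDT_Byte raster
x_res = int((x_max - x_min) / pixel_size)
y_res = int((y_max - y_min) / pixel_size)
estimated_mb = (x_res * y_res) / (1024.0 * 1024.0)
print("output raster will be roughly %.1f MB" % estimated_mb)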
The no_data_value parameter is needed to tell GDAL what values to set in the empty space around our input polygons, and we set these to -9999 so that they are easily identified. Next, we simply set the input Shapefile, stored in the EPSG:3857 Web Mercator projection, and the output GeoTiff. Make sure that you change the file names accordingly if you want to use some other dataset.

We start by working with the OGR module to open the Shapefile and retrieve its layer information and the extent information. The extent is important because it is used to calculate the size of the output raster width and height values, which must be integers and are represented by the x_res and y_res variables. Note that the projection of your Shapefile must be in meters, not degrees. This is very important since this will NOT work in EPSG:4326 or WGS 84, for example. The reason is that the coordinate units are LAT/LON. This means that WGS 84 is not a flat plane projection and cannot be drawn as is. Our x_res and y_res values would evaluate to 0, since we cannot get a real ratio using degrees; this is a result of us not being able to simply subtract coordinate x from coordinate y, because the units are in degrees and not in a flat plane meter projection.

Now moving on to the raster setup, we define the type of raster we want to export as a GTiff. Then, we can get the correct GDAL driver by the raster type. Once the raster type is set, we can create a new empty raster dataset, passing in the raster file name, the width and height of the raster in pixels, the number of raster bands, and finally, the type of raster in GDAL terms, that is, gdal.GDT_Byte. These five parameters are mandatory to create a new raster.

Next, we call SetGeoTransform, which handles transforming between pixel/line raster space and projection coordinate space. We then activate band 1, as it is the only band we have in our raster, and assign the no data value for all the empty space around the polygon. The final step is to call the gdal.RasterizeLayer() function and pass in our new raster, the band, the Shapefile layer, and the value to assign to the inside of our raster. The value of all the pixels inside our polygon will be 255.

See also

If you are interested, you can visit the command-line tool gdal_rasterize at http://www.gdal.org/gdal_rasterize.html. You can run this straight from the command line.

Converting a raster (GeoTiff) to a vector (Shapefile) using GDAL

We have now looked at how we can go from vector to raster, so it is time to go from raster to vector. This direction is much more common because most of our vector data is derived from remotely sensed data such as satellite images, orthophotos, or some other remote sensing dataset such as lidar.

Getting ready

As usual, enter your python virtual environment with the help of the workon pygeoan_cb command:

$ source venvs/pygeoan_cb/bin/activate

How to do it...

Now let's begin:

Import the ogr and gdal modules. Go straight ahead and open the raster that we want to convert by passing it the file name on disk. Then, get the raster band:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from osgeo import ogr
from osgeo import gdal

#  get raster datasource
open_image = gdal.Open("../geodata/cadaster_borders-2tone-black-white.png")
input_band = open_image.GetRasterBand(3)

Set up the output vector file as a Shapefile with output_shp, and then get the Shapefile driver.
Now, we can create the output from our driver and create the layer:

#  create output datasource
output_shp = "../geodata/cadaster_raster"
shp_driver = ogr.GetDriverByName("ESRI Shapefile")

# create the output file name
output_shapefile = shp_driver.CreateDataSource(output_shp + ".shp")
new_shapefile = output_shapefile.CreateLayer(output_shp, srs=None)

Our final step is to run the gdal.Polygonize function, which does the heavy lifting by converting our raster to vector:

gdal.Polygonize(input_band, None, new_shapefile, -1, [], callback=None)
new_shapefile.SyncToDisk()

Execute the new script:

$ python ch03-06_raster2shp.py

How it works...

Working with ogr and gdal is similar in all our recipes; we must define the inputs and get the appropriate file driver to open the files. The GDAL library is very powerful: in only one line of code, we can convert a raster to a vector with the gdal.Polygonize function. All the preceding code is simply setup code to define which format we want to work with, so that we can set up the appropriate drivers to input and output our new file.

Summary

In this article we covered converting a Shapefile to a PostGIS table using ogr2ogr, batch importing a folder of Shapefiles into PostGIS using ogr2ogr, batch exporting a list of tables from PostGIS to Shapefiles, converting an OpenStreetMap (OSM) XML to a Shapefile, converting a Shapefile (vector) to a GeoTiff (raster), and converting a raster (GeoTiff) to a vector (Shapefile) using GDAL.

Resources for Article:

Further resources on this subject: The Essentials of Working with Python Collections [article] Symbolizers [article] Preparing to Build Your Own GIS Application [article]
Upgrading from Magento 1

Packt
03 Nov 2015
4 min read
In Magento 2 Development Cookbook by Bart Delvaux, the overarching goal is to provide you with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website. (For more resources related to this topic, see here.)

Why Magento 2

Solve common problems encountered while extending your Magento 2 store to fit your business needs. Explore exciting and enhanced features of Magento 2, such as customizing security permissions, intelligent filtered search options, and easy third-party integration, among others. Learn to build and maintain a Magento 2 shop via a visual-based page editor and customize the look and feel using Magento 2 offerings on the go.

What this article covers?

This article covers preparing an upgrade from Magento 1.

Preparing an upgrade from Magento 1

The differences between Magento 1 and Magento 2 are big. The code has a whole new structure with a lot of improvements, but there is one big disadvantage: what do you do if you want to upgrade your Magento 1 shop to Magento 2? Magento created an upgrade tool that migrates the data of the Magento 1 database to the right structure for a Magento 2 database. The custom modules in your Magento 1 shop will not work in Magento 2. It is possible that some of your modules will have a Magento 2 version and, depending on the module, the module author will have a migration tool to migrate the data that is in the module.

Getting ready

Before we get started, make sure you have an empty (without sample data) Magento 2 installation of the same version as the migration tool that is available at: https://github.com/magento/data-migration-tool-ce

How to do it

In your Magento 2 installation (with the same version as the migration tool), run the following commands:

composer config repositories.data-migration-tool git https://github.com/magento/data-migration-tool-ce

composer require magento/data-migration-tool:dev-master

Install Magento 2 with an empty database by running the installer. Make sure you configure it with the right time zone and currencies. When these steps are done, you can test the tool by running the following command:

php vendor/magento/data-migration-tool/bin/migrate

This command will print the usage of the command. The next thing is creating the configuration files. Examples of the configuration files are in the following folder: vendor/magento/data-migration-tool/etc/<version>. We can create a copy of this folder where we can set our custom configuration values. For a Magento 1.9 installation, we have to run the following cp command:

cp -R vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.1.0/ vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration

Open the vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist file and search for the source/database and destination/database tags.
Change the values of these database settings to match your own, as in the following code:

<source>
  <database host="localhost" name="magento1" user="root"/>
</source>
<destination>
  <database host="localhost" name="magento2_migration" user="root"/>
</destination>

Rename that file to config.xml with the following command:

mv vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml

How it works

By adding a composer dependency, we installed the data migration tool for Magento 2 in the codebase. This migration tool is a PHP command-line script that will handle the migration steps from a Magento 1 shop. In the etc folder of the migration module, there is an example configuration for an empty Magento 1.9 shop. If you want to migrate an existing Magento 1 shop, you have to customize these configuration files so that they match your preferred state. In the next recipe, we will learn how we can use the script to start the migration.

Who this book is written for

This book is packed with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

Summary

In this article, we learned how to prepare an upgrade from Magento 1. Read Magento 2 Development Cookbook to gain detailed knowledge of Magento 2 workflows, explore use cases for advanced features, craft well thought out orchestrations, troubleshoot unexpected behavior, and extend Magento 2 through customizations. Other related titles are: Magento: Beginner's Guide - Second Edition, Mastering Magento, Magento: Beginner's Guide, and Mastering Magento Theme Design.

Resources for Article:

Further resources on this subject: Creating a Responsive Magento Theme with Bootstrap 3 [article] Social Media and Magento [article] Optimizing Magento Performance — Using HHVM [article]
HTML5 APIs

Packt
03 Nov 2015
6 min read
In this article by Dmitry Sheiko, author of the book JavaScript Unlocked, we will create our first web component. (For more resources related to this topic, see here.)

Creating the first web component

You might be familiar with the HTML5 video element (http://www.w3.org/TR/html5/embedded-content-0.html#the-video-element). By placing a single element in your HTML, you get a widget that runs a video. This element accepts a number of attributes to set up the player. If you want to enhance this, you can use its public API and subscribe listeners to its events (http://www.w3.org/2010/05/video/mediaevents.html). So, we reuse this element whenever we need a player and only customize it for a project-relevant look and feel.

If only we had enough of these elements to pick from every time we needed a widget on a page. However, including every widget that we may ever need in the HTML specification is not the right way to go. The API to create custom elements, such as video, is already there, though. We can really define an element, package the compounds (JavaScript, HTML, CSS, images, and so on), and then just link it from the consuming HTML. In other words, we can create an independent and reusable web component, which we then use by placing the corresponding custom element (<my-widget />) in our HTML. We can restyle the element, and if needed, we can utilize the element API and events.

For example, if you need a date picker, you can take an existing web component, let's say the one available at http://component.kitchen/components/x-tag/datepicker. All that we have to do is download the component sources (for example, using the Bower package manager) and link to the component from our HTML code:

<link rel="import" href="bower_components/x-tag-datepicker/src/datepicker.js">

Declare the component in the HTML code:

<x-datepicker name="2012-02-02"></x-datepicker>

This is supposed to go smoothly in the latest versions of Chrome, but it probably won't work in other browsers. Running a web component requires a number of new technologies to be unlocked in a client browser, such as Custom Elements, HTML Imports, Shadow DOM, and templates (including JavaScript templates). The Custom Elements API allows us to define new HTML elements, their behavior, and their properties. The Shadow DOM encapsulates the DOM subtree required by a custom element. And support for HTML Imports means that, for a given link, the user agent enables a web component by including its HTML on the page. We can use a polyfill (http://webcomponents.org/) to ensure support for all of the required technologies in all the major browsers:

<script src="./bower_components/webcomponentsjs/webcomponents.min.js"></script>

Do you fancy writing your own web components? Let's do it. Our component acts similarly to HTML's details/summary: when one clicks on the summary, the details show up.
So we create x-details.html, where we put the component styles and JavaScript with the component API:

x-details.html

<style>
.x-details-summary {
  font-weight: bold;
  cursor: pointer;
}
.x-details-details {
  transition: opacity 0.2s ease-in-out, transform 0.2s ease-in-out;
  transform-origin: top left;
}
.x-details-hidden {
  opacity: 0;
  transform: scaleY(0);
}
</style>
<script>
"use strict";
/**
 * Object constructor representing the x-details element
 * @param {Node} el
 */
var DetailsView = function( el ){
    this.el = el;
    this.initialize();
  },
  // Creates an object based on the HTMLElement prototype
  element = Object.create( HTMLElement.prototype );
/** @lend DetailsView.prototype */
Object.assign( DetailsView.prototype, {
  /**
   * @constructs DetailsView
   */
  initialize: function(){
    this.summary = this.renderSummary();
    this.details = this.renderDetails();
    this.summary.addEventListener( "click", this.onClick.bind( this ), false );
    this.el.textContent = "";
    this.el.appendChild( this.summary );
    this.el.appendChild( this.details );
  },
  /**
   * Render the summary element
   */
  renderSummary: function(){
    var div = document.createElement( "a" );
    div.className = "x-details-summary";
    div.textContent = this.el.dataset.summary;
    return div;
  },
  /**
   * Render the details element
   */
  renderDetails: function(){
    var div = document.createElement( "div" );
    div.className = "x-details-details x-details-hidden";
    div.textContent = this.el.textContent;
    return div;
  },
  /**
   * Handle a click on the summary
   * @param {Event} e
   */
  onClick: function( e ){
    e.preventDefault();
    if ( this.details.classList.contains( "x-details-hidden" ) ) {
      return this.open();
    }
    this.close();
  },
  /**
   * Open the details
   */
  open: function(){
    this.details.classList.toggle( "x-details-hidden", false );
  },
  /**
   * Close the details
   */
  close: function(){
    this.details.classList.toggle( "x-details-hidden", true );
  }
});

// Fires when an instance of the element is created
element.createdCallback = function() {
  this.detailsView = new DetailsView( this );
};
// Expose the open method
element.open = function(){
  this.detailsView.open();
};
// Expose the close method
element.close = function(){
  this.detailsView.close();
};
// Register the custom element
document.registerElement( "x-details", {
  prototype: element
});
</script>

Further on in the JavaScript code, we create an element based on a generic HTML element (Object.create( HTMLElement.prototype )). Here we could inherit from a complex element (for example, video) if needed. We register the x-details custom element using the element created earlier as its prototype. With element.createdCallback, we subscribe a handler that will be called when the custom element is created. Here we attach our view to the element to enhance it with the functionality that we intend for it. Now we can use the component in HTML, as follows:
<!DOCTYPE html>
<html>
  <head>
    <title>X-DETAILS</title>
    <!-- Importing Web Component's Polyfill -->
    <!-- uncomment for non-Chrome browsers
    <script src="./bower_components/webcomponentsjs/webcomponents.min.js"></script-->
    <!-- Importing Custom Elements -->
    <link rel="import" href="./x-details.html">
  </head>
  <body>
    <x-details data-summary="Click me">
      Nunc iaculis ac erat eu porttitor. Curabitur facilisis ligula et urna egestas mollis. Aliquam eget consequat tellus. Sed ullamcorper ante est. In tortor lectus, ultrices vel ipsum eget, ultricies facilisis nisl. Suspendisse porttitor blandit arcu et imperdiet.
    </x-details>
  </body>
</html>

Summary

This article covered how we can create our own custom advanced elements that can be easily reused, restyled, and enhanced. The assets required to render such elements (HTML, CSS, JavaScript, and images) are bundled as Web Components. So, we literally can build the Web now from components, similar to how buildings are made from bricks.

Resources for Article:

Further resources on this subject: An Introduction to Kibana [article] Working On Your Bot [article] Icons [article]
How to Keep a Simple Django App Up and Running

Liz Tom
02 Nov 2015
4 min read
Welcome back. You might have seen my last blog post on how to deploy a simple Django app using AWS. Quick Summary:

1. Spin up an EC2 instance
2. Install nginx, django, gunicorn on your EC2 instance
3. Turn on gunicorn and nginx
4. Success.

Well, it is a success until you terminate your connection to your EC2 instance. How do we keep the app running even after we disconnect? Based on the recommendations of those wiser than I, we're going to experiment with Upstart today. From the Upstart website:

> Upstart is an event-based replacement for the /sbin/init daemon
> which handles starting of tasks and services during boot, stopping
> them during shutdown and supervising them while the system is running.

Basically, you can write jobs in Upstart not only to keep your app running, but also to run asynchronous boot sequences instead of synchronous ones.

Let's get Started

Make sure you have your EC2 instance configured as described in my last blog post. Also make sure nginx and gunicorn are both not running. nginx starts automatically, so make sure you run:

sudo service nginx stop

Since we're using Ubuntu, Upstart comes already installed. You can check which version you have by running:

initctl --version

You should see a little something like this:

initctl (upstart 1.12.1)
Copyright (C) 2006-2014 Canonical Ltd., 2011 Scott James Remnant

This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Time to make our First Job

The first step is to cd into /etc/init. If you ls you'll notice that there are a bunch of .conf files. We're going to be making our own!

vim myjob.conf

Hello World

In order to write an Upstart job, you need to make sure you have either a script block (called a stanza) or an exec line.

description "My first Upstart job"

start on runlevel [2345]
stop on runlevel [!2345]

script
echo 'hello world'
end script

Now you can start this by running:

sudo service myjob start

To see your awesome handiwork:

cd /var/log/upstart
cat myjob.log

You'll see your very first Upstart job.

Something Useful

Now we'll actually get something running.

description "Gunicorn application server for Todo"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
setuid ubuntu
setgid www-data
chdir /home/ubuntu/project-folder

exec projectvirtualenv/bin/gunicorn --workers 2 project.wsgi:application

Save your file and now try:

sudo service myjob start

Visit your public IP address and blamo! You've got your Django app live for the world to see. Close out your terminal window. Is your app still running? It should be. Let's go over a few lines of what your job is doing.

start on runlevel [2345]
stop on runlevel [!2345]

Basically this means we're going to run our service when the system is at runlevels 2, 3, 4 or 5. Then when the system is not at any of those (rebooting, shutting down, etc.) we'll stop running our service.

respawn

This tells Upstart to restart our job if it fails. This means we don't need to worry about rerunning all of our commands every single time something goes down. In our case, every time something fails, it will restart our todo app.

setuid ubuntu
setgid www-data
chdir /home/ubuntu/project-folder

Next we're setting the user and group owners and changing directories into our project directory so we can run gunicorn from the right place.
exec projectvirtualenv/bin/gunicorn --workers 2 project.wsgi:application

Since this is an Upstart job, we need to have at least one script stanza or one exec line. So we have our exec line that starts gunicorn with 2 workers. We can set all sorts of configuration for gunicorn here as well. If you're ever wondering whether something went wrong and you want to troubleshoot, just check out your log:

/var/log/upstart/myjob.log

If you want to find out more about Upstart, you should visit their site. This tutorial brushes just the tiniest surface of Upstart; there's a bunch more that you can have it do for you, but every project has its own needs. Hopefully this tutorial inspires you to go out there and figure out what else you can achieve with some fancy Upstart jobs of your own!

About the Author

Liz Tom is a Creative Technologist at iStrategyLabs in Washington D.C. Liz's passion for full stack development and digital media makes her a natural fit at ISL. Before joining iStrategyLabs, she worked in the film industry doing everything from mopping blood off of floors to managing budgets. When she's not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.
Interactive Documents

Packt
02 Nov 2015
5 min read
This article by Julian Hillebrand and Maximilian H. Nierhoff, authors of the book Mastering RStudio for R Development, covers the following topics:

The two main ways to create interactive R Markdown documents
Creating R Markdown and Shiny documents and presentations
Using the ggvis package with R Markdown
Embedding different types of interactive charts in documents
Deploying interactive R Markdown documents

(For more resources related to this topic, see here.)

Creating interactive documents with R Markdown

In this article, we want to focus on the opportunities to create interactive documents with R Markdown and RStudio. This is, of course, particularly interesting for the readers of a document, since it enables them to interact with the document by changing chart types, parameters, values, or other similar things. In principle, there are two ways to make an R Markdown document interactive. Firstly, you can use the Shiny web application framework of RStudio, or secondly, there is the possibility of incorporating various interactive chart types by using the corresponding packages.

Using R Markdown and Shiny

Besides building complete web applications, there is also the possibility of integrating entire Shiny applications into R Markdown documents and presentations. Since we have already learned all the basic functions of R Markdown, and the use and logic of Shiny, we will focus in the following lines on integrating a simple Shiny app into an R Markdown file. In order for Shiny and R Markdown to work together, the argument runtime: shiny must be added to the YAML header of the file. Of course, the RStudio IDE offers a quick way to create a new Shiny document or presentation. Click on the new file, choose R Markdown, and in the popup window, select Shiny from the left-hand side menu. In the Shiny menu, you can decide whether you want to start with a Shiny Document option or a Shiny Presentation option:

Shiny Document

After choosing the Shiny Document option, a prefilled .Rmd file opens. It is different from the known R Markdown interface in that there is the Run Document button instead of the knit button and icon. The prefilled .Rmd file produces an R Markdown document with a working and interactive Shiny application. You can change the number of bins in the plot and also adjust the bandwidth. All these changes get rendered in real time, directly in your document.

Shiny Presentation

Also, when you click on Shiny Presentation in the selection menu, a prefilled .Rmd file opens. Because it is a presentation, the output format is changed to ioslides_presentation in the YAML header. The button in the code pane is now called Run Presentation. Otherwise, a Shiny Presentation looks just like the normal R Markdown presentations. The Shiny app gets embedded in a slide and you can again interact with the underlying data of the application.

Disassembling a Shiny R Markdown document

Of course, the question arises: how is it possible to embed a whole Shiny application into an R Markdown document without the two usual basic files, ui.R and server.R? In fact, the rmarkdown package creates an invisible server.R file by extracting the R code from the code chunks. Reactive elements get placed into the index.html file of the HTML output, while the whole R Markdown document acts as the ui.R file.

Embedding interactive charts into R Markdown

The next way is to embed interactive chart types into R Markdown documents by using various R packages that enable us to create interactive charts.
Some such packages, which we have seen before, are: ggvis, rCharts, googleVis, and dygraphs. Therefore, we will not introduce them again, but will instead introduce some more packages that enable us to build interactive charts. They are: threejs, networkD3, metricsgraphics, and plotly. Please keep in mind that the interactivity logically only works with the HTML output of R Markdown.

Using ggvis for interactive R Markdown documents

Broadly speaking, ggvis is the successor of the well-known graphics package, ggplot2. The interactivity options of ggvis, which are based on the reactive programming model of the Shiny framework, are also useful for creating interactive R Markdown documents. To create an interactive R Markdown document with ggvis, you need to click on the new file, then on R Markdown..., choose Shiny in the left menu of the new window, and finally, click on OK to create the document. As mentioned before, since ggvis uses the reactive model of Shiny, we need to create an R Markdown document with ggvis this way. If you want to include an interactive ggvis plot within a normal R Markdown file, make sure to include the runtime: shiny argument in the YAML header.

As shown, readers of this R Markdown document can easily adjust the bandwidth, and also the kernel model. The interactive controls are created with input_. In our example, we used the controls input_slider() and input_select(). Some of the other controls are, for example, input_checkbox(), input_numeric(), and so on. These controls have different arguments depending on the type of input. For both controls in our example, we used the label argument, which is just a text label shown next to the controls. Other arguments are id (a unique identifier for the assigned control) and map (a function that remaps the output).

Summary

In this article, we have learned the two main ways to create interactive R Markdown documents. On the one hand, there is the versatile, usable Shiny framework. This includes the inbuilt Shiny documents and presentations options in RStudio, and also the ggvis package, which takes advantage of the Shiny framework to build its interactivity. On the other hand, we introduced several already known, and also some new, R packages that make it possible to create several different types of interactive charts. Most of them achieve this by binding R to existing JavaScript libraries.

Resources for Article:

Further resources on this subject: Jenkins Continuous Integration [article] Aspects of Data Manipulation in R [article] Find Friends on Facebook [article]
Relational Databases with SQLAlchemy

Packt
02 Nov 2015
28 min read
In this article by Matthew Copperwaite, author of the book Learning Flask Framework, we see how relational databases are the bedrock upon which almost every modern web application is built. Learning to think about your application in terms of tables and relationships is one of the keys to a clean, well-designed project. We will be using SQLAlchemy, a powerful object relational mapper that allows us to abstract away the complexities of multiple database engines, to work with the database directly from within Python. In this article, we shall:

Present a brief overview of the benefits of using a relational database
Introduce SQLAlchemy, the Python SQL toolkit and object relational mapper
Configure our Flask application to use SQLAlchemy
Write a model class to represent blog entries
Learn how to save and retrieve blog entries from the database
Perform queries—sorting, filtering, and aggregation
Create schema migrations using Alembic

(For more resources related to this topic, see here.)

Why use a relational database?

Our application's database is much more than a simple record of things that we need to save for future retrieval. If all we needed to do was save and retrieve data, we could easily use flat text files. The fact is, though, that we want to be able to perform interesting queries on our data. What's more, we want to do this efficiently and without reinventing the wheel. While non-relational databases (sometimes known as NoSQL databases) are very popular and have their place in the world of the web, relational databases long ago solved the common problems of filtering, sorting, aggregating, and joining tabular data. Relational databases allow us to define sets of data in a structured way that maintains the consistency of our data. Using relational databases also gives us, the developers, the freedom to focus on the parts of our app that matter.

In addition to efficiently performing ad hoc queries, a relational database server will also do the following:

Ensure that our data conforms to the rules set forth in the schema
Allow multiple people to access the database concurrently, while at the same time guaranteeing the consistency of the underlying data
Ensure that data, once saved, is not lost even in the event of an application crash

Relational databases and SQL, the programming language used with relational databases, are topics worthy of an entire book. Because this book is devoted to teaching you how to build apps with Flask, I will show you how to use a tool that has been widely adopted by the Python community for working with databases, namely, SQLAlchemy. SQLAlchemy abstracts away many of the complications of writing SQL queries, but there is no substitute for a deep understanding of SQL and the relational model. For that reason, if you are new to SQL, I would recommend that you check out the colorful book Learn SQL the Hard Way by Zed Shaw, available online for free at http://sql.learncodethehardway.org/.

Introducing SQLAlchemy

SQLAlchemy is an extremely powerful library for working with relational databases in Python. Instead of writing SQL queries by hand, we can use normal Python objects to represent database tables and execute queries. There are a number of benefits to this approach, which are listed as follows:

Your application can be developed entirely in Python.
Subtle differences between database engines are abstracted away.
This allows you to use a lightweight database, for instance, SQLite, for local development and testing, and then switch to a database designed for high loads (such as PostgreSQL) in production.

Database errors are less common because there are now two layers between your application and the database server: the Python interpreter itself (which will catch obvious syntax errors), and SQLAlchemy, which has well-defined APIs and its own layer of error-checking.

Your database code may become more efficient, thanks to SQLAlchemy's unit-of-work model, which helps reduce unnecessary round-trips to the database. SQLAlchemy also has facilities for efficiently pre-fetching related objects, known as eager loading.

Object Relational Mapping (ORM) makes your code more maintainable, an aspiration known as don't repeat yourself (DRY). Suppose you add a column to a model. With SQLAlchemy, it will be available whenever you use that model. If, on the other hand, you had hand-written SQL queries strewn throughout your app, you would need to update each query, one at a time, to ensure that you were including the new column.

SQLAlchemy can help you avoid SQL injection vulnerabilities.

Excellent library support: There is a multitude of useful libraries that can work directly with your SQLAlchemy models to provide things like maintenance interfaces and RESTful APIs.

I hope you're excited after reading this list. If all the items in this list don't make sense to you right now, don't worry. Now that we have discussed some of the benefits of using SQLAlchemy, let's install it and start coding.

If you'd like to learn more about SQLAlchemy, there is a chapter devoted entirely to its design in The Architecture of Open-Source Applications, available online for free at http://aosabook.org/en/sqlalchemy.html.

Installing SQLAlchemy

We will use pip to install SQLAlchemy into the blog app's virtualenv. To activate your virtualenv, change into the project directory and source the activate script as follows:

$ cd ~/projects/blog
$ source bin/activate
(blog) $ pip install sqlalchemy
Downloading/unpacking sqlalchemy
…
Successfully installed sqlalchemy
Cleaning up...

You can check whether your installation succeeded by opening a Python interpreter and checking the SQLAlchemy version; note that your exact version number is likely to differ:

$ python
>>> import sqlalchemy
>>> sqlalchemy.__version__
'0.9.0b2'

Using SQLAlchemy in our Flask app

SQLAlchemy works very well with Flask on its own, but the author of Flask has released a special Flask extension named Flask-SQLAlchemy that provides helpers for many common tasks, and can save us from having to re-invent the wheel later on. Let's use pip to install this extension:

(blog) $ pip install flask-sqlalchemy
…
Successfully installed flask-sqlalchemy

Flask provides a standard interface for developers who are interested in building extensions. As the framework has grown in popularity, the number of high-quality extensions has increased. If you'd like to take a look at some of the more popular extensions, there is a curated list available on the Flask project website at http://flask.pocoo.org/extensions/.

Choosing a database engine

SQLAlchemy supports a multitude of popular database dialects, including SQLite, MySQL, and PostgreSQL. Depending on the database you would like to use, you may need to install an additional Python package containing a database driver. Listed next are several popular databases supported by SQLAlchemy and the corresponding pip-installable driver.
Some databases have multiple driver options, so I have listed the most popular one first:

SQLite: not needed, part of the Python standard library since version 2.5
MySQL: MySQL-python, PyMySQL (pure Python), OurSQL
PostgreSQL: psycopg2
Firebird: fdb
Microsoft SQL Server: pymssql, PyODBC
Oracle: cx-Oracle

SQLite comes as standard with Python and does not require a separate server process, so it is perfect for getting up and running quickly. For simplicity in the examples that follow, I will demonstrate how to configure the blog app for use with SQLite. If you have a different database in mind that you would like to use for the blog project, feel free to use pip to install the necessary driver package at this time.

Connecting to the database

Using your favorite text editor, open the config.py module for our blog project (~/projects/blog/app/config.py). We are going to add an SQLAlchemy-specific setting to instruct Flask-SQLAlchemy how to connect to our database. The new lines are highlighted in the following:

class Configuration(object):
    APPLICATION_DIR = current_directory
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///%s/blog.db' % APPLICATION_DIR

The SQLALCHEMY_DATABASE_URI is composed of the following parts:

dialect+driver://username:password@host:port/database

Because SQLite databases are stored in local files, the only information we need to provide is the path to the database file. On the other hand, if you wanted to connect to PostgreSQL running locally, your URI might look something like this:

postgresql://postgres:secretpassword@localhost:5432/blog_db

If you're having trouble connecting to your database, try consulting the SQLAlchemy documentation on database URIs: http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html

Now that we've specified how to connect to the database, let's create the object responsible for actually managing our database connections. This object is provided by the Flask-SQLAlchemy extension and is conveniently named SQLAlchemy. Open app.py and make the following additions:

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)

These changes instruct our Flask app, and in turn SQLAlchemy, how to communicate with our application's database. The next step will be to create a table for storing blog entries and, to do so, we will create our first model.

Creating the Entry model

A model is the data representation of a table of data that we want to store in the database. These models have attributes called columns that represent the data items in the data. So, if we were creating a Person model, we might have columns for storing the first and last name, date of birth, home address, hair color, and so on. Since we are interested in creating a model to represent blog entries, we will have columns for things like the title and body content. Note that we don't say a People model or an Entries model; models are singular even though they commonly represent many different objects. With SQLAlchemy, creating a model is as easy as defining a class and specifying a number of attributes assigned to that class. Let's start with a very basic model for our blog entries.
Create a new file named models.py in the blog project's app/ directory and enter the following code:

import datetime, re

from app import db

def slugify(s):
    return re.sub('[^\w]+', '-', s).lower()

class Entry(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    modified_timestamp = db.Column(
        db.DateTime,
        default=datetime.datetime.now,
        onupdate=datetime.datetime.now)

    def __init__(self, *args, **kwargs):
        super(Entry, self).__init__(*args, **kwargs)  # Call parent constructor.
        self.generate_slug()

    def generate_slug(self):
        self.slug = ''
        if self.title:
            self.slug = slugify(self.title)

    def __repr__(self):
        return '<Entry: %s>' % self.title

There is a lot going on, so let's start with the imports and work our way down. We begin by importing the standard library datetime and re modules. We will be using datetime to get the current date and time, and re to do some string manipulation. The next import statement brings in the db object that we created in app.py. As you recall, the db object is an instance of the SQLAlchemy class, which is a part of the Flask-SQLAlchemy extension. The db object provides access to the classes that we need to construct our Entry model, which is just a few lines ahead.

Before the Entry model, we define a helper function, slugify, which we will use to give our blog entries some nice URLs. The slugify function takes a string such as A post about Flask and uses a regular expression to turn it into a string that is human-readable in a URL, and so returns a-post-about-flask.

Next is the Entry model. Our Entry model is a normal class that extends db.Model. By extending db.Model, our Entry class will inherit a variety of helpers which we'll use to query the database. The attributes of the Entry model are a simple mapping of the names and data that we wish to store in the database, and are listed as follows:

id: This is the primary key for our database table. This value is set for us automatically by the database when we create a new blog entry, usually an auto-incrementing number for each new entry. While we will not explicitly set this value, a primary key comes in handy when you want to refer one model to another.

title: The title for a blog entry, stored as a String column with a maximum length of 100.

slug: The URL-friendly representation of the title, stored as a String column with a maximum length of 100. This column also specifies unique=True, so that no two entries can share the same slug.

body: The actual content of the post, stored in a Text column. This differs from the String type of the Title and Slug as you can store as much text as you like in this field.

created_timestamp: The time a blog entry was created, stored in a DateTime column. We instruct SQLAlchemy to automatically populate this column with the current time by default when an entry is first saved.

modified_timestamp: The time a blog entry was last updated. SQLAlchemy will automatically update this column with the current time whenever we save an entry.

For short strings such as titles or names of things, the String column is appropriate, but when the text may be especially long, it is better to use a Text column, as we did for the entry body.

We've overridden the constructor for the class (__init__) so that, when a new model is created, it automatically sets the slug for us based on the title.
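A quick way to convince yourself of what the slug generation does is to try the helper in the interactive shell; a minimal check, run from the app directory:

In []: from models import slugify
In []: slugify('A post about Flask')
Out[]: 'a-post-about-flask'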
The last piece is the __repr__ method, which is used to generate a helpful representation of instances of our Entry class. The exact format of __repr__ is not important, but it allows you to identify the object that the program is working with when debugging.

A final bit of code needs to be added to main.py, the entry point to our application, to ensure that the models are imported. Add the highlighted changes to main.py as follows:

from app import app, db
import models
import views

if __name__ == '__main__':
    app.run()

Creating the Entry table

In order to start working with the Entry model, we first need to create a table for it in our database. Luckily, Flask-SQLAlchemy comes with a nice helper for doing just this. Create a new sub-folder named scripts in the blog project's app directory. Then create a file named create_db.py:

(blog) $ cd app/
(blog) $ mkdir scripts
(blog) $ touch scripts/create_db.py

Add the following code to the create_db.py module. This script will automatically look at all the models that we have written and create a new table in our database for the Entry model:

from main import db

if __name__ == '__main__':
    db.create_all()

Execute the script from inside the app/ directory. Make sure the virtualenv is active. If everything goes successfully, you should see no output.

(blog) $ python create_db.py
(blog) $

If you encounter errors while creating the database tables, make sure you are in the app directory, with the virtualenv activated, when you run the script. Next, ensure that there are no typos in your SQLALCHEMY_DATABASE_URI setting.

Working with the Entry model

Let's experiment with our new Entry model by saving a few blog entries. We will be doing this from the Python interactive shell. At this stage, let's install IPython, a sophisticated shell with features such as tab-completion (which the default Python shell lacks):

(blog) $ pip install ipython

Now, make sure we are in the app directory, start the shell, and create a couple of entries as follows:

(blog) $ ipython

In []: from models import *  # First things first, import our Entry model and db object.
In []: db  # What is db?
Out[]: <SQLAlchemy engine='sqlite:////home/charles/projects/blog/app/blog.db'>

If you are familiar with the normal Python shell but not IPython, things may look a little different at first. The main thing to be aware of is that In [] refers to the code you type in, and Out [] is the output of the commands you put into the shell.

IPython has a neat feature that allows you to print detailed information about an object. This is done by typing in the object's name followed by a question mark (?). Introspecting the Entry model provides a bit of information, including the constructor's argument signature and the object's documentation string (the docstring):

In []: Entry?  # What is Entry and how do we create it?
Type:        _BoundDeclarativeMeta
String Form: <class 'models.Entry'>
File:        /home/charles/projects/blog/app/models.py
Docstring:   <no docstring>
Constructor information:
 Definition: Entry(self, *args, **kwargs)

We can create Entry objects by passing column values in as keyword arguments. In the preceding example, it uses **kwargs; this is a shortcut for taking a dict object and using it as the values for defining the object, as shown next:

In []: first_entry = Entry(title='First entry', body='This is the body of my first entry.')

In order to save our first entry, we will need to add it to the database session.
The session is simply an object that represents our actions on the database. Even after adding it to the session, it will not be saved to the database yet. In order to save the entry to the database, we need to commit our session: In []: db.session.add(first_entry) In []: first_entry.id is None # No primary key, the entry has not been saved. Out[]: True In []: db.session.commit() In []: first_entry.id Out[]: 1 In []: first_entry.created_timestamp Out[]: datetime.datetime(2014, 1, 25, 9, 49, 53, 1337) As you can see from the preceding code examples, once we commit the session, a unique id will be assigned to our first entry and the created_timestamp will be set to the current time. Congratulations, you've created your first blog entry! Try adding a few more on your own. You can add multiple entry objects to the same session before committing, so give that a try as well. At any point while you are experimenting, feel free to delete the blog.db file and re-run the create_db.py script to start over with a fresh database. Making changes to an existing entry In order to make changes to an existing Entry, simply make your edits and then commit. Let's retrieve our Entry using the id that was returned to use earlier, make some changes and commit it. SQLAlchemy will know that it needs to be updated. Here is how you might make edits to the first entry: In []: first_entry = Entry.query.get(1) In []: first_entry.body = 'This is the first entry, and I have made some edits.' In []: db.session.commit() And just like that your changes are saved. Deleting an entry Deleting an entry is just as easy as creating one. Instead of calling db.session.add, we will call db.session.delete and pass in the Entry instance that we wish to remove: In []: bad_entry = Entry(title='bad entry', body='This is a lousy entry.') In []: db.session.add(bad_entry) In []: db.session.commit() # Save the bad entry to the database. In []: db.session.delete(bad_entry) In []: db.session.commit() # The bad entry is now deleted from the database. Retrieving blog entries While creating, updating, and deleting are fairly straightforward operations, the real fun starts when we look at ways to retrieve our entries. We'll start with the basics, and then work our way up to more interesting queries. We will use a special attribute on our model class to make queries: Entry.query. This attribute exposes a variety of APIs for working with the collection of entries in the database. Let's simply retrieve a list of all the entries in the Entry table: In []: entries = Entry.query.all() In []: entries # What are our entries? Out[]: [<Entry u'First entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>, <Entry u'Fourth entry'>] As you can see, in this example, the query returns a list of Entry instances that we created. When no explicit ordering is specified, the entries are returned to us in an arbitrary order chosen by the database. 
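The additional entries shown in the preceding list ('Second entry' through 'Fourth entry') were created by hand, as suggested earlier. If you would rather script that step, the following sketch shows one way to add several objects to the same session and commit them together; the titles and bodies here are only placeholders:

In []: titles = ['Second entry', 'Third entry', 'Fourth entry']

In []: for title in titles:
  ....:     db.session.add(Entry(title=title, body='Body text for %s.' % title))
  ....:

In []: db.session.commit()  # All three new entries are saved in a single commit.

In []: Entry.query.count()  # Four entries in total, including the first one.
Out[]: 4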
Let's specify that we want the entries returned to us in an alphabetical order by title: In []: Entry.query.order_by(Entry.title.asc()).all() Out []: [<Entry u'First entry'>, <Entry u'Fourth entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>] Shown next is how you would list your entries in reverse-chronological order, based on when they were last updated: In []: oldest_to_newest = Entry.query.order_by(Entry.modified_timestamp.desc()).all() Out []: [<Entry: Fourth entry>, <Entry: Third entry>, <Entry: Second entry>, <Entry: First entry>] Filtering the list of entries It is very useful to be able to retrieve the entire collection of blog entries, but what if we want to filter the list? We could always retrieve the entire collection and then filter it in Python using a loop, but that would be very inefficient. Instead we will rely on the database to do the filtering for us, and simply specify the conditions for which entries should be returned. In the following example, we will specify that we want to filter by entries where the title equals 'First entry'. In []: Entry.query.filter(Entry.title == 'First entry').all() Out[]: [<Entry u'First entry'>] If this seems somewhat magical to you, it's because it really is! SQLAlchemy uses operator overloading to convert expressions like <Model>.<column> == <some value> into an abstracted object called BinaryExpression. When you are ready to execute your query, these data-structures are then translated into SQL. A BinaryExpression is simply an object that represents the logical comparison and is produced by over-riding the standards methods that are typically called on an object when comparing values in Python. In order to retrieve a single entry, you have two options, .first() and .one(). Their differences and similarities are summarized in the following table: Number of matching rows first() behavior one() behavior 1 Return the object. Return the object. 0 Return None. Raise sqlalchemy.orm.exc.NoResultFound 2+ Return the first object (based on either explicit ordering or the ordering chosen by the database). Raise sqlalchemy.orm.exc.MultipleResultsFound Let's try the same query as before, but instead of calling .all(), we will call .first() to retrieve a single Entry instance: In []: Entry.query.filter(Entry.title == 'First entry').first() Out[]: <Entry u'First entry'> Notice how previously .all() returned a list containing the object, whereas .first() returned just the object itself. Special lookups In the previous example we tested for equality, but there are many other types of lookups possible. In the following table, have listed some that you may find useful. A complete list can be found in the SQLAlchemy documentation. Example Meaning Entry.title == 'The title' Entries where the title is "The title", case-sensitive. Entry.title != 'The title' Entries where the title is not "The title". Entry.created_timestamp < datetime.date(2014, 1, 25) Entries created before January 25, 2014. For less than or equal, use <=. Entry.created_timestamp > datetime.date(2014, 1, 25) Entries created after January 25, 2014. For greater than or equal, use >=. Entry.body.contains('Python') Entries where the body contains the word "Python", case-sensitive. Entry.title.endswith('Python') Entries where the title ends with the string "Python", case-sensitive. Note that this will also match titles that end with the word "CPython", for example. Entry.title.startswith('Python') Entries where the title starts with the string "Python", case-sensitive. 
Note that this will also match titles like "Pythonistas". Entry.body.ilike('%python%') Entries where the body contains the word "python" anywhere in the text, case-insensitive. The "%" character is a wild-card. Entry.title.in_(['Title one', 'Title two']) Entries where the title is in the given list, either 'Title one' or 'Title two'. Combining expressions The expressions listed in the preceding table can be combined using bitwise operators to produce arbitrarily complex expressions. Let's say we want to retrieve all blog entries that have the word Python or Flask in the title. To accomplish this, we will create two contains expressions, then combine them using Python's bitwise OR operator which is a pipe| character unlike a lot of other languages that use a double pipe || character: Entry.query.filter(Entry.title.contains('Python') | Entry.title.contains('Flask')) Using bitwise operators, we can come up with some pretty complex expressions. Try to figure out what the following example is asking for: Entry.query.filter( (Entry.title.contains('Python') | Entry.title.contains('Flask')) & (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30))) ) As you probably guessed, this query returns all entries where the title contains either Python or Flask, and which were created within the last 30 days. We are using Python's bitwise OR and AND operators to combine the sub-expressions. For any query you produce, you can view the generated SQL by printing the query as follows: In []: query = Entry.query.filter( (Entry.title.contains('Python') | Entry.title.contains('Flask')) & (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30))) ) In []: print str(query) SELECT entry.id AS entry_id, ... FROM entry WHERE ( (entry.title LIKE '%%' || :title_1 || '%%') OR (entry.title LIKE '%%' || :title_2 || '%%') ) AND entry.created_timestamp > :created_timestamp_1 Negation There is one more piece to discuss, which is negation. If we wanted to get a list of all blog entries which did not contain Python or Flask in the title, how would we do that? SQLAlchemy provides two ways to create these types of expressions, using either Python's unary negation operator (~) or by calling db.not_(). This is how you would construct this query with SQLAlchemy: Using unary negation: In []: Entry.query.filter(~(Entry.title.contains('Python') | Entry.title.contains('Flask')))   Using db.not_(): In []: Entry.query.filter(db.not_(Entry.title.contains('Python') | Entry.title.contains('Flask'))) Operator precedence Not all operations are considered equal to the Python interpreter. This is like in math class, where we learned that expressions like 2 + 3 * 4 are equal to 14 and not 20, because the multiplication operation occurs first. In Python, bitwise operators all have a higher precedence than things like equality tests, so this means that when you are building your query expression, you have to pay attention to the parentheses. Let's look at some example Python expressions and see the corresponding query: Expression Result (Entry.title == 'Python' | Entry.title == 'Flask') Wrong! SQLAlchemy throws an error because the first thing to be evaluated is actually the 'Python' | Entry.title! (Entry.title == 'Python') | (Entry.title == 'Flask') Right. Returns entries where the title is either "Python" or "Flask". ~Entry.title == 'Python' Wrong! SQLAlchemy will turn this into a valid SQL query, but the results will not be meaningful. ~(Entry.title == 'Python') Right. 
Returns entries where the title is not equal to "Python". If you find yourself struggling with the operator precedence, it's a safe bet to put parentheses around any comparison that uses ==, !=, <, <=, >, and >=. Making changes to the schema The final topic we will discuss in this article is how to make modifications to an existing Model definition. From the project specification, we know we would like to be able to save drafts of our blog entries. Right now we don't have any way to tell whether an entry is a draft or not, so we will need to add a column that let's us store the status of our entry. Unfortunately, while db.create_all() works perfectly for creating tables, it will not automatically modify an existing table; to do this we need to use migrations. Adding Flask-Migrate to our project We will use Flask-Migrate to help us automatically update our database whenever we change the schema. In the blog virtualenv, install Flask-migrate using pip: (blog) $ pip install flask-migrate The author of SQLAlchemy has a project called alembic; Flask-Migrate makes use of this and integrates it with Flask directly, making things easier. Next we will add a Migrate helper to our app. We will also create a script manager for our app. The script manager allows us to execute special commands within the context of our app, directly from the command-line. We will be using the script manager to execute the migrate command. Open app.py and make the following additions: from flask import Flask from flask.ext.migrate import Migrate, MigrateCommand from flask.ext.script import Manager from flask.ext.sqlalchemy import SQLAlchemy from config import Configuration app = Flask(__name__) app.config.from_object(Configuration) db = SQLAlchemy(app) migrate = Migrate(app, db) manager = Manager(app) manager.add_command('db', MigrateCommand) In order to use the manager, we will add a new file named manage.py along with app.py. Add the following code to manage.py: from app import manager from main import * if __name__ == '__main__': manager.run() This looks very similar to main.py, the key difference being that instead of calling app.run(), we are calling manager.run(). Django has a similar, although auto-generated, manage.py file that serves a similar function. Creating the initial migration Before we can start changing our schema, we need to create a record of its current state. To do this, run the following commands from inside your blog's app directory. The first command will create a migrations directory inside the app folder which will track the changes we make to our schema. The second command db migrate will create a snapshot of our current schema so that future changes can be compared to it. (blog) $ python manage.py db init Creating directory /home/charles/projects/blog/app/migrations ... done ... (blog) $ python manage.py db migrate INFO [alembic.migration] Context impl SQLiteImpl. INFO [alembic.migration] Will assume non-transactional DDL. Generating /home/charles/projects/blog/app/migrations/versions/535133f91f00_.py ... done Finally, we will run db upgrade to run the migration which will indicate to the migration system that everything is up-to-date: (blog) $ python manage.py db upgrade INFO [alembic.migration] Context impl SQLiteImpl. INFO [alembic.migration] Will assume non-transactional DDL. INFO [alembic.migration] Running upgrade None -> 535133f91f00, empty message Adding a status column Now that we have a snapshot of our current schema, we can start making changes. 
We will be adding a new column named status, which will store an integer value corresponding to a particular status. Although there are only two statuses at the moment (PUBLIC and DRAFT), using an integer instead of a Boolean gives us the option to easily add more statuses in the future. Open models.py and make the following additions to the Entry model: class Entry(db.Model): STATUS_PUBLIC = 0 STATUS_DRAFT = 1 id = db.Column(db.Integer, primary_key=True) title = db.Column(db.String(100)) slug = db.Column(db.String(100), unique=True) body = db.Column(db.Text) status = db.Column(db.SmallInteger, default=STATUS_PUBLIC) created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now) ... From the command-line, we will once again be running db migrate to generate the migration script. You can see from the command's output that it found our new column: (blog) $ python manage.py db migrate INFO [alembic.migration] Context impl SQLiteImpl. INFO [alembic.migration] Will assume non-transactional DDL. INFO [alembic.autogenerate.compare] Detected added column 'entry.status' Generating /home/charles/projects/blog/app/migrations/versions/2c8e81936cad_.py ... done Because we have blog entries in the database, we need to make a small modification to the auto-generated migration to ensure the statuses for the existing entries are initialized to the proper value. To do this, open up the migration file (mine is migrations/versions/2c8e81936cad_.py) and change the following line: op.add_column('entry', sa.Column('status', sa.SmallInteger(), nullable=True)) The replacement of nullable=True with server_default='0' tells the migration script to not set the column to null by default, but instead to use 0: op.add_column('entry', sa.Column('status', sa.SmallInteger(), server_default='0')) Finally, run db upgrade to run the migration and create the status column: (blog) $ python manage.py db upgrade INFO [alembic.migration] Context impl SQLiteImpl. INFO [alembic.migration] Will assume non-transactional DDL. INFO [alembic.migration] Running upgrade 535133f91f00 -> 2c8e81936cad, empty message Congratulations, your Entry model now has a status field! Summary By now you should be familiar with using SQLAlchemy to work with a relational database. We covered the benefits of using a relational database and an ORM, configured a Flask application to connect to a relational database, and created SQLAlchemy models. All this allowed us to create relationships between our data and perform queries. To top it off, we also used a migration tool to handle future database schema changes. We will set aside the interactive interpreter and start creating views to display blog entries in the web browser. We will put all our SQLAlchemy knowledge to work by creating interesting lists of blog entries, as well as a simple search feature. We will build a set of templates to make the blogging site visually appealing, and learn how to use the Jinja2 templating language to eliminate repetitive HTML coding. Resources for Article:   Further resources on this subject: Man, Do I Like Templates! [article] Snap – The Code Snippet Sharing Application [article] Deploying on your own server [article]

The Exciting Features of HaxeFlixel

Packt
02 Nov 2015
4 min read
This article by Jeremy McCurdy, the author of the book Haxe Game Development Essentials, uncovers the exciting features of HaxeFlixel. When getting into cross-platform game development, it's often difficult to pick the best tool. There are a lot of engines and languages out there to do it, but when creating 2D games, one of the best options out there is HaxeFlixel. HaxeFlixel is a game engine written in the Haxe language. It is powered by the OpenFL framework. Haxe is a cross-platform language and compiler that allows you to write code and have it run on a multitude of platforms. OpenFL is a framework that expands the Haxe API and allows you to have easy ways to handle things such as rendering an audio in a uniform way across different platforms. Here's a rundown of what we'll look at: Core features Display Audio Input Other useful features Multiplatform support Advanced user interface support Visual effects  (For more resources related to this topic, see here.)  Core features HaxeFlixel is a 2D game engine, originally based off the Flash game engine Flixel. So, what makes it awesome? Let's start with the basic things you need: display, audio, and input. Display In HaxeFlixel, most visual elements are represented by objects using the FlxSprite class. This can be anything from spritesheet animations to shapes drawn through code. This provides you with a simple and consistent way of working with visual elements. Here's an example of how the FlxSprite objects are used: You can handle things such as layering by using the FlxGroup class, which does what its name implies—it groups things together. The FlxGroup class also can be used for collision detection (check whether objects from group A hit objects from group B). It also acts an object pool for better memory management. It's really versatile without feeling bloated. Everything visual is displayed by using the FlxCamera class. As the name implies, it's a game camera. It allows you to do things such as scrolling, having fullscreen visual effects, and zooming in or out of the page. Audio Sound effects and music are handled using a simple but effective sound frontend. It allows you to play sound effects and loop music clips with easy function calls. You can also manage the volume on a per sound basis, via global volume controls, or a mix of both. Input HaxeFlixel supports many methods of input. You can use mouse, touch, keyboard, or gamepad input. This allows you to support players on every platform easily. On desktop platforms, you can easily customize the mouse cursor without the need to write special functionalities. The built-in gamepad support covers mappings for the following controllers: Xbox PS3 PS4 OUYA Logitech Other useful features HaxeFlixel has a bunch of other cool features. This makes it a solid choice as a game engine. Among these are multiplatform support, advanced user interface support, and visual effects. Multi-platform support HaxeFlixel can be built for many different platforms. Much of this comes from it being built using OpenFL and its stellar cross-platform support. You can build desktop games that will work natively on Windows, Mac, and Linux. You can build mobile games for Android and iOS with relative ease. You can also target the Web by using Flash or the experimental support for HTML5. Advanced user interface support By using the flixel-ui add-on library, you can create complex game user interfaces. You can define and set up these interfaces with this by using XML configuration files. 
The flixel-ui library gives you access to a lot of different control types, such as 9-sliced images, the check/toggle buttons, text input, tabs, and drop-down menus. You can even localize UI text into different languages by using the firetongue library of Haxe. Visual effects Another add-on is the effects library. It allows you to warp and distort sprites by using the FlxGlitchSprite and FlxWaveSprite classes. You can also add trails to objects by using the FlxTrail class. Aside from the add-on library, HaxeFlixel also has built-in support for 2D particle effects, camera effects such as screen flashes and fades, and screen shake for an added impact. Summary In this article, we discussed several features of HaxeFlixel. This includes the core features of display, audio, and input. We also covered the additional features of multiplatform support, advanced user interface support, and visual effects. Resources for Article: Further resources on this subject: haXe 2: The Dynamic Type and Properties [article] Being Cross-platform with haXe [article] haXe 2: Using Templates [article]

Let's Get Physical – Using GameMaker's Physics System

Packt
02 Nov 2015
28 min read
 In this article by Brandon Gardiner and Julián Rojas Millán, author of the book GameMaker Cookbook we'll cover the following topics: Creating objects that use physics Alternating the gravity Applying a force via magnets Creating a moving platform Making a rope (For more resources related to this topic, see here.) The majority of video games are ruled by physics in one way or another. 2D platformers require coded movement and jump physics. Shooters, both 2D and 3D, use ballistic calculators that vary in sophistication to calculate whether you shot that guy or missed him and he's still coming to get you. Even Pong used rudimentary physics to calculate the ball's trajectory after bouncing off of a paddle or wall. The next time you play a 3D shooter or action-adventure game, check whether or not you see the logo for Havok, a physics engine used in over 500 games since it was introduced in 2000. The point is that physics, however complex, is important in video games. GameMaker comes with its own engine that can be used to recreate physics-based sandbox games, such as The Incredible Machine, or even puzzle games, such as Cut the Rope or Angry Birds. Let's take a look at how elements of these games can be accomplished using GameMaker's built-in physics engine. Physics engine 101 In order to use GameMaker's physics engine, we first need to set it up. Let's create and test some basic physics before moving on to something more complicated. Gravity and force One of the things that we learned with regards to GameMaker physics was to create our own simplistic gravity. Now that we've set up gravity using the physics engine, let's see how we can bend it according to our requirements. Physics in the environment GameMaker's physics engine allows you to choose not only the objects that are affected by external forces but also allows you to see how they are affected. Let's take a look at how this can be applied to create environmental objects in your game. Advanced physics-based objects Many platforming games, going all the way back to Pitfall!, have used objects, such as a rope as a gameplay feature. Pitfall!, mind you, uses static rope objects to help the player avoid crocodiles, but many modern games use dynamic ropes and chains, among other things, to create a more immersive and challenging experience. Creating objects that use physics There's a trend in video games where developers create products that have less games than play areas; worlds and simulators in which a player may or may not be given an objective and it wouldn't matter either way. These games can take on a life of their own; Minecraft is essentially a virtual game of building blocks and yet has become a genre of its own, literally making its creator, Markus Persson (also known as Notch), a billionaire in the process. While it is difficult to create, the fun in games such as Minecraft is designed by the player. If you give a player a set of tools or objects to play with, you may end up seeing an outcome you hadn't initially thought of and that's a good thing. The reason why I have mentioned all of this is to show you how it binds to GameMaker and what we can do with it. In a sense, GameMaker is a lot like Minecraft. It is a set of tools, such as the physics engine we're about to use, that the user can employ if he/she desires (of course, within limits), in order to create something funny or amazing or both. What you do with these tools is up to you, but you have to start somewhere. 
Let's take a look at how to build a simple physics simulator. Getting ready The first thing you'll need is a room. Seems simple enough, right? Well, it is. One difference, however, is that you'll need to enable physics before we begin. With the room open, click on the Physics tab and make sure that the box marked Room is Physics World is checked. After this, we'll need some sprites and objects. For sprites, you'll need a circle, triangle, and two squares, each of a different color. The circle is for obj_ball. The triangle is for obj_poly. One of the squares is for obj_box, while the other is for obj_ground. You'll also need four objects without sprites: obj_staticParent, obj_dynamicParent, obj_button, and obj_control. How to do it Open obj_staticParent and add two collision events: one with itself and one with obj_dynamicParent. In each of the collision events, drag and drop a comment from the Control tab to the Actions box. In each comment, write Collision. Close obj_staticParent and repeat steps 1-3 for obj_dynamicParent. In obj_dynamicParent, click on Add Event, and then click on Other and select Outside Room. From the Main1 tab, drag and drop Destroy Instance in the Actions box. Select Applies to Self. Open obj_ground and set the parent to obj_staticParent. Add a create event with a code block containing the following code: var fixture = physics_fixture_create(); physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2); physics_fixture_set_density(fixture, 0); physics_fixture_set_restitution(fixture, 0.2); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); Open the room that you created and start placing instances of obj_ground around it to create platforms, stairs, and so on. This is how mine looked like: Open obj_ball and set the parent to obj_dynamicParent. 
Add a create event and enter the following code: var fixture = physics_fixture_create(); physics_fixture_set_circle_shape(fixture, sprite_get_width(spr_ball) / 2); physics_fixture_set_density(fixture, 0.25); physics_fixture_set_restitution(fixture, 1); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); Repeat steps 10 and 11 for obj_box, but use this code: var fixture = physics_fixture_create(); physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2); physics_fixture_set_density(fixture, 0.5); physics_fixture_set_restitution(fixture, 0.2); physics_fixture_set_friction(fixture, 0.01); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); Repeat steps 10 and 11 for obj_poly, but use this code: var fixture = physics_fixture_create(); physics_fixture_set_polygon_shape(fixture); physics_fixture_add_point(fixture, 0, -(sprite_height / 2)); physics_fixture_add_point(fixture, sprite_width / 2, sprite_height / 2); physics_fixture_add_point(fixture, -(sprite_width / 2), sprite_height / 2); physics_fixture_set_density(fixture, 0.01); physics_fixture_set_restitution(fixture, 0.1); physics_fixture_set_linear_damping(fixture, 0.5); physics_fixture_set_angular_damping(fixture, 0.01); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); Open obj_control and add a create event using the following code: globalvar shape_select; globalvar shape_output; shape_select = 0; Add a Step and add the following code to a code block: if mouse_check_button(mb_left) && alarm[0] < 0 && !place_meeting(x, y, obj_button) { instance_create(mouse_x, mouse_y, shape_output); alarm[0] = 5; } if mouse_check_button_pressed(mb_right) { shape_select += 1; } Now, add an event to alarm[0] and give it a comment stating Set Timer. Place an instance of obj_control in the room that you created, but make sure that it is placed in the coordinates (0, 0). Open obj_button and add a step event. Drag a code block to the Actions tab and input the following code: if shape_select > 2 { shape_select = 0; } if shape_select = 0 { sprite_index = spr_ball; shape_output = obj_ball; } if shape_select = 1 { sprite_index = spr_box; shape_output = obj_box; } if shape_select = 2 { sprite_index = spr_poly; shape_output = obj_poly; } Once these steps are completed, you can test your physics environment. Use the right mouse button to select the shape you would like to create, and use the left mouse button to create it. Have fun! How it works While not overly complicated, there is a fair amount of activity in this recipe. Let's take a quick look at the room itself. When you created this room, you checked the box for Room is Physics World. This does exactly what it says it does; it enables physics in the room. If you have any physics-enabled objects in a room that is not a physics world, errors will occur. In the same menu, you have the gravity settings (which are vector-based) and pixels to meters, which sets the scale of objects in the room. This setting is important as it controls how each object is affected by the coded physics. YoYo Games based GameMaker's physics on the real world (as they should) and so GameMaker needs to know how many meters are represented by each pixel. The higher the number, the larger the world in the room. 
If you place an object in two different rooms with different pixel to meter settings, even though the objects have the same settings, GameMaker will apply physics to them differently because it views them as being of differing size and weight. Let's take a look at the objects in this simulation. Firstly, you have two parent objects: one static and the other dynamic. The static object is the only parent to one object: obj_ground. The reason for this is that static objects are not affected by outside forces in a physics world, that is, the room you built. Because of this, the ground pieces are able to ignore gravity and forces applied by other objects that collide with them. Now, neither obj_staticParent nor obj_dynamicParent contain any physics code; we saved this for our other objects. We use our parent objects to govern our collision groups using two objects instead of coding collisions in each object. So, we use drag and drop collision blocks to ensure that any children can collide with instances of one another and with themselves. Why did you drag comment blocks into these collision events? We did this so that GameMaker doesn't ignore them; the contents of each comment block are irrelevant. Also, the dynamic parent has an event that destroys any instance of its children that end up outside the room. The reason for this is simply to save memory. Otherwise, each object, even those off-screen, will be accounted for calculations at every step and this will slow everything down and eventually crash the program. Now, as we're using physics-enabled objects, let's see how each one differs from the others. When working with the object editor, you may have noticed the checkbox labelled Uses Physics. This checkbox will automatically set up the basic physics code within the selected object, but only after assuming that you're using the drag and drop method of programming. If you click on it, you'll see a new menu with basic collision options as well as several values and associated options: Density: Density in GameMaker works exactly as it does in real life. An object with a high density will be much heavier and harder to move via force than a low-density object of the same size. Think of how far you can kick an empty cardboard box versus how far you can kick a cardboard box full of bricks, assuming that you don't break your foot. Restitution: Restitution essentially governs an object's bounciness. A higher restitution will cause an object to bounce like a rubber ball, whereas a lower restitution will cause an object to bounce like a box of bricks, as mentioned in the previous example. Collision group: Collision grouping tells GameMaker how certain objects react with one another. By default, all physics objects are set to collision group 0. This means that they will not collide with other objects without a specific collision event. Assigning a positive number to this setting will cause the object in question to collide with all other objects in the same collision group, regardless of collision events. Assigning a negative number will prevent the object from colliding with any objects in that group. I don't recommend that you use collision groups unless absolutely necessary, as it takes a great deal of memory to work properly. Linear damping: Linear damping works a lot like air friction in real life. This setting affects the velocity (momentum) of objects in motion over time. Imagine a military shooter where thrown grenades don't arc, they just keep soaring through the air. We don't need this. 
This is what rockets are for. Angular damping: Angular damping is similar to linear damping. It only affects an object's rotation. This setting keeps objects from spinning forever. Have you ever ridden the Teacup ride at Disneyland? If so, you will know that angular damping is a good thing. Friction: Friction also works in a similar way to linear damping, but it affects an object's momentum as it collides with another object or surface. If you want to create icy surfaces in a platformer, friction is your friend. We didn't use this menu in this recipe but we did set and modify these settings through code. First, in each of the objects, we set them to use physics and then declared their shapes and collision masks. We started with declaring the fixture variable because, as you can see, it is part of each of the functions we used and typing fixture is easier than typing physics_fixture_create() every time. The fixture variable that we bind to the object is what is actually being affected by forces and other physics objects, so we must set its shape and properties in order to tell GameMaker how it should react. In order to set the fixture's shape, we use physics_set_circle_shape, physics_set_box_shape, and physics_set_polygon_shape. These functions define the collision mask associated with the object in question. In the case of the circle, we got the radius from half the width of the sprite, whereas for the box, we found the outer edges used via half the width and half the height. GameMaker then uses this information to create a collision mask to match the sprite from which the information was gathered. When creating a fixture from a more complex sprite, you can either use the aforementioned methods to approximate a mask, or you can create a more complex shape using a polygon like we did for the triangle. You'll notice that the code to create the triangle fixture had extra lines. This is because polygons require you to map each point on the shape you're trying to create. You can map three to eight points by telling GameMaker where each one is situated in relation to the center of the image (0, 0). One very important detail is that you cannot create a concave shape; this will result in an error. Every fixture you create must have a convex shape. The only way to create a concave fixture is to actually create multiple fixtures in the same object. If you were to take the code for the triangle, duplicate all of it in the same code block and alter the coordinates for each point in the duplicated code; you can create concave shapes. For example, you can use two rectangles to make an L shape. This can only be done using a polygon fixture, as it is the only fixture that allows you to code the position of individual points. Once you've coded the shape of your fixture, you can begin to code its attributes. I've described what each physics option does, and you've coded and tested them using the instructions mentioned earlier. Now, take a look at the values for each setting. The ball object has a higher restitution than the rest; did you notice how it bounced? The box object has a very low friction; it slides around on platforms as though it is made of ice. The triangle has very low density and angular damping; it is easily knocked around by the other objects and spins like crazy. You can change how objects react to forces and collisions by changing one or more of these values. I definitely recommend that you play around with these settings to see what you can come up with. 
Remember how the ground objects are static? Notice how we still had to code them? Well, that's because they still interact with other objects but in an almost opposite fashion. Since we set the object's density to 0, GameMaker more or less views this as an object that is infinitely dense; it cannot be moved by outside forces or collisions. It can, however, affect other objects. We don't have to set the angular and linear damping values simply because the ground doesn't move. We do, however, have to set the restitution and friction levels because we need to tell GameMaker how other objects should react when they come in contact with the ground. Do you want to make a rubber wall to bounce a player off? Set the restitution to a higher level. Do you want to make that icy patch we talked about? Then, you need to lower the friction. These are some fun settings to play around with, so try it out. Alternating gravity Gravity can be a harsh mistress; if you've ever fallen from a height, you will understand what I mean. I often think it would be great if we could somehow lessen gravity's hold on us, but then I wonder what it would be like if we could just reverse it all together! Imagine flipping a switch and then walking on the ceiling! I, for one, think that it would be great. However, since we don't have the technology to do it in real life, I'll have to settle for doing it in video games. Getting ready For this recipe, let's simplify things and use the physics environment that we created in the previous recipe. How to do it In obj_control, open the code block in the create event. Add the following code: physics_world_gravity(0, -10); That's it! Test the environment and see what happens when you create your physics objects. How it works GameMaker's physics world of gravity is vector-based. This means that you simply need to change the values of x and y in order to change how gravity works in a particular room. If you take a look at the Physics tab in the room editor, you'll see that there are values under x and y. The default value is 0 for x and 10 for y. When we added this code to the control object's create event, we changed the value of y to -10, which means that it will flow in the opposite direction. You can change the direction to 360 degrees by altering both x and y, and you can change the gravity's strength by raising and lowering the values. There's more Alternating the gravity's flow can be a lot of fun in a platformer. Several games have explored this in different ways. Your character can change the gravity by hitting a switch in a game, the player can change it by pressing a button, or you can just give specific areas different gravity settings. Play around with this and see what you can create. Applying force via magnets Remember playing with magnets in a science class when you were a kid. It was fun back then, right? Well, it's still fun; powerful magnets make a great gift for your favorite office worker. What about virtual magnets, though? Are they still fun? The answer is yes. Yes, they are. Getting ready Once again, we're simply going to modify our existing physics environment in order to add some new functionality.  How to do it In obj_control, open the code block in the step event. Add the following code: if keyboard_check(vk_space) { with (obj_dynamicParent) { var dir = point_direction(x,y,mouse_x,mouse_y); physics_apply_force(x, y, lengthdir_x(30, dir), lengthdir_y(30, dir)); } } Once you close the code block, you can test your new magnet. 
Add some objects, hold down the spacebar, and see what happens. How it works Applying a force to a physics-enabled object in GameMaker will add a given value to the direction, rotation, and speed of the said object. Force can be used to gradually propel an object in a given direction, or through a little math, as in this case, draw objects nearer. What we're doing here is that while the Spacebar is held down, any objects in the vicinity are drawn to the magnet (in this case, your mouse). In order to accomplish this, we first declare that the following code needs to act on obj_dynamicParent, as opposed to acting on the control object where the code resides. We then set the value of a dir variable to the point_direction of the mouse, as it relates to any child of obj_dynamicParent. From there, we can begin to apply force. With physics_apply_force, the first two values represent the x and y coordinates of the object to which the force is being applied. Since the object(s) in question is/are not static, we simply set the coordinates to whatever value they have at the time. The other two values are used in tandem to calculate the direction in which the object will travel and the force propelling it in Newtons. We get these values, in this instance, by calculating the lengthdir for both x and y. The lengthdir finds the x or y value of a point at a given length (we used 30) at a given angle (we used dir, which represents point_direction, that finds the angle where the mouse's coordinates lie). If you want to increase the length value, then you need to increase the power of the magnet. Creating a moving platform We've now seen both static and dynamic physics objects in GameMaker, but what happens when we want the best of both the worlds? Let's take a look at how to create a platform that can move and affect other objects via collisions but is immune to said collisions. Getting ready Again, we'll be using our existing physics environment, but this time, we'll need a new object. Create a sprite that is128 px wide by 32 px high and assign it to an object called obj_platform. Also, create another object called obj_kinematicParent but don't give it a sprite. Add collision events to obj_staticParent, obj_dynamicParent, and itself. Make sure that there is a comment in each event. How to do it In obj_platform, add a create event. Drag a code block to the actions box and add the following code: var fixture = physics_fixture_create(); physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2); physics_fixture_set_density(fixture, 0); physics_fixture_set_restitution(fixture, 0.2); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); phy_speed_x = 5; Add a Step event with a code block containing the following code: if (x <64) or (x > room_width-64) { phy_speed_x = phy_speed_x * -1; } Place an instance of obj_platform in the room, which is slightly higher than the highest instance of obj_ground. Once this is done, you can go ahead and test it. Try dropping various objects on the platform and see what happens! How it works Kinematic objects in GameMaker's physics world are essentially static objects that can move. While the platform has a density of 0, it also has a speed of 5 along the x axis. You'll notice that we didn't just use speed equal to 5, as this would not have the desired effect in a physics world. 
The code in the step simply causes the platform to remain within a set boundary by multiplying its current horizontal speed by -1. Any static object to which a movement is applied automatically becomes a kinematic object. Making a rope Is there anything more useful than a rope? I mean besides your computer, your phone or even this book. Probably, a lot of things, but that doesn't make a rope any less useful. Ropes and chains are also useful in games. Some games, such as Cut the Rope, have based their entire gameplay structure around them. Let's see how we can create ropes and chains in GameMaker. Getting ready For this recipe, you can either continue using the physics environment that we've been working with, or you can simply start from scratch. If you've gone through the rest of this chapter, you should be fairly comfortable with setting up physics objects. I completed this recipe with a fresh .gmx file. Before we begin, go ahead and set up obj_dynamicParent and obj_staticParent with collision events for one another. Next, you'll need to create the obj_ropeHome, obj_rope, obj_block, and obj_ropeControl objects. The sprite for obj_rope can simply be a 4 px wide by 16 px high box, while obj_ropeHome and obj_block can be 32 px squares. Obj_ropeControl needs to use the same sprite as obj_rope, but with the y origin set to 0. Obj_ropeControl should also be invisible. As for parenting, obj_rope should be a child of obj_dynamicParent and obj_ropeHome, and obj_block should be children of obj_staticParent, and obj_ropeControl does not require any parent at all. As always, you'll also need a room in which you need to place your objects. How to do it Open obj_ropeHome and add a create event. Place a code block in the actions box and add the following code: var fixture = physics_fixture_create(); physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2); physics_fixture_set_density(fixture, 0); physics_fixture_set_restitution(fixture, 0.2); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); In obj_rope, add a create event with a code block. Enter the following code: var fixture = physics_fixture_create(); physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2); physics_fixture_set_density(fixture, 0.25); physics_fixture_set_restitution(fixture, 0.01); physics_fixture_set_linear_damping(fixture, 0.5); physics_fixture_set_angular_damping(fixture, 1); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); Open obj_ropeControl and add a create event. Drag a code block to the actions box and enter the following code: setLength = image_yscale-1; ropeLength = 16; rope1 = instance_create(x,y,obj_ropeHome2); rope2 = instance_create(x,y,obj_rope2); physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0,0,0,0,0,0,0); repeat (setLength) { ropeLength += 16; rope1 = rope2; rope2 = instance_create(x, y+ropeLength, obj_rope2); physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0,0,0,0,0,0,0); } In obj_block, add a create event. 
Place a code block in the actions box and add the following code: var fixture = physics_fixture_create(); physics_fixture_set_circle_shape(fixture, sprite_get_width(spr_ropeHome)/2); physics_fixture_set_density(fixture, 0); physics_fixture_set_restitution(fixture, 0.01); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); Now, add a step event with the following code in a code block: phy_position_x = mouse_x; phy_position_y = mouse_y; Place an instance of obj_ropeControl anywhere in the room. This will be the starting point of the rope. You can place multiple instances of the object if you wish. For every instance of obj_ropeControl you place in the room, use the bounding box to stretch it to however long you wish. This will determine the length of your rope. Place a single instance of obj_block in the room. Once you've completed these steps, you can go ahead and test them. How it works This recipe may seem somewhat complicated but it's really not. What you're doing here is that we are taking multiple instances of the same physics-enabled object and stringing them together. Since you're using instances of the same object, you only have to code one and the rest will follow. Once again, our collisions are handled by our parent objects. This way, you don't have to set collisions for each object. Also, setting the physical properties of each object is done exactly as we have done in previous recipes. By setting the density of obj_ropeHome and obj_block to 0, we're ensuring that they are not affected by gravity or collisions, but they can still collide with other objects and affect them. In this case, we set the physics coordinates of obj_block to those of the mouse so that, when testing, you can use them to collide with the rope, moving it. The most complex code takes place in the create event for obj_ropeControl. Here, we not only define how many sections of a rope or chain will be used, but we also define how they are connected. To begin, the y scale of the control object is measured in order to determine how many instances of obj_rope are required. Based on how long you stretched obj_ropeControl in the room, the rope will be longer (more instances) or shorter (fewer instances). We then set a variable (ropeLength) to the size of the sprite used for obj_rope. This will be used later to tell GameMaker where each instance of obj_rope should be so that we can connect them in a line. Next, we create the object that will hold the obj_ropeHome rope. This is a static object that will not move, no matter how much the rope moves. This is connected to the first instance of obj_rope via a revolute joint. In GameMaker, a revolute joint is used in several ways: it can act as part of a motor, moving pistons; it can act as a joint on a ragdoll body; in this case, it acts as the connection between instances of obj_rope. A revolute joint allows the programmer to code its angle and torque; but for our purposes, this isn't necessary. We declared the objects that are connected via the joint as well as the anchor location, but the other values remain null. Once the rope holder (obj_ropeHome) and initial joint are set up, we can automate the creation of the rest. Using the repeat function, we can tell GameMaker to repeat a block of code a set number of times. In this case, this number is derived from how many instances of obj_rope can fit within the distance between the y origin of obj_ropeControl and the point to which you stretched it. 
We subtract 1 from this number as GameMaker will calculate too many in order to cover the distance in its entirety. The code that will be repeated does a few things at once. First, it increases the value of the ropeLength variable by 16 for each instance that is calculated. Then, GameMaker changes the value of rope1 (which creates an instance of obj_ropeHome) to that of rope2 (which creates an instance of obj_rope). The rope2 variable is then reestablished to create an instance of obj_rope, but also adds the new value of ropeLength so as to move its coordinates directly below those of the previous instance, thus creating a chain. This process is repeated until the set length of the overall rope is reached. There's more Each section of a rope is a physics object and acts in the physics world. By changing the physics settings, when initially creating the rope sections, you can see how they react to collisions. How far and how quickly the rope moves when pushed by another object is very much related to the difference between their densities. If you make the rope denser than the object colliding with it, the rope will move very little. If you reverse these values, you can cause the rope to flail about, wildly. Play around with the settings and see what happens, but when placing a rope or chain in a game, you really must consider what the rope and other objects are made of. It wouldn't seem right for a lead chain to be sent flailing about by a collision with a pillow; now would it? Summary This article introduces the physics system and demonstrates how GameMaker handles gravity, friction, and so on. Learn how to implement this system to make more realistic games. Resources for Article: Further resources on this subject: Getting to Know LibGDX [article] HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article] Introducing GameMaker [article]

Working With Local and Remote Data Sources

Packt
02 Nov 2015
9 min read
In this article by Jason Kneen, the author of the book Appcelerator Titanium Smartphone Application Development Cookbook - Second Edition, we'll cover the following recipes: Reading data from remote XML via HTTPClient Displaying data using a TableView Enhancing your TableViews with custom rows Filtering your TableView with the SearchBar control Speeding up your remote data access with Yahoo! YQL and JSON Creating a SQLite database Saving data locally using a SQLite database Retrieving data from a SQLite database Creating a "pull to refresh" mechanism in iOS (For more resources related to this topic, see here.) As you are a Titanium developer, fully understanding the methods available for you to read, parse, and save data is fundamental to the success of the apps you'll build. Titanium provides you with all the tools you need to make everything from simple XML or JSON calls over HTTP, to the implementation of local relational SQL databases. In this article, we'll cover not only the fundamental methods of implementing remote data access over HTTP, but also how to store and present that data effectively using TableViews, TableRows, and other customized user interfaces. Prerequisites You should have a basic understanding of both the XML and JSON data formats, which are widely used and standardized methods of transporting data across the Web. Additionally, you should understand what Structured Query Language (SQL) is and how to create basic SQL statements such as Create, Select, Delete, and Insert. There is a great beginners' introduction to SQL at http://sqlzoo.net if you need to refer to tutorials on how to run common types of database queries. Reading data from remote XML via HTTPClient The ability to consume and display feed data from the Internet, via RSS feeds or alternate APIs, is the cornerstone of many mobile applications. More importantly, many services that you may wish to integrate into your app will probably require you to do this at some point or the other, so it is vital to understand and be able to implement remote data feeds and XML. Our first recipe in this article introduces some new functionality within Titanium to help facilitate this need. Getting ready To prepare for this recipe, open Titanium Studio, log in and create a new mobile project. Select Classic and Default Project. Then, enter MyRecipes as the name of the app, and fill in the rest of the details with your own information, as you've done previously. How to do it... Now that our project shell is set up, let's get down to business! First, open your app.js file and replace its contents with the following: // this sets the background color of the master View (when there are no windows/tab groups on it) Ti.UI.setBackgroundColor('#000'); // create tab group var tabGroup = Ti.UI.createTabGroup(); var tab1 = Ti.UI.createTab({ icon:'cake.png', title:'Recipes', window:win1 }); var tab2 = Ti.UI.createTab({ icon:'heart.png', title:'Favorites', window:win2 }); // // add tabs // tabGroup.addTab(tab1); tabGroup.addTab(tab2); // open tab group tabGroup.open(); This will get a basic TabGroup in place, but we need two windows, so we create two more JavaScript files called recipes.js and favorites.js. We'll be creating a Window instance in each file to do this we created the window2.js and chartwin.js files. In recipes.js, insert the following code. 
Reading data from remote XML via HTTPClient

The ability to consume and display feed data from the Internet, via RSS feeds or alternate APIs, is the cornerstone of many mobile applications. More importantly, many services that you may wish to integrate into your app will probably require you to do this at some point or another, so it is vital to understand and be able to implement remote data feeds and XML. Our first recipe in this article introduces some new functionality within Titanium to help facilitate this need.

Getting ready

To prepare for this recipe, open Titanium Studio, log in, and create a new mobile project. Select Classic and Default Project. Then, enter MyRecipes as the name of the app, and fill in the rest of the details with your own information, as you've done previously.

How to do it...

Now that our project shell is set up, let's get down to business! First, open your app.js file and replace its contents with the following:

// this sets the background color of the master View
// (when there are no windows/tab groups on it)
Ti.UI.setBackgroundColor('#000');

// create tab group
var tabGroup = Ti.UI.createTabGroup();

var tab1 = Ti.UI.createTab({
    icon: 'cake.png',
    title: 'Recipes',
    window: win1
});
var tab2 = Ti.UI.createTab({
    icon: 'heart.png',
    title: 'Favorites',
    window: win2
});

// add tabs
tabGroup.addTab(tab1);
tabGroup.addTab(tab2);

// open tab group
tabGroup.open();

This will get a basic TabGroup in place, but we need two windows, so we create two more JavaScript files called recipes.js and favorites.js. We'll be creating a Window instance in each of these files. In recipes.js, insert the following code. Do the same with favorites.js, ensuring that you change the title of the Window to Favorites:

//create an instance of a window
module.exports = (function() {
    var win = Ti.UI.createWindow({
        title: 'Recipes',
        backgroundColor: '#fff'
    });
    return win;
})();

Next, go back to app.js, and just after the place where the TabGroup is defined, add this code:

var win1 = require("recipes");
var win2 = require("favorites");

Open the recipes.js file. This is the file that'll hold our code for retrieving and displaying recipes from an RSS feed. Type in the following code at the top of your recipes.js file; this code will create an HTTPClient and read in the feed XML from the recipe website:

//declare the http client object
var xhr = Ti.Network.createHTTPClient();

function refresh() {
    //this method will process the remote data
    xhr.onload = function() {
        console.log(this.responseText);
    };

    //this method will fire if there's an error in accessing the remote data
    xhr.onerror = function() {
        //log the error to our Titanium Studio console
        console.log(this.status + ' - ' + this.statusText);
    };

    //open up the recipes xml feed
    xhr.open('GET', 'http://rss.allrecipes.com/daily.aspx?hubID=79');

    //finally, execute the call to the remote feed
    xhr.send();
}

refresh();

Try running the emulator now for either Android or iPhone. You should see two tabs appear on the screen, as shown in the following screenshot. After a few seconds, there should be a stack of XML data printed to your Titanium Studio console log.

How it works…

If you are already familiar with JavaScript for the Web, this should make a lot of sense to you. Here, we created an HTTPClient using the Ti.Network namespace, and opened a GET connection to the URL of the feed from the recipe website using an object called xhr. By implementing the onload event listener, we can capture the XML data that has been retrieved by the xhr object. In the source code, you'll notice that we have used console.log() to echo information to the Titanium Studio console, which is a great way of debugging and following events in our app. If your connection and GET request were successful, you should see a large XML string output in the Titanium Studio console log.

The final part of the recipe is small but very important: calling the xhr object's send() method. This kicks off the GET request; without it, your app would never load any data. It is important to note that you'll not receive any errors or warnings if you forget to implement xhr.send(), so if your app is not receiving any data, this is the first place to check.

If you are having trouble parsing your XML, always check whether it is valid first! Opening the XML feed in your browser will normally provide you with enough information to determine whether your feed is valid or has broken elements.
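If you do run into a feed that refuses to parse, a small amount of defensive code inside onload can help you see what is going on. The following is only a sketch and not part of the recipe; it assumes that responseXML may come back null on some platforms and falls back to parsing the raw text with Ti.XML.parseString:

//a hedged sketch: guard against a feed that fails to parse automatically
xhr.onload = function() {
    var doc = this.responseXML;
    if (doc == null) {
        //fall back to parsing the raw response text ourselves
        doc = Ti.XML.parseString(this.responseText);
    }
    var items = doc.documentElement.getElementsByTagName("item");
    console.log('Found ' + items.length + ' items in the feed');
};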
Displaying data using a TableView

TableViews are one of the most commonly used components in Titanium. Almost all of the native apps on your device utilize tables in some shape or form. They are used to display large lists of data in an effective manner, allowing for scrolling lists that can be customized visually, searched through, or drilled down to expose child views. Titanium makes it easy to implement TableViews in your application, so in this recipe, we'll implement a TableView and use our XML data feed from the previous recipe to populate it with a list of recipes.

How to do it...

Once we have connected our app to a data feed and we're retrieving XML data via the xhr object, we need to be able to manipulate that data and display it in a TableView component. First, we will need to create an array called data at the top of our refresh function in the recipes.js file; this array will hold all of the information for our TableView. Then, we need to parse the XML, read in the required elements, and populate our data array, before we finally create a TableView and set its data to our data array. Replace the refresh function with the following code:

function refresh() {
    var data = []; //empty data array

    //declare the http client object
    var xhr = Ti.Network.createHTTPClient();

    //create the table view
    var tblRecipes = Ti.UI.createTableView();
    win.add(tblRecipes);

    //this method will process the remote data
    xhr.onload = function() {
        var xml = this.responseXML;

        //get the item nodelist from our response xml object
        var items = xml.documentElement.getElementsByTagName("item");

        //loop each item in the xml
        for (var i = 0; i < items.length; i++) {
            //create a table row
            var row = Ti.UI.createTableViewRow({
                title: items.item(i).getElementsByTagName("title").item(0).text
            });

            //add the table row to our data[] array
            data.push(row);
        } //end for loop

        //finally, set the data property of the tableView to our data[] array
        tblRecipes.data = data;
    };

    //open up the recipes xml feed
    xhr.open('GET', 'http://rss.allrecipes.com/daily.aspx?hubID=79');

    //finally, execute the call to the remote feed
    xhr.send();
}

The following screenshot shows the TableView with the titles of our recipes from the XML feed:

How it works...

The first thing you'll notice is that we are taking the response data, extracting all the elements that match the name item, and assigning them to items. This gives us a list that we can loop through, assigning each individual item to the data array we created earlier. From there, we create our TableView by calling the Ti.UI.createTableView() function. You should notice almost immediately that many of our regular properties are also used by tables, including width, height, and positioning. In this case, we did not specify these values, which means that, by default, the TableView will occupy the screen. A TableView has an extra, and important, property: data. The data property accepts an array, the values of which can either be used directly (as we have done here with the title property) or be assigned to the subcomponent children of a TableRow. As you begin to build more complex applications, you'll come to appreciate just how flexible table-based layouts can be.
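To illustrate that last point about subcomponent children, here is a minimal sketch (not part of the recipe; the label text and layout values are invented) of composing a row from its own child views rather than relying on the built-in title property. The "Enhancing your TableViews with custom rows" recipe covers this in full:

//a minimal sketch: composing a row from child views rather than a plain title
var row = Ti.UI.createTableViewRow({ height: 60 });
var titleLabel = Ti.UI.createLabel({
    text: 'Chocolate Cake', //placeholder text for illustration
    left: 10,
    font: { fontSize: 16, fontWeight: 'bold' }
});
row.add(titleLabel);
data.push(row);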
Summary

In this article, we covered the fundamental methods of implementing remote data access over HTTP and of displaying that data with a TableView. Many services that you may wish to integrate into your app will require this at some point, so it is vital to understand and be able to implement remote data feeds and XML.

Resources for Article:

Further resources on this subject:
- Mobile First Bootstrap [article]
- Anatomy of a Sprite Kit project [article]
- Designing Objects for 3D Printing [article]

Intro to Docker Part 2: Developing a Simple Application

Julian Gindi
30 Oct 2015
5 min read
In my last post, we learned some basic concepts related to Docker, and we learned a few basic operations for using Docker containers. In this post, we will develop a simple application using Docker. Along the way we will learn how to use Dockerfiles and Docker's 'compose' feature to link multiple containers together.

The Application

We will be building a simple clone of Reddit's very awesome and mysterious "The Button". The application will be written in Python using the Flask web framework, and will use Redis as its storage backend. If you do not know Python or Flask, fear not: the code is very readable, and you are not required to understand it to follow along with the Docker-specific sections.

Getting Started

Before we get started, we need to create a few files and directories. First, go ahead and create a Dockerfile, a requirements.txt file (where we will specify project-specific dependencies), and a main app.py file.

touch Dockerfile requirements.txt app.py

Next we will create a simple endpoint that will return "Hello World". Go ahead and edit your app.py file to look like this:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def main():
    return 'Hello World!'

if __name__ == '__main__':
    app.run('0.0.0.0')

Now we need to tell Docker how to build a container with all the dependencies and code needed to run the app. Edit your Dockerfile to look like this:

1  FROM python:2.7
2
3  RUN mkdir /code
4  WORKDIR /code
5
6  ADD requirements.txt /code/
7  RUN pip install -r requirements.txt
8
9  ADD . /code/
10
11 EXPOSE 5000

Before we move on, let me explain the basics of Dockerfiles.

Dockerfiles

A Dockerfile is a configuration file that specifies instructions on how to build a Docker container. I will now explain each line in the Dockerfile we just created (I will reference individual lines).

1: First, we specify the base image to use as our starting point (we discussed this in more detail in the last post). Here we are using a stock Python 2.7 image.

3: Dockerfiles can contain a few 'directives' that dictate certain behaviors. RUN is one such directive. It does exactly what it sounds like - runs an arbitrary command. Here, we are just making a working directory.

4: We use WORKDIR to specify the main working directory.

6: ADD allows us to selectively add files to the container during the build process. At this point, we just need to add the requirements file to tell Docker which dependencies to install.

7: We use the RUN directive and Python's pip package manager to install all the needed dependencies.

9: Here we add all the code in our current directory into the Docker container (ADD . /code/).

11: Finally we 'expose' the port we will need to access. In this case, Flask will run on port 5000.

Building from a Dockerfile

We are almost ready to build an image from this Dockerfile, but first, let's specify the dependencies we will need in our requirements.txt file.

flask==0.10.1
redis==2.10.3

I am using specific versions here to ensure that your build will work just like mine does. Once we have all these pieces in place, we can build the image with the following command.

> docker build -t thebutton .

We are 'tagging' this image with an easy-to-remember name that we can use later. Once the build completes, we can run the container and see our message in the browser.

> docker run -p 5000:5000 thebutton python app.py

We are doing a few things here: the -p flag tells Docker to map port 5000 inside the container to port 5000 outside the container (this just makes our lives easier).
Next we specify the image name (thebutton) and finally the command to run inside the container - python app.py - which will start the web server and serve our page. We are almost ready to view our page, but first we must discover which IP the site will be on. For Linux-based systems, you can use localhost, but for Mac you will need to run boot2docker ip to discover the IP address to visit. Navigate to your site (in my case it's 192.168.59.103:5000) and you should see "Hello World" printed. Congrats! You are running your first site from inside a Docker container.

Putting it All Together

Now, we are going to complete the app, and use Docker Compose to launch the entire project for us. This will contain two containers, one running our Flask app, and another running an instance of Redis. The great thing about docker-compose is that you can specify a system to create, and how to connect all the containers. Let's create our docker-compose.yml file now.

redis:
  image: redis:2.8.19
web:
  build: .
  command: python app.py
  ports:
    - "5000:5000"
  links:
    - redis:redis

This file specifies the two containers (web and redis) and how to build each one (we are just using the stock redis image here). The web container is a bit more involved, since we first build it using our local Dockerfile (the build: . line). Then we expose port 5000 and link the Redis container to our web container. The awesome thing about linking containers this way is that the web container automatically gets information about the redis container. In this case, there is an /etc/hosts entry called 'redis' that points to our Redis container. This allows us to configure Redis easily in our application:

db = redis.StrictRedis('redis', 6379, 0)
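The post does not list the finished app.py, so the following is only a rough sketch of what the completed button-counting app might look like, assuming a single counter key in Redis (the key name 'presses' and the /press route are invented here; the real source linked below may differ):

from flask import Flask
import redis

app = Flask(__name__)

# 'redis' resolves to the linked Redis container thanks to docker-compose
db = redis.StrictRedis('redis', 6379, 0)

@app.route('/')
def main():
    # read the current count (None on first run, hence the fallback to 0)
    count = int(db.get('presses') or 0)
    return 'The button has been pressed %d times.' % count

@app.route('/press')
def press():
    # atomically increment the counter in Redis
    db.incr('presses')
    return 'Thanks for pressing the button!'

if __name__ == '__main__':
    app.run('0.0.0.0')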
To test this all out, you can grab the complete source here. All you will need to run is docker-compose up and then access the site the same way we did before. Congratulations! You now have all the tools you need to use Docker effectively!

About the author

Julian Gindi is a Washington DC-based software and infrastructure engineer. He currently serves as Lead Infrastructure Engineer at iStrategyLabs (isl.co), where he does everything from system administration to designing and building deployment systems. He is most passionate about operating system design and implementation, and in his free time contributes to the Linux kernel.