How-To Tutorials

Building Solutions Using Patterns

Packt
16 Sep 2015
6 min read
In this article by Mark Brummel, the author of Learning Dynamics NAV Patterns, we will learn how to create an application using Dynamics Nav. While creating an application, we can apply patterns and coding concepts into a new module that is recognizable for the users to be as a Microsoft Dynamics NAV application, and is easy to understand and maintain by other developers. The solution that we will make is for a small bed and breakfast (B&B), allowing them to manage their rooms and reservations. This can be integrated into the financial part of Dynamics NAV. It is not the intention of this article to make a full-featured finished product. We will discuss the basic design principles, and the decision making processes. Therefore, we simplify the functional process. One of the restrictions in our application is that we rent rooms per night. This article will be covering the following topics: Building blocks Creating the Table objects (For more resources related to this topic, see here.) Building blocks We borrowed the term classes from the object-oriented programming as a collection of things that belong together. Classes can be tables or code units in Microsoft Dynamics NAV. The first step in our process is to define the classes. These will be created as tables or code units, following the patterns that we have learned: Setup This is the generic set of parameters for the application. Guest This is the person who stays at our B&B. This can be one or two persons, or a group (family). Room Our B&B has a number of rooms with different attributes that determine the price, together with the season. Season This is the time of the year. Price This is the price for one night in a room. Reservation Rooms can be reserved on a daily basis with a starting and ending date. Stay This is the set of one or more consecutive nights at our B&B. Check-In This is the start of a stay, checking in for reservation. Check-Out At the end of a stay, we would like to send a bill. Clean Whenever a room is cleaned, we would like to register this. Evaluation Each stay can be evaluated by a customer. Invoice This generate a Sales Invoice for a Stay. Apply Architectural Patterns The second step is to decide per class which Architectural Patterns we can use. In some special cases, we might need to write down new patterns, based on the data structures that are not used in the standard application. Setup For the application setup, we will use the Singleton pattern. This allows us to define a single set of values for the entire application that is kept in memory during the lifetime of the system. Guest To register our guests, we will use the standard Customer table in Dynamics NAV. This has pros and cons. The good thing about doing this is the ability to use all the standard analysis options in the application for our customers without reinventing the wheel. Some B&B users might decide to also sell souvenirs or local products so that they can use items and the standard trade part of Dynamics NAV. We can also use the campaigns in the Relationship Management module. The bad part, or challenge, is upgradability. If we were to add fields to the customer table, or modify the standard page elements, we will have to merge these into the application each time we get a new version of the product, which is once per month. We will use the new delta file, as well as the testability framework to challenge this. Room The architectural pattern for a room is a tough decision. 
Most users of our system run a small B&B, so we can consider rooms to be the setup data. Number Series is not a required pattern. We will therefore decide to implement a Supplemental Table. Season Each B&B can setup their own seasons. They are used to determine price, but when not used, the system will have to work too. We implement a Supplemental Table too. Price Rooms can have a default price, or a price per season and a guest. Based on this requirement, we will implement the Rules Pattern that allows us a complex array of setup values. Reservation We want to carefully trace reservations and cancellations per room and per guest. We would like to analyze the data based on the season. For this feature, we will implement the Journal-Batch-Line pattern and introduce an Entry table that is managed by the Journal. Stay We would like to register each unique stay in our system rather than individual nights. This allows us to easily combine parameters, and generate a total price. We will implement this as a Master Data, based on the requirement to be able to use number series. The Stay does not have requirements for a lines table, nor does it represent a document in our organization. Check-In When a guest checks in to the bed and breakfast, we can check a reservation and apply the reservation to the Stay. Check-Out When a guest leaves, we would like to setup the final bill, and ask to evaluate the stay. This process will be a method on the Stay class with encapsulated functions, creating the sales invoice, and generating an evaluation document. Clean Rooms have to be cleaned each day when a guest stays, but at least once a week when the room is empty. We will use the entry pattern without a journal. Clean will be a method on the Room class. Each day we will generate entries using the Job Queue Entry pattern. The Room will also have a method that indicates if a room has been cleaned. Evaluation A Stay in our B&B can be evaluated by our guests. The evaluation has a different criteria. We will use the Document Pattern. Invoice We can create the method as an encapsulated method of the Stay class. In order to link the Sales Invoice to the Stay, we will add the Stay No. field to the Sales Header, the Sales Invoice Header, and the Sales Cr.Memo Header tables. Creating the Table Objects Based on the Architectural Patterns, we can define a set of objects that we can start working with, which is as follows: Object names are limited to 30 characters, which is challenging for naming them. The Bed and Breakfast name illustrates this challenge. Only use abbreviation when the limitation of length is a problem. Summary In this article, you learned how to define classes for building an application. You have also learned about the kinds of architectural patterns that will be involved in creating the classes in your application. Resources for Article: Further resources on this subject: Performance by Design [article] Advanced Data Access Patterns [article] Formatting Report Items and Placeholders [article]

Remote Desktop to Your Pi from Everywhere

Packt
16 Sep 2015
6 min read
In this article by Gökhan Kurt, author of the book Raspberry Pi Android Projects, we will make a gentle introduction to both Pi and Android platforms to warm us up. Many users of the Pi face similar problems when they wish to administer it. You have to be near your Pi and connect a screen and a keyboard to it. We will solve this everyday problem by remotely connecting to our Pi desktop interface. The article covers the following topics: Installing necessary components in the Pi and Android Connecting the Pi and Android (For more resources related to this topic, see here.) Installing necessary components in the Pi and Android The following image shows you that the LXDE desktop manager comes with an initial setup and a few preinstalled programs: LXDE desktop management environment By clicking on the screen image on the tab bar located at top, you will be able to open a terminal screen that we will use to send commands to the Pi. The next step is to install a component called x11vnc. This is a VNC server for X, the window management component of Linux. Issue following command on the terminal: sudo apt-get install x11vnc This will download and install x11vnc to the Pi. We can even set a password to be used by VNC clients that will remote desktop to this Pi using the following command and provide a password to be used later on: x11vnc –storepasswd Next, we can get the x11vnc server running whenever the Pi is rebooted and the LXDE desktop manager starts. This can be done through the following steps: Go into the .config directory on the Pi user's home directory located at /home/pi:cd /home/pi/.config Make a subdirectory here named autostart:mkdir autostart Go into the autostart directory:cd autostart Start editing a file named x11vnc.desktop. As a terminal editor, I am using nano, which is the easiest one to use on the Pi for novice users, but there are more exciting alternatives, such as vi: nano x11vnc.desktop Add the following content into this file: [Desktop Entry] Encoding=UTF-8 Type=Application Name=X11VNC Comment= Exec=x11vnc -forever -usepw -display :0 -ultrafilexfer StartupNotify=false Terminal=false Hidden=false Save and exit using (Ctrl+X, Y, <Enter>) in order if you are using nano as the editor of your choice. Now you should reboot the Pi to get the server running using the following command: sudo reboot After rebooting, we can now find out what IP address our Pi has been given in the terminal window by issuing the ifconfig command. The IP address assigned to your Pi is to be found under the eth0 entry and is given after the inet addr keyword. Write this address down: Example output from ifconfig command The next step is to download a VNC client to your Android device.In this project, we will use a freely available client for Android, namely androidVNC or as it is named in the Play Store—VNC Viewer for Android by androidVNC team + antlersoft. The latest version in use at the writing of this book was 0.5.0. Note that in order to be able to connect your Android VNC client to the Pi, both the Pi and the Android device should be connected to the same network. Android through Wi-Fi and Pi through its Ethernet port. Connecting the Pi and Android Install and open androidVNC on your device. You will be presented with a first activity user interface asking for the details of the connection. Here, you should provide Nickname for the connection, Password you enter when you run the x11vnc –storepasswd command, and the IP Address of the Pi that you have found out using the ifconfig command. 
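Before you tap Connect, it can save some head-scratching to confirm from the Pi's own terminal that the VNC server is actually up and listening. This is only an optional sanity check, assuming the default VNC port 5900 and the standard Raspbian tools:

pgrep x11vnc                        # prints a PID if the x11vnc process is running
sudo netstat -tlnp | grep 5900      # should show x11vnc listening on port 5900
hostname -I                         # prints the IP address to enter in androidVNC

If the first two commands return nothing, revisit the autostart steps above before trying to connect.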
Initiate the connection by pressing the Connect button, and you should now be able to see the Pi desktop on your Android device. In androidVNC, you should be able to move the mouse pointer by clicking on the screen and under the options menu in the androidVNC app, you will find out how to send text and keys to the Pi with the help of Enter and Backspace. You may even find it convenient to connect to the Pi from another computer. I recommend using RealVNC for this purpose, which is available on Windows, Linux, and Mac OS. What if I want to use Wi-Fi on the Pi? In order to use a Wi-Fi dongle on the Pi, first of all, open the wpa-supplicant configuration file using the nano editor with the following command: sudo nano /etc/wpa_supplicant/wpa_supplicant.conf Add the following to the end of this file: network={ ssid="THE ID OF THE NETWORK YOU WANT TO CONNECT" psk="PASSWORD OF YOUR WIFI" } I assume that you have set up your wireless home network to use WPA-PSK as the authentication mechanism. If you have another mechanism, you should refer to the wpa_supplicant documentation. LXDE provides even better ways to connect to Wi-Fi networks through a GUI. It can be found on the upper-right corner of the desktop environment on the Pi. Connecting from everywhere Now, we have connected to the Pi from our device, which we need to connect to the same network as the Pi. However, most of us would like to connect to the Pi from around the world as well. To do this, first of all, we need to now the IP address of the home network assigned to us by our network provider. By going to http://whatismyipaddress.com URL, we can figure out what our home network's IP address is. The next step is to log in to our router and open up requests to the Pi from around the world. For this purpose, we will use a functionality found on most modern routers called port forwarding. Be aware of the risks contained in port forwarding. You are opening up access to your Pi from all around the world, even to malicious ones. I strongly recommend that you change the default password of the user pi before performing this step. You can change passwords using the passwd command. By logging onto a router's management portal and navigating to the Port Forwarding tab, we can open up requests to the Pi's internal network IP address, which we have figured out previously, and the default port of the VNC server, which is 5900. Now, we can provide our external IP address to androidVNC from anywhere around the world instead of an internal IP address that works only if we are on the same network as the Pi. Port forwarding settings on Netgear router administration page Refer to your router's user manual to see how to change the Port Forwarding settings. Most routers require you to connect through the Ethernet port in order to access the management portal instead of Wi-Fi. Summary In this article, we installed Raspbian, warmed up with the Pi, and connected the Pi using an Android device. Resources for Article:   Further resources on this subject: Raspberry Pi LED Blueprints [article] Color and motion finding [article] From Code to the Real World [article]

Groovy Closures

Packt
16 Sep 2015
9 min read
In this article by Fergal Dearle, the author of the book Groovy for Domain-Specific Languages - Second Edition, we will focus exclusively on closures. We will take a close look at them from every angle. Closures are the single most important feature of the Groovy language. Closures are the special seasoning that helps Groovy stand out from Java. They are also the single most powerful feature that we will use when implementing DSLs. In the article, we will discuss the following topics: We will start by explaining just what a closure is and how we can define some simple closures in our Groovy code We will look at how many of the built-in collection methods make use of closures for applying iteration logic, and see how this is implemented by passing a closure as a method parameter We will look at the various mechanisms for calling closures A handy reference that you might want to consider having at hand while you read this article is GDK Javadocs, which will give you full class descriptions of all of the Groovy built-in classes, but of particular interest here is groovy.lang.Closure. (For more resources related to this topic, see here.) What is a closure Closures are such an unfamiliar concept to begin with that it can be hard to grasp initially. Closures have characteristics that make them look like a method in so far as we can pass parameters to them and they can return a value. However, unlike methods, closures are anonymous. A closure is just a snippet of code that can be assigned to a variable and executed later: def flintstones = ["Fred","Barney"] def greeter = { println "Hello, ${it}" } flintstones.each( greeter ) greeter "Wilma" greeter = { } flintstones.each( greeter ) greeter "Wilma" Because closures are anonymous, they can easily be lost or overwritten. In the preceding example, we defined a variable greeter to contain a closure that prints a greeting. After greeter is overwritten with an empty closure, any reference to the original closure is lost. It's important to remember that greeter is not the closure. It is a variable that contains a closure, so it can be supplanted at any time. Because greeter has a dynamic type, we could have assigned any other object to it. All closures are a subclass of the type groovy.lang.Closure. Because groovy.lang is automatically imported, we can refer to Closure as a type within our code. By declaring our closures explicitly as Closure, we cannot accidentally assign a non-closure to them: Closure greeter = { println it } For each closure that is declared in our code, Groovy generates a Closure class for us, which is a subclass of groovy.lang.Closure. Our closure object is an instance of this class. Although we cannot predict what exact type of closure is generated, we can rely on it being a subtype of groovy.lang.Closure. Closures and collection methods We will encounter Groovy lists and see some of the iteration functions, such as the each method: def flintstones = ["Fred","Barney"] flintstones.each { println "Hello, ${it}" } This looks like it could be a specialized control loop similar to a while loop. In fact, it is a call to the each method of Object. The each method takes a closure as one of its parameters, and everything between the curly braces {} defines another anonymous closure. Closures defined in this way can look quite similar to code blocks, but they are not the same. Code defined in a regular Java or Groovy style code block is executed as soon as it is encountered. 
With closures, the block of code defined in the curly braces is not executed until the call() method of the closure is made: println "one" def two = { println "two" } println "three" two.call() println "four" Will print the following: one three two four Let's dig a bit deeper into the structure of the each of the calls shown in the preceding code. I refer to each as a call because that's what it is—a method call. Groovy augments the standard JDK with numerous helper methods. This new and improved JDK is referred to as the Groovy JDK, or GDK for short. In the GDK, Groovy adds the each method to the java.lang.Object class. The signature of the each method is as follows: Object each(Closure closure) The java.lang.Object class has a number of similar methods such as each, find, every, any, and so on. Because these methods are defined as part of Object, you can call them on any Groovy or Java object. They make little sense on most objects, but they do something sensible if not very useful: given: "an Integer" def number = 1 when: "we call the each method on it" number.each { println it } then: "just the object itself gets passed into the Closure" "1" == output() These methods all have specific implementations for all of the collection types, including arrays, lists, ranges, and maps. So, what is actually happening when we see the call to flintstones.each is that we are calling the list's implementation of the each method. Because each takes a Closure as its last and only parameter, the following code block is interpreted by Groovy as an anonymous Closure object to be passed to the method. The actual call to the closure passed to each is deferred until the body of the each method itself is called. The closure may be called multiple times—once for every element in the collection. Closures as method parameters We already know that parentheses around method parameters are optional, so the previous call to each can also be considered equivalent to: flintstones.each ({ println "Hello, ${it}") Groovy has a special handling for methods whose last parameter is a closure. When invoking these methods, the closure can be defined anonymously after the method call parenthesis. So, yet another legitimate way to call the preceding line is: flintstones.each() { println "hello, ${it}" } The general convention is not to use parentheses unless there are parameters in addition to the closure: given: def flintstones = ["Fred", "Barney", "Wilma"] when: "we call findIndexOf passing int and a Closure" def result = flintstones.findIndexOf(0) { it == 'Wilma'} then: result == 2 The signature of the GDK findIndexOf method is: int findIndexOf(int, Closure) We can define our own methods that accept closures as parameters. The simplest case is a method that accepts only a single closure as a parameter: def closureMethod(Closure c) { c.call() } when: "we invoke a method that accepts a closure" closureMethod { println "Closure called" } then: "the Closure passed in was executed" "Closure called" == output() Method parameters as DSL This is an extremely useful construct when we want to wrap a closure in some other code. Suppose we have some locking and unlocking that needs to occur around the execution of a closure. 
Rather than the writer of the code to locking via a locking API call, we can implement the locking within a locker method that accepts the closure: def locked(Closure c) { callToLockingMethod() c.call() callToUnLockingMethod() } The effect of this is that whenever we need to execute a locked segment of code, we simply wrap the segment in a locked closure block, as follows: locked { println "Closure called" } In a small way, we are already writing a mini DSL when we use these types on constructs. This call to the locked method looks, to all intents and purposes, like a new language construct, that is, a block of code defining the scope of a locking operation. When writing methods that take other parameters in addition to a closure, we generally leave the Closure argument to last. As already mentioned in the previous section, Groovy has a special syntax handling for these methods, and allows the closure to be defined as a block after the parameter list when calling the method: def closureMethodInteger(Integer i, Closure c) { println "Line $i" c.call() } when: "we invoke a method that accepts an Integer and a Closure" closureMethodInteger(1) { println "Line 2" } then: "the Closure passed in was executed with the parameter" """Line 1 Line 2""" == output() Forwarding parameters Parameters passed to the method may have no impact on the closure itself, or they may be passed to the closure as a parameter. Methods can accept multiple parameters in addition to the closure. Some may be passed to the closure, while others may not: def closureMethodString(String s, Closure c) { println "Greet someone" c.call(s) } when: "we invoke a method that accepts a String and a Closure" closureMethodString("Dolly") { name -> println "Hello, $name" } then: "the Closure passed in was executed with the parameter" """Greet someone Hello, Dolly""" == output() This construct can be used in circumstances where we have a look-up code that needs to be executed before we have access to an object. Say we have customer records that need to be retrieved from a database before we can use them: def withCustomer (id, Closure c) { def cust = getCustomerRecord(id) c.call(cust) } withCustomer(12345) { customer -> println "Found customer ${customer.name}" } We can write an updateCustomer method that saves the customer record after the closure is invoked, and amend our locked method to implement transaction isolation on the database, as follows: class Customer { String name } def locked (Closure c) { println "Transaction lock" c.call() println "Transaction release" } def update (customer, Closure c) { println "Customer name was ${customer.name}" c.call(customer) println "Customer name is now ${customer.name}" } def customer = new Customer(name: "Fred") At this point, we can write code that nests the two method calls by calling update as follows: locked { update(customer) { cust -> cust.name = "Barney" } } This outputs the following result, showing how the update code is wrapped by updateCustomer, which retrieves the customer object and subsequently saves it. The whole operation is wrapped by locked, which includes everything within a transaction: Transaction lock Customer name was Fred Customer name is now Barney Transaction release Summary In this article, we covered closures in some depth. We explored the various ways to call a closure and the means of passing parameters. We saw how we can pass closures as parameters to methods, and how this construct can allow us to appear to add mini DSL syntax to our code. 
Closures are the real "power" feature of Groovy, and they form the basis of most of the DSLs. Resources for Article: Further resources on this subject: Using Groovy Closures Instead of Template Method [article] Metaprogramming and the Groovy MOP [article] Clojure for Domain-specific Languages - Design Concepts with Clojure [article]

Implementing Microsoft Dynamics AX

Packt
16 Sep 2015
6 min read
 In this article by Yogesh Kasat and JJ Yadav, authors of the book Microsoft Dynamics AX Implementation Guide, you will learn one of the important topic in Microsoft Dynamics AX implementation process—configuration data management. (For more resources related to this topic, see here.) The configuration of an ERP system is one of the most important parts of the process. Configuration means setting up the base data and parameters to enable your product features such as financial, shipping, sales tax, and so on. Microsoft Dynamics AX has been developed based on the generic requirements of various organizations and contains the business processes belonging to diverse business segments. It is a very configurable product that allows the implementation team to configure features based on specific business needs. During the project, the implementation team identifies the relevant components of the system and sets up and aligns these components to meet the specific business requirements. This process starts in the analysis phase of the project carrying on through the design, development, and deployment phases. Configuration management is different from data migration. Data migration broadly covers the transactional data of the legacy system and core data such as Opening balances, Open AR, Open AP, customers, vendors, and so on. When we talk about configuration management, we are referring to items like fiscal years and periods, chart of accounts, segments, and defining applicable rules, journal types, customer groups, terms of payments, module-based parameters, workflows, number sequences, and the like. In a broader sense, configuration covers the basic parameters, setup data, and reference data which you configure for the different modules in Dynamics AX. The following diagram shows the different phases of configuration management: In any ERP implementation project, you deal with multiple environments. For example, you start with CRP, after the development you move to the test environment, and then training, UAT, and production, as shown in the following diagram: One of the biggest challenges that an implementation team faces is moving the configuration from one environment to another. If configurations keep changing in every environment, it becomes more difficult to manage them. Similar to code promotion and release management across environments, configuration changes need to be tracked through a change-control process across environments to ensure that you are testing with a consistent set of configurations. The objective is to keep track of all the configuration changes and make sure that they make it to the final cut in the production environment. The following sections outline some approaches used for configuration data management in the Dynamics AX project. The golden environment An environment that is pristine without any transactions—the golden environment—is sometimes referred to as a stage or pre-prod environment. Create the configurations from scratch and/or use various tools to create and update the configuration data. Develop a process to update the configuration in the golden environment once it has been changed and approved in the test environments. The golden environment can be turned into a production environment or the data can be copied over to the production environment using database restore. The golden environment database can be used as a starting point for every run of data migration. 
For example, if you are preparing for UAT, use the golden environment database as a starting point. Copy to UAT and perform data migration in your UAT environment. This would ensure time you are testing with the golden configurations (If the configuration is missing in the golden environment, you would be able to catch it during testing and fix your UAT and the golden environment too). The pros of the golden environment are given as follows: The golden environment is a single environment for controlling the configuration data It uses all the tools available for the initial configuration There are less number of chances for corruption of the configuration data The cons of the golden environment are given as follows: There is a risk of missing configuration updates due to not following the processes (as the configuration updates are made directly in the testing and UAT environments). There are chances of migrating the revision data into the production environment like workflow history, address revisions, and policies versions. There is a risk of migrating environment-specific data from the golden environment to the production environment. This is not useful for a project going live in multiple phases, as you will not be able to transfer the incremental configuration data using database restore. You must keep the environment in sync with the latest code. Copying the template company In this approach, the implementation team typically defines a template legal entity and configures the template company from scratch. Once completed, the template company's configuration data is copied over to the actual legal entity using the data export/import process. This approach is useful for projects going live in multiple phases, where a global template is created and used across different legal entities. Whereas, in AX 2012, a lot configuration data is shared and it makes it almost impossible to copy the company data. Building configuration templates In this approach, the implementation team typically builds a repository of all the configurations done in a file, imports them in each subsequent environment, and finally, in the production environment. The pros of building configuration templates are as follows: It is a clean approach. You can version-control the configuration file. This approach is very useful for projects going live in multiple phases, as you can import the incremental configuration data in the subsequent releases. This approach may need significant development efforts to create the X+ scripts or DIXF custom entities to import all the required configurations. Summary Clearly there are several options to choose from for configuration data management but they have their own pros and cons. While building configuration template is ideal solution for configuration data management it could be costly as it may need significant development effort to build custom entity to export and import data across environments. The golden environment process is widely used on the implementation projects as it’s easy to manage and require minimal development team involvement. Resources for Article: Further resources on this subject: Web Services and Forms[article] Setting Up and Managing E-mails and Batch Processing[article] Integrating Microsoft Dynamics GP Business Application fundamentals[article]

How to Deploy a Simple Django App Using AWS

Liz Tom
16 Sep 2015
6 min read
So you've written your first Django app and now you want to show the world your awesome To Do List. If you like me, your first Django app was from the awesome Django tutorial on their site. You may have heard of AWS. What exactly does this mean, and how does it pertain to getting your app out there. AWS is Amazon Web Services. They have many different products, but we're just going to focus on using one today: Elastic Compute Cloud (EC2) - Scalable virtual private servers. So you have your Django app and it runs beautifully locally. The goal is to reproduce everything but on Amazon's servers. Note: There are many different ways to set up your servers, this is just one way. You can and should experiment to see what works best for you. Application Server First up we're going to need to spin up a server to host your application. Let's go back, since the very first step would actually be to sign up for an AWS account. Please make sure to do that first. Now that we're back on track, you'll want to log into your account and go to your management dashboard. Click on EC2 under compute. Then click "Launch Instance". Now choose your operating system. I use Ubuntu because that's what we use at work. Basically, you should choose an operating system that is as close to the operating system that you use to develop in. Step 2 has you choosing an instance type. Since this is a small app and I want to be in the free tier the t2.micro will do. When you have a production ready app to go, you can read up more on EC2 instance types here. Basically you can add more power to your EC2 instance as you move up. Step 3: Click Next: Configure Instance Details For a simple app we don't need to change anything on this page. One thing to note is the Purchasing option. There are three different types of EC2 Purchasing Options, Spot Instances, Reserved Instances and Dedicated Instances. See them but since we're still on the free tier, let's not worry about this for now. Step 4: Click Next: Add Storage You don't need to change anything here, but this is where you'd click Next: Tag Instance (Step 5). You also don't need to change anything here, but if you're managing a lot of EC2 instances it's probably a good idea to to tag your instances. Step 6: Click Next: Configure Security Group. Under Type select HTTP and the rest should autofill. Otherwise you will spend hours wondering why Nginx hates you and doesn't want to work. Finally, Click Launch. A modal should have popped up prompting you to select an existing key pair or create a new key pair. Unless you already have an exisiting key pair, select Create a new key pair and give it name. You have to download this file and make sure to keep it somewhere safe and somewhere you will remember. You won't be able to download this file again, but you can always spin up another EC2 instance, and create a new key again. Click Launch Instances! You did it! You launched an EC2 instance! Configuring your EC2 Instance But I'm sorry to tell you that your journey is not over. You'll still need to configure your server with everything it needs to run your Django app. Click View Instances. This should bring you to a panel that shows you if your instance is running or not. You'll need to grab your Public IP address from here. So do you remember that private key you downloaded? You'll be needing that for this step. Open your terminal: cd path/to/your/secret/key chmod 400 your_key-pair_name.pem chmod 400 your_key-pair_name.pem is to set the permissions on the key so only you can read it. 
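One optional convenience before connecting (not something the walkthrough requires): you can append a host entry to ~/.ssh/config so you don't have to type the -i flag and the full key path on every connection. The alias django-ec2 and the paths below are placeholders; substitute your own values:

# optional: add a host alias so plain "ssh django-ec2" works
cat >> ~/.ssh/config <<'EOF'
Host django-ec2
    HostName YOUR-PUBLIC-IP
    User ubuntu
    IdentityFile ~/path/to/your/secret/key/your_key-pair_name.pem
EOF

After that, ssh django-ec2 is equivalent to the longer ssh -i command shown next.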
Now let's SSH to your instance:

ssh -i path/to/your/secret/key/your_key-pair_name.pem ubuntu@IP-ADDRESS

Since we're running Ubuntu and will be using apt, we need to make sure that apt is up to date:

sudo apt-get update

Then you need your web server (nginx):

sudo apt-get install nginx

Since we installed Ubuntu 14.04, Nginx starts up automatically. You should be able to visit your public IP address and see a screen that says Welcome to nginx! Great, nginx was installed correctly and is all booted up. Let's get your app on there! Since this is a Django project, you'll need to install Django on your server:

sudo apt-get install python-pip
sudo apt-get install git
sudo pip install virtualenv

Pull your project down from GitHub:

git clone my-git-hub-url

In your project's root directory, make sure you have at a minimum a requirements.txt file with the following:

django
gunicorn

Side note: gunicorn is a Python WSGI HTTP server for UNIX. You can find out more here. Make a virtualenv and install your pip requirements using:

pip install -r requirements.txt

Now you should have Django and gunicorn installed. Since nginx starts automatically, you'll want to shut it down:

sudo service nginx stop

Now you'll turn on gunicorn by running:

gunicorn app-name.wsgi

Now that gunicorn is up and running, it's time to turn on nginx:

cd /etc/nginx
sudo vi nginx.conf

Within the http block, either at the top or the bottom, you'll want to insert this block:

server {
    listen 80;
    server_name public-ip-address;
    access_log /var/log/nginx-access.log;
    error_log /var/log/nginx-error.log;
    root /home/ubuntu/project-root;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Now start up nginx again:

sudo service nginx start

Go to your public IP address and you should see your lovely app on the Internet. The End Congratulations! You did it. You just deployed your awesome Django app using AWS. Do a little dance, pat yourself on the back, and feel good about what you just accomplished! But, one note: as soon as you close your connection and terminate gunicorn, your app will no longer be running. You'll need to set up something like Upstart to keep your app running all the time. Hope you had fun!   About the author Liz Tom is a Creative Technologist at iStrategyLabs in Washington D.C. Liz’s passion for full stack development and digital media makes her a natural fit at ISL. Before joining iStrategyLabs, she worked in the film industry doing everything from mopping blood off of floors to managing budgets. When she’s not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.
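Following up on the note above about keeping gunicorn running after you log out: on Ubuntu 14.04, one option is an Upstart job. The sketch below is only an illustration and assumes your project lives in /home/ubuntu/project-root with a virtualenv in venv and the placeholder WSGI module app-name.wsgi used earlier; adjust the paths and names to match your app. Save it as /etc/init/gunicorn.conf and start it with sudo service gunicorn start:

# /etc/init/gunicorn.conf -- illustrative only; adjust paths to your project
description "gunicorn server for the Django app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid ubuntu
chdir /home/ubuntu/project-root
exec /home/ubuntu/project-root/venv/bin/gunicorn app-name.wsgi --bind 127.0.0.1:8000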

Deploying the Orchestrator Appliance

Packt
16 Sep 2015
5 min read
This article by Daniel Langenhan, the author of VMware vRealize Orchestrator Essentials, discusses the deployment of the Orchestrator Appliance, and then goes on to explain how to access it using the Orchestrator home page. In the following sections, we will discuss how to deploy Orchestrator in vCenter and with VMware Workstation. (For more resources related to this topic, see here.) Deploying the Appliance with vCenter To make the best use of Orchestrator, it's best to deploy it into your vSphere infrastructure. For this, we deploy it with vCenter. Open your vSphere Web Client and log in. Select a host or cluster that should host the Orchestrator Appliance. Right-click the Host or Cluster and select Deploy OVF Template. The deploy wizard will start and ask you the typical OVF questions: accept the EULA, choose the VM name and the VM folder where it will be stored, and select the storage and network it should connect to. Make sure that you select a static IP. The Customize template step will now ask you about some more Orchestrator-specific details. You will be asked to provide a new password for the root user. The root user is used to connect to the vRO appliance operating system or the web console. The other password that is needed is for the vRO Configurator interface. The last piece of information needed is the network information for the new VM. The following screenshot shows an example of the Customize template step: The last step summarizes all the settings and lets you power on the VM after creation. Click on Finish and wait until the VM is deployed and powered on. Deploying the appliance into VMware Workstation For learning how to use Orchestrator, or for testing purposes, you can deploy Orchestrator using VMware Workstation (Fusion for Mac users). The process is pretty simple: Download the Orchestrator Appliance on to your desktop. Double-click on the OVA file. The import wizard now asks you for a name and location in your local file structure for this VM. Choose a location and click on Import. Accept the EULA. Wait until the import has finished. Click on Edit virtual machine settings. Select Network Adapter. Choose the correct network (Bridged, NAT, or Host only) for this VM. I typically use Host Only. Click on OK to exit the settings. Power on the VM. Watch the boot screen. At some stage, the boot will stop and you will be prompted for the root password. Enter a new password and confirm it. After a moment, you will be asked for the password for the Orchestrator Configurator. Enter a new password and confirm it. After this, the boot process should finish, and you should see the Orchestrator Appliance DHCP IP. If you would like to configure the VM with a fixed IP, access the appliance configuration, as shown on the console screen (see the next section). After the deployment If the deployment is successful, the console of the VM should show a screen that looks like the following screenshot: You can now access the Orchestrator Appliance, as shown in the next section. Accessing Orchestrator Orchestrator has its own little web server that can be accessed by any web browser. Accessing the Orchestrator home page We will now access the Orchestrator home page: Open a web browser such as Mozilla Firefox, IE, or Google Chrome. Enter the IP or FQDN of the Orchestrator Appliance. The Orchestrator home page will open. It looks like the following screenshot: The home page contains some very useful links, as shown in the preceding screenshot.
Here is an explanation of each numbered link:
1. Click here to start the Orchestrator Java Client. You can also access the Client directly by visiting https://[IP or FQDN]:8281/vco/client/client.jnlp.
2. Click here to download and install the Orchestrator Java Client locally.
3. Click here to access the Orchestrator Configurator, which is scheduled to disappear soon and will no longer be used; the way forward is the Orchestrator Control Center.
4. This is a selection of links that can be used to find helpful information and download plugins.
5. These are some additional links to VMware sites.
Starting the Orchestrator Client Let's open the Orchestrator Client. We will use an internal user to log in until we have hooked up Orchestrator to SSO. For the Orchestrator Client, you need at least Java 7. From the Orchestrator home page, click on Start Orchestrator Client. Your Java environment will start. You may be required to acknowledge that you really want to start this application. You will now be greeted with the login screen to Orchestrator: Enter vcoadmin as the username and vcoadmin as the password. This is a preconfigured user that allows you to log in and use Orchestrator directly. Click on Login. Now, the Orchestrator Client will load. After a moment, you will see something that looks like the following screenshot: You are now logged in to the Orchestrator Client. Summary This article guided you through the process of deploying and accessing an Orchestrator Appliance with vCenter and VMware Workstation. Resources for Article: Further resources on this subject: Working with VMware Infrastructure [article] Upgrading VMware Virtual Infrastructure Setups [article] VMware vRealize Operations Performance and Capacity Management [article]

Prerequisites for a Map Application

Packt
16 Sep 2015
10 min read
In this article by Raj Amal, author of the book Learning Android Google Maps, we will cover the following topics: Generating an SHA1 fingerprint in Windows, Linux, and Mac OS X Registering our application in the Google Developer Console Configuring Google Play services with our application Adding permissions and defining an API key Generating the SHA1 fingerprint Let's learn about generating the SHA1 fingerprint on different platforms one by one. Windows The keytool utility usually comes with the JDK package. We use keytool to generate the SHA1 fingerprint. Navigate to the bin directory in your default JDK installation location, which is what you configured in the JAVA_HOME variable, for example, C:\Program Files\Java\jdk1.7.0_71. Then, navigate to File | Open command prompt. Now, the command prompt window will open. Enter the following command, and then hit the Enter key:

keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android

You will see output similar to what is shown here:

Valid from: Sun Nov 02 16:49:26 IST 2014 until: Tue Oct 25 16:49:26 IST 2044
Certificate fingerprints:
MD5: 55:66:D0:61:60:4D:66:B3:69:39:23:DB:84:15:AE:17
SHA1: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33

In the preceding output, note down the SHA1 value that is required to register our application with the Google Developer Console. This is representative of the typical output shown when the preceding command is executed. Linux We are going to obtain the SHA1 fingerprint from the debug.keystore file, which is present in the .android folder in your home directory. If you installed Java directly from a PPA, open the terminal and enter the following command:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

This will return an output similar to the one we obtained in Windows. Note down the SHA1 fingerprint, which we will use later. If you've installed Java manually, you'll need to run keytool from its own location. You can export the Java JDK path as follows:

export JAVA_HOME={PATH to JDK}

After exporting the path, run keytool as follows:

$JAVA_HOME/bin/keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Mac OS X Generating the SHA1 fingerprint in Mac OS X is similar to what you performed in Linux. Open the terminal and enter the same command. It will show output similar to what we obtained in Linux. Note down the SHA1 fingerprint, which we will use later:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Registering your application to the Google Developer Console This is one of the most important steps in our process. Our application will not function without obtaining an API key from the Google Developer Console. Follow these steps one by one to obtain the API key: Open the Google Developer Console by visiting https://console.developers.google.com and click on the CREATE PROJECT button. A new dialog box appears. Give your project a name and a unique project ID. Then, click on Create: As soon as your project is created, you will be redirected to the Project dashboard. On the left-hand side, under the APIs & auth section, select APIs: Then, scroll down and enable Google Maps Android API v2: Next, under the same APIs & auth section, select Credentials.
Select Create new Key under the Public API access, and then select Android key in the following dialog: In the next window, enter the SHA1 fingerprint we noted in our previous section followed by a semicolon and the package name of the Android application we wish to register. For example, my SHA1 fingerprint value is C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33, and the package name of the app I wish to create is com.raj.map; so, I need to enter the following: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33;com.raj.map You need to enter the value shown in the following screen: Finally, click on Create. Now our Android application will be registered with the Google Developer Console and it will a display a screen similar to the following one: Note down the API key from the screen, which will be similar to this: AIzaSyAdJdnEG5vfo925VV2T9sNrPQ_rGgIGnEU Configuring Google Play services Google Play services includes the classes required for our map application. So, it is required to be set up properly. It differs for Eclipse with the ADT plugin and Gradle-based Android Studio. Let's see how to configure Google Play services for both separately; It is relatively simple. Android Studio Configuring Google Play Services with Android Studio is very simple. You need to add a line of code to your build.gradle file, which contains the Gradle build script required to build our project. There are two build.gradle files. You must add the code to the inner app's build.gradle file. The following screenshot shows the structure of the project: The code should be added to the second Gradle build file, which contains our app module's configuration. Add the following code to the dependencies section in the Gradle build file: compile 'com.google.android.gms:play-services:7.5.0 The structure should be similar to the following code: dependencies { compile 'com.google.android.gms:play-services:7.5.0' compile 'com.android.support:appcompat-v7:21.0.3' } The 7.5.0 in the code is the version number of Google Play services. Change the version number according to your current version. The current version can be found from the values.xml file present in the res/values directory of the Google Play services library project. The newest version of Google Play services can found at https://developers.google.com/android/guides/setup. That's it. Now resync your project. You can sync by navigating to Tools | Android | Sync Project with Gradle Files. Now, Google Play services will be integrated with your project. Eclipse Let's take a look at how to configure Google Play services in Eclipse with the ADT plugin. First, we need to import Google Play services into Workspace. Navigate to File | Import and the following window will appear: In the preceding screenshot, navigate to Android | Existing Android Code Into Workspace. Then click on Next. In the next window, browse the sdk/extras/google/google_play_services/libproject/google-play-services_lib directory root directory as shown in the following screenshot: Finally, click on Finish. Now, google-play-services_lib will be added to your Workspace. Next, let's take a look at how to configure Google Play services with our application project. Select your project, right-click on it, and select Properties. In the Library section, click on Add and choose google-play-services_lib. Then, click on OK. 
Now, google-play-services_lib will be added as a library to our application project as shown in the following screenshot: In the next section, we will see how to configure the API key and add permissions that will help us to deploy our application. Adding permissions and defining the API key The permissions and API key must be defined in the AndroidManifest.xml file, which provides essential information about applications in the operating system. The OpenGL ES version must be specified in the manifest file, which is required to render the map and also the Google Play services version. Adding permissions Three permissions are required for our map application to work properly. The permissions should be added inside the <manifest> element. The four permissions are as follows: INTERNET ACCESS_NETWORK_STATE WRITE_EXTERNAL_STORAGE READ_GSERVICES Let's take a look at what these permissions are for. INTERNET This permission is required for our application to gain access to the Internet. Since Google Maps mainly works on real-time Internet access, the Internet it is essential. ACCESS_NETWORK_STATE This permission gives information about a network and whether we are connected to a particular network or not. WRITE_EXTERNAL_STORAGE This permission is required to write data to an external storage. In our application, it is required to cache map data to the external storage. READ_GSERVICES This permission allows you to read Google services. The permissions are added to AndroidManifest.xml as follows: <uses-permission android_name="android.permission.INTERNET"/> <uses-permission android_name="android.permission.ACCESS_NETWORK_STATE"/> <uses-permission android_name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android_name="com.google.android.providers.gsf.permission.READ_GSERVICES" /> There are some more permissions that are currently not required. Specifying the Google Play services version The Google Play services version must be specified in the manifest file for the functioning of maps. It must be within the <application> element. Add the following code to AndroidManifest.xml: <meta-data android_name="com.google.android.gms.version" android_value="@integer/google_play_services_version" />   Specifying the OpenGL ES version 2 Android Google maps uses OpenGL to render a map. Google maps will not work on devices that do not support version 2 of OpenGL. Hence, it is necessary to specify the version in the manifest file. It must be added within the <manifest> element, similar to permissions. Add the following code to AndroidManifest.xml: <uses-feature android_glEsVersion="0x00020000" android_required="true"/> The preceding code specifies that version 2 of OpenGL is required for the functioning of our application. Defining the API key The Google maps API key is required to provide authorization to the Google maps service. It must be specified within the <application> element. Add the following code to AndroidManifest.xml: <meta-data android_name="com.google.android.maps.v2.API_KEY" android_value="API_KEY"/> The API_KEY value must be replaced with the API key we noted earlier from the Google Developer Console. 
The complete AndroidManifest structure after adding permissions, specifying OpenGL, the Google Play services version, and defining the API key is as follows: <?xml version="1.0" encoding="utf-8"?> <manifest package="com.raj.sampleapplication" android_versionCode="1" android_versionName="1.0" > <uses-feature android_glEsVersion="0x00020000" android_required="true"/> <uses-permission android_name="android.permission.INTERNET"/> <uses-permission android_name="android.permission.ACCESS_NETWORK_STATE"/> <uses-permission android_name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android_name="com.google.android.providers.gsf.permission.READ_GSERVICES" /> <application> <meta-data android_name="com.google.android.gms.version" android_value="@integer/google_play_services_version" /> <meta-data android_name="com.google.android.maps.v2.API_KEY" android_value="AIzaSyBVMWTLk4uKcXSHBJTzrxsrPNSjfL18lk0"/> </application> </manifest>   Summary In this article, we learned how to generate the SHA1 fingerprint in different platforms, registering our application in the Google Developer Console, and generating an API key. We also configured Google Play services in Android Studio and Eclipse and added permissions and other data in a manifest file that are essential to create a map application. Resources for Article: Further resources on this subject: Testing with the Android SDK [article] Signing an application in Android using Maven [article] Code Sharing Between iOS and Android [article]
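A closing note related to the SHA1 fingerprint section earlier: the debug keystore only covers builds signed with the debug key. When you later sign a release build, the release keystore produces a different SHA1 fingerprint, and that fingerprint also has to be added to the same API key entry in the Google Developer Console. Assuming a release keystore file named my-release-key.keystore with the alias my-key-alias (both placeholder names), the command is analogous; keytool will prompt you for the keystore password:

# same listing command, pointed at the release keystore (placeholder names)
keytool -list -v -keystore my-release-key.keystore -alias my-key-alias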

Working on the User Interface

Packt
16 Sep 2015
17 min read
In this article by Fabrizio Caldarelli, author of the book Yii By Example, will cover the following topics related to the user interface in this article: Customizing JavaScript and CSS Using AJAX Using the Bootstrap widget Viewing multiple models in the same view Saving linked models in the same view It is now time for you to learn what Yii2 supports in order to customize the JavaScript and CSS parts of web pages. A recurrent use of JavaScript is to handle AJAX calls, that is, to manage widgets and compound controls (such as a dependent drop-down list) from jQuery and Bootstrap. Finally, we will employ jQuery to dynamically create more models from the same class in the form, which will be passed to the controller in order to be validated and saved. (For more resources related to this topic, see here.) Customize JavaScript and CSS Using JavaScript and CSS is fundamental to customize frontend output. Differently from Yii1, where calling JavaScript and CSS scripts and files was done using the Yii::app() singleton, in the new framework version, Yii2, this task is part of the yiiwebView class. There are two ways to call JavaScript or CSS: either directly passing the code to be executed or passing the path file. The registerJs() function allows us to execute the JavaScript code with three parameters: The first parameter is the JavaScript code block to be registered The second parameter is the position where the JavaScript tag should be inserted (the header, the beginning of the body section, the end of the body section, enclosed within the jQuery load() method, or enclosed within the jQuery document.ready() method, which is the default) The third and last parameter is a key that identifies the JavaScript code block (if it is not provided, the content of the first parameter will be used as the key) On the other hand, the registerJsFile() function allows us to execute a JavaScript file with three parameters: The first parameter is the path file of the JavaScript file The second parameter is the HTML attribute for the script tag, with particular attention given to the depends and position values, which are not treated as tag attributes The third parameter is a key that identifies the JavaScript code block (if it's not provided, the content of the first parameter will be used as the key) CSS, similar to JavaScript, can be executed using the code or by passing the path file. Generally, JavaScript or CSS files are published in the basic/web folder, which is accessible without restrictions. So, when we have to use custom JavaScript or CSS files, it is recommended to put them in a subfolder of the basic/web folder, which can be named as css or js. In some circumstances, we might be required to add a new CSS or JavaScript file for all web application pages. The most appropriate place to put these entries is AppAsset.php, a file located in basic/assets/AppAsset.php. In it, we can add CSS and JavaScript entries required in web applications, even using dependencies if we need to. Using AJAX Yii2 provides appropriate attributes for some widgets to make AJAX calls; sometimes, however, writing a JavaScript code in these attributes will make code hard to read, especially if we are dealing with complex codes. Consequently, to make an AJAX call, we will use external JavaScript code executed by registerJs(). 
This is a template of an AJAX call using the GET or POST method:

<?php
$this->registerJs( <<< EOT_JS

    // using GET method
    $.get({
        url: url,
        data: data,
        success: success,
        dataType: dataType
    });

    // using POST method
    $.post({
        url: url,
        data: data,
        success: success,
        dataType: dataType
    });

EOT_JS
);
?>

An AJAX call is usually the effect of a user interface event (such as a click on a button, a link, and so on). So, it is usually wired directly to a jQuery .on() event on an element. For this reason, it is important to remember how Yii2 renders the name and id attributes of input fields.

When we call Html::activeTextInput($model, $attribute), or in the same way use <?= $form->field($model, $attribute)->textInput() ?>, the name and id attributes of the input text field will be rendered as follows:

id: the model class name separated with a dash from the attribute name, in lowercase; for example, if the model class name is Room and the attribute is floor, the id attribute will be room-floor
name: the model class name enclosing the attribute name; for example, if the model class name is Reservation and the attribute is price_per_day, the name attribute will be Reservation[price_per_day], so every field owned by the Reservation model will be enclosed in a single array

In this example, there are two drop-down lists and a detail box. The two drop-down lists refer to customers and reservations; when the user clicks on a customer list item, the second drop-down list of reservations will be filled out according to their choice. Finally, when a user clicks on a reservation list item, a details box will be filled out with data about the selected reservation.

This is handled in an action named actionDetailDependentDropdown():

public function actionDetailDependentDropdown()
{
    $showDetail = false;

    $model = new Reservation();

    if(isset($_POST['Reservation']))
    {
        $model->load( Yii::$app->request->post() );

        if(isset($_POST['Reservation']['id']) && ($_POST['Reservation']['id']!=null))
        {
            $model = Reservation::findOne($_POST['Reservation']['id']);
            $showDetail = true;
        }
    }

    return $this->render('detailDependentDropdown', [
        'model' => $model,
        'showDetail' => $showDetail
    ]);
}

In this action, we get the customer_id and id parameters from a form based on the Reservation model data, and if they are filled out, they will be used to find the correct reservation model to pass to the view. There is a flag called $showDetail that displays the reservation details content if the id attribute of the model is received.

In the controller, there is also an action that will be called using AJAX when the user changes the customer selection in the drop-down list:

public function actionAjaxDropDownListByCustomerId($customer_id)
{
    $output = '';

    $items = Reservation::findAll(['customer_id' => $customer_id]);

    foreach($items as $item)
    {
        $content = sprintf('reservation #%s at %s', $item->id, date('Y-m-d H:i:s', strtotime($item->reservation_date)));
        $output .= \yii\helpers\Html::tag('option', $content, ['value' => $item->id]);
    }

    return $output;
}

This action will return the <option> HTML tags, filled out with the reservation data filtered by the customer ID passed as a parameter.

If the customer drop-down list is changed, the detail div will be hidden, an AJAX call will get all the reservations filtered by customer_id, and the result will be passed as content to the reservations drop-down list. If the reservations drop-down list is changed, a form will be submitted.
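The JavaScript that wires up these two events is not shown in this excerpt. A minimal sketch, registered with registerJs(), might look like the following; the element IDs (reservation-customer_id, reservation-id, detail) follow the naming conventions described above, but the exact IDs, the route, and the form selector are assumptions for illustration:

<?php
$this->registerJs( <<< EOT_JS

    $('#reservation-customer_id').on('change', function () {
        // hide the details box and reload the reservations list for the selected customer
        $('#detail').hide();
        $.get('index.php?r=reservations/ajax-drop-down-list-by-customer-id',
            { customer_id: $(this).val() },
            function (data) {
                $('#reservation-id').html(data);
            }
        );
    });

    $('#reservation-id').on('change', function () {
        // submit the surrounding form so the controller can render the details box
        $(this).closest('form').submit();
    });

EOT_JS
);
?>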
Next, in the form declaration, we find first of all the customer drop-down list and then the reservations list, which uses a closure to get its values from the ArrayHelper::map() method. We could add a new property to the Reservation model by creating a function starting with the get prefix, such as getDescription(), and put the content of the closure in it:

public function getDescription()
{
    $content = sprintf('reservation #%s at %s', $this->id, date('Y-m-d H:i:s', strtotime($this->reservation_date)));
    return $content;
}

Or we could use a short syntax to get data from ArrayHelper::map() in this way:

<?= $form->field($model, 'id')->dropDownList(ArrayHelper::map( $reservations, 'id', 'description'), [ 'prompt' => '--- choose' ]); ?>

Finally, if $showDetail is flagged, a simple details box with only the price per day of the reservation will be displayed.

Using the Bootstrap widget

Yii2 supports Bootstrap as a core feature. The Bootstrap framework's CSS and JavaScript files are injected by default in all pages, and we can use this feature even just to apply CSS classes or to call our own JavaScript functions provided by Bootstrap. However, Yii2 also embeds Bootstrap as a set of widgets, and we can access this framework's capabilities like any other widget. The most used are:

yii\bootstrap\Alert: this class renders an alert Bootstrap component
yii\bootstrap\Button: this class renders a Bootstrap button
yii\bootstrap\Dropdown: this class renders a Bootstrap drop-down menu component
yii\bootstrap\Nav: this class renders a nav HTML component
yii\bootstrap\NavBar: this class renders a navbar HTML component

For example, yii\bootstrap\Nav and yii\bootstrap\NavBar are used in the default main template:

<?php
NavBar::begin([
    'brandLabel' => 'My Company',
    'brandUrl' => Yii::$app->homeUrl,
    'options' => [
        'class' => 'navbar-inverse navbar-fixed-top',
    ],
]);
echo Nav::widget([
    'options' => ['class' => 'navbar-nav navbar-right'],
    'items' => [
        ['label' => 'Home', 'url' => ['/site/index']],
        ['label' => 'About', 'url' => ['/site/about']],
        ['label' => 'Contact', 'url' => ['/site/contact']],
        Yii::$app->user->isGuest ?
            ['label' => 'Login', 'url' => ['/site/login']] :
            ['label' => 'Logout (' . Yii::$app->user->identity->username . ')',
                'url' => ['/site/logout'],
                'linkOptions' => ['data-method' => 'post']],
    ],
]);
NavBar::end();
?>

Yii2 also supports many jQuery UI widgets through the JUI extension for Yii 2, yii2-jui. If we do not have the yii2-jui extension in the vendor folder, we can get it from Composer using this command:

php composer.phar require --prefer-dist yiisoft/yii2-jui

In this example, we will discuss the two most used widgets: datepicker and autocomplete.

Let's have a look at the datepicker widget. This widget can be initialized using a model attribute or by filling out a value property. The following is an example made using a model instance and one of its attributes:

echo DatePicker::widget([
    'model' => $model,
    'attribute' => 'from_date',
    //'language' => 'it',
    //'dateFormat' => 'yyyy-MM-dd',
]);

And here is a sample of the value property's use:

echo DatePicker::widget([
    'name' => 'from_date',
    'value' => $value,
    //'language' => 'it',
    //'dateFormat' => 'yyyy-MM-dd',
]);

When data is sent via POST, the date_from and date_to fields will be converted from the dd/MM/yyyy format to the yyyy-MM-dd format so that the database can save them. Then, the model object is updated through the save() method.
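The controller action that performs this conversion and renders the view below is not included in this excerpt. A minimal sketch of the idea might look like the following; the action name, the view name, and the use of DateTime::createFromFormat() are assumptions for illustration, while the $reservation and $reservationUpdated variables match the ones the view expects:

public function actionDatePicker($id)
{
    $reservationUpdated = false;
    $reservation = Reservation::findOne($id);

    if ($reservation !== null && $reservation->load(Yii::$app->request->post())) {
        // convert the dates posted as dd/MM/yyyy into yyyy-MM-dd before saving
        foreach (['date_from', 'date_to'] as $attribute) {
            $date = \DateTime::createFromFormat('d/m/Y', $reservation->$attribute);
            if ($date !== false) {
                $reservation->$attribute = $date->format('Y-m-d');
            }
        }
        $reservationUpdated = $reservation->save();
    }

    return $this->render('datePicker', [
        'reservation' => $reservation,
        'reservationUpdated' => $reservationUpdated,
    ]);
}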
Using the Bootstrap widget, an alert box will be displayed in the view after updating the model.

Create the datePicker view:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;
use yii\jui\DatePicker;
?>

<div class="row">
    <div class="col-lg-6">
        <h3>Date Picker from Value<br />(using MM/dd/yyyy format and English language)</h3>
        <?php
            $value = date('Y-m-d');
            echo DatePicker::widget([
                'name' => 'from_date',
                'value' => $value,
                'language' => 'en',
                'dateFormat' => 'MM/dd/yyyy',
            ]);
        ?>
    </div>

    <div class="col-lg-6">
        <?php if($reservationUpdated) { ?>
            <?php echo yii\bootstrap\Alert::widget([
                'options' => [
                    'class' => 'alert-success',
                ],
                'body' => 'Reservation successfully updated',
            ]); ?>
        <?php } ?>

        <?php $form = ActiveForm::begin(); ?>

        <h3>Date Picker from Model<br />(using dd/MM/yyyy format and Italian language)</h3>
        <br />

        <label>Date from</label>
        <?php
            // First implementation of the DatePicker widget
            echo DatePicker::widget([
                'model' => $reservation,
                'attribute' => 'date_from',
                'language' => 'it',
                'dateFormat' => 'dd/MM/yyyy',
            ]);
        ?>
        <br />
        <br />

        <?php
            // Second implementation of the DatePicker widget
            echo $form->field($reservation, 'date_to')->widget(yii\jui\DatePicker::classname(), [
                'language' => 'it',
                'dateFormat' => 'dd/MM/yyyy',
            ]);
        ?>

        <?php echo Html::submitButton('Send', ['class' => 'btn btn-primary']) ?>

        <?php ActiveForm::end(); ?>
    </div>
</div>

The view is split into two columns, left and right. The left column simply displays a DatePicker example from a value (fixed to the current date). The right column displays an alert box if the $reservation model has been updated, followed by two kinds of widget declaration: the first one without using $form and the second one using $form, both outputting the same HTML code. In either case, the DatePicker date output format is set to dd/MM/yyyy through the dateFormat property and the language is set to Italian through the language property.

Multiple models in the same view

Often, we can find many models of the same or different classes in a single view. First of all, remember that Yii2 encapsulates all of a view's form attributes in the same container, named after the model class name. Therefore, when the controller receives the data, it will all be organized in a key of the $_POST array named after the model class name.

If the model class name is Customer, every form input name attribute will be:

Customer[attributeA_of_model]

This is built with:

$form->field($model, 'attributeA_of_model')->textInput();

In the case of multiple models of the same class, the container will again be named after the model class name, but every attribute of each model will be inserted in an array, such as:

Customer[0][attributeA_of_model_0]
Customer[0][attributeB_of_model_0]
…
Customer[n][attributeA_of_model_n]
Customer[n][attributeB_of_model_n]

These are built with:

$form->field($model, '[0]attributeA_of_model')->textInput();
$form->field($model, '[0]attributeB_of_model')->textInput();
…
$form->field($model, '[n]attributeA_of_model')->textInput();
$form->field($model, '[n]attributeB_of_model')->textInput();

Notice that the array key information is inserted in the attribute name! So when the data is passed to the controller, $_POST['Customer'] will be an array composed of the Customer models, and every key of this array, for example $_POST['Customer'][0], holds the data of one Customer model.

Let's see now how to save three customers at once. We will create three blocks, one for each model, each containing some fields of the Customer model.
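The matching controller action is not part of this excerpt. Before building the view, here is a minimal sketch of how the three models could be created, loaded, and saved; the action and view names are assumptions for illustration, while loadMultiple() and validateMultiple() are the standard yii\base\Model helpers for this kind of tabular input:

public function actionCreateMultipleCustomers()
{
    // three empty Customer models, matching the three blocks of fields in the view below
    $models = [new Customer(), new Customer(), new Customer()];

    if (\yii\base\Model::loadMultiple($models, Yii::$app->request->post())
        && \yii\base\Model::validateMultiple($models)) {
        foreach ($models as $model) {
            $model->save(false); // skip re-validation, already done above
        }
    }

    return $this->render('createMultipleCustomers', [
        'models' => $models,
    ]);
}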
Create a view containing a block of input fields repeated for every model passed from the controller:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;

/* @var $this yii\web\View */
/* @var $model app\models\Room */
/* @var $form yii\widgets\ActiveForm */
?>

<div class="room-form">
    <?php $form = ActiveForm::begin(); ?>

    <div class="model">
        <?php for($k=0; $k<sizeof($models); $k++) { ?>
            <?php $model = $models[$k]; ?>
            <hr />
            <label>Model #<?php echo $k+1 ?></label>
            <?= $form->field($model, "[$k]name")->textInput() ?>
            <?= $form->field($model, "[$k]surname")->textInput() ?>
            <?= $form->field($model, "[$k]phone_number")->textInput() ?>
        <?php } ?>
    </div>
    <hr />

    <div class="form-group">
        <?= Html::submitButton('Save', ['class' => 'btn btn-primary']) ?>
    </div>

    <?php ActiveForm::end(); ?>
</div>

For each model, all the fields will have the same validation rules as the Customer class, and every single model object will be validated separately.

Saving linked models in the same view

It can be convenient to save different kinds of models in the same view. This approach allows us to save time and to navigate from every single detail until a final item that merges all the data is created.

Handling different kinds of models linked to each other is not so different from what we have seen so far. The only point to take care of is the link (the foreign keys) between the models, which we must ensure is valid.

The controller action will receive the $_POST data encapsulated in each model's class name container; if we are thinking, for example, of the customer and reservation models, we will have two arrays in the $_POST variable, $_POST['Customer'] and $_POST['Reservation'], containing all the fields of the customer and reservation models.

Then, all the data must be saved together. It is advisable to use a database transaction while saving the data, because the action can be considered complete only when all the data has been saved. Using database transactions in Yii2 is incredibly simple! A database transaction starts by calling beginTransaction() on the database connection object and finishes by calling the commit() or rollback() method on the database transaction object created by beginTransaction().

To start a transaction:

$dbTransaction = Yii::$app->db->beginTransaction();

To commit the transaction, saving all the database activities:

$dbTransaction->commit();

To roll back the transaction, clearing all the database activities:

$dbTransaction->rollback();

So, if a customer was saved and the reservation was not (for any possible reason), our data would be partial and incomplete. Using a database transaction, we avoid this danger.

We now want to create both the customer and reservation models in the same view in a single step. In this way, we will have a box containing the customer model fields and a box with the reservation model fields in the view.
Create a view containing the fields from the customer and reservation models:

<?php
use yii\helpers\Html;
use yii\widgets\ActiveForm;
use yii\helpers\ArrayHelper;
use app\models\Room;
?>

<div class="room-form">
    <?php $form = ActiveForm::begin(); ?>

    <div class="model">
        <?php echo $form->errorSummary([$customer, $reservation]); ?>

        <h2>Customer</h2>
        <?= $form->field($customer, "name")->textInput() ?>
        <?= $form->field($customer, "surname")->textInput() ?>
        <?= $form->field($customer, "phone_number")->textInput() ?>

        <h2>Reservation</h2>
        <?= $form->field($reservation, "room_id")->dropDownList(ArrayHelper::map(Room::find()->all(), 'id', function($room, $defaultValue) {
            return sprintf('Room n.%d at floor %d', $room->room_number, $room->floor);
        })); ?>
        <?= $form->field($reservation, "price_per_day")->textInput() ?>
        <?= $form->field($reservation, "date_from")->textInput() ?>
        <?= $form->field($reservation, "date_to")->textInput() ?>
    </div>

    <div class="form-group">
        <?= Html::submitButton('Save customer and room', ['class' => 'btn btn-primary']) ?>
    </div>

    <?php ActiveForm::end(); ?>
</div>

We have created two blocks in the form to fill out the fields for the customer and the reservation.

Now, create a new action named actionCreateCustomerAndReservation in ReservationsController in basic/controllers/ReservationsController.php:

public function actionCreateCustomerAndReservation()
{
    $customer = new \app\models\Customer();
    $reservation = new \app\models\Reservation();

    // It is useful to set a fake customer_id in the reservation model to avoid a validation error (because customer_id is mandatory)
    $reservation->customer_id = 0;

    if(
        $customer->load(Yii::$app->request->post())
        && $reservation->load(Yii::$app->request->post())
        && $customer->validate()
        && $reservation->validate()
    )
    {
        $dbTrans = Yii::$app->db->beginTransaction();

        $customerSaved = $customer->save();

        if($customerSaved)
        {
            $reservation->customer_id = $customer->id;
            $reservationSaved = $reservation->save();

            if($reservationSaved)
            {
                $dbTrans->commit();
            }
            else
            {
                $dbTrans->rollback();
            }
        }
        else
        {
            $dbTrans->rollback();
        }
    }

    return $this->render('createCustomerAndReservation', [
        'customer' => $customer,
        'reservation' => $reservation
    ]);
}

Summary

In this article, we saw how to embed JavaScript and CSS in layouts and views, either from a file or as an inline block. This was applied to an example that showed you how to change the number of columns displayed based on the browser's available width, a typical task for websites or web apps that display advertising columns. Again on the subject of JavaScript, you learned how to implement direct AJAX calls, taking an example where the reservation detail was dynamically loaded from the customers drop-down list. Next, we looked at Yii's core user interface library, which is built on Bootstrap, and we illustrated how to use the main Bootstrap widgets natively, together with DatePicker, probably the most commonly used jQuery UI widget. Finally, the last topics covered were multiple models of the same and different classes. We looked at two examples on these topics: the first one to save multiple customers at the same time and the second to create a customer and a reservation in the same view.

Resources for Article:

Further resources on this subject:
Yii: Adding Users and User Management to Your Site [article]
Meet Yii [article]
Creating an Extension in Yii 2 [article]

Straight into Blender!

Packt
16 Sep 2015
18 min read
 In this article by Romain Caudron and Pierre-Armand Nicq, the authors of Blender 3D By Example, you will start getting familiar with Blender. (For more resources related to this topic, see here.) Here, navigation within the interface will be presented. Its approach is atypical in comparison to other 3D software, such as Autodesk Maya® or Autodesk 3DS Max®, but once you get used to this, it will be extremely effective. If you have had the opportunity to use Blender before, it is important to note that the interface went through changes during the evolution of the software (especially since version 2.5). We will give you an idea of the possibilities that this wonderful free and open source software gives by presenting different workflows. You will learn some vocabulary and key concepts of 3D creation so that you will not to get lost during your learning. Finally, you will have a brief introduction to the projects that we will carry out throughout this book. Let's dive into the third dimension! The following topics will be covered in this article: Learning some theory and vocabulary Navigating the 3D viewport How to set up preferences Using keyboard shortcuts to save time An overview of the 3D workflow Before learning how to navigate the Blender interface, we will give you a short introduction to the 3D workflow. An anatomy of a 3D scene To start learning about Blender, you need to understand some basic concepts. Don't worry, there is no need to have special knowledge in mathematics or programming to create beautiful 3D objects; it only requires curiosity. Some artistic notions are a plus. All 3D elements, which you will handle, will evolve in to a scene. There is a three-dimensional space with a coordinate system composed of three axes. In Blender, the x axis shows the width, y axis shows the depth, and the z axis shows the height. Some softwares use a different approach and reverses the y and z axes. These axes are color-coded, we advise you to remember them: the x axis in red, the y axis in green and the z axis in blue. A scene may have the scale you want and you can adjust it according to your needs. This looks like a film set for a movie. A scene can be populated by one or more cameras, lights, models, rigs, and many other elements. You will have the control of their placement and their setup. A 3D scene looks like a film set. A mesh is made of vertices, edges, and faces. The vertices are some points in the scene space that are placed at the end of the edges. They could be thought of as 3D points in space and the edges connect them. Connected together, the edges and the vertices form a face, also called a polygon. It is a geometric plane, which has several sides as its name suggests. In 3D software, a polygon is constituted of at least three sides. It is often essential to favor four-sided polygons during modeling for a better result. You will have an opportunity to see this in more detail later. Your actors and environments will be made of polygonal objects, or more commonly called as meshes. If you have played old 3D games, you've probably noticed the very angular outline of the characters; it was, in fact, due to a low count of polygons. We must clarify that the orientation of the faces is important for your polygon object to be illuminated. Each face has a normal. This is a perpendicular vector that indicates the direction of the polygon. In order for the surface to be seen, it is necessary that the normals point to the outside of the model. 
Except in special cases where the interior of a polygonal object is empty and invisible. You will be able to create your actors and environment as if you were handling virtual clay to give them the desired shape. Anatomy of a 3D Mesh To make your characters presentable, you will have to create their textures, which are 2D images that will be mapped to the 3D object. UV coordinates will be necessary in order to project the texture onto the mesh. Imagine an origami paper cube that you are going to unfold. This is roughly the same. These details are contained in a square space with the representation of the mesh laid flat. You can paint the texture of your model in your favorite software, even in Blender. This is the representation of the UV mapping process. The texture on the left is projected to the 3D model on the right. After this, you can give the illusion of life to your virtual actors by animating them. For this, you will need to place animation keys spaced on the timeline. If you change the state of the object between two keyframes, you will get the illusion of movement—animation. To move the characters, there is a very interesting process that uses a bone system, mimicking the mechanism of a real skeleton. Your polygon object will be then attached to the skeleton with a weight assigned to the vertices on each bone, so if you animate the bones, the mesh components will follow them. Once your characters, props, or environment are ready, you will be able to choose a focal length and an adequate framework for your camera. In order to light your scene, the choice of the render engine will be important for the kind of lamps to use, but usually there are three types of lamps as used in cinema productions. You will have to place them carefully. There are directional lights, which behave like the sun and produce hard shadows. There are omnidirectional lights, which will allow you to simulate diffuse light, illuminating everything around it and casting soft shadows. There are also spots that will simulate a conical shape. As in the film industry or other imaging creation fields, good lighting is a must-have in order to sell the final picture. Lighting is an expressive and narrative element that can magnify your models, or make them irrelevant. Once everything is in place, you are going to make a render. You will have a choice between a still image and an animated sequence. All the given parameters with the lights and materials will be calculated by the render engine. Some render engines offer an approach based on physics with rays that are launched from the camera. Cycles is a good example of this kind of engine and succeeds in producing very realistic renders. Others will have a much simpler approach, but none less technically based on visible elements from the camera. All of this is an overview of what you will be able to achieve while reading this book and following along with Blender. What can you do with Blender? In addition to being completely free and open source, Blender is a powerful tool that is stable and with an integral workflow that will allow you to understand your learning of 3D creation with ease. Software updates are very frequent; they fix bugs and, more importantly, add new features. You will not feel alone as Blender has an active and passionate community around it. There are many sites providing tutorials, and an official documentation detailing the features of Blender. 
You will be able to carry out everything you need in Blender, including things that are unusual for a 3D package such as concept art creation, sculpting, or digital postproduction, which we have not yet discussed, including compositing and video editing. This is particularly interesting in order to push the aesthetics of your future images and movies to another level. It is also possible to make video games. Also, note that the Blender game engine is still largely unknown and underestimated. Although this aspect of the software is not as developed as other specialized game engines, it is possible to make good quality games without switching to another software. You will realize that the possibilities are enormous, and you will be able to adjust your workflow to suit your needs and desires. Software of this type could scare you by its unusual handling and its complexity, but you'll realize that once you have learned its basics, it is really intuitive in many ways. Getting used to the navigation in Blender Now that you have been introduced to the 3D workflow, you will learn how to navigate the Blender interface, starting with the 3D viewport. An introduction to the navigation of the 3D Viewport It is time to learn how to navigate in the Blender viewport. The viewport represents the 3D space, in which you will spend most of your time. As we previously said, it is defined by three axes (x, y, and z). Its main goal is to display the 3D scene from a certain point of view while you're working on it. The Blender 3D Viewport When you are navigating through this, it will be as if you were a movie director but with special powers that allow you to film from any point of view. The navigation is defined by three main actions: pan, orbit, and zoom. The pan action means that you will move horizontally or vertically according to your current point of view. If we connect that to our cameraman metaphor, it's like if you were moving laterally to the left, or to the right, or moving up or down with a camera crane. By default, in Blender the shortcut to pan around is to press the Shift button and the Middle Mouse Button (MMB), and drag the mouse. The orbit action means that you will rotate around the point that you are focusing on. For instance, imagine that you are filming a romantic scene of two actors and that you rotate around them in a circular manner. In this case, the couple will be the main focus. In a 3D scene, your main focus would be a 3D character, a light, or any other 3D object. To orbit around in the Blender viewport, the default shortcut is to press the MMB and then drag the mouse. The last action that we mentioned is zoom. The zoom action is straightforward. It is the action of moving our point of view closer to an element or further away from an element. In Blender, you can zoom in by scrolling your mouse wheel up and zoom out by scrolling your mouse wheel down. To gain time and precision, Blender proposes some predefined points of view. For instance, you can quickly go in a top view by pressing the numpad 7, you can also go in a front view by pressing the numpad 1, you can go in a side view by pressing the numpad 3, and last but not least, the numpad 0 allows you to go in Camera view, which represents the final render point of the view of your scene. You can also press the numpad 5 in order to activate or deactivate the orthographic mode. The orthographic mode removes perspective. It is very useful if you want to be precise. It feels as if you were manipulating a blueprint of the 3D scene. 
The difference between Perspective (left) and Orthographic (right) If you are lost, you can always look at the top left corner of the viewport in order to see in which view you are, and whether the orthographic mode is on or off. Try to learn by heart all these shortcuts; you will use them a lot. With repetition, this will become a habit. What are editors? In Blender, the interface is divided into subpanels that we call editors; even the menu bar where you save your file is an editor. Each editor gives you access to tools categorized by their functionality. You have already used an editor, the 3D view. Now it's time to learn more about the editor's anatomy. In this picture, you can see how Blender is divided into editors The anatomy of an editor There are 17 different editors in Blender and they all have the same base. An editor is composed of a Header, which is a menu that groups different options related to the editor. The first button of the header is to switch between other editors. For instance, you can replace the 3D view by the UV Image Editor by clicking on it. You can easily change its place by right-clicking on it in an empty space and by choosing the Flip to Top/Bottom option. The header can be hidden by selecting its top edge and by pulling it down. If you want to bring it back, press the little plus sign at the far right. The header of the 3D viewport. The first button is for switching between editors, and also, we can choose between different options in the menu In some editors, you can get access to hidden panels that give you other options. For instance, in the 3D view you can press the T key or the N key to toggle them on or off. As in the header, if a sub panel of an editor is hidden, you can click on the little plus sign to display it again. Split, Join, and Detach Blender offers you the possibility of creating editors where you want. To do this, you need to right-click on the border of an editor and select Split Area in order to choose where to separate them. Right-click on the border of an editor to split it into two editors The current editor will then be split in two editors. Now you can switch to any other editor that you desire by clicking on the first button of the header bar. If you want to merge two editors into one, you can right-click on the border that separates them and select the Join Area button. You will then have to click on the editor that you want to erase by pointing the arrow on it. Use the Join Area option to join two editors together You then have to choose which editor you want to remove by pointing and clicking on it. We are going to see another method of splitting editors that is nice. You can drag the top right corner of an editor and another editor will magically appear! If you want to join back two editors together, you will have to drag the top right corner in the direction of the editor that you want to remove. The last manipulation can be tricky at first, but with a little bit of practice, you will also be able to do it with closed eyes! The top right corner of an editor If you have multiple monitors, it could be a great idea to detach some editors in a separated window. With this, you could gain space and won't be overwhelmed by a condensed interface. In order to do this, you will need to press the Shift key and drag the top right corner of the editor with the Left Mouse Button (LMB). Some useful layout presets Blender offers you many predefined layouts that depend on the context of your creation. 
For instance, you can select the Animation preset in order to have all the major animation tools, or you can use the UV Editing preset in order to prepare your texturing. To switch between the presets, go to the top of the interface (in the Info editor, near the Help menu) and click on the drop-down menu. If you want, you can add new presets by clicking on the plus sign or delete presets by clicking on the X button. If you want to rename a preset, simply enter a new name in the corresponding text field. The following screenshot shows the Layout presets drop-down menu: The layout presets drop-down menu Setting up your preferences When we start learning new software, it's good to know how to set up your preferences. Blender has a large number of options, but we will show you just the basic ones in order to change the default navigation style or to add new tools that we call add-ons in Blender. An introduction to the Preferences window The preferences window can be opened by navigating to the File menu and selecting the User Preferences option. If you want, you can use the Ctrl + Alt + U shortcut or the Cmd key and comma key on a Mac system. There are seven tabs in this window as shown here: The different tabs that compose the Preferences window A nice thing that Blender offers is the ability to change its default theme. For this, you can go to the Themes tab and choose between different presets or even change the aspect of each interface elements. Another useful setting to change is the number of undo that is 32 steps, by default. To change this number, go to the Editing tab and under the Undo label, slide the Steps to the desired value. Customizing the default navigation style We will now show you how to use a different style of navigation in the viewport. In many other 3D programs, such as Autodesk Maya®, you can use the Alt key in order to navigate in the 3D view. In order to activate this in Blender, navigate to the Input tab, and under the Mouse section, check the Emulate 3 Button Mouse option. Now if you want to use this navigation style in the viewport, you can press Alt and LMB to orbit around, Ctrl + Alt and the LMB to zoom, and Alt + Shift and the LMB to pan. Remember these shortcuts as they will be very useful when we enter the sculpting mode while using a pen tablet. The Emulate 3 Button Mouse checkbox is shown as follows: The Emulate 3 Button Mouse will be very useful when sculpting using a pen tablet Another useful setting is the Emulate Numpad. It allows you to use the numeric keys that are above the QWERTY keys in addition to the numpad keys. This is very useful for changing the views if you have a laptop without a numpad, or if you want to improve your workflow speed. The Emulate Numpad allows you to use the numeric keys above the QWERTY keys in order to switch views or toggle the perspective on or off Improving Blender with add-ons If you want even more tools, you can install what is called as add-ons on your copy of Blender. Add-ons, also called Plugins or Scripts, are Python files with the .py extension. By default, Blender comes with many disabled add-ons ordered by category. We will now activate two very useful add-ons that will improve our speed while modeling. First, go to the Add-ons tab, and click on the Mesh button in the category list at the left. Here, you will see all the default mesh add-ons available. Click on the check-boxes at the left of the Mesh: F2 and Mesh: LoopTools subpanels in order to activate these add-ons. 
If you know the name of the add-on you want to activate, you can try to find it by typing its name in the search bar. There are many websites where you can download free add-ons, starting from the official Blender website. If you want to install a script, you can click on the Install from File button and you will be asked to select the corresponding Python file.

The official Blender Add-ons Catalog

You can find it at http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts.

The following screenshot shows the steps for activating the add-ons:

Steps for Add-ons activation

Where are the add-ons on the hard-disk?

All the scripts are placed in the add-ons folder that is located wherever you have installed Blender on your hard disk. This folder will usually be at Your Installation Path\Blender Foundation\Blender\2.VersionNumber\scripts\addons. If you find it easier, you can drop the Python files here instead of at the standard installation.

Don't forget to click on the Save User Settings button in order to save all your changes!

Summary

In this article, you have learned the steps behind 3D creations. You know what a mesh is and what it is composed of. Then you have been introduced to navigation in Blender by manipulating the 3D viewport and going through the user preference menu. In the later sections, you configured some preferences and extended Blender by activating some add-ons.

Resources for Article:

Further resources on this subject:
Editing the UV islands [article]
Working with Blender [article]
Designing Objects for 3D Printing [article]

Creating a Video Streaming Site

Packt
16 Sep 2015
16 min read
 In this article by Rachel McCollin, the author of WordPress 4.0 Site Blueprints Second Edition, you'll learn how to stream video from YouTube to your own video sharing site, meaning that you can add more than just the videos to your site and have complete control over how your videos are shown. We'll create a channel on YouTube and then set up a WordPress site with a theme and plugin to help us stream video from that channel WordPress is the world's most popular Content Management System (CMS) and you can use it to create any kind of site you or your clients need. Using free plugins and themes for WordPress, you can create a store, a social media site, a review site, a video site, a network of sites or a community site, and more. WordPress makes it easy for you to create a site that you can update and add to over time, letting you add posts, pages, and more without having to write code. WordPress makes your job of creating your own website simple and hassle-free! (For more resources related to this topic, see here.) Planning your video streaming site The first step is to plan how you want to use your video site. Ask yourself a few questions: Will I be streaming all my video from YouTube? Will I be uploading any video manually? Will I be streaming from multiple sources? What kind of design do I want? Will I include any other types of content on my site? How will I record and upload my videos? Who is my target audience and how will I reach them? Do I want to make money from my videos? How often will I create videos and what will my recording and editing process be? What software and hardware will I need for recording and editing videos? It's beyond the scope of this article to answer all of these questions, but it's worth taking some time before you start to consider how you're going to be using your video site, what you'll be adding to it, and what your objectives are. Streaming from YouTube or uploading videos direct? WordPress lets you upload your videos directly to your site using the Add Media button, the same button you use to insert images. This can seem like the simplest way of doing things as you only need to work in one place. However, I would strongly recommend using a third-party video service instead, for the following reasons: It saves on storage space in your site. It ensures your videos will play on any device people choose to view your site from. It keeps the formats your video is played in up to date so that you don't have to re-upload them when things change. It can have massive SEO benefits socially if you use YouTube. YouTube is owned by Google and has excellent search engine rankings. You'll find that videos streamed via YouTube get better Google rankings than any videos you upload directly to your site. In this article, the focus will be on creating a YouTube channel and streaming video from it to your website. We'll set things up so that when you add new videos to your channel, they'll be automatically streamed to your site. To do that, we'll use a plugin. Understanding copyright considerations Before you start uploading video to YouTube, you need to understand what you're allowed to add, and how copyright affects your videos. You can find plenty of information on YouTube's copyright rules and processes at https://www.youtube.com/yt/copyright/, but it can quite easily be summarized as this: if you created the video, or it was created by someone who has given you explicit permission to use it and publish it online, then you can upload it. 
If you've recorded a video from the TV or the Web that you didn't make and don't have permission to reproduce (or if you've added copyrighted music to your own videos without permission), then you can't upload it. It may seem tempting to ignore copyright and upload anything you're able to find and record (and you'll find plenty of examples of people who've done just that), but you are running a risk of being prosecuted for copyright infringement and being forced to pay a huge fine. I'd also suggest that if you can create and publish original video content rather than copying someone else's, you'll find an audience of fans for that content, and it will be a much more enjoyable process. If your videos involve screen capture of you using software or playing games, you'll need to check the license for that software or game to be sure that you're entitled to publish video of you interacting with it. Most software and games developers have no problem with this as it provides free advertising for them, but you should check with the software provider and the YouTube copyright advice. Movies and music have stricter rules than games generally do however. If you upload videos containing someone else's video or music content that's copyrighted and you haven't got permission to reproduce, then you will find yourself in violation of YouTube's rules and possibly in legal trouble too. Creating a YouTube channel and uploading videos So, you've planned your channel and you have some videos you want to share with the world. You'll need a YouTube channel so you can upload your videos. Creating your YouTube channel You'll need a YouTube channel in order to do this. Let's create a YouTube channel by following these steps: If you don't already have one, create a Google account for yourself at https://accounts.google.com/SignUp. Head over to YouTube at https://www.youtube.com and sign in. You'll have an account with YouTube because it's part of Google, but you won't have a channel yet. Go to https://www.youtube.com/channel_switcher. Click on the Create a new channel button. Follow the instructions onscreen to create your channel. Customize your channel, uploading images to your profile photo or channel art and adding a description using the About tab. Here's my channel: It can take a while for artwork from Google+ to show up on your channel, so don't worry if you don't see it straight away. Uploading videos The next step is to upload some videos. YouTube accepts videos in the following formats: .MOV .MPEG4 .AVI .WMV .MPEGPS .FLV 3GPP WebM Depending on the video software you've used to record, your video may already be in one of these formats or you may need to export it to one of these and save it before you can upload it. If you're not sure how to convert your file to one of the supported formats, you'll find advice at https://support.google.com/youtube/troubleshooter/2888402 to help you do it. You can also upload videos to YouTube directly from your phone or tablet. On an Android device, you'll need to use the YouTube app, while on an iOS device you can log in to YouTube on the device and upload from the camera app. For detailed instructions and advice for other devices, refer to https://support.google.com/youtube/answer/57407. If you're uploading directly to the YouTube website, simply click on the Upload a video button when viewing your channel and follow the onscreen instructions. 
Make sure you add your video to a playlist by clicking on the +Add to playlist button on the right-hand side while you're setting up the video as this will help you categorize the videos in your site later. Now when you open your channel page and click on the Videos tab, you'll see all the videos you uploaded: When you click on the Playlists tab, you'll see your new playlist: So you now have some videos and a playlist set up in YouTube. It's time to set up your WordPress site for streaming those videos. Installing and configuring the YouTube plugin Now that you have your videos and playlists set up, it's time to add a plugin to your site that will automatically add new videos to your site when you upload them to YouTube. Because I've created a playlist, I'm going to use a category in my site for the playlist and automatically add new videos to that category as posts. If you prefer you can use different channels for each category or you can just use one video category and link your channel to that. The latter is useful if your site will contain other content as well, such as photos or blog posts. Note that you don't need a plugin to stream YouTube videos to your site. You can simply paste the URL for a video into the editing pane when you're creating a post or page in your site, and WordPress will automatically stream the video. You don't even need to add an embed code, just add the YRL. But if you don't want to automate the process of streaming all of the videos in your channel to your site, this plugin will make that process easy. Installing the Automatic YouTube Video Posts plugin The Automatic YouTube Video Posts plugin lets you link your site to any YouTube channel or playlist and automatically adds each new video to your site as a post. Let's start by installing it. I'm working with a fresh WordPress installation but you can also do this on your existing site if that's what you're working with. Follow these steps: In the WordPress admin, go to Plugins | Add New. In the Search box, type Automatic Youtube. The plugins that meet the search criteria will be displayed. Select the Automatic YouTube Video Posts plugin and then install and activate it. For the plugin to work, you'll need to configure its settings and add one or more channels or playlists. Configuring the plugin settings Let's start with the plugin settings screen. You do this via the Youtube Posts menu, which the plugin has added to your admin menu: Go to Youtube Posts | Settings. Edit the settings as follows:     Automatically publish posts: Set this to Yes     Display YouTube video meta: Set this to Yes     Number of words and Video dimensions: Leave these at the default values     Display related videos: Set this to No     Display videos in post lists: Set this to Yes    Import the latest videos every: Set this to 1 hours (note that the updates will happen every hour if someone visits the site, but not if the site isn't visited) Click on the Save changes button. The settings screen will look similar to the following screenshot: Adding a YouTube channel or playlist The next step is to add a YouTube channel and/or playlist so that the plugin will create posts from your videos. I'm going to add the "Dizzy" playlist I created earlier on. But first, I'll create a category for all my videos from that playlist. Creating a category for a playlist Create a category for your playlist in the normal way: In the WordPress admin, go to Posts | Categories. 
Add the category name and slug or description if you want to (if you don't, WordPress will automatically create a slug). Click on the Add New Category button. Adding your channel or playlist to the plugin Now you need to configure the plugin so that it creates posts in the category you've just created. In the WordPress admin, go to Youtube Posts | Channels/Playlists. Click on the Add New button. Add the details of your channel or playlist, as shown in the next screenshot. In my case, the details are as follows:     Name: Dizzy     Channel/playlist: This is the ID of my playlist. To find this, open the playlist in YouTube and then copy the last part of its URL from your browser. The URL for my playlist is   https://www.youtube.com/watch?v=vd128vVQc6Y&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv and the playlist ID is after the &list= text, so it's PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv. If you want to add a channel, add its unique name.      Type: Select Channel or Playlist; I'm selecting Playlist.      Add videos from this channel/playlist to the following categories: Select the category you just created.      Attribute videos from this channel to what author: Select the author you want to attribute videos to, if your site has more than one author. Finally, click on the Add Channel button. Adding a YouTube playlist Once you click on the Add Channel button, you'll be taken back to the Channels/Playlists screen, where you'll see your playlist or channel added: The newly added playlist If you like, you can add more channels or playlists and more categories. Now go to the Posts listing screen in your WordPress admin, and you'll see that the plugin has created posts for each of the videos in your playlist: Automatically added posts Installing and configuring a suitable theme You'll need a suitable theme in your site to make your videos stand out. I'm going to use the Keratin theme which is grid-based with a right-hand sidebar. A grid-based theme works well as people can see your videos on your home page and category pages. Installing the theme Let's install the theme: Go to Appearance | Themes. Click on the Add New button. In the search box, type Keratin. The theme will be listed. Click on the Install button. When prompted, click on the Activate button. The theme will now be displayed in your admin screen as active: The installed and activated theme Creating a navigation menu Now that you've activated a new theme, you'll need to make sure your navigation menu is configured so that it's in the theme's primary menu slot, or if you haven't created a menu yet, you'll need to create one. Follow these steps: Go to Appearance | Menus. If you don't already have a menu, click on the Create Menu button and name your new menu. Add your home page to the menu along with any category pages you've created by clicking on the Categories metabox on the left-hand side. Once everything is in the right place in your menu, click on the Save Menu button. Your Menus screen will look something similar to this: Now that you have a menu, let's take a look at the site: The live site That's looking good, but I'd like to add some text in the sidebar instead of the default content. Adding a text widget to the sidebar Let's add a text widget with some information about the site: In the WordPress admin, go to Appearance | Widgets. Find the text widget on the left-hand side and drag it into the widget area for the main sidebar. Give the widget a title. Type the following text into the widget's contents: Welcome to this video site. 
To see my videos on YouTube, visit <a href="https://www.youtube.com/channel/UC5NPnKZOjCxhPBLZn_DHOMw">my channel</a>. Replace the link I've added here with a link to your own channel: The Widgets screen with a text widget added Text widgets accept text and HTML. Here we've used HTML to create a link. For more on HTML links, visit http://www.w3schools.com/html/html_links.asp. Alternatively if you'd rather create a widget that gives you an editing pane like the one you use for creating posts, you can install the TinyMCE Widget plugin from https://wordpress.org/plugins/black-studio-tinymce-widget/screenshots/. This gives you a widget that lets you create links and format your text just as you would when creating a post. Now go back to your live site to see how things are looking:The live site with a text widget added It's looking much better! If you click on one of these videos, you're taken to the post for that video: A single post with a video automatically added Your site is now ready. Managing and updating your videos The great thing about using this plugin is that once you've set it up you'll never have to do anything in your website to add new videos. All you need to do is upload them to YouTube and add them to the playlist you've linked to, and they'll automatically be added to your site. If you want to add extra content to the posts holding your videos you can do so. Just edit the posts in the normal way, adding text, images, and anything you want. These will be displayed as well as the videos. If you want to create new playlists in future, you just do this in YouTube and then create a new category on your site and add the playlist in the settings for the plugin, assigning the new channel to the relevant category. You can upload your videos to YouTube in a variety of ways—via the YouTube website or directly from the device or software you use to record and/or edit them. Most phones allow you to sign in to your YouTube account via the video or YouTube app and directly upload videos, and video editing software will often let you do the same. Good luck with your video site, I hope it gets you lots of views! Summary In this article, you learned how to create a WordPress site for streaming video from YouTube. You created a YouTube channel and added videos and playlists to it and then you set up your site to automatically create a new post each time you add a new video, using a plugin. Finally, you installed a suitable theme and configured it, creating categories for your channels and adding these to your navigation menu. Resources for Article: Further resources on this subject: Adding Geographic Capabilities via the GeoPlaces Theme[article] Adding Flash to your WordPress Theme[article] Adding Geographic Capabilities via the GeoPlaces Theme [article]

Building a WPF .NET Client

Packt
16 Sep 2015
8 min read
In this article by Einar Ingebrigtsen, author of the book SignalR: Real-time Application Development - Second Edition we will bring the full feature set of what we've built so far for the web onto the desktop through a WPF .NET client. There are quite a few ways of developing Windows client solutions, and WPF was introduced back in 2005 and has become one of the most popular ways of developing software for Windows. In WPF, we have something called XAML, which is what Windows Phone development supports and is also the latest programming model in Windows 10. In this chapter, the following topics will be covered: MVVM Brief introduction to the SOLID principles XAML WPF (For more resources related to this topic, see here.) Decoupling it all So you might be asking yourself, what is MVVM? It stands for Model View ViewModel: a pattern for client development that became very popular in the XAML stack, enabled by Microsoft based on Martin Fowlers presentation model (http://martinfowler.com/eaaDev/PresentationModel.html). Its principle is that you have a ViewModel that holds the state and exposes behavior that can be utilized from a view. The view observes any changes of the state the ViewModel exposes, making the ViewModel totally unaware that there is a View. The ViewModel is decoupled and can be put in isolation and is perfect for automated testing. As part of the state that the ViewModel typically holds is the model part, which is something it usually gets from the server, and a SignalR hub is the perfect transport to get this. It boils down to recognizing the different concerns that make up the frontend and separating it all. This gives us the following diagram: Decoupling – the next level In this chapter, one of the things we will brush up is the usage of the Dependency Inversion Principle, the D of SOLID. Let's start with the first principle: the S in SOLID of Single Responsibility Principle, which states that a method or a class should only have one reason to change and only have one responsibility. With this, we can't have our units take on more than one responsibility and need help from collaborators to do the entire job. These collaborators are things we now depend on and we should represent these dependencies clearly to our units so that anyone or anything instantiating it knows what we are depending on. We have now flipped around the way in which we get dependencies. Instead of the unit trying to instantiate everything itself, we now clearly state what we need as collaborators, opening up for the calling code to decide what implementations of these dependencies you want to pass on. Also, this is an important aspect; typically, you'd want the dependencies expressed in the form of interfaces, yielding flexibility for the calling code. Basically, what this all means is that instead of a unit or system instantiating and managing its dependencies, we decouple and let something called as the Inversion of Control container deal with this. In the sample, we will use an IoC (Inversion of Control) container called Ninject that will deal with this for us. What it basically does is manage what implementations to give to the dependency specified on the constructor. Often, you'll find that the dependencies are interfaces in C#. This means one is not coupled to a specific implementation and has the flexibility of changing things at runtime based on configuration. Another role of the IOC container is to govern the life cycle of the dependencies. 
It is responsible for knowing when to create new instances and when to reuse an instance. For instance, in a web application, there are some systems that you want to have a life cycle of per request, meaning that we will get the same instance for the lifetime of a web request. The life cycle is configurable in what is known as a binding. When you explicitly set up the relationship between a contract (interface) and its implementation, you can choose to set up the life cycle behavior as well. Building for the desktop The first thing we will need is a separate project in our solution: Let's add it by right-clicking on the solution in Solution Explorer and navigating to Add | New Project: In the Add New Project dialog box, we want to make sure the .NET Framework 4.5.1 is selected. We could have gone with 4.5, but some of the dependencies that we're going to use have switched to 4.5.1. This is the latest version of the .NET Framework at the time of writing, so if you can, use it. Make sure to select Windows Desktop and then select WPF Application. Give the project the name SignalRChat.WPF and then click on the OK button: Setting up the packages We will need some packages to get started properly. This process is described in detail in Chapter 1, The Primer. Let's start off by adding SignalR, which is our primary framework that we will be working with to move on. We will be pulling this using NuGet, as described in Chapter 1, The Primer: Right-click on the References in Solution Explorer and select Manage NuGet Packages, and type Microsoft.AspNet.SignalR.Client in the Search dialog box. Select it and click on Install. Next, we're going to pull down something called as Bifrost. Bifrost is a library that helps us build MVVM-based solutions on WPF; there are a few other solutions out there, but we'll focus on Bifrost. Add a package called Bifrost.Client. Then, we need the package that gives us the IOC container called Ninject, working together with Bifrost. Add a package called Bifrost.Ninject. Observables One of the things that is part of WPF and all other XAML-based platforms is the notion of observables; be it in properties or collections that will notify when they change. The notification is done through well-known interfaces for this, such as INotifyPropertyChanged or INotifyCollectionChanged. Implementing these interfaces quickly becomes tedious all over the place where you want to notify everything when there are changes. Luckily, there are ways to make this pretty much go away. We can generate the code for this instead, either at runtime or at build time. For our project, we will go for a build-time solution. To accomplish this, we will use something called as Fody and a plugin for it called PropertyChanged. Add another NuGet package called PropertyChanged.Fody. If you happen to get problems during compiling, it could be the result of the dependency to a package called Fody not being installed. This happens for some versions of the package in combination with the latest Roslyn compiler. To fix this, install the NuGet package called Fody explicitly. Now that we have all the packages, we will need some configuration in code: Open the App.xam.cs file and add the following statement: using Bifrost.Configuration; The next thing we will need is a constructor for the App class: public App() { Configure.DiscoverAndConfigure(); } This will tell Bifrost to discover the implementations of the well-known interfaces to do the configuration. 
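As an aside, here is roughly what the PropertyChanged.Fody package gives us once it is wired in: a ViewModel written as plain properties, with the INotifyPropertyChanged plumbing woven in at build time. The class name, properties, and attribute usage below are illustrative assumptions based on how the plugin was commonly used at the time, not code from this article:

using System.Collections.ObjectModel;
using PropertyChanged;

namespace SignalRChat.WPF
{
    // Fody weaves INotifyPropertyChanged support into this class at build time,
    // so the View is notified whenever Message changes, without any boilerplate.
    [ImplementPropertyChanged]
    public class ChatViewModel
    {
        public string Message { get; set; }

        public ObservableCollection<string> Messages { get; private set; }

        public ChatViewModel()
        {
            Messages = new ObservableCollection<string>();
        }
    }
}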
Bifrost uses the IoC container internally all the time, so the next thing we will need to do is give it an implementation. Add a class called ContainerCreator at the root of the project. Make it look as follows: using Bifrost.Configuration; using Bifrost.Execution; using Bifrost.Ninject; using Ninject; namespace SignalRChat.WPF { public class ContainerCreator : ICanCreateContainer { public IContainer CreateContainer() { var kernel = new StandardKernel(); var container = new Container(kernel); return container; } } } We've chosen Ninject among others that Bifrost supports, mainly because of familiarity and habit. If you happen to have another favorite, Bifrost supports a few. It's also fairly easy to implement your own support; just go to the source at http://github.com/dolittle/bifrost to find reference implementations. In order for Bifrost to be targeting the desktop, we need to tell it through configuration. Add a class called Configurator at the root of the project. Make it look as follows: using Bifrost.Configuration; namespace SignalRChat.WPF { public class Configurator : ICanConfigure { public void Configure(IConfigure configure) { configure.Frontend.Desktop(); } } } Summary Although there are differences between creating a web solution and a desktop client, the differences have faded over time. We can apply the same principles across the different environments; it's just different programming languages. The SignalR API adds the same type of consistency in thinking, although not as matured as the JavaScript API with proxy generation and so on; still the same ideas and concepts are found in the underlying API. Resources for Article: Further resources on this subject: The Importance of Securing Web Services [article] Working with WebStart and the Browser Plugin [article] Microsoft Azure – Developing Web API for Mobile Apps [article]

Slideshow Presentations

Packt
15 Sep 2015
24 min read
In this article by David Mitchell, author of the book Dart By Example, you will be introduced to the basics of how to build a presentation application using Dart.

"It usually takes me more than three weeks to prepare a good impromptu speech." - Mark Twain

Presentations make some people shudder with fear, yet they are an undeniably useful tool for information sharing when used properly. The content has to be great, and some visual flourish can make it stand out from the crowd. Too many slides can make the most receptive audience yawn, so focusing the presenter on the content and automatically taking care of the visuals (saving the creator from fiddling with different animations and font sizes!) can help improve presentations. Compelling content still requires the human touch.

(For more resources related to this topic, see here.)

Building a presentation application

Web browsers are already a type of multimedia presentation application, so it is feasible to write a quality presentation program as we explore more of the Dart language. Hopefully it will help us pitch another Dart application to our next customer. Building on our first application, we will use a text based editor for creating the presentation content. I was very surprised how much faster a text based editor is for producing a presentation, and more enjoyable. I hope you experience such a productivity boost!

Laying out the application

The application will have two modes, editing and presentation. In the editing mode, the screen will be split into two panes. The top pane will display the slides and the lower will contain the editor and other interface elements. This article will focus on the core creation side of the presentation. The application will be a single Dart project.

Defining the presentation format

The presentations will be written in a tiny subset of the Markdown format, which is a powerful yet simple-to-read, text-file-based format (much easier to read, type, and understand than HTML). In 2004, John Gruber and the late Aaron Swartz created the Markdown language with the goal of enabling people to write using an easy-to-read, easy-to-write plain text format. It is used on major websites, such as GitHub.com and StackOverflow.com. Being plain text, Markdown files can be kept and compared in version control. For more detail and background on Markdown, see https://en.wikipedia.org/wiki/Markdown

A simple titled slide with bullet points would be defined as:

#Dart Language
+Created By Google
+Modern language with a familiar syntax
+Structured Web Applications
+It is Awesomely productive!

I am positive you only had to read that once! This will translate into the following HTML:

<h1>Dart Language</h1>
<li>Created By Google</li>
<li>Modern language with a familiar syntax</li>
<li>Structured Web Applications</li>
<li>It is Awesomely productive!</li>

Markdown is very easy and fast to parse, which probably explains its growing popularity on the web. It can be transformed into many other formats.

Parsing the presentation

The content of the TextArea HTML element is split into a list of individual lines, and processed in a similar manner to some of the features in the Text Editor application, using forEach to iterate over the list. Any lines that are blank once any whitespace has been removed via the trim method are ignored.

#A New Slide Title
+The first bullet point
+The second bullet point
#The Second Slide Title
+More bullet points
!http://localhost/img/logo.png
#Final Slide
+Any questions?
For each line starting with a # symbol, a new Slide object is created. For each line starting with a + symbol, they are added to this slides bullet point list. For each line is discovered using a ! symbol the slide's image is set (a limit of one per slide). This continues until the end of the presentation source is reached. A sample presentation To get a new user going quickly, there will be an example presentation which can be used as a demonstration and testing the various areas of the application. I chose the last topic that came up round the family dinner table—the coconut! #Coconut +Member of Arecaceae family. +A drupe - not a nut. +Part of daily diets. #Tree +Fibrous root system. +Mostly surface level. +A few deep roots for stability. #Yield +75 fruits on fertile land +30 typically +Fibre has traditional uses #Finally !coconut.png #Any Questions? Presenter project structures The project is a standard Dart web application with index.html as the entry point. The application is kicked off by main.dart which is linked to in index.html, and the application functionality is stored in the lib folder. Source File Description sampleshows.dart    The text for the slideshow application.  lifecyclemixin.dart  The class for the mixin.  slideshow.dart  Data structures for storing the presentation.  slideshowapp.dart  The application object. Launching the application The main function has a very short implementation. void main() { new SlideShowApp(); } Note that the new class instance does not need to be stored in a variable and that the object does not disappear after that line is executed. As we will see later, the object will attach itself to events and streams, keeping the object alive for the lifetime that the page is loaded. Building bullet point slides The presentation is build up using two classes—Slide and SlideShow. The Slide object creates the DivElement used to display the content and the SlideShow contains a list of Slide objects. The SlideShow object is updated as the text source is updated. It also keeps track of which slide is currently being displayed in the preview pane. Once the number of Dart files grows in a project, the DartAnalyzer will recommend naming the library. It is good habit to name every .dart file in a regular project with its own library name. The slideshow.dart file has the keyword library and a name next to it. In Dart, every file is a library, whether it is explicitly declared or not. If you are looking at Dart code online you may stumble across projects with imports that look a bit strange. #import("dart:html"); This is the old syntax for Dart's import mechanism. If you see this it is a sign that other aspects of the code may be out of date too. If you are writing an application in a single project, source files can be arranged in a folder structure appropriate for the project, though keeping the relatives paths manageable is advisable. Creating too many folders is probably means it is time to create a package! Accessing private fields In Dart, as discussed when we covered packages, the privacy is at the library level but it is still possible to have private fields in a class even though Dart does not have the keywords public, protected, and private. A simple return of a private field's value can be performed with a one line function. String getFirstName() => _name; To retrieve this value, a function call is required, for example, Person.getFirstName() however it may be preferred to have a property syntax such as Person.firstName. 
Having private fields and retaining the property syntax in this manner, is possible using the get and set keywords. Using true getters and setters The syntax of Dart also supports get and set via keywords: int get score =>score + bonus; set score(int increase) =>score += increase * level; Using either get/set or simple fields is down to preference. It is perfectly possible to start with simple fields and scale up to getters and setters if more validation or processing is required. The advantage of the get and set keywords in a library, is the intended interface for consumers of the package is very clear. Further it clarifies which methods may change the state of the object and which merely report current values. Mixin it up In object oriented languages, it is useful to build on one class to create a more specialized related class. For example, in the text editor the base dialog class was extended to create alert and confirm pop ups. What if we want to share some functionality but do not want inheritance occurring between the classes? Aggregation can solve this problem to some extent: class A{ classb usefulObject; } The downside is that this requires a longer reference to use: new A().usefulObject.handyMethod(); This problem has been solved in Dart (and other languages) by a mixin class to do this job, allowing the sharing of functionality without forced inheritance or clunky aggregation. In Dart, a mixin must meet the requirements: No constructors in the class declaration. The base class of the mixin must be Object. No calls to a super class are made. mixins are really just classes that are malleable enough to fit into the class hierarchy at any point. A use case for a mixin may be serialization fields and methods that may be required on several classes in an application that are not part of any inheritance chain. abstract class Serialisation { void save() { //Implementation here. } void load(String filename) { //Implementation here. } } The with keyword is used to declare that a class is using a mixin. class ImageRecord extends Record with Serialisation If the class does not have an explicit base class, it is required to specify Object. class StorageReports extends Object with Serialization In Dart, everything is an object, even basic types such as num are objects and not primitive types. The classes int and double are subtypes of num. This is important to know, as other languages have different behaviors. Let's consider a real example of this. main() { int i; print("$i"); } In a language such as Java the expected output would be 0 however the output in Dart is null. If a value is expected from a variable, it is always good practice to initialize it! For the classes Slide and SlideShow, we will use a mixin from the source file lifecyclemixin.dart to record a creation and an editing timestamp. abstract class LifecycleTracker { DateTime _created; DateTime _edited; recordCreateTimestamp() => _created = new DateTime.now(); updateEditTimestamp() => _edited = new DateTime.now(); DateTime get created => _created; DateTime get lastEdited => _edited; } To use the mixin, the recordCreateTimestamp method can be called from the constructor and the updateEditTimestamp from the main edit method. For slides, it makes sense just to record the creation. For the SlideShow class, both the creation and update will be tracked. Defining the core classes The SlideShow class is largely a container objects for a list of Slide objects and uses the mixin LifecycleTracker. 
class SlideShow extends Object with LifecycleTracker { List<Slide> _slides; List<Slide> get slides => _slides; ... The Slide class stores the string for the title and a list of strings for the bullet points. The URL for any image is also stored as a string: class Slide extends Object with LifecycleTracker { String titleText = ""; List<String> bulletPoints; String imageUrl = ""; ... A simple constructor takes the titleText as a parameter and initializes the bulletPoints list. If you want to focus on just-the-code when in WebStorm , double-click on filename title of the tab to expand the source code to the entire window. Double-click again to return to the original layout. For even more focus on the code, go to the View menu and click on Enter Distraction Free Mode. Transforming data into HTML To add the Slide object instance into a HTML document, the strings need to be converted into instances of HTML elements to be added to the DOM (Document Object Model). The getSlideContents() method constructs and returns the entire slide as a single object. DivElement getSlideContents() { DivElement slide = new DivElement(); DivElement title = new DivElement(); DivElement bullets = new DivElement(); title.appendHtml("<h1>$titleText</h1>"); slide.append(title); if (imageUrl.length > 0) { slide.appendHtml("<img src="$imageUrl" /><br/>"); } bulletPoints.forEach((bp) { if (bp.trim().length > 0) { bullets.appendHtml("<li>$bp</li>"); } }); slide.append(bullets); return slide; } The Div elements are constructed as objects (instances of DivElement), while the content is added as literal HTML statements. The method appendHtml is used for this particular task as it renders HTML tags in the text. The regular method appendText puts the entire literal text string (including plain unformatted text of the HTML tags) into the element. So what exactly is the difference? The method appendHtml evaluates the supplied ,HTML, and adds the resultant object node to the nodes of the parent element which is rendered in the browser as usual. The method appendText is useful, for example, to prevent user supplied content affecting the format of the page and preventing malicious code being injected into a web page. Editing the presentation When the source is updated the presentation is updated via the onKeyUp event. This was used in the text editor project to trigger a save to local storage. This is carried out in the build method of the SlideShow class, and follows the pattern we discussed parsing the presentation. build(String src) { updateEditTimestamp(); _slides = new List<Slide>(); Slide nextSlide; src.split("n").forEach((String line) { if (line.trim().length > 0) { // Title - also marks start of the next slide. if (line.startsWith("#")) { nextSlide = new Slide(line.substring(1)); _slides.add(nextSlide); } if (nextSlide != null) { if (line.startsWith("+")) { nextSlide.bulletPoints.add(line.substring(1)); } else if (line.startsWith("!")) { nextSlide.imageUrl = line.substring(1); } } } }); } As an alternative to the startsWith method, the square bracket [] operator could be used for line [0] to retrieve the first character. The startsWith can also take a regular expression or a string to match and a starting index, refer to the dart:core documentation for more information. For the purposes of parsing the presentation, the startsWith method is more readable. Displaying the current slide The slide is displayed via the showSlide method in slideShowApp.dart. 
To preview the current slide, the current index, stored in the field currentSlideIndex, is used to retrieve the desired slide object and the Div rendering method called. showSlide(int slideNumber) { if (currentSlideShow.slides.length == 0) return; slideScreen.style.visibility = "hidden"; slideScreen ..nodes.clear() ..nodes.add(currentSlideShow.slides[slideNumber].getSlideContents ()); rangeSlidePos.value = slideNumber.toString(); slideScreen.style.visibility = "visible"; } The slideScreen is a DivElement which is then updated off screen by setting the visibility style property to hidden The existing content of the DivElement is emptied out by calling nodes.clear() and the slide content is added with nodes.add. The range slider position is set and finally the DivElement is set to visible again. Navigating the presentation A button set with familiar first, previous, next and last slide allow the user to jump around the preview of the presentation. This is carried out by having an index into the list of slides stored in the field slide in the SlideShowApp class. Handling the button key presses The navigation buttons require being set up in an identical pattern in the constructor of the SlideShowApp object. First get an object reference using id, which is the id attribute of the element, and then attaching a handler to the click event. Rather than repeat this code, a simple function can handle the process. setButton(String id, Function clickHandler) { ButtonInputElement btn = querySelector(id); btn.onClick.listen(clickHandler); } As function is a type in Dart, functions can be passed around easily as a parameter. Let us take a look at the button that takes us to the first slide. setButton("#btnFirst", startSlideShow); void startSlideShow(MouseEvent event) { showFirstSlide(); } void showFirstSlide() { showSlide(0); } The event handlers do not directly change the slide, these are carried out by other methods, which may be triggered by other inputs such as the keyboard. Using the function type The SlideShowApp constructor makes use of this feature. Function qs = querySelector; var controls = qs("#controls"); I find the querySelector method a little long to type (though it is a good descriptive of what it does). With Function being types, we can easily create a shorthand version. The constructor spends much of its time selecting and assigning the HTML elements to member fields of the class. One of the advantages of this approach is that the DOM of the page is queried only once, and the reference stored and reused. This is good for performance of the application as, once the application is running, querying the DOM may take much longer. Staying within the bounds Using min and max function from the dart:math package, the index can be kept in range of the current list. void showLastSlide() { currentSlideIndex = max(0, currentSlideShow.slides.length - 1); showSlide(currentSlideIndex); } void showNextSlide() { currentSlideIndex = min(currentSlideShow.slides.length - 1, ++currentSlideIndex); showSlide(currentSlideIndex); } These convenience functions can save a great deal if and else if comparisons and help make code a good degree more readable. Using the slider control The slider control is another new control in the HTML5 standard. This will allow the user to scroll though the slides in the presentation. This control is a personal favorite of mine, as it is so visual and can be used to give very interactive feedback to the user. 
It seemed to be a huge omission from the original form controls in the early generation of web browsers. Even with clear, widely accepted features, HTML specifications can take a long time to clear committees and make it into everyday browsers!

<input type="range" id="rngSlides" value="0"/>

The control has an onChange event, which is given a listener in the SlideShowApp constructor.

rangeSlidePos.onChange.listen(moveToSlide);

The control provides its data via a simple string value, which can be converted to an integer via the int.parse method to be used as an index to the presentation's slide list.

void moveToSlide(Event event) {
  currentSlideIndex = int.parse(rangeSlidePos.value);
  showSlide(currentSlideIndex);
}

The slider control must be kept in synchronization with any other change in slide display, use of navigation, or change in the number of slides. For example, the user may use the slider to reach the general area of the presentation, and then adjust with the previous and next buttons.

void updateRangeControl() {
  rangeSlidePos
    ..min = "0"
    ..max = (currentSlideShow.slides.length - 1).toString();
}

This method is called when the number of slides is changed, and, as with most HTML elements, the values to be set need to be converted to strings.

Responding to keyboard events

Using the keyboard, particularly the arrow (cursor) keys, is a natural way to look through the slides in a presentation, even in the preview mode. This is carried out in the SlideShowApp constructor. In Dart web applications, the dart:html package allows direct access to the global window object from any class or function. The TextArea used to input the presentation source will also respond to the arrow keys, so there will need to be a check to see if it is currently being used. The property activeElement on the document will give a reference to the control with focus. This reference can be compared to the TextArea, which is stored in the presEditor field, so a decision can be taken on whether to act on the keypress or not.

Key           Event Code   Action
Left Arrow    37           Go back a slide.
Up Arrow      38           Go to the first slide.
Right Arrow   39           Go to the next slide.
Down Arrow    40           Go to the last slide.

Keyboard events, like other events, can be listened to by using a stream event listener. The listener function is an anonymous function (the definition omits a name) that takes the KeyboardEvent as its only parameter.

window.onKeyUp.listen((KeyboardEvent e) {
  if (presEditor != document.activeElement) {
    if (e.keyCode == 39) showNextSlide();
    else if (e.keyCode == 37) showPrevSlide();
    else if (e.keyCode == 38) showFirstSlide();
    else if (e.keyCode == 40) showLastSlide();
  }
});

It is a reasonable question to ask how to get the keyboard key codes required to write the switching code. One good tool to help with this is the W3C's Key and Character Codes page at http://www.w3.org/2002/09/tests/keys.html, but it can often be faster to write the handler and print out the event that is passed in!

Showing the key help

Rather than testing the user's memory, there will be a handy reference to the keyboard shortcuts. This is a simple Div element which is shown, and then hidden when the ? key (remember to press Shift too!) is pressed again, by toggling the visibility style from visible to hidden.

Listening twice to event streams

The event system in Dart is implemented as a stream. One of the advantages of this is that an event can easily have more than one entity listening to the stream.
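As a small illustration (this is not from the book's project, and the handler bodies are invented), two completely separate handlers can subscribe to the same key-up stream, and each listen call hands back a StreamSubscription that can later be cancelled without affecting the other:

import 'dart:html';

void main() {
  // First concern: slide navigation.
  var navigation = window.onKeyUp.listen(
      (KeyboardEvent e) => print('navigation saw ${e.keyCode}'));

  // Second concern: a help overlay, registered independently of the first.
  var help = window.onKeyUp.listen(
      (KeyboardEvent e) => print('help saw ${e.keyCode}'));

  // Each listen call returns a StreamSubscription, so one handler can be
  // removed later while the other keeps running.
  navigation.cancel();
}

Keeping each concern in its own listener like this keeps the handlers small and independent.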
This is useful, for example in a web application where some keyboard presses are valid in one context but not in another. The listen method is an add operation (accumulative) so the key press for help can be implemented separately. This allows a modular approach which helps reuse as the handlers can be specialized and added as required. window.onKeyUp.listen((KeyboardEvent e) { print(e); //Check the editor does not have focus. if (presEditor != document.activeElement) { DivElement helpBox = qs("#helpKeyboardShortcuts"); if (e.keyCode == 191) { if (helpBox.style.visibility == "visible") { helpBox.style.visibility = "hidden"; } else { helpBox.style.visibility = "visible"; } } } }); In, for example, a game, a common set of event handling may apply to title and introduction screen and the actual in game screen contains additional event handling as a superset. This could be implemented by adding and removing handlers to the relevant event stream. Changing the colors HTML5 provides browsers with full featured color picker (typically browsers use the native OS's color chooser). This will be used to allow the user to set the background color of the editor application itself. The color picker is added to the index.html page with the following HTML: <input id="pckBackColor" type="color"> The implementation is straightforward as the color picker control provides: InputElement cp = qs("#pckBackColor"); cp.onChange.listen( (e) => document.body.style.backgroundColor = cp.value); As the event and property (onChange and value) are common to the input controls the basic InputElement class can be used. Adding a date Most presentations are usually dated, or at least some of the jokes are! We will add a convenient button for the user to add a date to the presentation using the HTML5 input type date which provides a graphical date picker. <input type="date" id="selDate" value="2000-01-01"/> The default value is set in the index.html page as follows: The valueAsDate property of the DateInputElement class provides the Date object which can be added to the text area: void insertDate(Event event) { DateInputElement datePicker = querySelector("#selDate"); if (datePicker.valueAsDate != null) presEditor.value = presEditor.value + datePicker.valueAsDate.toLocal().toString(); } In this case, the toLocal method is used to obtain a string formatted to the month, day, year format. Timing the presentation The presenter will want to keep to their allotted time slot. We will include a timer in the editor to aid in rehearsal. Introducing the stopwatch class The Stopwatch class (from dart:core) provides much of the functionality needed for this feature, as shown in this small command line application: main() { Stopwatch sw = new Stopwatch(); sw.start(); print(sw.elapsed); sw.stop(); print(sw.elapsed); } The elapsed property can be checked at any time to give the current duration. This is very useful class, for example, it can be used to compare different functions to see which is the fastest. Implementing the presentation timer The clock will be stopped and started with a single button handled by the toggleTimer method. A recurring timer will update the duration text on the screen as follows: If the timer is running, the update Timer and the Stopwatch in field slidesTime is stopped. 
No update to the display is required as the user will need to see the final time: void toggleTimer(Event event) { if (slidesTime.isRunning) { slidesTime.stop(); updateTimer.cancel(); } else { updateTimer = new Timer.periodic(new Duration(seconds: 1), (timer) { String seconds = (slidesTime.elapsed.inSeconds % 60).toString(); seconds = seconds.padLeft(2, "0"); timerDisplay.text = "${slidesTime.elapsed.inMinutes}:$seconds"; }); slidesTime ..reset() ..start(); } } The Stopwatch class provides properties for retrieving the elapsed time in minutes and seconds. To format this to minutes and seconds, the seconds portion is determined with the modular division operator % and padded with the string function padLeft. Dart's string interpolation feature is used to build the final string, and as the elapsed and inMinutes properties are being accessed, the {} brackets are required so that the single value is returned. Overview of slides This provides the user with a visual overview of the slides as shown in the following screenshot: The presentation slides will be recreated in a new full screen Div element. This is styled using the fullScreen class in the CSS stylesheet in the SlideShowApp constructor: overviewScreen = new DivElement(); overviewScreen.classes.toggle("fullScreen"); overviewScreen.onClick.listen((e) => overviewScreen.remove()); The HTML for the slides will be identical. To shrink the slides, the list of slides is iterated over, the HTML element object obtained and the CSS class for the slide is set: currentSlideShow.slides.forEach((s) { aSlide = s.getSlideContents(); aSlide.classes.toggle("slideOverview"); aSlide.classes.toggle("shrink"); ... The CSS hover class is set to scale the slide when the mouse enters so a slide can be focused on for review. The classes are set with the toggle method which either adds if not present or removes if they are. The method has an optional parameter: aSlide.classes.toggle('className', condition); The second parameter is named shouldAdd is true if the class is always to be added and false if the class is always to be removed. Handout notes There is nothing like a tangible handout to give attendees to your presentation. This can be achieved with a variation of the overview display: Instead of duplicating the overview code, the function can be parameterized with an optional parameter in the method declaration. This is declared with square brackets [] around the declaration and a default value that is used if no parameter is specified. void buildOverview([bool addNotes = false]) This is called by the presentation overview display without requiring any parameters. buildOverview(); This is called by the handouts display without requiring any parameters. buildOverview(true); If this parameter is set, an additional Div element is added for the Notes area and the CSS is adjust for the benefit of the print layout. Comparing optional positional and named parameters The addNotes parameter is declared as an optional positional parameter, so an optional value can be specified without naming the parameter. The first parameter is matched to the supplied value. To give more flexibility, Dart allows optional parameters to be named. Consider two functions, the first will take named optional parameters and the second positional optional parameters. 
getRecords1(String query,{int limit: 25, int timeOut: 30}) { } getRecords2(String query,[int limit = 80, int timeOut = 99]) { } The first function can be called in more ways: getRecords1(""); getRecords1("", limit:50, timeOut:40); getRecords1("", timeOut:40, limit:65); getRecords1("", limit:50); getRecords1("", timeOut:40); getRecords2(""); getRecords2("", 90); getRecords2("", 90, 50); With named optional parameters, the order they are supplied is not important and has the advantage that the calling code is clearer as to the use that will be made of the parameters being passed. With positional optional parameters, we can omit the later parameters but it works in a strict left to right order so to set the timeOut parameter to a non-default value, limit must also be supplied. It is also easier to confuse which parameter is for which particular purpose. Summary The presentation editor is looking rather powerful with a range of advanced HTML controls moving far beyond text boxes to date pickers and color selectors. The preview and overview help the presenter visualize the entire presentation as they work, thanks to the strong class structure built using Dart mixins and data structures using generics. We have spent time looking at the object basis of Dart, how to pass parameters in different ways and, closer to the end user, how to handle keyboard input. This will assist in the creation of many different types of application and we have seen how optional parameters and true properties can help document code for ourselves and other developers. Hopefully you learned a little about coconuts too. The next step for this application is to improve the output with full screen display, animation and a little sound to capture the audiences' attention. The presentation editor could be improved as well—currently it is only in the English language. Dart's internationalization features can help with this. Resources for Article: Further resources on this subject: Practical Dart[article] Handling the DOM in Dart[article] Dart with JavaScript [article]

DynamoDB Best Practices

Packt
15 Sep 2015
24 min read
 In this article by Tanmay Deshpande, the author of the book DynamoDB Cookbook, we will cover the following topics: Using a standalone cache for frequently accessed items Using the AWS ElastiCache for frequently accessed items Compressing large data before storing it in DynamoDB Using AWS S3 for storing large items Catching DynamoDB errors Performing auto-retries on DynamoDB errors Performing atomic transactions on DynamoDB tables Performing asynchronous requests to DynamoDB (For more resources related to this topic, see here.) Introduction We are going to talk about DynamoDB implementation best practices, which will help you improve the performance while reducing the operation cost. So let's get started. Using a standalone cache for frequently accessed items In this recipe, we will see how to use a standalone cache for frequently accessed items. Cache is a temporary data store, which will save the items in memory and will provide those from the memory itself instead of making a DynamoDB call. Make a note that this should be used for items, which you expect to not be changed frequently. Getting ready We will perform this recipe using Java libraries. So the prerequisite is that you should have performed recipes, which use the AWS SDK for Java. How to do it… Here, we will be using the AWS SDK for Java, so create a Maven project with the SDK dependency. Apart from the SDK, we will also be using one of the most widely used open source caches, that is, EhCache. To know about EhCache, refer to http://ehcache.org/. Let's use a standalone cache for frequently accessed items: To use EhCache, we need to include the following repository in pom.xml: <repositories> <repository> <id>sourceforge</id> <name>sourceforge</name> <url>https://oss.sonatype.org/content/repositories/ sourceforge-releases/</url> </repository> </repositories> We will also need to add the following dependency: <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache</artifactId> <version>2.9.0</version> </dependency> Once the project setup is done, we will create a cachemanager class, which will be used in the following code: public class ProductCacheManager { // Ehcache cache manager CacheManager cacheManager = CacheManager.getInstance(); private Cache productCache; public Cache getProductCache() { return productCache; } //Create an instance of cache using cache manager public ProductCacheManager() { cacheManager.addCache("productCache"); this.productCache = cacheManager.getCache("productCache"); } public void shutdown() { cacheManager.shutdown(); } } Now, we will create another class where we will write a code to get the item from DynamoDB. Here, we will first initiate the ProductCacheManager: static ProductCacheManager cacheManager = new ProductCacheManager(); Next, we will write a method to get the item from DynamoDB. Before we fetch the data from DynamoDB, we will first check whether the item with the given key is available in cache. If it is available in cache, we will return it from cache itself. If the item is not found in cache, we will first fetch it from DynamoDB and immediately put it into cache. 
Once the item is cached, every time we need this item, we will get it from cache, unless the cached item is evicted: private static Item getItem(int id, String type) { Item product = null; if (cacheManager.getProductCache().isKeyInCache(id + ":" + type)) { Element prod = cacheManager.getProductCache().get(id + ":" + type); product = (Item) prod.getObjectValue(); System.out.println("Returning from Cache"); } else { AmazonDynamoDBClient client = new AmazonDynamoDBClient( new ProfileCredentialsProvider()); client.setRegion(Region.getRegion(Regions.US_EAST_1)); DynamoDB dynamoDB = new DynamoDB(client); Table table = dynamoDB.getTable("product"); product = table.getItem(new PrimaryKey("id", id, "type", type)); cacheManager.getProductCache().put( new Element(id + ":" + type, product)); System.out.println("Making DynamoDB Call for getting the item"); } return product; } Now we can use this method whenever needed. Here is how we can test it: Item product = getItem(10, "book"); System.out.println("First call :Item: " + product); Item product1 = getItem(10, "book"); System.out.println("Second call :Item: " + product1); cacheManager.shutdown(); How it works… EhCache is one of the most popular standalone caches used in the industry. Here, we are using EhCache to store frequently accessed items from the product table. Cache keeps all its data in memory. Here, we will save every item against its keys that are cached. We have the product table, which has the composite hash and range keys, so we will also store the items against the key of (Hash Key and Range Key). Note that caching should be used for only those tables that expect lesser updates. It should only be used for the table, which holds static data. If at all anyone uses cache for not so static tables, then you will get stale data. You can also go to the next level and implement a time-based cache, which holds the data for a certain time, and after that, it clears the cache. We can also implement algorithms, such as Least Recently Used (LRU), First In First Out (FIFO), to make the cache more efficient. Here, we will make comparatively lesser calls to DynamoDB, and ultimately, save some cost for ourselves. Using AWS ElastiCache for frequently accessed items In this recipe, we will do the same thing that we did in the previous recipe. The only thing we will change is that we will use a cloud hosted distributed caching solution instead of saving it on the local standalone cache. ElastiCache is a hosted caching solution provided by Amazon Web Services. We have two options to select which caching technology you would need. One option is Memcached and another option is Redis. Depending upon your requirements, you can decide which one to use. Here are links that will help you with more information on the two options: http://memcached.org/ http://redis.io/ Getting ready To get started with this recipe, we will need to have an ElastiCache cluster launched. If you are not aware of how to do it, you can refer to http://aws.amazon.com/elasticache/. How to do it… Here, I am using the Memcached cluster. You can choose the size of the instance as you wish. We will need a Memcached client to access the cluster. Amazon has provided a compiled version of the Memcached client, which can be downloaded from https://github.com/amazonwebservices/aws-elasticache-cluster-client-memcached-for-java. 
Once the JAR download is complete, you can add it to your Java Project class path: To start with, we will need to get the configuration endpoint of the Memcached cluster that we launched. This configuration endpoint can be found on the AWS ElastiCache console itself. Here is how we can save the configuration endpoint and port: static String configEndpoint = "my-elastic- cache.mlvymb.cfg.usw2.cache.amazonaws.com"; static Integer clusterPort = 11211; Similarly, we can instantiate the Memcached client: static MemcachedClient client; static { try { client = new MemcachedClient(new InetSocketAddress(configEndpoint, clusterPort)); } catch (IOException e) { e.printStackTrace(); } } Now, we can write the getItem method as we did for the previous recipe. Here, we will first check whether the item is present in cache; if not, we will fetch it from DynamoDB, and put it into cache. If the same request comes the next time, we will return it from the cache itself. While putting the item into cache, we are also going to put the expiry time of the item. We are going to set it to 3,600 seconds; that is, after 1 hour, the key entry will be deleted automatically: private static Item getItem(int id, String type) { Item product = null; if (null != client.get(id + ":" + type)) { System.out.println("Returning from Cache"); return (Item) client.get(id + ":" + type); } else { AmazonDynamoDBClient client = new AmazonDynamoDBClient( new ProfileCredentialsProvider()); client.setRegion(Region.getRegion(Regions.US_EAST_1)); DynamoDB dynamoDB = new DynamoDB(client); Table table = dynamoDB.getTable("product"); product = table.getItem(new PrimaryKey("id", id, "type", type)); System.out.println("Making DynamoDB Call for getting the item"); ElasticCache.client.add(id + ":" + type, 3600, product); } return product; } How it works… A distributed cache also works in the same fashion as the local one works. A standalone cache keeps the data in memory and returns it if it finds the key. In distributed cache, we have multiple nodes; here, keys are kept in a distributed manner. The distributed nature helps you divide the keys based on the hash value of the keys. So, when any request comes, it is redirected to a specified node and the value is returned from there. Note that ElastiCache will help you provide a faster retrieval of items at the additional cost of the ElastiCache cluster. Also note that the preceding code will work if you execute the application from the EC2 instance only. If you try to execute this on the local machine, you will get connection errors. Compressing large data before storing it in DynamoDB We are all aware of DynamoDB's storage limitations for the item's size. Suppose that we get into a situation where storing large attributes in an item is a must. In that case, it's always a good choice to compress these attributes, and then save them in DynamoDB. In this recipe, we are going to see how to compress large items before storing them. Getting ready To get started with this recipe, you should have your workstation ready with Eclipse or any other IDE of your choice. How to do it… There are numerous algorithms with which we can compress the large items, for example, GZIP, LZO, BZ2, and so on. Each algorithm has a trade-off between the compression time and rate. So, it's your choice whether to go with a faster algorithm or with an algorithm, which provides a higher compression rate. Consider a scenario in our e-commerce website, where we need to save the product reviews written by various users. 
For this, we created a ProductReviews table, where we will save the reviewer's name, its detailed product review, and the time when the review was submitted. Here, there are chances that the product review messages can be large, and it would not be a good idea to store them as they are. So, it is important to understand how to compress these messages before storing them. Let's see how to compress large data: First of all, we will write a method that accepts the string input and returns the compressed byte buffer. Here, we are using the GZIP algorithm for compressions. Java has a built-in support, so we don't need to use any third-party library for this: private static ByteBuffer compressString(String input) throws UnsupportedEncodingException, IOException { // Write the input as GZIP output stream using UTF-8 encoding ByteArrayOutputStream baos = new ByteArrayOutputStream(); GZIPOutputStream os = new GZIPOutputStream(baos); os.write(input.getBytes("UTF-8")); os.finish(); byte[] compressedBytes = baos.toByteArray(); // Writing bytes to byte buffer ByteBuffer buffer = ByteBuffer.allocate(compressedBytes.length); buffer.put(compressedBytes, 0, compressedBytes.length); buffer.position(0); return buffer; } Now, we can simply use this method to store the data before saving it in DynamoDB. Here is an example of how to use this method in our code: private static void putReviewItem() throws UnsupportedEncodingException, IOException { AmazonDynamoDBClient client = new AmazonDynamoDBClient( new ProfileCredentialsProvider()); client.setRegion(Region.getRegion(Regions.US_EAST_1)); DynamoDB dynamoDB = new DynamoDB(client); Table table = dynamoDB.getTable("ProductReviews"); Item product = new Item() .withPrimaryKey(new PrimaryKey("id", 10)) .withString("reviewerName", "John White") .withString("dateTime", "20-06-2015T08:09:30") .withBinary("reviewMessage", compressString("My Review Message")); PutItemOutcome outcome = table.putItem(product); System.out.println(outcome.getPutItemResult()); } In a similar way, we can write a method that decompresses the data on retrieval from DynamoDB. Here is an example: private static String uncompressString(ByteBuffer input) throws IOException { byte[] bytes = input.array(); ByteArrayInputStream bais = new ByteArrayInputStream(bytes); ByteArrayOutputStream baos = new ByteArrayOutputStream(); GZIPInputStream is = new GZIPInputStream(bais); int chunkSize = 1024; byte[] buffer = new byte[chunkSize]; int length = 0; while ((length = is.read(buffer, 0, chunkSize)) != -1) { baos.write(buffer, 0, length); } return new String(baos.toByteArray(), "UTF-8"); } How it works… Compressing data at client side has numerous advantages. Lesser size means lesser use of network and disk resources. Compression algorithms generally maintain a dictionary of words. While compressing, if they see the words getting repeated, then those words are replaced by their positions in the dictionary. In this way, the redundant data is eliminated and only their references are kept in the compressed string. While uncompressing the same data, the word references are replaced with the actual words, and we get our normal string back. Various compression algorithms contain various compression techniques. Therefore, the compression algorithm you choose will depend on your need. Using AWS S3 for storing large items Sometimes, we might get into a situation where storing data in a compressed format might not be sufficient enough. 
Consider a case where we might need to store large images or binaries that might exceed the DynamoDB's storage limitation per items. In this case, we can use AWS S3 to store such items and only save the S3 location in our DynamoDB table. AWS S3: Simple Storage Service allows us to store data in a cheaper and efficient manner. To know more about AWS S3, you can visit http://aws.amazon.com/s3/. Getting ready To get started with this recipe, you should have your workstation ready with the Eclipse IDE. How to do it… Consider a case in our e-commerce website where we would like to store the product images along with the product data. So, we will save the images on AWS S3, and only store their locations along with the product information in the product table: First of all, we will see how to store data in AWS S3. For this, we need to go to the AWS console, and create an S3 bucket. Here, I created a bucket called e-commerce-product-images, and inside this bucket, I created folders to store the images. For example, /phone/apple/iphone6. Now, let's write the code to upload the images to S3: private static void uploadFileToS3() { String bucketName = "e-commerce-product-images"; String keyName = "phone/apple/iphone6/iphone.jpg"; String uploadFileName = "C:\tmp\iphone.jpg"; // Create an instance of S3 client AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider()); // Start the file uploading File file = new File(uploadFileName); s3client.putObject(new PutObjectRequest(bucketName, keyName, file)); } Once the file is uploaded, you can save its path in one of the attributes of the product table, as follows: private static void putItemWithS3Link() { AmazonDynamoDBClient client = new AmazonDynamoDBClient( new ProfileCredentialsProvider()); client.setRegion(Region.getRegion(Regions.US_EAST_1)); DynamoDB dynamoDB = new DynamoDB(client); Table table = dynamoDB.getTable("productTable"); Map<String, String> features = new HashMap<String, String>(); features.put("camera", "13MP"); features.put("intMem", "16GB"); features.put("processor", "Dual-Core 1.4 GHz Cyclone (ARM v8-based)"); Set<String> imagesSet = new HashSet<String>(); imagesSet.add("https://s3-us-west-2.amazonaws.com/ e-commerce-product-images/phone/apple/iphone6/iphone.jpg"); Item product = new Item() .withPrimaryKey(new PrimaryKey("id", 250, "type", "phone")) .withString("mnfr", "Apple").withNumber("stock", 15) .withString("name", "iPhone 6").withNumber("price", 45) .withMap("features", features) .withStringSet("productImages", imagesSet); PutItemOutcome outcome = table.putItem(product); System.out.println(outcome.getPutItemResult()); } So whenever required, we can fetch the item by its key, and fetch the actual images from S3 using the URL saved in the productImages attribute. How it works… AWS S3 provides storage services at very cheaper rates. It's like a flat data dumping ground where we can store any type of file. So, it's always a good option to store large datasets in S3 and only keep its URL references in DynamoDB attributes. The URL reference will be the connecting link between the DynamoDB item and the S3 file. If your file is too large to be sent in one S3 client call, you may want to explore its multipart API, which allows you to send the file in chunks. Catching DynamoDB errors Till now, we discussed how to perform various operations in DynamoDB. We saw how to use AWS provided by SDK and play around with DynamoDB items and attributes. 
Amazon claims that AWS provides high availability and reliability, which is quite true considering the years of experience I have been using their services, but we still cannot deny the possibility where services such as DynamoDB might not perform as expected. So, it's important to make sure that we have a proper error catching mechanism to ensure that the disaster recovery system is in place. In this recipe, we are going to see how to catch such errors. Getting ready To get started with this recipe, you should have your workstation ready with the Eclipse IDE. How to do it… Catching errors in DynamoDB is quite easy. Whenever we perform any operations, we need to put them in the try block. Along with it, we need to put a couple of catch blocks in order to catch the errors. Here, we will consider a simple operation to put an item into the DynamoDB table: try { AmazonDynamoDBClient client = new AmazonDynamoDBClient( new ProfileCredentialsProvider()); client.setRegion(Region.getRegion(Regions.US_EAST_1)); DynamoDB dynamoDB = new DynamoDB(client); Table table = dynamoDB.getTable("productTable"); Item product = new Item() .withPrimaryKey(new PrimaryKey("id", 10, "type", "mobile")) .withString("mnfr", "Samsung").withNumber("stock", 15) .withBoolean("isProductionStopped", true) .withNumber("price", 45); PutItemOutcome outcome = table.putItem(product); System.out.println(outcome.getPutItemResult()); } catch (AmazonServiceException ase) { System.out.println("Error Message: " + ase.getMessage()); System.out.println("HTTP Status Code: " + ase.getStatusCode()); System.out.println("AWS Error Code: " + ase.getErrorCode()); System.out.println("Error Type: " + ase.getErrorType()); System.out.println("Request ID: " + ase.getRequestId()); } catch (AmazonClientException e) { System.out.println("Amazon Client Exception :" + e.getMessage()); } We should first catch AmazonServiceException, which arrives if the service you are trying to access throws any exception. AmazonClientException should be put last in order to catch any client-related exceptions. How it works… Amazon assigns a unique request ID for each and every request that it receives. Keeping this request ID is very important if something goes wrong, and if you would like to know what happened, then this request ID is the only source of information. We need to contact Amazon to know more about the request ID. There are two types of errors in AWS: Client errors: These errors normally occur when the request we submit is incorrect. The client errors are normally shown with a status code starting with 4XX. These errors normally occur when there is an authentication failure, bad requests, missing required attributes, or for exceeding the provisioned throughput. These errors normally occur when users provide invalid inputs. Server errors: These errors occur when there is something wrong from Amazon's side and they occur at runtime. The only way to handle such errors is retries; and if it does not succeed, you should log the request ID, and then you can reach the Amazon support with that ID to know more about the details. You can read more about DynamoDB specific errors at http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ErrorHandling.html. Performing auto-retries on DynamoDB errors As mentioned in the previous recipe, we can perform auto-retries on DynamoDB requests if we get errors. In this recipe, we are going to see how to perform auto=retries. 
Getting ready To get started with this recipe, you should have your workstation ready with the Eclipse IDE. How to do it… Auto-retries are required if we get any errors during the first request. We can use the Amazon client configurations to set our retry strategy. By default, the DynamoDB client auto-retries a request if any error is generated three times. If we think that this is not efficient for us, then we can define this on our own, as follows: First of all, we need to create a custom implementation of RetryCondition. It contains a method called shouldRetry, which we need to implement as per our needs. Here is a sample CustomRetryCondition class: public class CustomRetryCondition implements RetryCondition { public boolean shouldRetry(AmazonWebServiceRequest originalRequest, AmazonClientException exception, int retriesAttempted) { if (retriesAttempted < 3 && exception.isRetryable()) { return true; } else { return false; } } } Similarly, we can implement CustomBackoffStrategy. The back-off strategy gives a hint on after what time the request should be retried. You can choose either a flat back-off time or an exponential back-off time: public class CustomBackoffStrategy implements BackoffStrategy { /** Base sleep time (milliseconds) **/ private static final int SCALE_FACTOR = 25; /** Maximum exponential back-off time before retrying a request */ private static final int MAX_BACKOFF_IN_MILLISECONDS = 20 * 1000; public long delayBeforeNextRetry(AmazonWebServiceRequest originalRequest, AmazonClientException exception, int retriesAttempted) { if (retriesAttempted < 0) return 0; long delay = (1 << retriesAttempted) * SCALE_FACTOR; delay = Math.min(delay, MAX_BACKOFF_IN_MILLISECONDS); return delay; } } Next, we need to create an instance of RetryPolicy, and set the RetryCondition and BackoffStrategy classes, which we created. Apart from this, we can also set a maximum number of retries. The last parameter is honorMaxErrorRetryInClientConfig. It means whether this retry policy should honor the maximum error retry set by ClientConfiguration.setMaxErrorRetry(int): RetryPolicy retryPolicy = new RetryPolicy(customRetryCondition, customBackoffStrategy, 3, false); Now, initiate the ClientConfiguration, and set the RetryPolicy we created earlier: ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setRetryPolicy(retryPolicy); Now, we need to set this client configuration when we initiate the AmazonDynamoDBClient; and once done, your retry policy with a custom back-off strategy will be in place: AmazonDynamoDBClient client = new AmazonDynamoDBClient( new ProfileCredentialsProvider(), clientConfiguration); How it works… Auto-retries are quite handy when we receive a sudden burst in DynamoDB requests. If there are more number of requests than the provisioned throughputs, then auto-retries with an exponential back-off strategy will definitely help in handling the load. So if the client gets an exception, then it will get auto retried after sometime; and if by then the load is less, then there wouldn't be any loss for your application. The Amazon DynamoDB client internally uses HttpClient to make the calls, which is quite a popular and reliable implementation. So if you need to handle such cases, this kind of an implementation is a must. In case of batch operations, if any failure occurs, DynamoDB does not fail the complete operation. In case of batch write operations, if a particular operation fails, then DynamoDB returns the unprocessed items, which can be retried. 
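Building on that last point, here is a hedged sketch (not code from the book; the item values are invented) of how the unprocessed items returned by a batch write can simply be fed back into the next call until everything has been written or a retry budget is exhausted:

Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
item.put("id", new AttributeValue().withN("500"));
item.put("type", new AttributeValue("phone"));

List<WriteRequest> writeRequests = new ArrayList<WriteRequest>();
writeRequests.add(new WriteRequest().withPutRequest(new PutRequest().withItem(item)));

Map<String, List<WriteRequest>> requestItems = new HashMap<String, List<WriteRequest>>();
requestItems.put("productTable", writeRequests);

AmazonDynamoDBClient client = new AmazonDynamoDBClient(new ProfileCredentialsProvider());
client.setRegion(Region.getRegion(Regions.US_EAST_1));

int attempts = 0;
while (!requestItems.isEmpty() && attempts < 5) {
    BatchWriteItemResult result = client.batchWriteItem(
        new BatchWriteItemRequest().withRequestItems(requestItems));
    // Whatever DynamoDB could not process is returned and is simply resubmitted.
    requestItems = result.getUnprocessedItems();
    attempts++;
}

In a production system, you would normally combine this loop with the exponential back-off strategy described earlier so that the retries do not immediately hit the same throughput limit.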
Performing atomic transactions on DynamoDB tables I hope we are all aware that operations in DynamoDB are eventually consistent. Considering this nature it obviously does not support transactions the way we do in RDBMS. A transaction is a group of operations that need to be performed in one go, and they should be handled in an atomic nature. (If one operation fails, the complete transaction should be rolled back.) There might be use cases where you would need to perform transactions in your application. Considering this need, AWS has provided open sources, client-side transaction libraries, which helps us achieve atomic transactions in DynamoDB. In this recipe, we are going to see how to perform transactions on DynamoDB. Getting ready To get started with this recipe, you should have your workstation ready with the Eclipse IDE. How to do it… To get started, we will first need to download the source code of the library from GitHub and build the code to generate the JAR file. You can download the code from https://github.com/awslabs/dynamodb-transactions/archive/master.zip. Next, extract the code and run the following command to generate the JAR file: mvn clean install –DskipTests On a successful build, you will see a JAR generated file in the target folder. Add this JAR to the project by choosing a configure build path in Eclipse: Now, let's understand how to use transactions. For this, we need to create the DynamoDB client and help this client to create two helper tables. The first table would be the Transactions table to store the transactions, while the second table would be the TransactionImages table to keep the snapshots of the items modified in the transaction: AmazonDynamoDBClient client = new AmazonDynamoDBClient( new ProfileCredentialsProvider()); client.setRegion(Region.getRegion(Regions.US_EAST_1)); // Create transaction table TransactionManager.verifyOrCreateTransactionTable(client, "Transactions", 10, 10, (long) (10 * 60)); // Create transaction images table TransactionManager.verifyOrCreateTransactionImagesTable(client, "TransactionImages", 10, 10, (long) (60 * 10)); Next, we need to create a transaction manager by providing the names of the tables we created earlier: TransactionManager txManager = new TransactionManager(client, "Transactions", "TransactionImages"); Now, we create one transaction, and perform the operations you will need to do in one go. Consider our product table where we need to add two new products in one single transaction, and the changes will reflect only if both the operations are successful. We can perform these using transactions, as follows: Transaction t1 = txManager.newTransaction(); Map<String, AttributeValue> product = new HashMap<String, AttributeValue>(); AttributeValue id = new AttributeValue(); id.setN("250"); product.put("id", id); product.put("type", new AttributeValue("phone")); product.put("name", new AttributeValue("MI4")); t1.putItem(new PutItemRequest("productTable", product)); Map<String, AttributeValue> product1 = new HashMap<String, AttributeValue>(); id.setN("350"); product1.put("id", id); product1.put("type", new AttributeValue("phone")); product1.put("name", new AttributeValue("MI3")); t1.putItem(new PutItemRequest("productTable", product1)); t1.commit(); Now, execute the code to see the results. If everything goes fine, you will see two new entries in the product table. In case of an error, none of the entries would be in the table. 
How it works…
When invoked, the transaction library first writes the changes to the Transactions table, and then to the actual table. If we perform any update item operation, it keeps the old values of that item in the TransactionImages table. The library also supports multi-attribute and multi-table transactions. This way, we can use the transaction library to perform atomic writes. It also supports isolated reads. You can refer to the code and examples for more details at https://github.com/awslabs/dynamodb-transactions.

Performing asynchronous requests to DynamoDB
Till now, we have used a synchronous DynamoDB client to make requests to DynamoDB. Synchronous requests block the thread until the operation is completed. Due to network issues, it can sometimes take a while for an operation to complete. In that case, we can go for asynchronous client requests so that we submit the requests and carry on with other work.

Getting ready
To get started with this recipe, you should have your workstation ready with the Eclipse IDE.

How to do it…
The asynchronous client is easy to use:

First, we need to instantiate the AmazonDynamoDBAsync client:

AmazonDynamoDBAsync dynamoDBAsync = new AmazonDynamoDBAsyncClient(
    new ProfileCredentialsProvider());

Next, we need to create the request to be performed in an asynchronous manner. Let's say we need to delete a certain item from our product table. Then, we can create the DeleteItemRequest, as shown in the following code snippet:

Map<String, AttributeValue> key = new HashMap<String, AttributeValue>();
AttributeValue id = new AttributeValue();
id.setN("10");
key.put("id", id);
key.put("type", new AttributeValue("phone"));
DeleteItemRequest deleteItemRequest = new DeleteItemRequest(
    "productTable", key);

Next, invoke the deleteItemAsync method to delete the item. Here, we can optionally define an AsyncHandler if we want to use the result of the request we have invoked. I am also printing the messages with timestamps so that we can confirm the asynchronous nature of the call:

dynamoDBAsync.deleteItemAsync(deleteItemRequest,
    new AsyncHandler<DeleteItemRequest, DeleteItemResult>() {
        public void onSuccess(DeleteItemRequest request, DeleteItemResult result) {
            System.out.println("Item deleted successfully: " + System.currentTimeMillis());
        }
        public void onError(Exception exception) {
            System.out.println("Error deleting item in async way");
        }
    });
System.out.println("Delete item initiated " + System.currentTimeMillis());

How it works…
Asynchronous clients use AsyncHttpClient to invoke the DynamoDB APIs. This is a wrapper implementation on top of Java asynchronous APIs. Hence, they are quite easy to use and understand. The AsyncHandler is an optional configuration you can use in order to consume the results of asynchronous calls. We can also use the Java Future object to handle the response.

Summary
We have covered various recipes on the cost- and performance-efficient use of DynamoDB. Recipes such as error handling and auto-retries help readers make their applications robust. The article also highlights the use of the transaction library to implement atomic transactions on DynamoDB.

Resources for Article:
Further resources on this subject:
The EMR Architecture [article]
Amazon DynamoDB - Modelling relationships, Error handling [article]
Index, Item Sharding, and Projection in DynamoDB [article]
Using 3D Objects

Packt
15 Sep 2015
11 min read
In this article by Liz Staley, author of the book Manga Studio EX 5 Cookbook, you will learn the following topics:

Adding existing 3D objects to a page
Importing a 3D object from another program
Manipulating 3D objects
Adjusting the 3D camera

(For more resources related to this topic, see here.)

One of the features of Manga Studio 5 that people ask me about all the time is 3D objects. Manga Studio 5 comes with a set of 3D assets: characters, poses, and a few backgrounds and small objects. These can be added directly to your page, posed and positioned, and used in your artwork. While I usually use these 3D poses as a reference (much like the wooden drawing dolls that you can find in your local craft store), you can conceivably use 3D characters and imported 3D assets from programs such as Poser to create entire comics. Let's get into the third dimension now, and you will learn how to use these assets in Manga Studio 5.

Adding existing 3D objects to a page
Manga Studio 5 comes with many 3D objects present in the materials library. This is the fastest way to get started with the 3D features.

Getting ready
You must have a page open in order to add a 3D object. Open a page of any size to start the recipes covered here.

How to do it…
The following steps will show us how to add an existing 3D material to a page:

Open the materials library. This can be done by going to Window | Material | Material [3D].
Select a category of 3D material from the list on the left-hand side of the library, or scroll down the Material library preview window to browse all the available materials.
Select a material to add to the page by clicking on it to highlight it. In this recipe, we are choosing the School girl B 02 character material. It is highlighted in the following screenshot:
Hold the left mouse button down on the selected material and drag it onto the page, releasing the mouse button once the cursor is over the page, to display the material. Alternately, you can click on the Paste selected material to canvas icon at the bottom of the Material library menu. The selected 3D material will be added to the page. The School girl B 02 material is shown in this default character pose:

Importing a 3D object from another program
You don't have to use only the default 3D models included in Manga Studio 5. The process of importing a model is very easy. The types of files that can be imported into Manga Studio 5 are c2fc, c2fr, fbx, lwo, lws, obj, 6kt, and 6kh.

Getting ready
You must have a page open in order to add a 3D object. Open a page of any size to start this recipe. For this recipe, you will also need a model to import into the program. These can be found on numerous websites, including my.smithmicro.com, under the Poser tab.

How to do it…
The following steps will walk us through the simple process of importing a 3D model into Manga Studio 5:

Open the location where the 3D model you wish to import has been saved. If you have downloaded the 3D model from the Internet, it may be in the Downloads folder on your PC.
Arrange the windows on your computer screen so that the location of the 3D model and Manga Studio 5 are both visible, as shown in the following screenshot:
Click on the 3D model file and hold down the mouse button. While still holding down the mouse button, drag the 3D model file into the Manga Studio 5 window.
Release the mouse button. The 3D model will be imported into the open page, as shown in this screenshot:

Manipulating 3D objects
You've learned how to add a 3D object to our project.
But how can you pose it the way you want it to look for your scene? With a little time and patience, you'll be posing characters like a pro in no time!

Getting ready
Follow the directions in the Adding existing 3D objects to a page recipe before following the steps in this recipe.

How to do it…
This recipe will walk us through moving a character into a custom pose:

Be sure that the Object tool under Operation is selected. Click on the 3D object to manipulate, if it is not already selected.
To move the entire object up, down, left, or right, hover the mouse cursor over the fourth icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then, drag to move the object in the desired direction. The following screenshot shows the location of the icon used to move the object up, down, left, or right. It is highlighted in pink and also shown over the 3D character. If your models are moving very slowly, you may need to allocate more memory to Manga Studio EX 5. This can be done by going to File | Preferences | Performance.
To rotate the object along the y axis (or the horizon line), hover the mouse cursor over the fifth icon in the top-left corner of the box around the selected object. Click on it, hold the left mouse button, and drag. The object will rotate along the y axis, as shown in this screenshot:
To rotate the object along the x axis (straight up and down vertically), hover the mouse cursor over the sixth icon in the top-left corner of the box around the selected object. Click and drag. The object will rotate vertically around its center, as shown in the following screenshot:
To move the object back and forth in 3D space, hover the mouse cursor over the seventh icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then drag it. The icon is shown as follows, highlighted in pink, and the character has been moved back—away from the camera:
To move one part of a character, click on the part to be moved. For this recipe, we'll move the character's arm down. To do this, we'll click on the upper arm portion of the character to select it. When a portion of the character is selected, a sphere with three lines circling it will appear. Each of these three lines represents one axis (x, y, and z) and controls the rotation of that portion of the character. This set of lines is shown here:
Use the lines of the sphere to rotate the part of the character to the desired position. For a more precise movement, the scroll wheel on the mouse can be used as well. In the following screenshot, the arm has been rotated so that it is down at the character's side:

Do you keep accidentally moving a part of the model that you don't want to move? Put the cursor over the part of the model that you'd like to keep in place, and then right-click. A blue box will appear on that part of the model, and the piece will be locked in place. Right-click again to unlock the part.

How it works…
In this recipe, we covered how to move and rotate a 3D object and portions of 3D characters. This is the start of being able to create your own custom poses and save them for reuse. It's also the way to pose the drawing doll models in Manga Studio to make pose references for your comic artwork. In the 3D-Body Type folder of the materials library, you will find Female and Male drawing dolls that can be posed just as the premade characters can. These generic dolls are great for getting that difficult pose down.
Then use the next recipe, Adjusting the 3D camera, to get the angle you need, and draw away! The following screenshot shows a drawing doll 3D object that has been posed in a custom stance.

The preceding pose was relatively easy to achieve. The figure was rotated along the x axis, and then the head and neck joints were both rotated individually so that the doll looked toward the camera. Both its arms were rotated down and then inward. The hands were posed. The ankle joints were selected and the feet were rotated so that the toes were pointed. Then the knee of the near leg was rotated to bend it. The hip of the near leg was also rotated so that the leg was lifted slightly, giving a "cutesy" look to the pose.

Having trouble posing a character's hands exactly the way you want them? Then open the Sub Tool Detail palette and click on Pose in the left-hand-side menu. In this area, you will find a menu with a picture of a hand. This is a quick controller for the fingers. Select the hand that you wish to pose. Along the bottom of the menu are some preset hand poses for things such as closed fists. At the top of each finger on this menu is an icon that looks like chain links. Click on one of them to lock the finger that it is over and prevent it from moving. The triangle area over the large blue hand symbol controls how open and closed the fingers are. You will find this menu much easier than rotating each joint individually—I'm sure!

Adjusting the 3D camera
In addition to manipulating 3D objects or characters, you can also change the position of the 3D camera to get the composition that you desire for your work. Think of the 3D camera just like a camera on a movie set. It can be rotated or moved around to frame the actors (3D characters) and scenery just the way the director wants! Not sure whether you moved the character or the camera? Take a look at the ground plane, which is the "checkerboard" floor area underneath the characters and objects. If the character is standing straight up and down on the ground plane, it means that the camera was moved. If the character is floating above or below the ground plane, or part of the way through it, it means that the character or object was moved.

Getting ready
Follow the directions given in the Adding existing 3D objects to a page recipe before following the steps in this recipe.

How to do it…
To rotate the camera around an object (the object will remain stationary), hover the mouse cursor over the first icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag. The icon and the camera rotation are shown in the following screenshot:
To move the camera up, down, left, or right, hover the mouse cursor over the second icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag. The icon and camera movement are shown in this screenshot:
To move the camera back and forth in the 3D space, hover the mouse cursor over the third icon in the top-left corner of the box around the selected object. Again, click and hold the left mouse button, and then drag. The next screenshot shows the zoom icon in pink at the top and the overlay on top of the character. Note how the hand of the character and the top of the head are now out of the page, since the camera is closer to her and she appears larger on the canvas.

Summary
In this article, we have studied how to add existing 3D objects to a page using Manga Studio 5 in detail.
After adding the existing object, we saw the steps to import a 3D object from another program. Then, we covered how to manipulate these 3D objects along the coordinate axes by using the tools available in Manga Studio 5. Finally, we learned how to position the 3D camera by rotating it around an object.

Resources for Article:
Further resources on this subject:
Ink Slingers [article]
Getting Familiar with the Story Features [article]
Animating capabilities of Cinema 4D [article]
Hello, Pong!

Packt
15 Sep 2015
19 min read
In this article written by Alejandro Rodas de Paz and Joseph Howse, authors of the book Python Game Programming By Example, we learn that game development is a highly evolving software development process, and how it has improved continuously since the appearance of the first video games in the 1950s. Nowadays, there is a wide variety of platforms and engines, and this process has been facilitated by the arrival of open source tools. Python is a free high-level programming language with a design intended for writing readable and concise programs. Thanks to its philosophy, we can create our own games from scratch with just a few lines of code. There are plenty of game frameworks for Python, but for our first game, we will see how we can develop it without any third-party dependency. We will be covering the following topics:

Installation of the required software
Overview of Tkinter, a GUI library included in the Python standard library
Applying object-oriented programming to encapsulate the logic of our game
Basic collision and input detection
Drawing game objects without external assets

(For more resources related to this topic, see here.)

Installing Python
You will need Python 3.4 with Tcl/Tk 8.6 installed on your computer. The latest branch of this version is Python 3.4.3, which can be downloaded from https://www.python.org/downloads/. Here, you can find the official binaries for the most popular platforms, such as Windows and Mac OS. During the installation process, make sure that you check the Tcl/Tk option to include the library. The code examples included in the book have been tested against Windows 8 and Mac, but can be run on Linux without any modification. Note that some distributions may require you to install the appropriate package for Python 3. For instance, on Ubuntu, you need to install the python3-tk package. Once you have Python installed, you can verify the version by opening Command Prompt or a terminal and executing these lines:

$ python --version
Python 3.4.3

After this check, you should be able to start a simple GUI program:

$ python
>>> from tkinter import Tk
>>> root = Tk()
>>> root.title('Hello, world!')
>>> root.mainloop()

These statements create a window, change its title, and run indefinitely until the window is closed. Do not close the new window that is displayed when the second statement is executed. Otherwise, it will raise an error because the application has been destroyed. We will use this library in our first game, and the complete documentation of the module can be found at https://docs.python.org/3/library/tkinter.html.

Tkinter and Python 2
The Tkinter module was renamed to tkinter in Python 3. If you have Python 2 installed, simply change the import statement with Tkinter in uppercase, and the program should run as expected.

Overview of Breakout
The Breakout game starts with a paddle and a ball at the bottom of the screen and some rows of bricks at the top. The player must eliminate all the bricks by hitting them with the ball, which rebounds against the borders of the screen, the bricks, and the bottom paddle. As in Pong, the player controls the horizontal movement of the paddle. The player starts the game with three lives, and if she or he misses the ball's rebound and it reaches the bottom border of the screen, one life is lost. The game is over when all the bricks are destroyed, or when the player loses all their lives.
This is a screenshot of the final version of our game:

Basic GUI layout
We will start our game by creating a top-level window as in the simple program we ran previously. However, this time, we will use two nested widgets: a container frame and the canvas where the game objects will be drawn, as shown here. With Tkinter, this can easily be achieved using the following code:

import tkinter as tk

lives = 3
root = tk.Tk()
frame = tk.Frame(root)
canvas = tk.Canvas(frame, width=600, height=400, bg='#aaaaff')
frame.pack()
canvas.pack()
root.title('Hello, Pong!')
root.mainloop()

Through the tk alias, we access the classes defined in the tkinter module, such as Tk, Frame, and Canvas. Notice the first argument of each constructor call, which indicates the widget's parent container, and the required pack() calls for displaying the widgets on their parent container. This is not necessary for the Tk instance, since it is the root window. However, this approach is not exactly object-oriented, since we use global variables and do not define any new class to represent our new data structures. If the code base grows, this can lead to poorly organized projects and highly coupled code. We can start encapsulating the pieces of our game in this way:

import tkinter as tk


class Game(tk.Frame):
    def __init__(self, master):
        super(Game, self).__init__(master)
        self.lives = 3
        self.width = 610
        self.height = 400
        self.canvas = tk.Canvas(self, bg='#aaaaff',
                                width=self.width,
                                height=self.height)
        self.canvas.pack()
        self.pack()


if __name__ == '__main__':
    root = tk.Tk()
    root.title('Hello, Pong!')
    game = Game(root)
    game.mainloop()

Our new type, called Game, inherits from the Frame Tkinter class. The class Game(tk.Frame): definition specifies the name of the class and the superclass between parentheses. If you are new to object-oriented programming with Python, this syntax may not sound familiar. In our first look at classes, the most important concepts are the __init__ method and the self variable:

The __init__ method is a special method that is invoked when a new class instance is created. Here, we set the object attributes, such as the width, the height, and the canvas widget. We also call the parent class initialization with the super(Game, self).__init__(master) statement, so the initial state of the Frame is properly initialized.
The self variable refers to the object, and it should be the first argument of a method if you want to access the object instance. It is not strictly a language keyword, but the Python convention is to call it self so that other Python programmers won't be confused about the meaning of the variable.

In the preceding snippet, we introduced the if __name__ == '__main__' condition, which is present in many Python scripts. This snippet checks the name of the current module that is being executed, and will prevent starting the main loop if this module is imported from another script. This block is placed at the end of the script, since it requires that the Game class be defined.

New- and old-style classes
You may see the MySuperClass.__init__(self, arguments) syntax in some Python 2 examples, instead of the super call. This is the old-style syntax, the only flavor available up to Python 2.1, and it is maintained in Python 2 for backward compatibility. The super(MyClass, self).__init__(arguments) call is the new-style syntax introduced in Python 2.2. It is the preferred approach, and we will use it throughout this book.
Since no external assets are needed, you can place the set of code files given along with the book (Chapter1_01.py) in any directory and execute it from the python command line by running the file. The main loop will run indefinitely until you click on the close button of the window, or until you kill the process from the command line. This is the starting point of our game, so let's start diving into the Canvas widget and see how we can draw and animate items in it.

Diving into the Canvas widget
So far, we have the window set up and now we can start drawing items on the canvas. The canvas widget is two-dimensional and uses the Cartesian coordinate system. The origin—the (0, 0) ordered pair—is placed at the top-left corner, and the axis can be represented as shown in the following screenshot. Keeping this layout in mind, we can use two methods of the Canvas widget to draw the paddle, the bricks, and the ball:

canvas.create_rectangle(x0, y0, x1, y1, **options)
canvas.create_oval(x0, y0, x1, y1, **options)

Each of these calls returns an integer, which identifies the item handle. This reference will be used later to manipulate the position of the item and its options. The **options syntax represents a key/value pair of additional arguments that can be passed to the method call. In our case, we will use the fill and the tags options. The x0 and y0 coordinates indicate the top-left corner of the item, and x1 and y1 indicate the bottom-right corner. For instance, we can call canvas.create_rectangle(250, 300, 330, 320, fill='blue', tags='paddle') to create a player's paddle, where:

The top-left corner is at the coordinates (250, 300).
The bottom-right corner is at the coordinates (330, 320).
The fill='blue' means that the background color of the item is blue.
The tags='paddle' means that the item is tagged as a paddle. This string will be useful later to find items in the canvas with specific tags.

We will invoke other Canvas methods to manipulate the items and retrieve widget information. This table gives the references to the Canvas methods that will be used here:

canvas.coords(item): Returns the coordinates of the bounding box of an item.
canvas.move(item, x, y): Moves an item by a horizontal and a vertical offset.
canvas.delete(item): Deletes an item from the canvas.
canvas.winfo_width(): Retrieves the canvas width.
canvas.itemconfig(item, **options): Changes the options of an item, such as the fill color or its tags.
canvas.bind(event, callback): Binds an input event with the execution of a function. The callback handler receives one parameter of the type Tkinter event.
canvas.unbind(event): Unbinds the input event so that there is no callback function executed when the event occurs.
canvas.create_text(*position, **opts): Draws text on the canvas. The position and the options arguments are similar to the ones passed in canvas.create_rectangle and canvas.create_oval.
canvas.find_withtag(tag): Returns the items with a specific tag.
canvas.find_overlapping(*position): Returns the items that overlap or are completely enclosed by a given rectangle.

You can check out a complete reference of the event syntax as well as some practical examples at http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm#events.
Basic game objects
Before we start drawing all our game items, let's define a base class with the functionality that they will have in common—storing a reference to the canvas and its underlying canvas item, getting information about its position, and deleting the item from the canvas:

class GameObject(object):
    def __init__(self, canvas, item):
        self.canvas = canvas
        self.item = item

    def get_position(self):
        return self.canvas.coords(self.item)

    def move(self, x, y):
        self.canvas.move(self.item, x, y)

    def delete(self):
        self.canvas.delete(self.item)

Assuming that we have created a canvas widget as shown in our previous code samples, a basic usage of this class and its attributes would be like this:

item = canvas.create_rectangle(10, 10, 100, 80, fill='green')
game_object = GameObject(canvas, item)  # create a new instance
print(game_object.get_position())       # [10, 10, 100, 80]
game_object.move(20, -10)
print(game_object.get_position())       # [30, 0, 120, 70]
game_object.delete()

In this example, we created a green rectangle and a GameObject instance with the resulting item. Then we retrieved the position of the item within the canvas, moved it, and calculated the position again. Finally, we deleted the underlying item. The methods that the GameObject class offers will be reused in the subclasses that we will see later, so this abstraction avoids unnecessary code duplication. Now that you have learned how to work with this basic class, we can define separate child classes for the ball, the paddle, and the bricks.

The Ball class
The Ball class will store information about the speed, direction, and radius of the ball. We will simplify the ball's movement, since the direction vector will always be one of the following:

[1, 1] if the ball is moving towards the bottom-right corner
[-1, -1] if the ball is moving towards the top-left corner
[1, -1] if the ball is moving towards the top-right corner
[-1, 1] if the ball is moving towards the bottom-left corner

Representation of the possible direction vectors

Therefore, by changing the sign of one of the vector components, we will change the ball's direction by 90 degrees. This will happen when the ball bounces against the canvas border, or when it hits a brick or the player's paddle:

class Ball(GameObject):
    def __init__(self, canvas, x, y):
        self.radius = 10
        self.direction = [1, -1]
        self.speed = 10
        item = canvas.create_oval(x - self.radius, y - self.radius,
                                  x + self.radius, y + self.radius,
                                  fill='white')
        super(Ball, self).__init__(canvas, item)

For now, the object initialization is enough to understand the attributes that the class has. We will cover the ball rebound logic later, when the other game objects are defined and placed in the game canvas.

The Paddle class
The Paddle class represents the player's paddle and has two attributes to store the width and height of the paddle.
A set_ball method will be used to store a reference to the ball, so that the ball can be moved along with the paddle before the game starts:

class Paddle(GameObject):
    def __init__(self, canvas, x, y):
        self.width = 80
        self.height = 10
        self.ball = None
        item = canvas.create_rectangle(x - self.width / 2,
                                       y - self.height / 2,
                                       x + self.width / 2,
                                       y + self.height / 2,
                                       fill='blue')
        super(Paddle, self).__init__(canvas, item)

    def set_ball(self, ball):
        self.ball = ball

    def move(self, offset):
        coords = self.get_position()
        width = self.canvas.winfo_width()
        if coords[0] + offset >= 0 and coords[2] + offset <= width:
            super(Paddle, self).move(offset, 0)
            if self.ball is not None:
                self.ball.move(offset, 0)

The move method is responsible for the horizontal movement of the paddle. Step by step, the following is the logic behind this method:

The self.get_position() calculates the current coordinates of the paddle.
The self.canvas.winfo_width() retrieves the canvas width.
If both the minimum and maximum x-axis coordinates, plus the offset produced by the movement, are inside the boundaries of the canvas, this is what happens:
The super(Paddle, self).move(offset, 0) calls the method with the same name in the Paddle class's parent class, which moves the underlying canvas item.
If the paddle still has a reference to the ball (this happens when the game has not been started), the ball is moved as well.

This method will be bound to the input keys so that the player can use them to control the paddle's movement. We will see later how we can use Tkinter to process the input key events. For now, let's move on to the implementation of the last one of our game's components.

The Brick class
Each brick in our game will be an instance of the Brick class. This class contains the logic that is executed when the bricks are hit and destroyed:

class Brick(GameObject):
    COLORS = {1: '#999999', 2: '#555555', 3: '#222222'}

    def __init__(self, canvas, x, y, hits):
        self.width = 75
        self.height = 20
        self.hits = hits
        color = Brick.COLORS[hits]
        item = canvas.create_rectangle(x - self.width / 2,
                                       y - self.height / 2,
                                       x + self.width / 2,
                                       y + self.height / 2,
                                       fill=color, tags='brick')
        super(Brick, self).__init__(canvas, item)

    def hit(self):
        self.hits -= 1
        if self.hits == 0:
            self.delete()
        else:
            self.canvas.itemconfig(self.item,
                                   fill=Brick.COLORS[self.hits])

As you may have noticed, the __init__ method is very similar to the one in the Paddle class, since it draws a rectangle and stores the width and the height of the shape. In this case, the value of the tags option passed as a keyword argument is 'brick'. With this tag, we can check whether the game is over when the number of remaining items with this tag is zero. Another difference from the Paddle class is the hit method and the attributes it uses. The class variable called COLORS is a dictionary—a data structure that contains key/value pairs, with the number of hits that the brick has left as keys and the corresponding colors as values. When a brick is hit, the method executes as follows:

The number of hits of the brick instance is decreased by 1.
If the number of hits remaining is 0, self.delete() deletes the brick from the canvas.
Otherwise, self.canvas.itemconfig() changes the color of the brick.

For instance, if we call this method for a brick with two hits left, we will decrease the counter by 1 and the new color will be #999999, which is the value of Brick.COLORS[1]. If the same brick is hit again, the number of remaining hits will become zero and the item will be deleted.
Adding the Breakout items
Now that the organization of our items is separated into these top-level classes, we can extend the __init__ method of our Game class:

class Game(tk.Frame):
    def __init__(self, master):
        super(Game, self).__init__(master)
        self.lives = 3
        self.width = 610
        self.height = 400
        self.canvas = tk.Canvas(self, bg='#aaaaff',
                                width=self.width,
                                height=self.height)
        self.canvas.pack()
        self.pack()

        self.items = {}
        self.ball = None
        self.paddle = Paddle(self.canvas, self.width/2, 326)
        self.items[self.paddle.item] = self.paddle
        for x in range(5, self.width - 5, 75):
            self.add_brick(x + 37.5, 50, 2)
            self.add_brick(x + 37.5, 70, 1)
            self.add_brick(x + 37.5, 90, 1)

        self.hud = None
        self.setup_game()
        self.canvas.focus_set()
        self.canvas.bind('<Left>', lambda _: self.paddle.move(-10))
        self.canvas.bind('<Right>', lambda _: self.paddle.move(10))

    def setup_game(self):
        self.add_ball()
        self.update_lives_text()
        self.text = self.draw_text(300, 200, 'Press Space to start')
        self.canvas.bind('<space>', lambda _: self.start_game())

This initialization is more complex than what we had at the beginning of the article. We can divide it into two sections:

Game object instantiation, and their insertion into the self.items dictionary. This attribute contains all the canvas items that can collide with the ball, so we add only the bricks and the player's paddle to it. The keys are the references to the canvas items, and the values are the corresponding game objects. We will use this attribute later in the collision check, when we will have the colliding items and will need to fetch the game object.
Key input binding, via the Canvas widget. The canvas.focus_set() call sets the focus on the canvas, so the input events are directly bound to this widget. Then we bind the left and right keys to the paddle's move() method and the spacebar to trigger the game start. Thanks to the lambda construct, we can define anonymous functions as event handlers. Since the callback argument of the bind method is a function that receives a Tkinter event as an argument, we define a lambda that ignores the first parameter—lambda _: <expression>.

Our new add_ball and add_brick methods are used to create game objects and perform a basic initialization. While the first one creates a new ball on top of the player's paddle, the second one is a shorthand way of adding a Brick instance:

    def add_ball(self):
        if self.ball is not None:
            self.ball.delete()
        paddle_coords = self.paddle.get_position()
        x = (paddle_coords[0] + paddle_coords[2]) * 0.5
        self.ball = Ball(self.canvas, x, 310)
        self.paddle.set_ball(self.ball)

    def add_brick(self, x, y, hits):
        brick = Brick(self.canvas, x, y, hits)
        self.items[brick.item] = brick

The draw_text method will be used to display text messages in the canvas. The underlying item created with canvas.create_text() is returned, and it can be used to modify the information:

    def draw_text(self, x, y, text, size='40'):
        font = ('Helvetica', size)
        return self.canvas.create_text(x, y, text=text, font=font)

The update_lives_text method displays the number of lives left and changes its text if the message is already displayed.
It is called when the game is initialized—this is when the text is drawn for the first time—and it is also invoked when the player misses a ball rebound:

    def update_lives_text(self):
        text = 'Lives: %s' % self.lives
        if self.hud is None:
            self.hud = self.draw_text(50, 20, text, 15)
        else:
            self.canvas.itemconfig(self.hud, text=text)

We leave start_game unimplemented for now, since it triggers the game loop, and this logic will be added in the next section. Since Python requires a code block for each method, we use the pass statement. This does not execute any operation, and it can be used as a placeholder when a statement is required syntactically:

    def start_game(self):
        pass

If you execute this script, it will display a Tkinter window like the one shown in the following figure. At this point, we can move the paddle horizontally, so we are ready to start the game and hit some bricks!

Summary
We covered the basics of the control flow and the class syntax. We used Tkinter widgets, especially the Canvas widget and its methods, to achieve the functionality needed to develop a game based on collisions and simple input detection. Our Breakout game can be customized as we want. Feel free to change the color defaults, the speed of the ball, or the number of rows of bricks. However, GUI libraries are very limited, and more complex frameworks are required to achieve a wider range of capabilities.

Resources for Article:
Further resources on this subject:
Introspecting Maya, Python, and PyMEL [article]
Understanding the Python regex engine [article]
Ten IPython essentials [article]