
How-To Tutorials


Saving Data to Create Longer Games

Packt
15 May 2014
6 min read
Creating collectibles to save

The Unity engine features an incredibly simple saving and loading system that can persist your data between sessions in just a few lines of code. The downside of using Unity's built-in data management is that save data will be erased if the game is ever uninstalled from the OUYA console. Later, we'll talk about how to make your data persistent between installations, but for now, we'll set up some basic data-driven values in your marble prototype. However, before we can load any saved data, we have to create something to save.

Time for action – creating a basic collectible

Some games use save data to track the total number of times the player has obtained a collectible item. Players may not feel like it's worth gathering collectibles if they disappear when the game session is closed, but making the game track their long-term progress can give players the motivation to explore a game world and discover everything it has to offer. We're going to add collectibles to the marble game prototype you created and save them so that the player can see how many collectibles they've gathered in total across every play session. Perform the following steps to create a collectible:

1. Open your RollingMarble Unity project and double-click on the scene that contains your level.
2. Create a new cylinder from the Create menu in the Hierarchy window.
3. Move the cylinder so that it rests on the level's platform. It should appear as shown in the following screenshot:
4. We don't want our collectible to look like a plain old cylinder, so manipulate it with the rotate and scale tools until it looks a little more like a coin. Obviously, you'll have a coin model in the final game that you can load, but we can customize and differentiate primitive objects for the purpose of our prototype.
5. Our primitive is starting to look like a coin, but it's still a bland gray color. To make it look a little nicer, we'll use Unity to apply a material. A material tells the engine how an object should appear when it is rendered, including which textures and colors to use. Right now, we'll only apply a basic color, but later on we'll see how a material can store different kinds of textures and other data. Materials can be created and customized in a matter of minutes in Unity, and they're a great way to color simple objects or distinguish primitive shapes from one another.
6. Create a new folder named Materials in your Project window and right-click on it to create a new material named CoinMaterial, as shown in the following screenshot:
7. Click on the material that you just created and its properties will appear in the Inspector window. Click on the color box next to the Main Color property and change it to a yellow color. The colored sphere in the Material window will change in real time to reflect how the material will look, as shown in the following screenshot:
8. Our collectible coin now has a color, but as we can see from the sphere preview, it's still kind of dull. We want our coin to be shiny so that it catches the player's eye, so we'll change the Shader type, which dictates how light hits the object. The current Shader type on our coin material is Diffuse, which basically means it is a soft, nonreflective material. To make the coin shiny, change the Shader type to Specular. You'll see a reflective flare appear on the sphere preview; adjust the Shininess slider to see how different levels of specularity affect the material.

You may have noticed that another color value was added when you changed the material's shader from Diffuse to Specular; this value affects only the shiny parts of the object. You can make the material shine brighter by changing it from gray to white, or give its shininess a tint by using a completely different color.

9. Attach your material to the collectible object by clicking-and-dragging the material from the Project window and releasing it over the object in your Scene view. The object will look like the one shown in the following screenshot:
10. Our collectible coin object now has a unique shape and appearance, so it's a good idea to save it as a prefab. Create a Prefabs folder in your Project window if you haven't already, and use the folder's right-click menu to create a new blank prefab named Coin. Click-and-drag the coin object from the hierarchy onto the prefab to complete the link. We'll add code to the coin later, but we can change the prefab after we initially create it, so don't worry about saving an incomplete collectible.
11. Verify that the prefab link works by clicking-and-dragging multiple instances of the prefab from the Project window onto the Scene view.

What just happened?

Until you start adding 3D models to your game, primitives are a great way to create placeholder objects, and materials are useful for making them look more complex and unique. Materials add color to objects, but they also contain a shader that affects the way light hits the object. The two most basic shaders are Diffuse (dull) and Specular (shiny), but there are several other shaders in Unity that can help make your object appear exactly as you want it. You can even code your own shaders using the ShaderLab language, which you can learn about in the documentation at http://docs.unity3d.com/Documentation/Components/SL-Reference.html. Next, we'll add some functionality to your coin to save the collection data.

Have a go hero – make your prototype stand out with materials

As materials are easy to set up with Unity's color picker and built-in shaders, you have a lot of options at your fingertips to quickly make your prototype stand out and look better than a basic grayscale mock-up. Take any of your existing projects and see how far you can push the aesthetic with different combinations of colors and materials. Keep the following points in mind:

- Some shaders, such as Specular, have multiple colors that you can assign. Play around with different combinations to create a unique appearance.
- There are more shaders available to you than just the ones loaded into a new project; move your mouse over the Import Package option in Unity's Assets menu and import the Toon Shading package to add even more options to your shader collection.
- Complex object prefabs made of more than one primitive can have a different material on each primitive. Add multiple materials to a single object to help your user differentiate between its various parts and give your scene more detail.

Try changing the materials used in your scene until you come up with something unique and clean, as shown in the following screenshot of our cannon prototype with custom materials:
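As a preview of the saving step this article defers to a later section, a coin pickup that persists a lifetime total with Unity's built-in PlayerPrefs system might look something like the following sketch. The class name, key name, player tag, and trigger-based pickup are assumptions made for illustration, not the article's final implementation.

```csharp
// Hypothetical sketch: persisting a lifetime coin count with Unity's PlayerPrefs.
// Attach to the Coin prefab; assumes the marble is tagged "Player" and the
// coin's collider has "Is Trigger" enabled. Names here are illustrative only.
using UnityEngine;

public class CoinPickup : MonoBehaviour
{
    private const string TotalCoinsKey = "TotalCoinsCollected";

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player"))
            return;

        // Read the running total saved from previous sessions (0 if none yet).
        int total = PlayerPrefs.GetInt(TotalCoinsKey, 0);

        // Increment and write it back; Save() flushes the value to disk.
        PlayerPrefs.SetInt(TotalCoinsKey, total + 1);
        PlayerPrefs.Save();

        Destroy(gameObject);
    }
}
```

Because PlayerPrefs lives with the installed application, this also illustrates the limitation mentioned at the start: uninstalling the game from the console erases the saved total.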


Physics with UIKit Dynamics

Packt
14 May 2014
8 min read
Motion and physics in UIKit

With the introduction of iOS 7, Apple completely removed the skeuomorphic design that had been used since the introduction of the iPhone and iOS. In its place is a new and refreshing flat design that features muted gradients and minimal interface elements. Apple has strongly encouraged developers to move away from a skeuomorphic, real-world-based design in favor of these flat designs. Although we are guided away from a real-world look, Apple also strongly encourages that your user interface have a real-world feel. Some may think this is a contradiction; however, the goal is to give users a deeper connection to the user interface. UI elements that respond to touch, gestures, and changes in orientation are examples of how to apply this new design paradigm. To assist with this new design approach, Apple has introduced two very handy APIs: UIKit Dynamics and Motion Effects.

UIKit Dynamics

To put it simply, iOS 7 has a fully featured physics engine built into UIKit. You can manipulate specific properties to give your interface a more real-world feel, including gravity, springs, elasticity, bounce, and force, to name a few. Each interface item contains its own properties, and the dynamics engine abides by these properties.

Motion effects

One of the coolest features of iOS 7 is the parallax effect found on the home screen. Tilting the device in any direction pans the background image to emphasize depth. Using motion effects, we can monitor the data supplied by the device's accelerometer to adjust our interface based on movement and orientation. By combining these two features, you can create great-looking interfaces with a realistic feel that brings them to life. To demonstrate UIKit Dynamics, we will be adding some code to our FoodDetailViewController.m file to create some nice effects and animations.

Adding gravity

Open FoodDetailViewController.m and add the following instance variables to the view controller:

UIDynamicAnimator* animator;
UIGravityBehavior* gravity;

Scroll to viewDidLoad and add the following code to the bottom of the method:

animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];
gravity = [[UIGravityBehavior alloc] initWithItems:@[self.foodImageView]];
[animator addBehavior:gravity];

Run the application, open the My Foods view, select a food item from the table view, and watch what happens. The food image should accelerate towards the bottom of the screen until it eventually falls off the screen, as shown in the following set of screenshots:

Let's go over the code, specifically the two new classes that were just introduced, UIDynamicAnimator and UIGravityBehavior.

UIDynamicAnimator

This is the core component of UIKit Dynamics. It is safe to say that the dynamic animator is the physics engine itself, wrapped in a convenient and easy-to-use class. The animator does nothing on its own; instead, it keeps track of the behaviors assigned to it. Each behavior interacts inside this physics engine.

UIGravityBehavior

Behaviors are the core building blocks of UIKit Dynamics animations. Each behavior defines an individual response to the physics environment. This particular behavior mimics the effect of gravity by applying force. Each behavior is associated with a view (or views) when created. Because you explicitly define this association, you can control which views will perform the behavior.

Behavior properties

Almost all behaviors have multiple properties that can be adjusted for the desired effect. A good example is the gravity behavior, whose angle and magnitude we can adjust. Add the following code before adding the behavior to the animator:

gravity.magnitude = 0.1f;

Run the application and test it to see what happens. The picture view will still start to fall; however, this time it will fall at a much slower rate. Replace the preceding line with the following line:

gravity.magnitude = 10.0f;

Run the application, and this time you will notice that the image falls much faster. Feel free to play with these properties and get a feel for each value.

Creating boundaries

When dealing with gravity, UIKit Dynamics does not conform to the boundaries of the screen. Although it is not visible, the food image continues to fall after it has passed the edge of the screen. It will continue to fall unless we set boundaries that will contain the image view. At the top of the file, create another instance variable:

UICollisionBehavior *collision;

Now, in our viewDidLoad method, add the following code below our gravity code:

collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView]];
collision.translatesReferenceBoundsIntoBoundary = YES;
[animator addBehavior:collision];

Here we are creating an instance of a new class (which is also a behavior), UICollisionBehavior. Just like our gravity behavior, we associate this behavior with our food image view. Rather than explicitly defining the coordinates for the boundary, we use the convenient translatesReferenceBoundsIntoBoundary property on our collision behavior. By setting this property to YES, the boundary is defined by the bounds of the reference view that we set when allocating our dynamic animator. Because the reference view is self.view, the boundary is now the visible space of our view. Run the application and watch how the image falls but stops once it reaches the bottom of the screen, as shown in the following screenshot:

Collisions

With our image view responding to gravity and to our screen bounds, we can start detecting collisions. You may have noticed that when the image view is falling, it falls right through the two labels below it. This is because UIKit Dynamics only affects UIView elements that have been assigned behaviors. Each behavior can be assigned to multiple objects, and each object can have multiple behaviors. Because our labels have no behaviors associated with them, the UIKit Dynamics physics engine simply ignores them.

Let's make the food image view collide with the date label. To do this, we simply need to add the label to the collision behavior allocation call. Here is what the new code looks like:

collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];

As you can see, all we have done is add self.foodDateLabel to the initWithItems: array. As mentioned before, any single behavior can be associated with multiple items. Run your code and see what happens. When the image falls, it hits the date label but continues to fall, pushing the date label with it. Because we didn't associate the gravity behavior with the label, it does not fall immediately. Although it does not respond to gravity, the label is still moved because it is a physics object after all. This approach is not ideal, so let's use another awesome feature of UIKit Dynamics: invisible boundaries.

Creating invisible boundaries

We are going to take a slightly different approach to this problem. Our label is only a point of reference for where we want to add a boundary that will stop our food image view. Because of this, the label does not need to be associated with any UIKit Dynamics behaviors. Remove self.foodDateLabel from the following code:

collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];

Instead, add the following code to the bottom of viewDidLoad, but before we add our collision behavior to the animator:

// Add a boundary to the top edge
CGPoint topEdge = CGPointMake(self.foodDateLabel.frame.origin.x + self.foodDateLabel.frame.size.width, self.foodDateLabel.frame.origin.y);
[collision addBoundaryWithIdentifier:@"barrier" fromPoint:self.foodDateLabel.frame.origin toPoint:topEdge];

Here we add a boundary to the collision behavior and pass some parameters. First we define an identifier, which we can use later, and then we pass the food date label's origin as the fromPoint parameter. The toPoint parameter is set to the CGPoint we created from the food date label's frame. Go ahead and run the application, and you will see that the food image now stops at the invisible boundary we defined. The label is still visible to the user, but the dynamic animator ignores it. Instead, the animator sees the barrier we defined and responds accordingly, even though the barrier is invisible to the user. Here is a side-by-side comparison of the before and after:

Dynamic items

When using UIKit Dynamics, it is important to understand what dynamic items are. Rather than referencing dynamic objects as views, they are referenced as items, which adhere to the UIDynamicItem protocol. This protocol defines the center, transform, and bounds of any object that adopts it. UIView is the most common class that adheres to the UIDynamicItem protocol. Another example of a class that conforms to this protocol is the UICollectionViewLayoutAttributes class.

Summary

In this article, we covered some basics of how UIKit Dynamics manages your application's behaviors, which enables us to create some really unique interface effects.

Resources for Article:

Further resources on this subject:
- Linking OpenCV to an iOS project [article]
- Unity iOS Essentials: Flyby Background [article]
- New iPad Features in iOS 6 [article]
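To recap, the pieces added throughout this article combine into a single viewDidLoad roughly like the following sketch. The outlets and behavior instance variables are the ones declared earlier; treat this as an assembled illustration rather than a listing taken from the book.

```objectivec
// Illustrative sketch only: the gravity, collision, and boundary snippets
// from this article combined into one viewDidLoad.
- (void)viewDidLoad
{
    [super viewDidLoad];

    animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];

    // Gravity pulls the food image downwards; a small magnitude slows the fall.
    gravity = [[UIGravityBehavior alloc] initWithItems:@[self.foodImageView]];
    gravity.magnitude = 0.1f;
    [animator addBehavior:gravity];

    // Collision behavior: keep the image inside the reference view's bounds...
    collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView]];
    collision.translatesReferenceBoundsIntoBoundary = YES;

    // ...and stop it on an invisible line along the top edge of the date label.
    CGPoint topEdge = CGPointMake(self.foodDateLabel.frame.origin.x +
                                  self.foodDateLabel.frame.size.width,
                                  self.foodDateLabel.frame.origin.y);
    [collision addBoundaryWithIdentifier:@"barrier"
                               fromPoint:self.foodDateLabel.frame.origin
                                 toPoint:topEdge];
    [animator addBehavior:collision];
}
```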


Installing Vertica

Packt
13 May 2014
9 min read
Massively Parallel Processing (MPP) databases are those that partition (and optionally replicate) data across multiple nodes. All meta-information regarding data distribution is stored in master nodes. When a query is issued, it is parsed, and a suitable query plan is developed as per the meta-information and executed on the relevant nodes (nodes that store the related user data). HP offers one such MPP database, called Vertica, to solve pertinent issues of Big Data analytics. Vertica differentiates itself from other MPP databases in many ways. The following are some of the key points:

- Column-oriented architecture: Unlike traditional databases that store data in a row-oriented format, Vertica stores its data in a columnar fashion. This allows a great level of compression on data, thus freeing up a lot of disk space.
- Design tools: Vertica offers automated design tools that help in arranging your data more effectively and efficiently. The changes recommended by the tools not only ease pressure on the designer, but also help in achieving seamless performance.
- Low hardware costs: Vertica allows you to easily scale up your cluster using just commodity servers, thus reducing hardware-related costs to a certain extent.

This article will guide you through the installation and creation of a Vertica cluster. It will also cover the installation of the Vertica Management Console, which is shipped with the Vertica Enterprise edition only. It should be noted that it is possible to upgrade Vertica to a higher version, but downgrading is not supported. Before installing Vertica, you should bear in mind the following points:

- Only one database instance can run per Vertica cluster. So, if you have a three-node cluster, all three nodes will be dedicated to one single database.
- Only one instance of Vertica is allowed to run per node/host.
- Each node requires at least 1 GB of RAM.
- Vertica can be deployed on Linux only and has the following requirements:
  - Only the root user or a user with all privileges (sudo) can run the install_vertica script. This script is crucial for installation and will be used in many places.
  - Only ext3/ext4 filesystems are supported by Vertica.
  - Verify that rsync is installed.
  - The time should be synchronized on all nodes/servers of a Vertica cluster; hence, it is good to check whether the NTP daemon is running.

Understanding the preinstallation steps

Vertica has various preinstallation steps that need to be performed for the smooth running of Vertica. Some of the important ones are covered here.

Swap space

Swap space is space on the physical disk that is used when primary memory (RAM) is full. Although swap space is used in sync with RAM, it is not a replacement for RAM. It is suggested to have 2 GB of swap space available for Vertica. Additionally, Vertica performs well when swap-space-related files and Vertica data files are configured to be stored on different physical disks.

Dynamic CPU frequency scaling

Dynamic CPU frequency scaling, or CPU throttling, is where the system automatically adjusts the frequency of the microprocessor dynamically. The clear advantage of this technique is that it conserves energy and reduces heat. However, it is believed that CPU frequency scaling reduces the number of instructions a processor can issue, and that when frequency scaling is enabled, the CPU doesn't come to full throttle promptly. Hence, it is best that dynamic CPU frequency scaling is disabled. CPU frequency scaling can be disabled from the Basic Input/Output System (BIOS). Please note that different hardware might have different settings to disable CPU frequency scaling.

Understanding disk space requirements

It is suggested to keep a buffer of 20-30 percent of disk space per node. Vertica uses buffer space to store temporary data, which is data coming from merge-out operations, hash joins, and sorts, and data arising from managing nodes in the cluster.

Steps to install Vertica

Installing Vertica is fairly simple. With the following steps, we will set up a two-node cluster:

1. Download the Vertica installation package from http://my.vertica.com/ according to the Linux OS that you are going to use.
2. Log in as root or use the sudo command.
3. After downloading the installation package, install the package using the standard command:
   - For .rpm (CentOS/RedHat) packages, the command will be: rpm -Uvh vertica-x.x.x-x.x.rpm
   - For .deb (Ubuntu) packages, the command will be: dpkg -i vertica-x.x.x-x.x.deb
   Refer to the following screenshot for more details:

Running the Vertica package

In the previous step, we installed the package on only one machine. Note that Vertica is installed under /opt/vertica. Now, we will set up Vertica on the other nodes as well. For that, run the following on the same node:

/opt/vertica/sbin/install_vertica -s host_list -r rpm_package -u dba_username

Here, -s is the list of hostnames/IPs of all the nodes of the cluster, including the one on which Vertica is already installed, -r is the path of the Vertica package, and -u is the username that we wish to create for working on Vertica. This user has sudo privileges. If prompted, provide a password for the new user. If we do not specify any username, then Vertica creates dbadmin as the user, as shown in the following example:

[impetus@centos64a setups]$ sudo /opt/vertica/sbin/install_vertica -s 192.168.56.101,192.168.56.101,192.168.56.102 -r "/ilabs/setups/vertica-6.1.3-0.x86_64.RHEL5.rpm" -u dbadmin
Vertica Analytic Database 6.1.3-0 Installation Tool
Upgrading admintools meta data format..
scanning /opt/vertica/config/users
Starting installation tasks...
Getting system information for cluster (this may take a while)....
Enter password for impetus@192.168.56.102 (2 attempts left):
backing up admintools.conf on 192.168.56.101
Default shell on nodes:
192.168.56.101 /bin/bash
192.168.56.102 /bin/bash
Installing rpm on 1 hosts....
installing node.... 192.168.56.102
NTP service not synchronized on the hosts: ['192.168.56.101', '192.168.56.102']
Check your NTP configuration for valid NTP servers.
Vertica recommends that you keep the system clock synchronized using NTP or some other time synchronization mechanism to keep all hosts synchronized. Time variances can cause (inconsistent) query results when using Date/Time Functions. For instructions, see:
* http://kbase.redhat.com/faq/FAQ_43_755.shtm
* http://kbase.redhat.com/faq/FAQ_43_2790.shtm
Info: the package 'pstack' is useful during troubleshooting. Vertica recommends this package is installed.
Checking/fixing OS parameters.....
Setting vm.min_free_kbytes to 37872 ...
Info! The maximum number of open file descriptors is less than 65536
Setting open filehandle limit to 65536 ...
Info! The session setting of pam_limits.so is not set in /etc/pam.d/su
Setting session of pam_limits.so in /etc/pam.d/su ...
Detected cpufreq module loaded on 192.168.56.101
Detected cpufreq module loaded on 192.168.56.102
CPU frequency scaling is enabled. This may adversely affect the performance of your database.
Vertica recommends that cpu frequency scaling be turned off or set to 'performance'
Creating/Checking Vertica DBA group
Creating/Checking Vertica DBA user
Password for dbadmin:
Installing/Repairing SSH keys for dbadmin
Creating Vertica Data Directory...
Testing N-way network test. (this may take a while)
All hosts are available ...
Verifying system requirements on cluster.
IP configuration ...
IP configuration ...
Testing hosts (1 of 2)....
Running Consistency Tests
LANG and TZ environment variables ...
Running Network Connectivity and Throughput Tests...
Waiting for 1 of 2 sites... ...
Test of host 192.168.56.101 (ok)
====================================
Enough RAM per CPUs (ok)
--------------------------------
Test of host 192.168.56.102 (ok)
====================================
Enough RAM per CPUs (FAILED)
--------------------------------
Vertica requires at least 1 GB per CPU (you have 0.71 GB/CPU)
See the Vertica Installation Guide for more information.
Consistency Test (ok)
=========================
Info: The $TZ environment variable is not set on 192.168.56.101
Info: The $TZ environment variable is not set on 192.168.56.102
Updating spread configuration...
Verifying spread configuration on whole cluster.
Creating node node0001 definition for host 192.168.56.101 ... Done
Creating node node0002 definition for host 192.168.56.102 ... Done
Error Monitor 0 errors 4 warnings
Installation completed with warnings.
Installation complete.
To create a database:
1. Logout and login as dbadmin.**
2. Run /opt/vertica/bin/adminTools as dbadmin
3. Select Create Database from the Configuration Menu
** The installation modified the group privileges for dbadmin. If you used sudo to install vertica as dbadmin, you will need to logout and login again before the privileges are applied.

After we have installed Vertica on all the desired nodes, it is time to create a database. Log in as the new user (dbadmin in the default scenario) and connect to the admin panel. For that, run the following command:

/opt/vertica/bin/adminTools

If you are connecting to admin tools for the first time, you will be prompted for a license key. If you have the license file, enter its path; if you want to use the Community Edition, just click on OK. After this step, you will be asked to review and accept the End-user License Agreement (EULA). After accepting the EULA, you will be presented with the main menu of the Vertica admin tools. To create a database, navigate to Administration Tools | Configuration Menu | Create Database. You will be asked to enter a database name and a comment that you would like to associate with the database, and then you will be prompted to enter (and re-enter, for confirmation) a password for the database. Next, you need to provide the pathnames where the files related to user data and catalog data will be stored. After providing all the necessary information, you will be asked to select the hosts on which the database needs to be deployed. Once all the desired hosts are selected, Vertica will ask for one final confirmation and then create and deploy the database.

Once the database is created, we can connect to it using the vsql tool or perform admin tasks.

Summary

As you can see, this article briefly explains Vertica installation. You can explore further by creating sample tables and performing basic CRUD operations. For a clean installation, it is recommended to meet all the minimum requirements of Vertica. It should be noted that the installation of the client API(s) and the Vertica Management Console needs to be done separately; they are not included in the basic package.

Resources for Article:

Further resources on this subject:
- Visualization of Big Data [Article]
- Limits of Game Data Analysis [Article]
- Learning Data Analytics with R and Hadoop [Article]
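Following the summary's suggestion to verify the installation by creating sample tables and performing basic CRUD operations, a minimal smoke test from the vsql prompt might look like the following sketch. The database, table, and column names are made up for illustration.

```sql
-- Hypothetical smoke test run from vsql after connecting to the new database,
-- for example: /opt/vertica/bin/vsql -U dbadmin -d mydb
CREATE TABLE employees (
    id     INT NOT NULL,
    name   VARCHAR(64),
    salary NUMERIC(10,2)
);

INSERT INTO employees VALUES (1, 'Alice', 55000.00);   -- create
SELECT * FROM employees;                                -- read
UPDATE employees SET salary = 60000.00 WHERE id = 1;    -- update
DELETE FROM employees WHERE id = 1;                     -- delete
COMMIT;
```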


Using OpenStack Swift

Packt
13 May 2014
4 min read
Installing the clients

This section covers installing the command-line tools used to access OpenStack Swift:

- cURL: A command-line tool that can be used to transfer data using various protocols. We install cURL using the following command:
  $ apt-get install curl
- OpenStack Swift client CLI: This tool is installed with the following command:
  $ apt-get install python-swiftclient
- REST API client: To access OpenStack Swift services via the REST API, we can use third-party tools such as the Fiddler web debugger, which supports the REST architecture.

Creating a token by using authentication

The first step in accessing containers or objects is to authenticate the user by sending a request to the authentication service and getting a valid token, which can then be used in subsequent commands to perform various operations, as follows:

curl -X POST -i https://auth.lts2.evault.com/v2.0/Tokens -H 'Content-type: application/json' -d '{"auth":{"passwordCredentials":{"username":"user","password":"password"},"tenantName":"admin"}}'

The token that is generated is given below. It has been truncated for better readability.

token = MIIGIwYJKoZIhvcNAQcCoIIGFDCCBhACAQExCTAHBgUrDgMCGjCCBHkGCSqGSIb3DQEHAaCCBGoEggRme…yJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMS0yNlQwNjoxODo0Mi4zNTA0NTciLCU+KNYN20G7KJO05bXbbpSAWw+5Vfl8zl6JqAKKWENTrlKBvsFzO-peLBwcKZXTpfJkJxqK7Vpzc-NIygSwPWjODs--0WTes+CyoRD

EVault LTS2 authentication: The EVault LTS2 OpenStack Swift cluster provides its own private authentication service, which returns the token. This generated token will be passed as the token parameter in subsequent commands.

Displaying metadata information for an account, container, or object

This section describes how we can obtain information about an account, container, or object.

Using the OpenStack Swift client CLI

The OpenStack Swift client CLI stat command is used to get information about the account, container, or object. The name of the container should be provided after the stat command to get container information; the names of the container and the object should be provided to get object information. Make the following request to display the account status:

# swift --os-auth-token=token --os-storage-url=https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b stat

where token is the token generated as described in the previous section and 26cef4782cca4e5aabbb9497b8c1ee1b is the account name. The response shows the information about the account:

Account: 26cef4782cca4e5aabbb9497b8c1ee1b
Containers: 2
Objects: 6
Bytes: 17
Accept-Ranges: bytes
Server: nginx/1.4.1

Using cURL

The following shows how to obtain the same information using cURL. It shows that the account contains 2 containers and 6 objects. Make the following request:

curl -X HEAD -i https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b -H 'X-Auth-Token: token' -H 'Content-type: application/json'

The response is as follows:

HTTP/1.1 204 No Content
Server: nginx/1.4.1
Date: Wed, 04 Dec 2013 06:53:13 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
X-Account-Bytes-Used: 3439364822
X-Account-Container-Count: 2
X-Account-Object-Count: 6

Using the REST API

The same information can be obtained using the following REST API method. Make the following request:

Method: HEAD
URL: https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b
Header: X-Auth-Token: token
Data: No data

The response is as follows:

HTTP/1.1 204 No Content
Server: nginx/1.4.1
Date: Wed, 04 Dec 2013 06:47:17 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
X-Account-Bytes-Used: 3439364822
X-Account-Container-Count: 2
X-Account-Object-Count: 6

Listing containers

This section describes how to obtain information about the containers present in an account.

Using the OpenStack Swift client CLI

Make the following request:

swift --os-auth-token=token --os-storage-url=https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b list

The response is as follows:

cities
countries

Using cURL

The following shows how to obtain the same information using cURL. Make the following request:

curl -X GET -i https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b -H 'X-Auth-Token: token'

The response is as follows:

HTTP/1.1 200 OK
X-Account-Container-Count: 2
X-Account-Object-Count: 6
cities
countries

Using the REST API

Make the following request:

Method: GET
URL: https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b
Header: X-Auth-Token: token
Data: No data

The response is as follows:

X-Account-Container-Count: 2
X-Account-Object-Count: 6
cities
countries

Summary

This article has explained the various mechanisms that are available to access OpenStack Swift and how, by using these mechanisms, we can authenticate an account and list its containers.

Resources for Article:

Further resources on this subject:
- Securing vCloud Using the vCloud Networking and Security App Firewall [Article]
- Introduction to Cloud Computing with Microsoft Azure [Article]
- Troubleshooting in OpenStack Cloud Computing [Article]
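The stat command shown above also accepts a container name, and a container plus an object name, as described earlier; a sketch of both forms follows. The cities container name comes from the listing above, while the object name London.txt is hypothetical.

```bash
# Sketch: the same stat command scoped to a container and to an object.
# Container metadata:
swift --os-auth-token=token \
      --os-storage-url=https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b \
      stat cities

# Object metadata (the object name is illustrative):
swift --os-auth-token=token \
      --os-storage-url=https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b \
      stat cities London.txt
```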


Using the WebRTC Data API

Packt
09 May 2014
10 min read
What is WebRTC?

Web Real-Time Communication (WebRTC) is a new (and still under active development) open framework for the Web that enables browser-to-browser applications for audio/video calling, video chat, and peer-to-peer file sharing without any additional third-party software or plugins. It was open sourced by Google in 2011 and includes the fundamental building components for high-quality communications on the Web. These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to build their own rich media web applications. Google, Mozilla, and Opera support WebRTC and are involved in the development process. The major components of the WebRTC API are as follows:

- getUserMedia: This allows a web browser to access the camera and microphone
- PeerConnection: This sets up audio/video calls
- DataChannels: These allow browsers to share data via a peer-to-peer connection

Benefits of using WebRTC in your business

- Reducing costs: It is a free and open source technology. You don't need to pay for complex proprietary solutions. IT deployment and support costs can be lowered because you don't need to deploy special client software for your customers.
- Plugins: You don't need them anymore. Previously, you had to use Flash, Java applets, or other tricky solutions to build interactive rich media web applications, and customers had to download and install third-party plugins to be able to use your media content. You also had to keep in mind different solutions/plugins for a variety of operating systems and platforms. Now you don't need to care about any of that.
- Peer-to-peer communication: In most cases, communication is established directly between your customers, and you don't need an intermediary.
- Easy to use: You don't need to be a professional programmer or have a team of certified developers with specific knowledge. In a basic case, you can easily integrate WebRTC functionality into your web services/sites by using the open JavaScript API or even a ready-to-go framework.
- Single solution for all platforms: You don't need to develop a special native version of your web service for different platforms (iOS, Android, Windows, or any other). WebRTC is developed to be a cross-platform and universal tool.
- WebRTC is open source and free: The community can discover new bugs and solve them quickly and effectively. Moreover, it is developed and standardized by Mozilla, Google, and Opera, which are major software companies.

Topics

The article covers the following topics:

- Developing a WebRTC application: You will learn the basics of the technology and build a complete audio/video conference real-life web application. We will also talk about SDP (Session Description Protocol), signaling, client-server interoperation, and configuring STUN and TURN servers.
- Data API: You will learn how to build a peer-to-peer, cross-platform file sharing web service using the WebRTC Data API.
- Media streaming and screen casting: This introduces you to streaming prerecorded media content peer-to-peer and to desktop sharing. You will build a simple application that provides this kind of functionality.
- Security and authentication: Nowadays, security and authentication are very important topics, and you definitely don't want to forget about them while developing your applications. You will learn how to make your WebRTC solutions secure, why authentication might be very important, and how you can implement this functionality in your products.
- Mobile platforms: Mobile platforms are literally part of our lives, so it's important to make your interactive application work well on mobile devices too. This will introduce you to aspects that will help you develop great WebRTC products with mobile devices in mind.

Session Description Protocol

SDP is an important part of the WebRTC stack. It is used to negotiate session/media options while establishing a peer connection. It is a protocol intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. It does not deliver media data itself, but is used for negotiation between peers of the media type, format, and all associated properties/options (resolution, encryption, codecs, and so on). The set of properties and parameters is usually called a session profile. Peers have to exchange SDP data using a signaling channel before they can establish a direct connection. The following is an example of an SDP offer:

v=0
o=alice 2890844526 2890844526 IN IP4 host.atlanta.example.com
s=
c=IN IP4 host.atlanta.example.com
t=0 0
m=audio 49170 RTP/AVP 0 8 97
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:97 iLBC/8000
m=video 51372 RTP/AVP 31 32
a=rtpmap:31 H261/90000
a=rtpmap:32 MPV/90000

Here we can see that this is a video and audio session, and multiple codecs are offered. The following is an example of an SDP answer:

v=0
o=bob 2808844564 2808844564 IN IP4 host.biloxi.example.com
s=
c=IN IP4 host.biloxi.example.com
t=0 0
m=audio 49174 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 49170 RTP/AVP 32
a=rtpmap:32 MPV/90000

Here we can see that only one codec is accepted in reply to the offer above. You can find more SDP session examples at https://www.rfc-editor.org/rfc/rfc4317.txt. You can also find in-depth details on SDP in the appropriate RFC at http://tools.ietf.org/html/rfc4566.

Configuring and installing your own STUN server

As you already know, it is important to have access to a STUN/TURN server to work with peers located behind a NAT or firewall. In this article, while developing our application, we used public STUN servers (actually, they are public Google servers accessible from other networks). Nevertheless, if you plan to build your own service, you should install your own STUN/TURN server. This way, your application will not depend on a server you can't control. Today we have public STUN servers from Google; tomorrow they could be switched off. So, the right way is to have your own STUN/TURN server. In this section, you will be introduced to installing a STUN server, as that is the simpler case. There are several implementations of STUN servers that can be found on the Internet. You can take one from http://www.stunprotocol.org. It is cross-platform and can be used under Windows, Mac OS X, or Linux. To start the STUN server, you should use the following command line:

stunserver --mode full --primaryinterface x1.x1.x1.x1 --altinterface x2.x2.x2.x2

Please pay attention that you need two IP addresses on your machine to run a STUN server; this is mandatory for the STUN protocol to work correctly. The machine can have only one physical network interface, but it should then have a network alias with an IP address different from the one used on the main network interface.

WebSocket

WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. This is a relatively young protocol, but today all major web browsers, including Chrome, Internet Explorer, Opera, Firefox, and Safari, support it. WebSocket is a replacement for long-polling to get two-way communication between browser and server. In this article, we will use WebSocket as a transport channel to develop a signaling server for our videoconference service; our peers will use it to communicate with the signaling server. Two important benefits of WebSocket are that it supports HTTPS (secure channel) and that it can be used via a web proxy (nevertheless, some proxies can block the WebSocket protocol).

NAT traversal

WebRTC has a built-in mechanism to use NAT traversal options such as STUN and TURN servers. In this article, we used public STUN (Session Traversal Utilities for NAT) servers, but in real life you should install and configure your own STUN or TURN (Traversal Using Relay NAT) server. In most cases, you will use a STUN server. It helps with NAT/firewall traversal and establishes a direct connection between the peers. In other words, the STUN server is utilized during the connection establishment stage only. After the connection has been established, peers transfer media data directly between themselves. In some cases (unfortunately, they are not so rare), a STUN server won't help you get through a firewall or NAT, and establishing a direct connection between the peers will be impossible, for example, when both peers are behind symmetric NAT. In this case, a TURN server can help you. A TURN server works as a relay between the peers: all media data between the peers is transmitted through the TURN server. If your application gives a list of several STUN/TURN servers to the WebRTC API, the web browser will try to use the STUN servers first and, if the connection fails, it will automatically fall back to the TURN servers.

Preparing the environment

We can prepare the environment by performing the following steps:

1. Create a folder for the whole application somewhere on your disk. Let's call it my_rtc_project.
2. Make a directory named my_rtc_project/www; here, we will put all the client-side code (JavaScript files or HTML pages).
3. The signaling server's code will be placed under its own separate folder, so create the directory my_rtc_project/apps/rtcserver/src.

Kindly note that we will use Git, a free and open source distributed version control system. On Linux boxes, it can be installed using the default package manager. For Windows, I recommend installing and using this implementation: https://github.com/msysgit/msysgit. If you're using a Windows box, install msysgit and add the path to its bin folder to your PATH environment variable.

Installing Erlang

The signaling server is developed in the Erlang language. Erlang is a great choice for developing server-side applications for the following reasons:

- It is very comfortable and easy to use for prototyping
- Its processes (actors) are very lightweight and cheap
- It supports network operations without needing any external libraries
- The code is compiled to bytecode that runs on a very powerful Erlang virtual machine

Some great projects

The following projects are developed using Erlang:

- Yaws and Cowboy: These are web servers
- Riak and CouchDB: These are distributed databases
- Cloudant: This is a database service based on a fork of CouchDB
- Ejabberd: This is an XMPP instant messaging service
- Zotonic: This is a content management system
- RabbitMQ: This is a message bus
- Wings 3D: This is a 3D modeler
- GitHub: This is a web-based hosting service for software development projects that use Git. GitHub uses Erlang for RPC proxies to Ruby processes
- WhatsApp: This is a famous mobile messenger, sold to Facebook
- Call of Duty: This computer game uses Erlang on the server side
- Goldman Sachs: Its high-frequency trading computer programs use Erlang

A very brief history of Erlang

- 1982 to 1985: Ericsson starts experimenting with the programming of telecom. Existing languages do not suit the task.
- 1985 to 1986: Ericsson decides it must develop its own language with desirable features from Lisp, Prolog, and Parlog. The language should have built-in concurrency and error recovery.
- 1987: The first experiments with the new language, Erlang, are conducted.
- 1988: Erlang is first used by external users outside the lab.
- 1989: Ericsson works on a fast implementation of Erlang.
- 1990: Erlang is presented at ISS'90 and gains new users.
- 1991: The fast implementation of Erlang is released to users. Erlang is presented at Telecom'91 and now has a compiler and a graphical interface.
- 1992: Erlang gains a lot of new users. Ericsson ports Erlang to new platforms, including VxWorks and Macintosh.
- 1993: Erlang gets distribution, which makes it possible to run a homogeneous Erlang system on heterogeneous hardware. Ericsson starts selling Erlang implementations and Erlang tools, and a separate organization within Ericsson provides support.

Erlang is supported on many platforms. You can download and install it from the main website: http://www.erlang.org.

Summary

In this article, we discussed the WebRTC technology and the WebRTC API in detail.

Resources for Article:

Further resources on this subject:
- Applying WebRTC for Education and E-learning [Article]
- Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
- WebSphere MQ Sample Programs [Article]
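As a closing illustration of the DataChannels component listed at the start of this article, here is a minimal browser-side sketch of opening a data channel. It uses the current unprefixed, promise-based API (at the time this article was written, browsers required prefixed constructors such as webkitRTCPeerConnection and callback-style createOffer), the channel label is arbitrary, and the signaling exchange over the WebSocket server described above is omitted.

```javascript
// Minimal sketch of the DataChannels component mentioned above.
// Signaling (exchanging the offer/answer SDP and ICE candidates) is omitted;
// in a real application it would go over the WebSocket signaling server
// described in this article. The STUN URL is Google's public server.
var pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// Create an ordered channel for file chunks or chat messages.
var channel = pc.createDataChannel('files');

channel.onopen = function () {
  channel.send('hello, peer');
};

channel.onmessage = function (event) {
  console.log('received from peer:', event.data);
};

// The offer produced here would be sent to the remote peer via signaling.
pc.createOffer().then(function (offer) {
  return pc.setLocalDescription(offer);
});
```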


Working with a Neo4j Embedded Database

Packt
09 May 2014
6 min read
Neo4j is a graph database, which means that it does not use tables and rows to represent data logically; instead, it uses nodes and relationships. Both nodes and relationships can have a number of properties. While relationships must have exactly one direction and one type, nodes can have a number of labels. For example, the following diagram shows three nodes and their relationships, where every node has a label (language or graph database), while relationships have a type (QUERY_LANGUAGE_OF and WRITTEN_IN). The properties used in the graph shown in the following diagram are: name, type, and from. Note that every relationship must have exactly one type and one direction, whereas labels for nodes are optional and can be multiple.

Neo4j running modes

Neo4j can be used in two modes:

- An embedded database in a Java application
- A standalone server accessed via REST

In either case, this choice does not affect the way you query and work with the database. It's only an architectural choice driven by the nature of the application (standalone or client-server), performance, monitoring, and safety of data.

An embedded database

An embedded Neo4j database is the best choice for performance. It runs in the same process as the client application that hosts it and stores data in the given path. Thus, an embedded database must be created programmatically. We choose an embedded database for the following reasons:

- When we use Java as the programming language for our project
- When our application is standalone

Preparing the development environment

The fastest way to prepare the IDE for Neo4j is to use Maven. Maven is a dependency management and automated build tool. In the following procedure, we will use NetBeans 7.4, but it works in a very similar way with other IDEs (for Eclipse, you would need the m2eclipse plugin). The procedure is as follows:

1. Create a new Maven project as shown in the following screenshot:
2. In the next page of the wizard, name the project, set a valid project location, and then click on Finish.
3. After NetBeans has created the project, expand Project Files in the project tree, open the pom.xml file, and insert the following XML code:

<dependencies>
  <dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j</artifactId>
    <version>2.0.1</version>
  </dependency>
</dependencies>
<repositories>
  <repository>
    <id>neo4j</id>
    <url>http://m2.neo4j.org/content/repositories/releases/</url>
    <releases>
      <enabled>true</enabled>
    </releases>
  </repository>
</repositories>

This code tells Maven about the dependency we are using in our project, that is, Neo4j. The version we have used here is 2.0.1. Of course, you can specify the latest available version. Once saved, the Maven file resolves the dependency, downloads the needed JAR files, and updates the Java build path. Now, the project is ready to use Neo4j and Cypher.

Creating an embedded database

Creating an embedded database is straightforward. First of all, we need a GraphDatabaseFactory instance, which can be obtained with the following code:

GraphDatabaseFactory graphDbFactory = new GraphDatabaseFactory();

Then, we can invoke the newEmbeddedDatabase method with the following code:

GraphDatabaseService graphDb = graphDbFactory
    .newEmbeddedDatabase("data/dbName");

Now, with the GraphDatabaseService instance, we can fully interact with the database: create nodes, create relationships, and set properties and indexes.
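For instance, building the small graph from the diagram at the beginning of this article could look roughly like the following sketch. The label, relationship type, and property names mirror that diagram; the listing is an illustration rather than one taken from the article.

```java
import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;

// ... graphDb is the GraphDatabaseService created above

try (Transaction tx = graphDb.beginTx()) {
    // Two labeled nodes with a name property each.
    Node cypher = graphDb.createNode(DynamicLabel.label("Language"));
    cypher.setProperty("name", "Cypher");

    Node neo4j = graphDb.createNode(DynamicLabel.label("GraphDatabase"));
    neo4j.setProperty("name", "Neo4j");

    // A typed, directed relationship between them.
    cypher.createRelationshipTo(neo4j,
            DynamicRelationshipType.withName("QUERY_LANGUAGE_OF"));

    tx.success();   // mark the transaction to be committed on close
}
```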
Invoking Cypher from Java

To execute Cypher queries on a Neo4j database, you need an instance of ExecutionEngine; this class is responsible for parsing and running Cypher queries, returning results in an ExecutionResult instance:

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
// ...
ExecutionEngine engine = new ExecutionEngine(graphDb);
ExecutionResult result = engine.execute("MATCH (e:Employee) RETURN e");

Note that we use the org.neo4j.cypher.javacompat package and not the org.neo4j.cypher package, even though they are almost the same. The reason is that Cypher is written in Scala, and the Cypher authors provide the former package for better Java compatibility.

Now, with the results, we can do one of the following:

- Dump them to a string value
- Convert them to a single-column iterator
- Iterate over the full rows

Dumping to a string is useful for testing purposes:

String dumped = result.dumpToString();

If we print the dumped string to the standard output stream, we will get the following result: here, we have a single column (e) that contains the nodes. Each node is dumped with all its properties. The numbers between the square brackets are the node IDs, which are the long, unique values assigned by Neo4j on the creation of each node. When the result is a single column, or we need only one column of our result, we can get an iterator over one column with the following code:

import org.neo4j.graphdb.ResourceIterator;
// ...
ResourceIterator<Node> nodes = result.columnAs("e");

Then, we can iterate over that column in the usual way, as shown in the following code:

while(nodes.hasNext()) {
  Node node = nodes.next();
  // do something with node
}

However, Neo4j provides a syntactic-sugar utility to shorten the iteration code:

import org.neo4j.helpers.collection.IteratorUtil;
// ...
for (Node node : IteratorUtil.asIterable(nodes)) {
  // do something with node
}

If we need to iterate over a multiple-column result, we would write the code in the following way:

ResourceIterator<Map<String, Object>> rows = result.iterator();
for(Map<String,Object> row : IteratorUtil.asIterable(rows)) {
  Node n = (Node) row.get("e");
  try(Transaction t = n.getGraphDatabase().beginTx()) {
    // do something with node
  }
}

The iterator method returns an iterator of maps, where the keys are the names of the columns. Note that when we have to work with nodes, even if they are returned by a Cypher query, we have to work in a transaction. In fact, Neo4j requires that every time we work with the database, whether reading or writing, we must be in a transaction. The only exception is when we launch a Cypher query. If we launch the query within an existing transaction, Cypher will work like any other operation: no change will be persisted to the database until we commit the transaction. However, if we run the query outside of any transaction, Cypher will open a transaction for us and will commit the changes at the end of the query.

Summary

We have now completed setting up a Neo4j embedded database. We also learned how to invoke Cypher queries from Java.

Resources for Article:

Further resources on this subject:
- OpenSceneGraph: Advanced Scene Graph Components [Article]
- Creating Network Graphs with Gephi [Article]
- Building a bar graph cityscape [Article]

Guidelines for Setting Up the OUYA ODK

Packt
07 May 2014
5 min read
(For more resources related to this topic, see here.) Starting with the OUYA Development Kit The OUYA Development Kit (OUYA ODK) is a tool to create games and applications for the OUYA console, and its extensions and libraries are in the .jar format. It is released under Apache License Version 2.0. The OUYA ODK contains the following folders: Licenses: The SDK games and applications depend on various open source libraries. This folder contains all the necessary authorizations for the successful compilation and publication of the project in the OUYA console or testing in the emulator. Samples: This folder has some scene examples, which help to show users how to use the Standard Development Kit. Javadoc: This folder contains the documentation of Java classes, methods, and libraries. Libs: This folder contains the .jar files for the OUYA Java classes and their dependencies for the development of applications for the OUYA console. OUYA Framework APK file: This file contains the core of the OUYA software environment that allows visualization of a project based on the environment of OUYA. OUYA Launcher APK file: This file contains the OUYA launcher that displays the generated .apk file. The ODK plugin within Unity3D Download the Unity3D plugin for OUYA. In the developer portal, you will find these resources at https://github.com/ouya/ouya-unity-plugin. After downloading the ODK plugin, unzip the file in the desktop and import the ODK plugin for the Unity3D folder in the interface engine of the Assets folder; you will find several folders in it, including the following ones: Ouya: This folder contains the Examples, SDK, and StarterKit folders LitJson: This folder contains libraries that are important for compilation Plugins: This folder contains the Android folder, which is required for mobile projects "The Last Maya" created with the .ngui extension Importing the ODK plugin within Unity3D The OUYA Unity plugin can be imported into the Unity IDE. Navigate to Assets | Import Package | Custom Package…. Find the Ouya Unity Plugin folder on the desktop and import all the files. The package is divided into Core and Examples. The Core folder contains the OUYA panel and all the code for the construction of a project for the console. The Core and Examples folders can be used as individual packages and exported from the menu, as shown in the following screenshot: Installing and configuring the ODK plugin First, execute the Unity3D application and navigate to File | Open Project and then select the folder where you need to put the OUYA Unity plugin. You can check if Ouya Unity Plugin has been successfully imported by having a look at the window menu at the top of Unity3D, where the toolbars are located. In this manner, you can review the various components of the OUYA panel. While loading the OUYA panel, a window will be displayed with the following sections and buttons: Build Application: This is the first button and is used to compile, build, and create an Android Application Package file (APK) Build and Run Application: This is the next button and allows you to compile the application, generate an APK, and then run it on the emulator or publish directly to a device connected to the computer Compile: This button compiles the entire solution The lower section displays the paths of different libraries. 
Before using the OUYA plugin, you need to edit the fields in the PlayerSettings window (specifically the Bundle Identifier field), set the Minimum API Level field to API level 16, and set the Default Orientation field to Landscape Left. Another mandatory button is the Bundle Identifier synchronizer, which synchronizes the Android manifest file (XML) and the identifiers of the Java packages. Remember that the package ID must be unique for each game and has to be edited to avoid synchronization problems. Also, the OuyaGameObject (shown in the following screenshot) is very important for use in in-app purchases: The OUYA panel The Unity tab in the OUYA panel shows the path of the Unity JAR file, which houses the file's JAR class. This file is important because it is the one that communicates with the Unity Web Player. This Unity tab is shown in the following screenshot: The Java JDK tab shows the paths of the Java Runtime installation with all its different components to properly compile a project for Android and OUYA, as shown in the following screenshot: The Android SDK tab displays the current version of the SDK and contains the paths of the different components of the SDK: Android Jar Path, ADB Path, APT Path, and SDK Path, as shown in the following screenshot. These paths must correspond to the PATH environment variable of the operating system. Finally, the last tab of the OUYA panel, Android NDK, shows the installation path of the C++ scripts for native builds, as shown in the following screenshot: Installing and configuring the Java class If at this point you want to perform native development using the NDK or have problems opening or compiling the OUYA project, you need to configure the Java files. To install and configure the Java class, perform the following steps: Download and install JDK 1.6 and configure the Java Runtime path in the PATH environment variable. Next, you need to set a unique bundle identifier, such as com.yourcompany.gametitle. Hit the Sync button so your packages and manifest match. Create a game in the developer portal that uses the bundle ID. Download that signing key (key.der) and save it in Unity. Compile the Java plugin and the Java application. Input your developer UUID from the developer portal into the OuyaGameObject in the scene.
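To make the synchronization step a little more concrete, the following is a minimal sketch of the package attribute that the Bundle Identifier synchronizer keeps in step with Unity's PlayerSettings. The com.yourcompany.gametitle identifier is only a placeholder, and the remaining attributes are illustrative assumptions rather than the exact manifest the plugin generates:

<!-- AndroidManifest.xml (sketch) - the package attribute must match the
     Bundle Identifier configured in Unity's PlayerSettings -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.yourcompany.gametitle"
          android:versionCode="1"
          android:versionName="1.0">
    <!-- Matches the Minimum API Level of 16 mentioned earlier -->
    <uses-sdk android:minSdkVersion="16" />
</manifest>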


Backup and Restore Improvements

Packt
25 Apr 2014
11 min read
(For more resources related to this topic, see here.) Database backups to a URL and Microsoft Azure Storage The ability to backup to a URL was introduced in SQL Server 2012 Service Pack 1 cumulative update package 2. Prior to this, if you wanted to backup to a URL in SQL Server 2012, you needed to use Transact-SQL or PowerShell. SQL Server 2014 has integrated this option into Management Studio too. The reason for allowing backups to a URL is to allow you to integrate your SQL Server backups with cloud-based storage and store your backups in Microsoft Azure. By being able to create a backup there, you can keep database backups of your on-premise database in Microsoft Azure. This makes your backups safer and protected in the event that your main site is lost to a disaster as your backups are stored offsite. This can avoid the need for an actual disaster recovery site. In order to create a backup to Microsoft Azure Storage, you need a storage account and a storage container. From a SQL Server perspective, you will require a URL, which will specify a Uniform Resource Identifier (URI) to a unique backup file in Microsoft Cloud. It is the URL that provides the location for the backup and the backup filename. The URL will need to point to a blob, not just a container. If it does not exist, then it is created. However, if a backup file exists, then the backup will fail. This is unless the WITH FORMAT command is specified, which like in older versions of SQL Server allows the backup to overwrite the existing backup with the new one that you wish to create. You will also need to create a SQL Server credential to allow the SQL Server to authenticate with Microsoft Azure Storage. This credential will store the name of the storage account and also the access key. The WITH CREDENTIAL statement must be used when issuing the backup or restore commands. There are some limitations you need to consider when backing up your database to a URL and making use of Microsoft Azure Storage to store your database backups: Maximum backup size of 1 TB (Terabyte). Cannot be combined with backup devices. Cannot append to existing backups—in SQL Server, you can have more than one backup stored in a file. When taking a backup to a URL, the ratio should be of one backup to one file. You cannot backup to multiple blobs. In a normal SQL Server backup, you can stripe it across multiple files. You cannot do this with a backup to a URL on Microsoft Azure. There are some limitations you need to consider when backing up to the Microsoft Azure Storage; you can find more information on this at http://msdn.microsoft.com/en-us/library/dn435916(v=sql.120).aspx#backuptaskssms. For the purposes of this exercise, I have created a new container on my Microsoft Azure Storage account called sqlbackup. With the storage account container, you will now take the backup to a URL. As part of this process, you will create a credential using your Microsoft Azure publishing profile. This is slightly different to the process we just discussed, but you can download this profile from Microsoft Azure. Once you have your publishing profile, you can follow the steps explained in the following section. Backing up a SQL Server database to a URL You can use Management Studio's backup task to initiate the backup. In order to do this, you need to start Management Studio and connect to your local SQL Server instance. 
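Before stepping through the backup dialog, it is worth seeing what the credential that authenticates SQL Server against Microsoft Azure Storage looks like in Transact-SQL. The following is only a sketch: the credential name matches the one used later in this article, the identity is assumed to be the storage account name (gresqlstorage in the examples that follow), and the secret is a placeholder for your storage access key:

CREATE CREDENTIAL AzureCredential            -- name referenced later in WITH CREDENTIAL
WITH IDENTITY = 'gresqlstorage',             -- Microsoft Azure Storage account name
SECRET = '<storage account access key>';     -- paste the account access key here
GO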
You will notice that I have a database called T3, and it is this database that I will be backing up to the URL as follows: Right-click on the database you want to back up and navigate to Tasks | Backup. This will start the backup task wizard for you. On the General page, you should change the backup destination from Disk to URL. Making this change will enable all the other options needed for taking a backup to a URL. You will need to provide a filename for your backup, then create the SQL Server credential you want to use to authenticate on the Windows Azure Storage container. Click on the Create Credential button to open the Create credential dialog box. There is an option to use your publishing profile, so click on the Browse button and select the publishing profile that you downloaded from the Microsoft Azure web portal. Once you have selected your publishing profile, it will prepopulate the credential name, management certificate, and subscription ID fields for you. Choose the appropriate Storage Account for your backups. Following this, you should then click on Create to create the credential. You will need to specify the Windows Azure Storage container to use for the backup. In this case, I entered sqlbackup. When you have finished, your General page should look like what is shown in the following screenshot: Following this, click on OK and the backup should run. If you want to use Transact-SQL, instead of Management Studio, to take the backup, the code would look like this: BACKUP DATABASE [T3] TO URL = N'https://gresqlstorage.blob.core.windows.net/sqlbackup/t3.bak' WITH CREDENTIAL = N'AzureCredential' , NOFORMAT, NOINIT, NAME = N'T3-Full Database Backup', NOSKIP, NOREWIND, NOUNLOAD, STATS = 10 GO This is a normal backup database statement, as it has always been, but it specifies a URL and a credential to use to take the backup as well. Restoring a backup stored on Windows Azure Storage In this section, you will learn how to restore a database using the backup you have stored on Windows Azure Storage: To carry out the restore, connect to your local instance of SQL Server in Management Studio, right-click on the databases folder, and choose the Restore database option. This will open the database restore pages. In the Source section of the General page, select the Device option, click on the dropdown and change the backup media type to URL, and click on Add. In the next screen, you have to specify the Windows Azure Storage account connection information. You will need to choose the storage account to connect to and specify an access key to allow SQL Server to connect to Microsoft Azure. You can get this from the Storage section of the Microsoft Azure portal. After this, you will need to specify a credential to use. In this case, I will use the credential that was created when I took the backup earlier. Click on Connect to connect to Microsoft Azure. You will then need to chose the backup to restore from. In this case, I'll use the backup of the T3 database that was created in the preceding section. You can then complete the restore options as you would do with a local backup. In this case, the database has been called T3_cloud, mainly for reference so that it can be easily identified. If you want to restore the existing database, you need to use the WITH REPLACE command in the restore statement. 
The restore statement would look like this: RESTORE DATABASE t3 FROM URL = N'https://gresqlstorage.blob.core.windows.net/sqlbackup/t3.bak' WITH CREDENTIAL = N'AzureCredential', REPLACE, STATS = 5 When the restore has been completed, you will have a new copy of the database on the local SQL Server instance. SQL Server Managed Backup to Microsoft Azure Building on the ability to take a backup of a SQL Server database to a URL and Microsoft Azure Storage, you can now set up Managed Backups of your SQL Server databases to Microsoft Azure. This allows you to automate your database backups to Microsoft Azure Storage. All database administrators appreciate automation, as it frees their time to focus on other projects. So, this feature will be useful to you. It's fully customizable, and you can build your backup strategy around the transaction workload of your database and set a retention policy. Configuring SQL Server-managed backups to Microsoft Azure In order to set up and configure Managed Backups in SQL Server 2014, a new stored procedure has been introduced to configure Managed Backups on a specific database. The stored procedure is called smart_admin.sp_set_db_backup. The syntax for the stored procedure is as follows: EXEC smart_admin.sp_set_db_backup [@database_name = ] 'database name' ,[@enable_backup = ] { 0 | 1} ,[@storage_url = ] 'storage url' ,[@retention_days = ] 'retention_period_in_days' ,[@credential_name = ] 'sql_credential_name' ,[@encryption_algorithm] 'name of the encryption algorithm' ,[@encryptor_type] {'CERTIFICATE' | 'ASYMMETRIC_KEY'} ,[@encryptor_name] 'name of the certificate or asymmetric key' This stored procedure will be used to set up Managed Backups on the T3 database. The SQL Server Agent will need to be running for this to work. In my case, I executed the following code to enable Managed Backups on my T3 database: Use msdb; GO EXEC smart_admin.sp_set_db_backup @database_name='T3' ,@enable_backup=1 ,@storage_url = 'https://gresqlstorage.blob.core.windows.net/' ,@retention_days=5 ,@credential_name='AzureCredential' ,@encryption_algorithm =NO_ENCRYPTION To view the Managed Backup information, you can run the following query: Use msdb GO SELECT * FROM smart_admin.fn_backup_db_config('T3') The results should look like this: To disable the Managed Backup, you can use the smart_admin.sp_set_db_backup procedure to disable it: Use msdb; GO EXEC smart_admin.sp_set_db_backup @database_name='T3' ,@enable_backup=0 Encryption For the first time in SQL Server, you can encrypt your backups using the native SQL Server backup tool. In SQL Server 2014, the backup tool supports several encryption algorithms, including AES 128, AES 192, AES 256, and Triple DES. You will need a certificate or an asymmetric key when taking encrypted backups. Obviously, there are a number of benefits to encrypting your SQL Server database backups, including securing the data in the database. This can also be very useful if you are using transparent data encryption (TDE) to protect your database's data files. Encryption is also supported using SQL Server Managed Backup to Microsoft Azure. Creating an encrypted backup To create an encrypted SQL Server backup, there are a few prerequisites that you need to ensure are set up on the SQL Server. 
Creating a database master key for the master database Creating the database master key is important because it is used to protect the private key certificate and the asymmetric keys that are stored in the master database, which will be used to encrypt the SQL Server backup. The following Transact-SQL will create a database master key for the master database: USE master; GO CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@$$W0rd'; GO In this example, a simple password has been used. In a production environment, it would be advisable to create a master key with a more secure password. Creating a certificate or asymmetric key The backup encryption process will need to make use of a certificate or asymmetric key to be able to take the backup. The following code creates a certificate that can be used to back up your databases using encryption: Use Master GO CREATE CERTIFICATE T3DBBackupCertificate WITH SUBJECT = 'T3 Backup Encryption Certificate'; GO Now you can take an encrypted backup of the database. Creating an encrypted database backup You can now take an encrypted backup of your databases. The following Transact-SQL statements back up the T3 database using the certificate you created in the preceding section: BACKUP DATABASE t3 TO DISK = N'C:\Backup\t3_enc.bak' WITH COMPRESSION, ENCRYPTION ( ALGORITHM = AES_256, SERVER CERTIFICATE = T3DBBackupCertificate ), STATS = 10 GO This is a local backup; it's located in the C:\Backup folder, and the encryption algorithm used is AES_256. Summary This article has shown some of the new backup features of SQL Server 2014. The ability to back up to Microsoft Azure Storage means that you can implement a robust backup and restore strategy at a relatively lower cost. Resources for Article: Further resources on this subject: SQL Server 2008 R2: Multiserver Management Using Utility Explorer [Article] Microsoft SQL Server 2008 High Availability: Installing Database Mirroring [Article] Manage SQL Azure Databases with the Web Interface 'Houston' [Article]


User Interactivity – Mini Golf

Packt
23 Apr 2014
7 min read
(For more resources related to this topic, see here.) Using user input and touch events A touch event occurs each time a user touches the screen, drags, or releases the screen, and also during an interruption. Touch events begin with a user touching the screen. The touches are all handled by the UIResponder class, which causes a UIEvent object to be generated; this is then passed to your code along with a UITouch object for each finger touching the screen. For most of the code you work on, you will be concerned with the basic information. Where did the user start to touch the screen? Did they drag their finger across the screen? And, where did they let go? You will also want to know if the touch event got interrupted or canceled by another event. All of these situations can be handled in your view controller code using the following four methods (you can probably figure out what each one does): -(void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event -(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event -(void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event These methods are structured pretty simply, and we will show you how to implement them a little later in the article. When a touch event occurs on an object, let's say a button, that object will maintain the touch event until it has ended. So, if you touch a button and drag your finger outside the button, the touch-ended event is sent to the button as a touch up outside event. This is very important to know when you are setting up touches for your code. If you have a set of touch events associated with your background and another set associated with a button in the foreground, you need to understand that if touchesBegan begins on the button and touchesEnded ends on the background, touchesEnded is still passed to your button. In this article, we will be creating a simple Mini Golf game. We start with the ball in a standard start position on the screen. The user can touch anywhere on the screen, drag their finger in any direction, and release. The ball will move in the direction the player dragged their finger, and the longer the distance from their original position, the more powerful the shot will be. In the Mini Golf game, we put the touch events on the background and not the ball for one simple reason: when the user's ball stops close to the edge of the screen, we still want them to be able to drag a long distance for more power. Using gestures in iOS apps There are special types of touch events known as gestures. Gestures are used all over in iOS apps, but most notable is their use in maps. The simplest gesture would be tap, which is used for selecting items; however, gestures such as pinch and rotate are used for scaling and, well, rotating. The most common gestures have been put together in the Gesture Recognizer classes from Apple, and they work on all iOS devices. These gestures are tap, pinch, rotate, swipe, pan, and long press. You will see all of the following items listed in your Object Library: Tap: This is a simple touch and release on the screen. A tap may also include multiple taps. Pinch: This is a gesture involving a two-finger touch and dragging of your fingers together or apart. Rotate: This is a gesture involving a two-finger touch and rotation of your fingers in a circle. Swipe: This gesture involves holding a touch down and then dragging to a release point. Pan: This gesture involves holding a touch and dragging. 
A pan does not have an end point like the swipe but instead recognizes direction. Long press: This gesture involves a simple touch and hold for a prespecified minimum amount of time. The following screenshot displays the Object Library: Let's get started on our new game as we show you how to integrate a gesture into your game: We'll start by making our standard Single View Application project. Let's call this one MiniGolf and save it to a new directory, as shown in the following screenshot: Once your project has been created, select it and uncheck Landscape Left and Landscape Right in your Deployment Info. Create a new Objective-C class file for our game view controller with a subclass of UIViewController, which we will call MiniGolfViewController so that we don't confuse it with our other game view controllers: Next, you will need to drag it into the Resources folder for the Mini Golf game: Select your Main.storyboard, and let's get our initial menu set up. Drag in a new View Controller object from your Objects Library. We are only going to be working with two screens for this game. For the new View Controller object, let's set the custom class to your new MiniGolfViewController: Let's add in a background and a button to segue into our new Mini Golf View Controller scene. Drag out an Image View object from our Object Library and set it to full screen. Drag out a Button object and name it Play Game. Press control, click on the Play Game button, and drag over to the Mini Golf View Controller scene to create a modal segue, as shown in the next screenshot. We wouldn't normally do this with a game, but let's use gestures to create a segue from the main menu to start the game. In the ViewController.m file, add in your unwind code: - (IBAction)unwindToThisView:(UIStoryboardSegue *)unwindSegue { } In your ViewController.h file, add in the corresponding action declaration: - (IBAction)unwindToThisView:(UIStoryboardSegue *)unwindSegue; In the Mini Golf View Controller, let's drag out an Image View object, set it to full screen, and set the image to gameBackground.png. Drag out a Button object and call it Exit Game and a Label object and set Strokes to 0 in the text area. It should look similar to the following screenshot: Press control and click-and-drag your Exit Game button to the Exit unwind segue button, as shown in the following screenshot: If you are planning on running this game in multiple screen sizes, you should pin your images and buttons in place. If you are just running your code in the iPhone 4 simulator, you should be fine. That should just about do it for the usual prep work. Now, let's get into some gestures. If you run your game right now, you should be able to go into your game area and exit back out. But what if we wanted to set it up so that when you touch the screen and swipe right, you would use a different segue to enter the gameplay area? The actions performed on the gestures are as follows: Select your view and drag out Swipe Gesture Recognizer from your Objects Library onto your view: Press control and drag your Swipe Gesture Recognizer to your Mini Golf View Controller just like you did with your Play Game button. Choose a modal segue and you are done. For each of the different types of gestures that are offered in the Object Library, there are unique settings in the attributes inspector. For the swipe gesture, you can choose the swipe direction and the number of touches required to run. 
For example, you could set up a two-finger left swipe just by changing a few settings, as shown in the following screenshot: You could just as easily press control and click-and-drag from your Swipe Gesture Recognizer to your header file to add IBAction associated with your swipe; this way, you can control what happens when you swipe. You will need to set Type to UISwipeGestureRecognizer: Once you have done this, a new IBAction function will be added to both your header file and your implementation file: - (IBAction)SwipeDetected:(UISwipeGestureRecognizer *)sender; and - (IBAction)SwipeDetected:(UISwipeGestureRecognizer *)sender { //Code for when a user swipes } This works the same way for each of the gestures in the Object Library.
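Returning to the Mini Golf shot mechanic described at the start of this article, the plain UIResponder callbacks (rather than a gesture recognizer) are enough to capture the drag that drives the ball. The following is a minimal sketch of what that might look like in MiniGolfViewController.m; shotVector and power are illustrative names, and the code that actually moves the ball is deliberately left out:

#import "MiniGolfViewController.h"

@interface MiniGolfViewController ()
@property (nonatomic) CGPoint touchStart; // where the finger first landed
@end

@implementation MiniGolfViewController

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    self.touchStart = [touch locationInView:self.view]; // remember the start of the drag
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint touchEnd = [touch locationInView:self.view];

    // The ball travels in the direction the finger was dragged...
    CGPoint shotVector = CGPointMake(touchEnd.x - self.touchStart.x,
                                     touchEnd.y - self.touchStart.y);

    // ...and the longer the drag, the more powerful the shot
    CGFloat power = hypotf(shotVector.x, shotVector.y);

    // Apply shotVector and power to the ball here (omitted in this sketch)
    NSLog(@"Shot vector: %@, power: %.1f", NSStringFromCGPoint(shotVector), power);
}

@end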


Using cross-validation

Packt
22 Apr 2014
7 min read
(For more resources related to this topic, see here.) To start with, cross-validation is a common validation technique that can be used to evaluate machine learning models. Cross-validation essentially measures how well the estimated model will generalize to some given data. This data is different from the training data supplied to our model, and is called the cross-validation set, or simply validation set, of our model. Cross-validation of a given model is also called rotation estimation. If an estimated model performs well during cross-validation, we can assume that the model can understand the relationship between its various independent and dependent variables. The goal of cross-validation is to provide a test to determine if a formulated model is overfit on the training data. From an implementation perspective, cross-validation is a kind of unit test for a machine learning system. A single round of cross-validation generally involves partitioning all the available sample data into two subsets and then performing training on one subset and validation and/or testing on the other subset. Several such rounds, or folds, of cross-validation must be performed using different sets of data to reduce the variance of the overall cross-validation error of the given model. Any particular measure of the cross-validation error should be calculated as the average of this error over the different folds in cross-validation. There are several types of cross-validation we can implement as a diagnostic for a given machine learning model or system. Let's briefly explore a few of them as follows: A common type is k-fold cross-validation, in which we partition the cross-validation data into k equal subsets. The model is then trained on k-1 of these subsets and cross-validated on the single remaining subset. A simple variation of k-fold cross-validation is 2-fold cross-validation, which is also called the holdout method. In 2-fold cross-validation, the training and cross-validation subsets of data will be almost equal in proportion. Repeated random subsampling is another simple variant of cross-validation in which the sample data is first randomized or shuffled and then used as training and cross-validation data. This method is notably not dependent on the number of folds used for cross-validation. Another form of k-fold cross-validation is leave-one-out cross-validation, in which only a single record from the available sample data is used for cross-validation. Leave-one-out cross-validation is essentially k-fold cross-validation in which k is equal to the number of samples or observations in the sample data. Cross-validation basically treats the estimated model as a black box, that is, it makes no assumptions about the implementation of the model. We can also use cross-validation to select features in a given model by using cross-validation to determine the feature set that produces the best fit model over the given sample data. Of course, there are a couple of limitations of cross-validation, which can be summarized as follows: If a given model needs to perform feature selection internally, we must perform cross-validation for each selected feature set in the given model. This can be computationally expensive depending on the amount of available sample data. Cross-validation is not very useful if the sample data comprises exactly or nearly identical samples. In summary, it's a good practice to implement cross-validation for any machine learning system that we build. 
Also, we can choose an appropriate cross-validation technique depending on the problem we are trying to model as well as the nature of the collected sample data. For the example that will follow, the namespace declaration should look similar to the following declaration: (ns my-namespace (:use [clj-ml classifiers data])) We can use the clj-ml library to cross-validate the classifier we built for the fish packaging plant. Essentially, we built a classifier to determine whether a fish is a salmon or a sea bass using the clj-ml library. To recap, a fish is represented as a vector containing the category of the fish and values for the various features of the fish. The attributes of a fish are its length, width, and lightness of skin. We also described a template for a sample fish, which is defined as follows: (def fish-template [{:category [:salmon :sea-bass]} :length :width :lightness]) The fish-template vector defined in the preceding code can be used to train a classifier with some sample data. For now, we will not bother about which classification algorithm we have used to model the given training data. We can only assume that the classifier was created using the make-classifier function from the clj-ml library. This classifier is stored in the *classifier* variable as follows: (def *classifier* (make-classifier ...)) Suppose the classifier was trained with some sample data. We must now evaluate this trained classification model. To do this, we must first create some sample data to cross-validate. For the sake of simplicity, we will use randomly generated data in this example. We can generate this data using the make-sample-fish function. This function simply creates a new vector of some random values representing a fish. Of course, we must not forget the fact that the make-sample-fish function has an in-built partiality, so we create a meaningful pattern in a number of samples created using this function as follows: (def fish-cv-data (for [i (range 3000)] (make-sample-fish))) We will need to use a dataset from the clj-ml library, and we can create one using the make-dataset function, as shown in the following code: (def fish-cv-dataset (make-dataset "fish-cv" fish-template fish-cv-data)) To cross-validate the classifier, we must use the classifier-evaluate function from the clj-ml.classifiers namespace. This function essentially performs k-fold cross-validation on the given data. Other than the classifier and the cross-validation dataset, this function requires the number of folds that we must perform on the data to be specified as the last parameter. Also, we will first need to set the class field of the records in fish-cv-dataset using the dataset-set-class function. We can define a single function to perform these operations as follows: (defn cv-classifier [folds] (dataset-set-class fish-cv-dataset 0) (classifier-evaluate *classifier* :cross-validation fish-cv-dataset folds)) We will use 10 folds of cross-validation on the classifier. 
Since the classifier-evaluate function returns a map, we bind this return value to a variable for further use, as follows: user> (def cv (cv-classifier 10)) #'user/cv We can fetch and print the summary of the preceding cross-validation using the :summary keyword as follows: user> (print (:summary cv)) Correctly Classified Instances 2986 99.5333 % Incorrectly Classified Instances 14 0.4667 % Kappa statistic 0.9888 Mean absolute error 0.0093 Root mean squared error 0.0681 Relative absolute error 2.2248 % Root relative squared error 14.9238 % Total Number of Instances 3000 nil As shown in the preceding code, we can view several statistical measures of performance for our trained classifier. Apart from the correctly and incorrectly classified records, this summary also describes the Root Mean Squared Error (RMSE) and several other measures of error in our classifier. For a more detailed view of the correctly and incorrectly classified instances in the classifier, we can print the confusion matrix of the cross-validation using the :confusion-matrix keyword, as shown in the following code: user> (print (:confusion-matrix cv)) === Confusion Matrix === a b <-- classified as 2129 0 | a = salmon 9 862 | b = sea-bass nil As shown in the preceding example, we can use the clj-ml library's classifier-evaluate function to perform a k-fold cross-validation on any given classifier. Although we are restricted to using classifiers from the clj-ml library when using the classifier-evaluate function, we must strive to implement similar diagnostics in any machine learning system we build.
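To make the earlier description of k-fold cross-validation a little more tangible, here is a small, library-free sketch of how sample data could be split into folds and paired up as training and validation sets. This is only an illustration of the partitioning idea; in practice, the classifier-evaluate function shown previously takes care of all of this for us:

(defn k-fold-splits
  "Partitions the given samples into k folds and returns a sequence of
  [training-data validation-data] pairs, one pair per fold."
  [samples k]
  (let [fold-size (int (Math/ceil (/ (count samples) (double k))))
        folds     (partition-all fold-size samples)]
    (for [i (range (count folds))]
      [(apply concat (concat (take i folds) (drop (inc i) folds))) ; train on k-1 folds
       (nth folds i)])))                                           ; validate on the remaining fold

;; For example, ten folds over the randomly generated fish samples:
;; (k-fold-splits fish-cv-data 10)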

Testing and Tracing Applications

Packt
22 Apr 2014
7 min read
(For more resources related to this topic, see here.) Tracing RabbitMQ Tracing the execution of a program is a convenient way to figure out what is really happening under the hood when reasoning about a particular behavior leads to no firm conclusion. Usually, the buck stops at the border where the application interacts with external resources such as the RabbitMQ broker. The good news is that RabbitMQ provides two tools that can be of tremendous help when it comes to tracing the interactions with a broker. The first of these tools is Tracer, an AMQP-aware network proxy that can be placed between a RabbitMQ client and a broker in order to gain insight into the interactions that are happening between each other. Tracer is available as part of the Java client download available at http://www.rabbitmq.com/download.html. After installation, Tracer can be started with the following: runjava.sh com.rabbitmq.tools.Tracer [listenPort] [connectHost] [connectPort]. All the parameters are optional. If left blank, Tracer will start a local proxy listening on port 5673 and connect to a local RabbitMQ on port 5672. Since you're happy with the defaults, you start Tracer with just the following command line: $ ./runjava.sh com.rabbitmq.tools.Tracer Now you can run the integration tests you've just created through this proxy. Do you remember that we made the connection information configurable on these tests? The approach is going to pay off now as we will configure them to go through the proxy port instead of directly hitting the RabbitMQ broker. You do this by running the following command line: $ mvn -Pintegration_tests -Dtest.rmq.addresses=localhost:5673 verify The output of Tracer is very verbose as it includes the complete details of the AMQP operations. Hereafter only the columns that show the channel ID, interaction direct (-> is client to broker and <- is opposite), and the name of the operation is reproduced, with the interactions related to the subscriber highlighted: ch#0 <- <connection.start> ch#0 -> <connection.start-ok> ch#0 <- <connection.tune> ch#0 -> <connection.tune-ok> ch#0 -> <connection.open> ch#0 <- <connection.open-ok> ch#1 -> <channel.open> ch#1 <- <channel.open-ok> ch#1 -> <queue.declare> ch#1 <- <queue.declare-ok> ch#1 -> <channel.close> ch#1 <- <channel.close-ok>() ch#1 -> <channel.open> ch#1 <- <channel.open-ok> ch#1 -> <basic.consume> ch#1 <- <basic.consume-ok> ch#2 -> <channel.open> ch#2 <- <channel.open-ok> ch#2 -> <basic.publish> ch#2 -> <channel.close> ch#1 <- <basic.deliver> ch#2 <- <channel.close-ok> ch#1 -> <basic.cancel> ch#1 <- <basic.cancel-ok> ch#1 -> <channel.close> ch#1 <- <channel.close-ok> ch#0 -> <connection.close> ch#0 <- <connection.close-ok> You can see the operations from the client and the responses from the broker, typically being named after the operation and suffixed with -ok. In essence, the following is the AMQP synopsis of the test code you're running: Establish a connection Open a channel, use it to declare the test queue, and close it Open a channel, use it to consume a queue Open a channel, use it to publish the test message, and close it Receive the message delivery, cancel the consumer, and close its channel Close the connection Notice how the connection start and tune operations are initiated by the broker as a response to establishing the connection to it. 
Also, notice that the channel number gets reused after being closed; it may seem that the same channel #1 has been used for creating the test queue and subscribing to it, but that's not the case since this channel has been explicitly closed. Only its identifier has been reused. Tracer is a very powerful tool to easily gain a deep understanding of the AMQP protocol and the usage your applications make of it. However, it requires you to insert a proxy between a client and the broker it connects to. Fear not if this is an issue; RabbitMQ has more than one trick in its bag of tracing tools. Drinking at the Firehose RabbitMQ offers the possibility of spying on all message publications and delivery operations that happens in a particular virtual host of a broker. This feature is called the Firehose tracer. When activated on a virtual host, a copy of all published and all delivered messages is sent to the amq.rabbitmq.trace exchange (which is automatically created in every virtual host). The routing key used for messages published to the amq.rabbitmq.trace exchange is publish.<exchange_name> for publication events and deliver.<queue_name> for message deliveries. The original message body is carried to the copies sent to this exchange. Extra information about the original publication or delivery event are added in a set of headers, including exchange_name for the name of the exchange where the message was originally published or redelivered if the message has been delivered more than once. You want to use the Firehose when running the integration tests to see the exchanged messages from the broker's standpoint. Before activating the Firehose on RabbitMQ, you need first to create a client application that will subscribe to the exchange and print out the messages that come to it. For this, you create the following Python script: #!/usr/bin/env python import amqp connection = amqp.Connection(host='localhost', userid='ccm-dev', password='coney123', virtual_host='ccm-dev-vhost') channel = connection.channel() EXCHANGE = 'amq.rabbitmq.trace' QUEUE = 'firehose-queue' channel.queue_declare(queue=QUEUE, durable=False, auto_delete=True, exclusive=True) channel.queue_bind(queue=QUEUE, exchange=EXCHANGE, routing_key='#') def handle_message(message): print message.routing_key, '->', message.properties, message.body print '--------------------------------' channel.basic_consume(callback=handle_message, queue=QUEUE, no_ack=True) print ' [*] Waiting for messages. To exit press CTRL+C' while channel.callbacks: channel.wait() channel.close() connection.close() After starting this script, you turn the Firehose on by running the following command line: $ sudo rabbitmqctl -p ccm-dev-vhost trace_on Starting tracing for vhost "ccm-dev-vhost" ... ...done. Now you can run the integration tests again, this time on the standard port since no proxying is needed with the Firehose: $ mvn -Pintegration_tests verify Let's now look at the following output of the Firehose consumer Python script: publish. 
-> {'application_headers': {u'node': u'rabbit@pegasus', u'exchange_name': u'', u'routing_keys': [u'amq.gen-vTMWL--04lap8s8JPbX5gA'], u'properties': {}}} 93b56787-b4f5-41e1-8c6f-d5f9b64275ca -------------------------------- deliver.amq.gen-vTMWL--04lap8s8JPbX5gA -> {'application_headers': {u'node': u'rabbit@pegasus', u'exchange_name': u'', u'redelivered': 0, u'routing_keys': u'amq.gen-vTMWL--04lap8s8JPbX5gA'], u'properties': {}}} 93b56787-b4f5-41e1-8c6f-d5f9b64275ca -------------------------------- As you can see, the publication to the default exchange (remember, its name is an empty string) and the delivery to the automatically named test queue are clearly visible. All the details that concern them are readily available in the message properties. Keep in mind that running the Firehose is taxing for the RabbitMQ broker, so when you're done with your tracing session, shut it down with the following: $ sudo rabbitmqctl -p ccm-dev-vhost trace_off Stopping tracing for vhost "ccm-dev-vhost" ... ...done. The Firehose will come handy when tracing what's happening between your different applications and your RabbitMQ brokers in depth. Keep in mind that using unique message IDs, as you've learned throughout this article, will help you a lot when the time comes to perform forensics analysis and trace the progression of messages across your complete infrastructure. Summary In this article, you have discovered powerful tracing tools to peek deeper under the hood of the AMQP protocol and the RabbitMQ broker. Resources for Article: Further resources on this subject: RabbitMQ Acknowledgements [Article] Troubleshooting in OpenStack Cloud Computing [Article] Introduction to Cloud Computing with Microsoft Azure [Article]


Best Practices for Modern Web Applications

Packt
22 Apr 2014
9 min read
(For more resources related to this topic, see here.) The importance of search engine optimization Every day, web crawlers scrape the Internet for updates on new content to update their associated search engines. People's immediate reaction to finding web pages is to load a query on a search engine and select the first few results. Search engine optimization is a set of practices used to maintain and improve search result ranks over time. Item 1 – using keywords effectively In order to provide information to web crawlers, websites provide keywords in their HTML meta tags and content. The optimal procedure to attain effective keyword usage is to: Come up with a set of keywords that are pertinent to your topic Research common search keywords related to your website Take an intersection of these two sets of keywords and preemptively use them across the website Once this final set of keywords is determined, it is important to spread them across your website's content whenever possible. For instance, a ski resort in California should ensure that their website includes terms such as California, skiing, snowboarding, and rentals. These are all terms that individuals would look up via a search engine when they are interested in a weekend at a ski resort. Contrary to popular belief, the keywords meta tag does not create any value for site owners as many search engines consider it a deprecated index for search relevance. The reasoning behind this goes back many years to when many websites would clutter their keywords meta tag with irrelevant filler words to bait users into visiting their sites. Today, many of the top search engines have decided that content is a much more powerful indicator for search relevance and have concentrated on this instead. However, other meta tags, such as description, are still being used for displaying website content on search rankings. These should be brief but powerful passages to pull in users from the search page to your website. Item 2 – header tags are powerful Header tags (also known as h-tags) are often used by web crawlers to determine the main topic of a given web page or section. It is often recommended to use only one set of h1 tags to identify the primary purpose of the web page, and any number of the other header tags (h2, h3, and so on) to identify section headings. Item 3 – make sure to have alternative attributes for images Despite the recent advance in image recognition technology, web crawlers do not possess the resources necessary for parsing images for content through the Internet today. As a result, it is advisable to leave an alt attribute for search engines to parse while they scrape your web page. For instance, let us suppose you were the webmaster of Seattle Water Sanitation Plant and wished to upload the following image to your website: Since web crawlers make use of the alt tag while sifting through images, you would ideally upload the preceding image using the following code: <img src = "flow_chart.png" alt="Seattle Water Sanitation Process Flow Chart" /> This will leave the content in the form of a keyword or phrase that can help contribute to the relevancy of your web page on search results. Item 4 – enforcing clean URLs While creating web pages, you'll often find the need to identify them with a URL ID. The simplest way often is to use a number or symbol that maps to your data for simple information retrieval. The problem with this is that a number or symbol does not help to identify the content for web crawlers or your end users. 
The solution to this is to use clean URLs. By adding a topic name or phrase into the URL, you give web crawlers more keywords to index off. Additionally, end users who receive the link will be given the opportunity to evaluate the content with more information since they know the topic discussed in the web page. A simple way to integrate clean URLs while retaining the number or symbol identifier is to append a readable slug, which describes the topic, to the end of the clean URL and after the identifier. Then, apply a regular expression to parse out the identifier for your own use; for instance, take a look at the following sample URL: http://www.example.com/post/24/golden-dragon-review The number 24, when parsed out, helps your server easily identify the blog post in question. The slug, golden-dragon-review, communicates the topic at hand to both web crawlers and users. While creating the slug, the best practice is often to remove all non-alphanumeric characters and replace all spaces with dashes. Contractions such as can't, don't, or won't, can be replaced by cant, dont, or wont because search engines can easily infer their intended meaning. It is important to also realize that spaces should not be replaced by underscores as they are not interpreted appropriately by web crawlers. Item 5 – backlink whenever safe and possible Search rankings are influenced by your website's clout throughout websites that search engines deem as trustworthy. For instance, due to the restrictive access of .edu or .gov domains, websites that use these domains are deemed trustworthy and given a higher level of authority when it comes down to search rankings. This means that any websites that are backlinked on trustworthy websites are seen at a higher value as a result. Thus, it is important to often consider backlinking on relevant websites where users would actively be interested in the content. If you choose to backlink irrelevantly, there are often consequences that you'll face, as this practice can often be caught automatically by web crawlers while comparing the keywords between your link and the backlink host. Item 6 – handling HTTP status codes properly Server errors help the client and server communicate the status of page requests in a clean and consistent manner. The following chart will review the most important server errors and what they do:
200 (Success): This loads the page and the content is contributed to SEO
301 (Permanent redirect): This redirects the page and the redirected content is contributed to SEO
302 (Temporary redirect): This redirects the page and the redirected content doesn't contribute to SEO
404 (Client error, not found): This loads the page and the content does not contribute to SEO
500 (Server error): This will not load the page and there is no content to contribute to SEO
In an ideal world, all pages would return the 200 status code. Unfortunately, URLs get misspelled, servers throw exceptions, and old pages get moved, which leads to the need for other status codes. Thus, it is important that each situation be handled to maximize communication to both web crawlers and users and minimize damage to one's search ranking. When a URL gets misspelled, it is important to provide a 301 redirect to a close match or another popular web page. This can be accomplished by using a clean URL and parsing out an identifier, regardless of the slug that follows it. 
This way, there exists content that contributes directly to the search ranking instead of just leaving a 404 page. Server errors should be handled as soon as possible. When a page does not load, it harms the experience for both users and web crawlers, and over an extended period of time, can expire that page's rank. Lastly, the 404 pages should be developed with your users in mind. When you choose not to redirect them to the most relevant link, it is important to either pass in suggested web pages or a search menu to keep them engaged with your content. The connect-rest-test Grunt plugin can be a healthy addition to any software project to test the status codes and responses from a RESTful API. You can find it at https://www.npmjs.org/package/connect-rest-test. Alternatively, while testing pages outside of your RESTful API, you may be interested in considering grunt-http-verify to ensure that status codes are returned properly. You can find it at https://www.npmjs.org/package/grunt-http-verify. Item 7 – making use of your robots.txt and site map files Often, there exist directories in a website that are available to the public but should not be indexed by a search engine. The robots.txt file, when placed in your website's root, helps to define exclusion rules for web crawling and prevent a user-defined set of search engines from entering certain directories. For instance, the following example disallows all search engines that choose to parse your robots.txt file from visiting the music directory on a website: User-agent: * Disallow: /music/ While writing navigation tools with dynamic content such as JavaScript libraries or Adobe Flash widgets, it's important to understand that web crawlers have limited capability in scraping these. Site maps help to define the relational mapping between web pages when crawlers cannot heuristically infer it themselves. On the other hand, the robots.txt file defines a set of search engine exclusion rules, and the sitemap.xml file, also located in a website's root, helps to define a set of search engine inclusion rules. The following XML snippet is a brief example of a site map that defines the attributes: <?xml version="1.0" encoding="utf-8"?> <urlset > <url> <loc>http://example.com/</loc> <lastmod>2014-11-24</lastmod> <changefreq>always</changefreq> <priority>0.8</priority> </url> <url> <loc>http://example.com/post/24/golden-dragon-review</loc> <lastmod>2014-07-13</lastmod> <changefreq>never</changefreq> <priority>0.5</priority> </url> </urlset> The attributes mentioned in the preceding code are explained in the following table: Attribute Meaning loc This stands for the URL location to be crawled lastmod This indicates the date on which the web page was last modified changefreq This indicates the page is modified and the number of times the crawler should visit as a result priority This indicates the web page's priority in comparison to the other web pages Using Grunt to reinforce SEO practices With the rising popularity of client-side web applications, SEO practices are often not met when page links do not exist without JavaScript. Certain Grunt plugins provide a workaround for this by loading the web pages, waiting for an amount of time to allow the dynamic content to load, and taking an HTML snapshot. These snapshots are then provided to web crawlers for search engine purposes and the user-facing dynamic web applications are excluded from scraping completely. 
Some examples of Grunt plugins that accomplish this need are: grunt-html-snapshots (https://www.npmjs.org/package/grunt-html-snapshots) grunt-ajax-seo (https://www.npmjs.org/package/grunt-ajax-seo)
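As a closing illustration for Items 4 and 6, the following sketch shows how a slug might be generated and how the numeric identifier could be parsed back out of a clean URL so that a misspelled slug can still be answered with a 301 redirect to the canonical address. It is plain Node.js with no framework assumed, and the lookupPost helper is hypothetical:

// clean-urls.js - a minimal sketch of slug creation and identifier parsing

// Build a slug: drop non-alphanumeric characters and replace spaces with dashes
function toSlug(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '') // contractions such as can't simply become cant
    .trim()
    .replace(/\s+/g, '-');        // spaces become dashes, never underscores
}

// Parse the numeric identifier out of a URL such as /post/24/golden-dragon-review
function parsePostId(url) {
  var match = /^\/post\/(\d+)(?:\/.*)?$/.exec(url);
  return match ? Number(match[1]) : null;
}

// Inside an http request handler, a misspelled slug could then be answered with
// a permanent redirect to the canonical clean URL (lookupPost is hypothetical):
//   var id = parsePostId(request.url);
//   var post = lookupPost(id);
//   if (post) {
//     response.writeHead(301, { 'Location': '/post/' + id + '/' + toSlug(post.title) });
//     response.end();
//   }

console.log(toSlug('Golden Dragon Review'));                 // golden-dragon-review
console.log(parsePostId('/post/24/golden-dragon-review'));   // 24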


Creating a real-time widget

Packt
22 Apr 2014
11 min read
(For more resources related to this topic, see here.) The configuration options and well thought out methods of socket.io make for a highly versatile library. Let's explore the dexterity of socket.io by creating a real-time widget that can be placed on any website and instantly interfacing it with a remote Socket.IO server. We're doing this to begin providing a constantly updated total of all users currently on the site. We'll name it the live online counter (loc for short). Our widget is for public consumption and should require only basic knowledge, so we want a very simple interface. Loading our widget through a script tag and then initializing the widget with a prefabricated init method would be ideal (this allows us to predefine properties before initialization if necessary). Getting ready We'll need to create a new folder with some new files: widget_server.js, widget_client.js, server.js, and index.html. How to do it... Let's create the index.html file to define the kind of interface we want as follows: <html> <head> <style> #_loc {color:blue;} /* widget customization */ </style> </head> <body> <h1> My Web Page </h1> <script src = http://localhost:8081 > </script> <script> locWidget.init(); </script> </body> </html> The localhost:8081 domain is where we'll be serving a concatenated script of both the client-side socket.io code and our own widget code. By default, Socket.IO hosts its client-side library over HTTP while simultaneously providing a WebSocket server at the same address, in this case localhost:8081. See the There's more… section for tips on how to configure this behavior. Let's create our widget code, saving it as widget_client.js: ;(function() { window.locWidget = { style : 'position:absolute;bottom:0;right:0;font-size:3em', init : function () { var socket = io.connect('http://localhost:8081'), style = this.style; socket.on('connect', function () { var head = document.head, body = document.body, loc = document.getElementById('_lo_count'); if (!loc) { head.innerHTML += '<style>#_loc{' + style + '}</style>'; loc = document.createElement('div'); loc.id = '_loc'; loc.innerHTML = '<span id=_lo_count></span>'; body.appendChild(loc); } socket.on('total', function (total) { loc.innerHTML = total; }); }); } } }()); We need to test our widget from multiple domains. 
We'll just implement a quick HTTP server (server.js) to serve index.html so we can access it by http://127.0.0.1:8080 and http://localhost:8080, as shown in the following code: var http = require('http'); var fs = require('fs'); var clientHtml = fs.readFileSync('index.html'); http.createServer(function (request, response) { response.writeHead(200, {'Content-type' : 'text/html'}); response.end(clientHtml); }).listen(8080); Finally, for the server for our widget, we write the following code in the widget_server.js file: var io = require('socket.io')(), totals = {}, clientScript = Buffer.concat([ require('socket.io/node_modules/socket.io-client').source, require('fs').readFileSync('widget_client.js') ]); io.static(false); io.attach(require('http').createServer(function(req, res){ res.setHeader('Content-Type', 'text/javascript; charset=utf-8'); res.end(clientScript); }).listen(8081)); io.on('connection', function (socket) { var origin = socket.request.socket.domain || 'local'; totals[origin] = totals[origin] || 0; totals[origin] += 1; socket.join(origin); io.sockets.to(origin).emit('total', totals[origin]); socket.on('disconnect', function () { totals[origin] -= 1; io.sockets.to(origin).emit('total', totals[origin]); }); }); To test it, we need two terminals; in the first one, we execute the following command: node widget_server.js In the other terminal, we execute the following command: node server.js We point our browser to http://localhost:8080 and see the counter register our visit. By opening a new tab or window and navigating to http://localhost:8080 again, we will see the counter rise by one. If we close either window, it will drop by one. We can also navigate to http://127.0.0.1:8080 to emulate a separate origin. The counter at this address is independent from the counter at http://localhost:8080. 
All requests that use the http:// protocol will be handled by the server we pass to io.attach, and all ws:// protocols will be handled by socket.io (whether or not the browser supports the ws:// protocol). We're only using the http module once, so we require it within the io.attach call; we use it's createServer method to serve all requests with our clientScript variable. Now, the stage is set for the actual socket action. We wait for a connection by listening for the connection event on io.sockets. Inside the event handler, we use a few as yet undiscussed socket.io qualities. WebSocket is formed when a client initiates a handshake request over HTTP and the server responds affirmatively. We can access the original request object with socket.request. The request object itself has a socket (this is the underlying HTTP socket, not our socket.io socket; we can access this via socket.request.socket. The socket contains the domain a client request came from. We load socket.request.socket.domain into our origin object unless it's null or undefined, in which case we say the origin is 'local'. We extract (and simplify) the origin object because it allows us to distinguish between websites that use a widget, enabling site-specific counts. To keep count, we use our totals object and add a property for every new origin object with an initial value of 0. On each connection, we add 1 to totals[origin] while listening to our socket; for the disconnect event, we subtract 1 from totals[origin]. If these values were exclusively for server use, our solution would be complete. However, we need a way to communicate the total connections to the client, but on a site by site basis. Socket.IO has had a handy new feature since Socket.IO version 0.7 that allows us to group sockets into rooms by using the socket.join method. We cause each socket to join a room named after its origin, then we use the io.sockets.to(origin).emit method to instruct socket.io to only emit to sockets that belongs to the originating sites room. In both the io.sockets connection and socket disconnect events, we emit our specific totals to corresponding sockets to update each client with the total number of connections to the site the user is on. The widget_client.js file simply creates a div element called #_loc and updates it with any new totals it receives from widget_server.js. There's more... Let's look at how our app could be made more scalable, as well as looking at another use for WebSockets. Preparing for scalability If we were to serve thousands of websites, we would need scalable memory storage, and Redis would be a perfect fit. It operates in memory but also allows us to scale across multiple servers. We'll need Redis installed along with the Redis module. We'll alter our totals variable so it contains a Redis client instead of a JavaScript object: var io = require('socket.io')(), totals = require('redis').createClient(), //other variables Now, we modify our connection event handler as shown in the following code: io.sockets.on('connection', function (socket) { var origin = (socket.handshake.xdomain) ? url.parse(socket.handshake.headers.origin).hostname : 'local'; socket.join(origin); totals.incr(origin, function (err, total) { io.sockets.to(origin).emit('total', total); }); socket.on('disconnect', function () { totals.decr(origin, function (err, total) { io.sockets.to(origin).emit('total', total); }); }); }); Instead of adding 1 to totals[origin], we use the Redis INCR command to increment a Redis key named after origin. 
There's more...

Let's look at how our app could be made more scalable, as well as at another use for WebSockets.

Preparing for scalability

If we were to serve thousands of websites, we would need scalable memory storage, and Redis would be a perfect fit. It operates in memory but also allows us to scale across multiple servers. We'll need Redis installed, along with the redis module. We'll alter our totals variable so that it contains a Redis client instead of a JavaScript object:

var io = require('socket.io')(),
  totals = require('redis').createClient(),
  //other variables

Now, we modify our connection event handler as shown in the following code:

io.sockets.on('connection', function (socket) {
  var origin = (socket.handshake.xdomain)
    ? url.parse(socket.handshake.headers.origin).hostname
    : 'local';
  socket.join(origin);
  totals.incr(origin, function (err, total) {
    io.sockets.to(origin).emit('total', total);
  });
  socket.on('disconnect', function () {
    totals.decr(origin, function (err, total) {
      io.sockets.to(origin).emit('total', total);
    });
  });
});

Instead of adding 1 to totals[origin], we use the Redis INCR command to increment a Redis key named after origin. Redis automatically creates the key if it doesn't exist. When a client disconnects, we do the reverse and readjust the totals using DECR.

WebSockets as a development tool

When developing a website, we often change something small in our editor, upload our file (if necessary), refresh the browser, and wait to see the results. What if the browser would refresh automatically whenever we saved any file relevant to our site? We can achieve this with the fs.watch method and WebSockets. The fs.watch method monitors a directory, executing a callback whenever a change to any of the files in the folder occurs (but it doesn't monitor subfolders).

The fs.watch method is dependent on the operating system. To date, fs.watch has also been historically buggy (mostly under Mac OS X). Therefore, until further advancements, fs.watch is suited purely to development environments rather than production (you can monitor how fs.watch is doing by viewing the open and closed issues at https://github.com/joyent/node/search?q=fs.watch&ref=cmdform&state=open&type=Issues).

Our development tool could be used alongside any framework, from PHP to static files. For the server counterpart of our tool, we'll configure watcher.js:

var io = require('socket.io')(),
  fs = require('fs'),
  totals = {},
  watcher = function () {
    var socket = io.connect('ws://localhost:8081');
    socket.on('update', function () {
      location.reload();
    });
  },
  clientScript = Buffer.concat([
    require('socket.io/node_modules/socket.io-client').source,
    Buffer(';(' + watcher + '());')
  ]);

io.static(false);

io.attach(require('http').createServer(function (req, res) {
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

fs.watch('content', function (e, f) {
  if (f[0] !== '.') {
    io.sockets.emit('update');
  }
});

Most of this code is familiar. We make a socket.io server (on a different port to avoid clashing), generate a concatenated socket.io.js plus client-side watcher code file, and deliver it via our attached server. Since this is a quick tool for our own development use, our client-side code is written as a normal JavaScript function (our watcher variable), converted to a string while wrapping it in self-calling function code, and then changed to a Buffer so that it's compatible with Buffer.concat.

The last piece of code calls the fs.watch method, where the callback receives the event name (e) and the filename (f). We check that the filename isn't a hidden dotfile. During a save event, some filesystems or editors will change the hidden files in the directory, thus triggering multiple callbacks and sending several messages at high speed, which can cause issues for the browser.

To use it, we simply place it as a script within every page that is served (probably using server-side templating). However, for demonstration purposes, we simply place the following code into content/index.html:

<script src="http://localhost:8081/socket.io/watcher.js"></script>

Once we fire up server.js and watcher.js, we can point our browser to http://localhost:8080 and see the familiar excited Yay!. Any changes we make and save (to index.html, styles.css, or script.js, or the addition of new files) will be almost instantly reflected in the browser. The first change we can make is to get rid of the alert box in the script.js file so that the changes can be seen fluidly.

Summary

We saw how we could create a real-time widget in this article.
We also used some third-party modules to explore some of the potential of the powerful combination of Node and WebSockets.

Resources for Article:

Further resources on this subject:
- Understanding and Developing Node Modules [Article]
- So, what is Node.js? [Article]
- Setting up Node [Article]

Differences in style between Java and Scala code

Packt
22 Apr 2014
6 min read
(For more resources related to this topic, see here.)

Writing an algorithm in Java follows an imperative style, that is, a sequence of statements that change a program state. Scala, focusing primarily on functional programming, adopts a more declarative approach, where everything is an expression rather than a statement. Let's illustrate this with an example. In Java, you would commonly find the following code snippet:

...
String customerLevel = null;
if (amountBought > 3000) {
  customerLevel = "Gold";
} else {
  customerLevel = "Silver";
}
...

The Scala equivalent consists of the following code snippet:

scala> val amountBought = 5000
amountBought: Int = 5000

scala> val customerLevel = if (amountBought > 3000) "Gold" else "Silver"
customerLevel: String = Gold

Note that unlike the Java statements, if is now embedded as part of the resulting evaluated expression. In general, working where everything is evaluated as an expression (and here an immutable expression) will make it much easier for reuse as well as composition. Being able to chain the result of one expression to the next will give you a concise way of expressing fairly complicated transformations that would require much more code in Java.
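For instance, reusing the same Gold/Silver rule, a classification and a filter can be chained into a single expression (a small illustrative sketch, not taken from the original text):

scala> val goldLevels = List(2500, 5000, 800) map (amount => if (amount > 3000) "Gold" else "Silver") filter (_ == "Gold")
goldLevels: List[String] = List(Gold)

Each step produces a new immutable value, so the whole transformation reads as a single pipeline rather than as a sequence of state changes.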
Adjusting the code layout

As the intent of functional programming is to minimize state behavior, it often consists of short lambda expressions so that you can visualize a fairly complicated transformation in an elegant and concise way, in many cases even as one-liners. For this reason, general formatting in Scala recommends that you use only two-space indentation instead of the four-space indentation that is generally accepted in Java code, as shown in the following code snippet:

scala> class Customer(
  val firstName: String,
  val lastName: String,
  val age: Int,
  val address: String,
  val country: String,
  val hasAGoodRating: Boolean
) {
  override def toString() = s" $firstName $lastName"
}
defined class Customer

If you have many constructor/method parameters, having them aligned as previously illustrated makes it easier to change them without the need to reformat the whole indentation. The same holds if you want to refactor the class with a longer name, for example, VeryImportantCustomer instead of Customer; it will make for smaller and more precise diffs against your version control system (Git, Subversion, and so on).

Naming conventions

Conventions for naming packages, classes, fields, and methods in camel case generally follow the Java conventions. Note that you should avoid the underscore (_) in variable names (such as first_name or _first_name), as the underscore has a special meaning in Scala (self or this in anonymous functions). However, constants, most likely declared as private static final myConstant in Java, are normally declared in Scala in upper camel case, such as in the following enclosing object:

scala> object Constants {
     |   val MyNeverChangingAge = 20
     | }
defined module Constants

Choosing a meaningful name for variables and methods should always be a priority in Java, and it is often recommended to use rather long variable names to precisely describe what a variable or method represents. In Scala, things are a little bit different; meaningful names are, of course, a good way to make code more readable. However, as we are at the same time aiming at making behavior transformations concise through the use of functions and lambda expressions, short variable names can be an advantage if you can capture a whole piece of functionality in a short block of code.

For example, incrementing a list of integers in Scala can simply be expressed as follows:

scala> val amounts = List(3,6,7,10) map ( x => x + 1 )
amounts: List[Int] = List(4, 7, 8, 11)

Although using x as a variable name is often discouraged in Java, here it does not matter that much, as the variable is not reused and we can grasp the transformation it performs at once. There are many short or long alternatives to the previous lambda syntax that will produce the same result. So, which one to choose? Some of the alternatives are as follows:

scala> val amounts = List(3,6,7,10) map ( myCurrentAmount => myCurrentAmount + 1 )
amounts: List[Int] = List(4, 7, 8, 11)

In this case, a long variable name breaks a clear and concise one-liner into two lines of code, thereby making it difficult to understand. Meaningful names make more sense here if we start expressing logic on several lines, as shown in the following code snippet:

scala> val amounts = List(3,6,7,10) map { myCurrentAmount =>
  val result = myCurrentAmount + 1
  println("Result: " + result)
  result
}
Result: 4
Result: 7
Result: 8
Result: 11
amounts: List[Int] = List(4, 7, 8, 11)

A shorter but still expressive name is sometimes a good compromise to indicate to the reader that this is an amount we are currently manipulating in our lambda expression, as follows:

scala> val amounts = List(3,6,7,10) map ( amt => amt + 1 )
amounts: List[Int] = List(4, 7, 8, 11)

Finally, the shortest syntax of all, well accepted by fluent Scala programmers for such a simple increment function, is as follows:

scala> val amounts = List(3,6,7,10) map ( _ + 1 )
amounts: List[Int] = List(4, 7, 8, 11)

Underscores are also encountered in Scala for expressing more complicated operations in an elegant but more awkward way, as in the following sum operation using the foldLeft method, which accumulates state from one element to the next:

scala> val sumOfAmounts = List(3,6,7,10).foldLeft(0)( _ + _ )
sumOfAmounts: Int = 26

Instead of explicitly having 0 as the initial value for the sum, we can write this summation a bit more elegantly by using the reduce method, which is similar to foldLeft. However, here we take the first element of the collection as the initial value (so 3 will be the initial value), as shown in the following command:

scala> val sumOfAmounts = List(3,6,7,10) reduce ( _ + _ )
sumOfAmounts: Int = 26

As far as style is concerned, fluent Scala programmers will not have any problem reading this code. However, if the state accumulation operation is more complicated than a simple + operation, it might be wise to write it more explicitly, as shown in the following command:

scala> val sumOfAmounts = List(3,6,7,10) reduce ( (total, element) => total + element )
sumOfAmounts: Int = 26

Summary

In this article, we discussed the style differences and the naming conventions that we must be aware of to write easier-to-read and more maintainable code.

Resources for Article:

Further resources on this subject:
- The Business Layer (Java EE 7 First Look) [article]
- Getting Started with JavaFX [article]
- Enterprise JavaBeans [article]

The Fastest Way to Go from an Idea to a Prezi

Packt
22 Apr 2014
12 min read
(For more resources related to this topic, see here.)

Mission briefing

In this article, we will create a Prezi presentation based on just an idea. Often, people have an idea for a presentation they have to build, but they don't have any idea about what the exact content should be. They end up including a lot of details and are not able to build a clear structure for their presentation. A good presentation consists of a clear message, a few main topics, and a clear structure for all the information.

Brainstorming is ideal for generating ideas and content (diverge), but don't forget to mark the main ideas and get rid of the information you don't really need (converge). Divergent thinking is about expanding your ideas, looking for alternatives, quantity, trial and error, chaos, and intuition. With divergent thinking, you can explore as many aspects of a concept as possible. Convergent thinking is about focus, selecting ideas, choosing, structuring, organizing, quality, and logic. Convergent thinking is the opposite of divergent thinking.

It's important to create a distinction between the main topics and the details. Ideally, you should have three main topics. That's enough. Not all information is of the same importance. You'll have main topics, subtopics, and details.

The result of our structuring session is a clear mind map (in Prezi!) that we will use as a basis for our presentation in Prezi. Using a mind map for your Prezi presentation is the easiest way to use Prezi in a good way. This way of presenting always works, because you zoom in for the details and zoom out for the overview.

Why is it awesome?

Brainstorming is a great way to develop the content for your presentation. Put your brains to work and you will be able to come up with the best and most creative ideas. Yes, you too can be creative! It's easy. Just follow this article and you'll learn to generate ideas in Prezi and create a great prezi out of it. We'll also keep you from falling into the trap of trying to brainstorm and structure at the same time, as that would just complicate things. In this article, you will learn how to first diverge, converge, and finally fill in the details.

Your Hotshot objectives

The major tasks necessary to complete this article are as follows:
- You have an idea, but where do you start?
- From brainstorming to mind mapping
- How should you present your mind map?

Mission checklist

We have no special needs for this article. We'll keep it fast and simple, and we'll be only using Prezi. The only thing we need to start off is an idea. To make sure that we focus on the process and not too much on the subject itself, we decided to choose a light subject for this article. The subject should trigger your brain so that ideas start popping up immediately. Our first idea is to create a presentation about "The Future". This should give you some inspiration!

What about goal, message, and audience?

If you think we are forgetting something in the process of creating a presentation, you could be right. Every presentation should start with the following three questions in order to define the goal, message, and audience of the presentation:
- What do I want to achieve with this presentation? (goal)
- What do I want the audience to remember? (message)
- Who is the audience?

Most people never ask these questions and immediately start creating and designing their presentation.
If you are not asking yourself what the goal of your presentation is, and if you don't define it, you can never meet that goal and your presentation might never be a success. However, the focus of this article is on brainstorming, mind mapping, and being creative; therefore, we will not ask these questions yet. The aim of this article is to practice brainstorming and structuring, and therefore we will leave out these three very important questions. You can look at this article as a free presentation assignment to practice presenting. This could not only be an assignment for school, but also for the company or organization in which you work. In this article, we'll start with the brainstorming, and we'll define our goal, message, and audience later.

You have an idea but where do you start?

We use our idea "The Future" as a starting point for our presentation. We could immediately start in Prezi, but sometimes it's better not to start directly with a computer. First, we need to free our mind.

Engage thrusters

Before you start brainstorming, it's a good idea to free your mind and get ready for some creativity. Pick one task from the following list. Choose the one you never do (or do the least):
- Stand up and take a five-minute walk
- Listen to the sounds around you really carefully for five minutes
- Sing your favorite song from your childhood
- Empty your Lego box on the table and start building something
- Play a game of darts or pool
- Watch a funny YouTube movie
- Run around crazily for one minute
- Laugh out loud for at least one full minute
- Buy and eat an ice cream

Objective complete – mini debriefing

Starting your brainstorm is like preparing for a new task. Brainstorming is fun but also requires hard work. Make sure you are in the right mood, free your mind, and stay focused. You don't have to be relaxed to be creative, as a lot of people think; you need to be active.

From brainstorm to mind map

Our next task is to turn our brainstorm into a useful mind map. In this process, we need to structure our information, get rid of the information we don't need, and decide which topics will be our main topics. When we limit the number of main topics to three, we'll reach the core of our information.

Engage thrusters

First, we will mark our starting point in our brainstorm (the first idea and our temporary title) and the main topics. We'll use the Prezi text styles for this. Double-click on the first idea, The Future, choose the text style Title, and make this text big (if it's not already big enough). Use the bigger A in the menu to enlarge the text or click on the small circle in the bottom-right corner of the text box, as shown in the following screenshot:

Resizing text

You have two options to resize text: inside the text box (editing mode) or via the transformation tool. If you are inside the text box (by double-clicking on the text), you can resize the text by using the small or bigger A or the small circle in the bottom-right corner of the text box. Click on the text box once and use the transformation tool to resize the text. The plus sign is used to enlarge the text and the minus sign is used to make it smaller. You can also click-and-drag the small blue squares at the corners of the text box to resize text.

Now, choose your three most important words from all the words on the canvas. These will become your main topics. Double-click on these words and choose the text style Subtitle for them. Make these words bigger, but not as big as the title.
This is shown in the following screenshot:

The next step is the most interesting one. For every other word on the canvas, decide whether it should be part of one of the main topics or whether it can be removed. If a word is part of a main topic, move the word to that main topic. If new words pop up in your mind, it's okay to add them to the canvas. When you finish this process, your canvas will look like the following screenshot. This is the information structure (or mind map) that will be the basis of your presentation.

To show relations or to emphasize information, you can add arrows and lines or use the highlighter. This helps you structure the information further and determine the objects on which you want to focus. It can also help you think about the flow of the information during your presentation. Arrows and lines are for when you want to show relationships and associations. The highlighter is suitable when you want to emphasize information. You can even make small drawings with the highlighter. Finally, for extra accentuation, you can also use an arrow that points at the information, just like the red arrow shown in the following screenshot. Creating a double-sided arrow is not a standard option in Prezi, so if you need one, you have to create it yourself by using two separate arrows and putting them next to each other.

Objective complete – mini debriefing

In this task, you learned how to turn your brainstorm into a mind map. You used Title for the subject and Subtitle for the three main topics. Our subject is The Future; our main topics are metaphor, the past, and yours. Then, you decided for every other word whether it is part of a main topic or should be removed. It's an interesting process, and a few new words might pop up in your mind. In our case, the word metaphor popped up during our process. A metaphor is a way of describing a subject as something else to make a stronger visual. We will be using a crystal ball as a metaphor for The Future. The result of this process is a mind map for your presentation. You can use arrows, lines, and a highlighter to create relationships or emphasize information.

Classified intel

You can take your information structure a step further by using frames. Frames are a great way to visualize a structure. You can use frames to group content, so that if you move a frame, the entire content will move as well. This will enable you to rearrange the information really fast just by moving the frames. Not on paper and not even with post-its can you move information this easily. Use frames in frames to show subtopics and more detailed information. Frames do not have to be of the same size. You can resize them in the same way as you resize text. The bigger a frame, the more important the information. Our mind map would look like the following screenshot if we use frames:

How should you present your mind map?

A lot of people present their prezi online. That's okay as long as you have a good and stable Internet connection. If you do not have a good Internet connection, or if you're not sure about it, you had better download your prezi for your presentation.

Engage thrusters

It doesn't matter which Prezi account you have. You can download your prezi and present it offline with any type of account. The YouTube videos you have added to your prezi cannot be played if you don't have an Internet connection. These movies are not inserted into your prezi; they are only links to the YouTube movies.
Other movies, which can be inserted by selecting From file (PDF, Video)... under Insert, will always play, even without an Internet connection.

In your prezis, go to the prezi you want to download and click on the Download button. Click on Presenting (the one on the left-hand side) and then click on the Download button, as shown in the following screenshot. A ZIP file will download to your computer. This ZIP file contains everything you need to present, for both Windows and Mac. You don't need a browser, and there is a built-in Flash Player.

Unzip the ZIP file and double-click on Prezi.exe if you are presenting on a Windows machine. The other file is for presenting on a Mac. The file will automatically open in the Flash Player. Click on the Fullscreen button in the bottom-right corner to show the prezi in fullscreen, as shown in the following screenshot. Now, you are ready to present! If you want, you can use a remote to click through your presentation instead of using the keyboard arrows. A remote looks much more professional.

Neither the iPad nor the iPhone supports Flash. However, you can download the free Prezi apps for both iPad and iPhone. Visit the links prezi.com/ipad/ and prezi.com/iphone/ for more information.

Objective complete – mini debriefing

In this task, we explained how you can present your prezi without an Internet connection. It's okay to present online, but your Internet connection must be stable and have enough bandwidth. If it's not working well, it might slow down your presentation and lead to frustration for both you and your audience.

A Hotshot challenge

Now that you've seen how you can brainstorm and develop a structure in Prezi, create a presentation to practice these skills. Choose one of the following subjects, start your brainstorm, create a mind map, fill it with information, and present it to your friends and family!
- My ideal house
- When I dream, I think of…
- My favorite food
- When I'm old…
- On an extra day off, I'll…

Summary

In this article, you learned how to create a prezi by using brainstorming techniques: how to brainstorm in Prezi, how to go from a brainstorm to a mind map, and how to structure your content.

Resources for Article:

Further resources on this subject:
- Using Prezi - The Online Presentation Software Tool [Article]
- Turning your PowerPoint presentation into a Prezi [Article]
- Getting Started with Impressive Presentations [Article]