How-To Tutorials

Appium Essentials

Packt | 09 Apr 2015 | 9 min read
In this article by Manoj Hans, author of the book Appium Essentials, we will see how to automate mobile apps on real devices. Appium provides support for automating apps on real devices, so we can test apps on a physical device and experience the look and feel that an end user would.

Important initial points

Before starting with Appium, make sure that you have all the necessary software installed. Let's take a look at the prerequisites for Android and iOS.

Prerequisites for Android:

- Java (Version 7 or later)
- The Android SDK API, Version 17 or higher
- A real Android device
- Chrome browser on the real device
- Eclipse
- TestNG
- The Appium Server
- The Appium client library (Java)
- Selenium Server and the WebDriver Java library
- The Apk Info app

Prerequisites for iOS:

- Mac OS (Version 10.7 or later)
- Xcode (Version 4.6.3 or higher; 5.1 is recommended)
- An iOS provisional profile
- An iOS real device
- The SafariLauncher app
- ios-webkit-debug-proxy
- Java Version 7
- Eclipse
- TestNG
- The Appium Server
- The Appium client library (Java)
- Selenium Server and the WebDriver Java library

While working with an Android real device, we need to enable USB debugging under Developer options. To enable USB debugging, perform the following steps:

1. Navigate to Settings | About Phone and tap on Build number seven times (assuming that you have Android Version 4.2 or newer); then return to the previous screen and find Developer options.
2. Tap on Developer options and then tap on the ON switch to turn on the developer settings (you will get an alert asking you to allow developer settings; just click on the OK button).
3. Make sure that the USB debugging option is checked.

After performing the preceding steps, connect your Android device to the desktop with a USB cable. Make sure you have installed the appropriate USB driver for your device before doing this. After connecting, you will get an alert on your device asking you to allow USB debugging; just tap on OK.

To ensure that we are ready to automate apps on the device, perform the following steps:

1. Open Command Prompt or a terminal (on Mac).
2. Type adb devices and press Enter. You should get a list of attached Android devices; if not, try killing and restarting the adb server with the adb kill-server and adb start-server commands.

Now we come to the coding part. First, we need to set the desired capabilities and initiate an Android/iOS driver to work with Appium on a real device. Let's discuss them one by one.

Desired capabilities for Android and initiating the Android driver

There are two ways to set the desired capabilities: one with the Appium GUI and the other by initiating a desired capabilities object. Using the desired capabilities object is preferable; otherwise, we have to change the desired capabilities in the GUI repeatedly whenever we test another mobile app. Let's take a look at the Appium GUI settings for native and hybrid apps.

Perform the following steps to set the Android Settings:

1. Click on the Android Settings icon.
2. Select Application Path and provide the path of the app.
3. Click on Package and choose the appropriate package from the drop-down menu.
4. Click on Launch Activity and choose an activity from the drop-down menu. If the application is already installed on the real device, we don't need to follow steps 2-4.
In this case, we have to install the Apk Info app on the device to find out the package and activities of the app, and set them using the desired capabilities object, which we will see in the next section. You can easily get the Apk Info app from the Android Play Store.

5. Select PlatformVersion from the drop-down.
6. Select Device Name and type in a device name (for example, Moto X).
7. Now, start the Appium Server.

Perform the following steps to set the Android Settings for web apps:

1. Click on the Android Settings icon.
2. Select PlatformVersion from the drop-down.
3. Select Use Browser and choose Chrome from the drop-down.
4. Select Device Name and type in a device name (for example, Moto X).
5. Now, start the Appium Server.

Let's discuss how to initiate the desired capabilities object and set the capabilities.

Desired capabilities for native and hybrid apps

Here we will dive directly into the code, with comments. First, we need to import the following packages:

    import java.io.File;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import io.appium.java_client.remote.MobileCapabilityType;

Now, let's set the desired capabilities for native and hybrid apps:

    DesiredCapabilities caps = new DesiredCapabilities(); // To create the capabilities object
    File app = new File("path of the apk"); // To create a File object that specifies the app path
    caps.setCapability(MobileCapabilityType.APP, app); // If the app is already installed on the device, there is no need to set this capability
    caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, "4.4"); // To set the Android version
    caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android"); // To set the OS name
    caps.setCapability(MobileCapabilityType.DEVICE_NAME, "Moto X"); // Change the device name to match yours
    caps.setCapability(MobileCapabilityType.APP_PACKAGE, "package name of your app (you can get it from the Apk Info app)"); // To specify the Android app package
    caps.setCapability(MobileCapabilityType.APP_ACTIVITY, "launch activity of your app (you can get it from the Apk Info app)"); // To specify the activity that we want to launch

Desired capabilities for web apps

In Android mobile web apps, some of the capabilities that we used for native and hybrid apps, such as APP, APP_PACKAGE, and APP_ACTIVITY, are not needed, because here we launch a browser. First, we need to import the following packages:

    import java.io.File;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import io.appium.java_client.remote.MobileCapabilityType;

Now, let's set the desired capabilities for web apps:

    DesiredCapabilities caps = new DesiredCapabilities(); // To create the capabilities object
    caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, "4.4"); // To set the Android version
    caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android"); // To set the OS name
    caps.setCapability(MobileCapabilityType.DEVICE_NAME, "Moto X"); // Change the device name to match yours
    caps.setCapability(MobileCapabilityType.BROWSER_NAME, "Chrome"); // To launch the Chrome browser

We are done with the desired capabilities part; now we have to initiate the Android driver to connect with the Appium Server. Import the following packages:

    import io.appium.java_client.android.AndroidDriver;
    import java.net.URL;

Then, initiate the Android driver as shown here:

    AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps); // To pass the URL where the Appium server is running

This will launch the app on the Android real device using the configuration specified in the desired capabilities.
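Putting the pieces above together, the following is a minimal, self-contained sketch of what a complete run against a real Android device could look like with the Appium Java client. It is not taken from the book: the .apk path and device name are placeholders, and the currentActivity() call at the end merely stands in for real test steps.

    import java.io.File;
    import java.net.URL;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import io.appium.java_client.android.AndroidDriver;
    import io.appium.java_client.remote.MobileCapabilityType;

    public class RealDeviceSmokeTest {
        public static void main(String[] args) throws Exception {
            // Desired capabilities for a native app on a real device (placeholder values)
            DesiredCapabilities caps = new DesiredCapabilities();
            File app = new File("/path/to/your-app.apk");                     // placeholder .apk path
            caps.setCapability(MobileCapabilityType.APP, app.getAbsolutePath());
            caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "Android");
            caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, "4.4"); // match your device's Android version
            caps.setCapability(MobileCapabilityType.DEVICE_NAME, "Moto X");   // placeholder device name

            // Connect to the Appium server that is already running on the default port
            AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // Stand-in for real test steps: print the activity that Appium launched
                System.out.println("Launched activity: " + driver.currentActivity());
            } finally {
                driver.quit(); // Always end the session so the device is released
            }
        }
    }

Running this class from Eclipse or TestNG, with the Appium server started and the device visible to adb devices, should install the app, launch it, print the current activity, and close the session.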
Installing the provisional profile, SafariLauncher, and ios-webkit-debug-proxy

Before moving on to the desired capabilities for iOS, we have to make sure that we have set up a provisional profile and installed the SafariLauncher app and ios-webkit-debug-proxy to work with a real device. First, let's discuss the provisional profile.

Provisional profile

This profile is used to deploy an app on a real iOS device. To get one, we need to join the iOS Developer Program (https://developer.apple.com/programs/ios/), which costs USD 99. Then visit https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/MaintainingProfiles/MaintainingProfiles.html#//apple_ref/doc/uid/TP40012582-CH30-SW24 to generate the profile. After this, you also need to install the provisional profile on your real device; to do so, perform the following steps:

1. Download the generated provisional profile.
2. Connect the iOS device to a Mac using a USB cable.
3. Open Xcode (version 6) and click on Devices under the Window menu.
4. Now, context-click on the connected device and click on Show Provisional Profiles....
5. Click on +, select the downloaded profile, and then click on the Done button.

SafariLauncher app and ios-webkit-debug-proxy

The SafariLauncher app is used to launch the Safari browser on a real device for web app testing. You will need to build and deploy the SafariLauncher app on the real iOS device to work with the Safari browser. To do this, perform the following steps:

1. Download the source code from https://github.com/snevesbarros/SafariLauncher/archive/master.zip.
2. Open Xcode and then open the SafariLauncher project.
3. Select the device to deploy the app on and click on the build button.

After this, we need to replace the SafariLauncher app inside Appium.dmg; to do this, perform the following steps:

1. Right-click on Appium.dmg.
2. Click on Show Package Contents and navigate to Contents/Resources/node_modules/appium/build/SafariLauncher.
3. Now, extract SafariLauncher.zip.
4. Navigate to submodules/SafariLauncher/build/Release-iphoneos and replace the SafariLauncher app with your app.
5. Zip submodules and rename it SafariLauncher.zip.

Now, we need to install ios-webkit-debug-proxy on the Mac to establish a connection for accessing the web view. To install the proxy, you can use brew and run the command brew install ios-webkit-debug-proxy in the terminal (this will install the latest tagged version), or you can clone it from Git and install it using the following steps:

1. Open the terminal, type git clone https://github.com/google/ios-webkit-debug-proxy.git, and press Enter.
2. Then, enter the following commands:

    cd ios-webkit-debug-proxy
    ./autogen.sh
    ./configure
    make
    sudo make install

We are now all set to test web and hybrid apps.

Summary

In this article, we looked at how to execute test scripts for native, hybrid, and web mobile apps on iOS and Android real devices. Specifically, we learned how to perform actions on native mobile apps and got to know the desired capabilities for real devices. We ran a test with the Android Chrome browser and learned how to load the Chrome browser on an Android real device with the necessary capabilities. We then moved on to starting the Safari browser on a real device and setting up the desired capabilities to test web applications. Lastly, we looked at how easily we can automate hybrid apps and switch from the native to the web view.
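The iOS desired capabilities themselves are not shown in this excerpt. Purely for orientation, and not as the book's code, a Safari test against a real iOS device with the Appium Java client might look roughly like the sketch below. The iOS version, device name, and UDID are placeholder assumptions, as is the availability of the IOSDriver class in the client version you use; check the capability names against your Appium release.

    import java.net.URL;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import io.appium.java_client.ios.IOSDriver;
    import io.appium.java_client.remote.MobileCapabilityType;

    public class IosSafariSmokeTest {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability(MobileCapabilityType.PLATFORM_NAME, "iOS");
            caps.setCapability(MobileCapabilityType.PLATFORM_VERSION, "8.1");  // placeholder iOS version
            caps.setCapability(MobileCapabilityType.DEVICE_NAME, "iPhone 5s"); // placeholder device name
            caps.setCapability(MobileCapabilityType.BROWSER_NAME, "Safari");   // launched via the SafariLauncher app
            caps.setCapability("udid", "0123456789abcdef0123456789abcdef01234567"); // placeholder device UDID

            // ios-webkit-debug-proxy must already be running so Appium can reach the web view
            IOSDriver driver = new IOSDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                driver.get("http://www.google.com"); // stand-in for real test steps
                System.out.println("Page title: " + driver.getTitle());
            } finally {
                driver.quit(); // release the device
            }
        }
    }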
Further resources on this subject: Cross-browser Tests using Selenium WebDriver, First Steps with Selenium RC, and Selenium Testing Tools.

Creating a Responsive Project

Packt | 08 Apr 2015 | 14 min read
In today's ultra-connected world, a good portion of your students probably own multiple devices, and of course they may want to take your eLearning course on all of them. They might want to start the course on their desktop computer at work, continue it on their phone while commuting back home, and finish it at night on their tablet. In other situations, students might only have a mobile phone available to take the course, and sometimes the topic you teach only makes sense on a mobile device. To address these needs, you want to deliver your course on multiple screens.

As of Captivate 6, you can publish your courses in HTML5, which makes them available on mobile devices that do not support the Flash technology. Captivate 8 takes it one huge step further by introducing Responsive Projects. A Responsive Project is a project that you can optimize for the desktop, the tablet, and the mobile phone. It is like providing three different versions of the course in a single project.

In this article by Damien Bruyndonckx, author of the book Mastering Adobe Captivate 8, you will be introduced to the key concepts and techniques used to create a responsive project in Captivate 8. While reading, keep two things in mind. First, everything you have learned so far can be applied to a responsive project. Second, creating a responsive project requires more experience than what a book can offer. I hope that this article will give you a solid understanding of the core concepts in order to jump-start your own discovery of Captivate 8 Responsive Projects.

About Responsive Projects

A Responsive Project is meant to be used on multiple devices, including tablets and smartphones that do not support the Flash technology. Therefore, it can be published only in HTML5. This means that all the restrictions of a traditional HTML5 project also apply to a Responsive Project. For example, you will not be able to add Text Animations or Rollover Objects in a Responsive Project because these features are not supported in HTML5.

Responsive design is not limited to eLearning projects made in Captivate. It is actually used by web designers and developers around the world to create websites that automatically adapt themselves to the screen they are viewed on. To do so, they need to detect the screen width that is available to their content and adapt accordingly.

Responsive Design by Ethan Marcotte

If you want to know more about responsive design, I strongly recommend the book by Ethan Marcotte in the A Book Apart collection. This is the founding book of responsive design. If you have some knowledge of HTML and CSS, it is a must-have resource for fully understanding what responsive design is all about. More information on this book can be found at http://www.abookapart.com/products/responsive-web-design.

Viewport size versus screen size

At the heart of the responsive design approach is the width of the screen used by the student to consume the content. To be more exact, it is the width of the viewport that is detected, not the width of the screen. The viewport is the area that is actually available to the content. On a desktop or laptop computer, the difference between the screen width and the viewport width is very easy to understand. Let's do a simple experiment to grasp the concept hands-on. Open your default web browser, make sure it is in fullscreen mode, and browse to http://www.viewportsizes.com/mine.
The main information provided by this page is the size of your viewport. Because your web browser is currently in fullscreen mode, the viewport size should be close (but not quite identical) to the resolution of your screen. Use your mouse to resize your browser window and see how the viewport size evolves. The size of the viewport changes as you resize your browser window, but the actual screen you use is always the same.

This viewport concept is also valid on a mobile device, even though it may be a bit subtler to grasp. Viewed in the Safari mobile browser on an iPad mini, the http://www.viewportsizes.com/mine page reports one viewport size in landscape and another in portrait; the viewport size changes but, once again, the actual screen used is always the same. Don't hesitate to perform these experiments on your own mobile devices and compare your results to mine.

Another thing that might affect the viewport size on a mobile device is the browser used. On the same iPad mini held in portrait mode, the viewport size is slightly different in Chrome than in Safari. This is due to the interface elements of the browser (such as the address bar and the tabs), which use a variable portion of the screen real estate in each browser.

Understanding breakpoints

Before setting up your own Responsive Project, there is one more concept to explore. To discover it, you will perform another simple experiment on your desktop or laptop computer. Open the web browser of your desktop or laptop computer and maximize it to fullscreen size, then browse to http://courses.dbr-training.eu/8/goingmobile. This is the online version of the Responsive Project that you will build in this article. When viewed on a desktop or laptop computer in fullscreen mode, you should see a version of the course optimized for larger screens.

Use your mouse to slowly scale your browser window down. Note how the size and the position of the elements are automatically recalculated as you resize the browser window. At some point, you should see that the height of the slide changes and that another layout is applied. The point at which the layout changes is situated at a width of exactly 768 px. In other words, if the width of the browser (actually the viewport) is above 768 px, one layout is applied; if the width of the viewport falls under 768 px, another layout is applied. You just discovered a breakpoint.

The layout that is applied after the breakpoint (in other words, when the viewport width is lower than 768 px) is optimized for a tablet device held in portrait mode. Note that even though you are using a desktop or laptop computer, it is the tablet-optimized layout that is applied when the viewport width is at or under 768 px. Keep scaling the browser window down and see how the position and the size of the elements of the slide are recalculated in real time as you resize the browser window.

This simple experiment should better explain what a breakpoint is and how breakpoints work. Before moving on to the next section, let's summarize the important concepts uncovered in this section:

- The aim of responsive design is to provide an optimized viewing experience across a wide range of devices and form factors.
- To achieve this goal, responsive design uses fluid sizing and positioning techniques, responsive images, and breakpoints.
- Responsive design is not limited to eLearning courses made in Captivate, but is widely used in web and app design by thousands of designers around the world.
- A Captivate 8 Responsive Project can only be published in HTML5. The capabilities and restrictions of a standard HTML5 project also apply to a Responsive Project.
- A breakpoint defines the exact viewport width at which the layout breaks and another layout is applied.
- The breakpoints, and therefore the optimized layouts, are based on the width of the viewport and not on the detection of an actual device. This explains why the tablet-optimized layout is applied to the downsized browser window on a desktop computer.
- The viewport width and the screen width are two different things.

In the next section, you will start the creation of your very first Responsive Project. To learn more about these concepts, there is a video course on Responsive eLearning with Captivate 8 available on Adobe KnowHow. The course itself is for a fee, but there is a free 15-minute sample that walks you through these concepts using another approach. I suggest you take some time to watch it at https://www.adobeknowhow.com/courselanding/create-responsive-elearning-adobe-captivate-8.

Setting up a Responsive Project

It is now time to open Captivate and set up your first Responsive Project using the following steps:

1. Open Captivate or close every open file.
2. Switch to the New tab of the Welcome screen.
3. Double-click on the Responsive Project thumbnail. Alternatively, you can use the File | New Project | Responsive Project menu item.

This action creates a new Responsive Project. Note that the choice to create a Responsive Project or a regular Captivate project must be made up front, when creating the project. As of Captivate 8, it is not yet possible to take an existing non-responsive project and make it responsive after the fact.

The workspace of Captivate should be very similar to what you are used to, with the exception of an extra ruler that spans the top of the screen. This ruler contains three predefined breakpoints. The first breakpoint is called the Primary breakpoint and is situated at 1024 pixels. Also note that the breakpoint ruler is green when the Primary breakpoint is selected. You will now discover the other two breakpoints using the following steps:

1. In the breakpoint ruler, click on the icon of a tablet to select the second breakpoint. The stage and all the elements it contains are resized. In the breakpoint ruler at the top of the stage, the second breakpoint is now selected. It is called the Tablet breakpoint and is situated at 768 pixels. Note the blue color associated with the Tablet breakpoint.
2. In the breakpoint ruler, click on the icon of a smartphone to select the third and last breakpoint. Once again, the stage and the elements it contains are resized. The third breakpoint is called the Mobile breakpoint and is situated at 360 pixels. The orange color is associated with this third breakpoint.

Adjusting the breakpoints

In some situations, the default location of these three breakpoints works just fine. But in other situations, some adjustments are needed.
In this project, you want to target the regular screen of a desktop or laptop computer in the Primary view, an iPad mini held in portrait in the Tablet view, and an iPhone 4 held in portrait in the Mobile view. You will now adjust the breakpoints to fit these specifications using the following steps:

1. Click on the Primary breakpoint in the breakpoints ruler to select it.
2. Use your mouse to move the breakpoint all the way to the left. Captivate should stop at a width of 1280 pixels. It is not possible to have a stage wider than 1280 pixels in a Responsive Project. For this project, the default width of 1024 pixels is perfect, so you will now move this breakpoint back to its original location.
3. Move the Primary breakpoint to the right until it is placed at 1024 pixels.
4. Return to your web browser and browse to http://www.viewportsizes.com. Once on the website, type iPad in the Filter field at the top of the page. The portrait width of an iPad mini is 768 pixels. In Captivate, the Tablet breakpoint is placed at 768 pixels by default, which is perfectly fine for the needs of this project.
5. Still on the http://www.viewportsizes.com website, type iPhone in the Filter field at the top of the page. The portrait width of an iPhone 4 is 320 pixels. In Captivate, the Mobile breakpoint is placed at 360 pixels by default. You will now move it to 320 pixels so that it matches the portrait width of an iPhone 4.
6. Return to Captivate and select the Mobile breakpoint.
7. Move the Mobile breakpoint to the right until it is placed at exactly 320 pixels. Note that the minimum width of the stage in the Mobile breakpoint is 320 pixels. In other words, the stage cannot be narrower than 320 pixels in a Responsive Project.

The viewport size of your device

Before moving on to the next section, take some time to inspect the http://viewportsizes.com site a little further. For example, type in the names of the devices you own and compare their characteristics to the breakpoints of the current project. Will the project fit on your devices? How would you need to change the breakpoints so that the project perfectly fits your devices?

The breakpoints are now in place, but they only take care of the width of the stage. In the next section, you will adjust the height of the stage in each breakpoint.

Adjusting the slide height

Captivate slides have a fixed height. This is the primary difference between a Captivate project and a regular responsive website, whose page height is infinite. In this section, you will adjust the height of the stage in all three breakpoints:

1. Still in Captivate, click on the desktop icon situated on the left side of the breakpoint switcher to return to the Primary view.
2. On the far right of the breakpoint ruler, select the View Device Height checkbox. A yellow border now surrounds the stage in the Primary view, and the slide height is displayed in the top-left corner of the stage. For the Primary view, a slide height of 627 pixels is perfect; it matches the viewport size of an iPad held in landscape and provides a big enough area on a desktop or laptop computer.
3. Click on the Tablet breakpoint to select it.
4. Return to http://www.viewportsizes.com/ and type iPad in the filter field at the top of the page. According to the site, the height of an iPad is 1024 pixels.
5. Use your mouse to drag the yellow rectangle situated at the bottom of the stage down until the stage height is around 950 pixels.
You may need to reduce the zoom magnification to perform this action comfortably. With a height of 950 pixels, the Captivate slide can fit on an iPad screen and still account for the screen real estate consumed by the interface elements of the browser, such as the address bar and the tabs.

6. Still in the Tablet view, make sure the slide is the selected object and open the Properties panel. Note that, at the end of the Properties panel, the Slide Height property is currently unavailable.
7. Click on the chain icon (Unlink from Device height) next to the Slide Height property. By default, the slide height is linked to the device height. By clicking on the chain icon, you have broken the link between the slide height and the device (or viewport) height. This allows you to modify the height of the Captivate slide without modifying the height of the device.
8. Use the Properties panel to change the Slide Height to 1024 pixels. On the stage, note that the slide is now a little higher than the yellow rectangle. This means that this particular slide will generate a vertical scrollbar on a tablet device held in portrait. Scrolling is something you want to avoid as much as possible, so you will now re-enable the link between the device height and the Slide Height.
9. In the Properties panel, click on the chain icon next to the Slide Height property to enable the link. The slide height is automatically readjusted to the device height of 950 pixels.
10. Use the breakpoint ruler to select the Mobile breakpoint. By default, the device height in the Mobile breakpoint is set to 415 pixels. According to the http://www.viewportsizes.com/ website, the screen of an iPhone 4 has a height of 480 pixels. A slide height of 415 pixels is perfect to accommodate the slide itself plus the interface elements of the mobile browser.

Summary

In this article, you learned the key concepts and techniques used to create a responsive project in Captivate 8.

Further resources on this subject: Publishing the project for mobile, Getting Started with Adobe Premiere Pro CS6 Hotshot, and Creating Motion Through the Timeline.

Giving Containers Data and Parameters

Packt | 07 Apr 2015 | 12 min read
In this article by Oskar Hane, author of the book Build Your Own PaaS with Docker, we'll cover the following topics:

- Data volumes
- Creating a data volume image
- Host on GitHub
- Publishing on the Docker Registry Hub
- Running on the Docker Registry Hub
- Passing parameters to containers
- Creating a parameterized image

Data volumes

There are two ways in which we can mount external volumes on our containers. A data volume lets you share data between containers, and the data inside the data volume is untouched if you update, stop, or even delete your service container. A data volume is mounted with the -v option in the docker run statement:

    docker run -v /host/dir:/container/dir

You can add as many data volumes as you want to a container, simply by adding multiple -v directives. A very good thing about data volumes is that the containers that get data volumes passed into them don't know about it, and don't need to know about it either. No changes are needed inside the container; it works just as if it were writing to the local filesystem. You can override existing directories inside containers, which is a common thing to do. One use of this is to keep the web root (usually at /var/www inside the container) in a directory on the Docker host.

Mounting a host directory as a data volume

You can mount a directory (or file) from your host on your container:

    docker run -d --name some-wordpress -v /home/web/wp-one:/var/www wordpress

This mounts the host's local directory, /home/web/wp-one, as /var/www in the container. If you want to give the container only read permission, change the directive to -v /home/web/wp-one:/var/www:ro, where :ro is the read-only flag.

It's not very common to use a host directory as a data volume in production, since data in a directory isn't very portable. But it's very convenient when testing how your service container behaves when the source code changes. Any change you make in the host directory is immediately visible in the container's mounted data volume.

Mounting a data volume container

A more common way of handling data is to use a container whose only task is to hold data. The services running in the container should be as few as possible, thus keeping it as stable as possible. Data volume containers can expose volumes via the Dockerfile's VOLUME keyword, and these volumes will be mounted on the service container when the data volume container is used with the --volumes-from directive. A very simple Dockerfile with a VOLUME directive can look like this:

    FROM ubuntu:latest
    VOLUME ["/var/www"]

A container using the preceding Dockerfile will expose /var/www. To mount the volumes from a data container onto a service container, we create the data container and then mount it, as follows:

    docker run -d --name data-container our-data-container
    docker run -d --name some-wordpress --volumes-from data-container wordpress

Backup and restore data volumes

Since the data in a data volume is shared between containers, it's easy to access the data by mounting it onto a temporary container. Here's how you can create a .zip file (from your host) of the data inside a data volume container that has VOLUME ["/var/www"] in its Dockerfile:

    docker run --volumes-from data-container -v $(pwd):/host ubuntu zip -r /host/data-containers-www /var/www

This creates a .zip file named data-containers-www.zip, containing the contents of the data container's /var/www directory, and places it in your current host directory.
Creating a data volume image

Since our data volume container will just hold our data, we should keep it as small as possible to start with, so that it doesn't take up lots of unnecessary space on the server. The data inside the container can, of course, grow to be as big as the space on the server's disk. We don't need anything fancy at all; we just need a working file storage system. For this article, we'll keep all our data (MySQL database files and WordPress files) in the same container. You can, of course, separate them into two data volume containers named something like dbdata and webdata.

Data volume image

Our data volume image does not need anything other than a working filesystem that we can read from and write to. That's why our base image of choice will be BusyBox. This is how BusyBox describes itself:

"BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system."

That sounds great! We'll go ahead and add this to our Dockerfile:

    FROM busybox:latest

Exposing mount points

There is a VOLUME instruction for the Dockerfile, where you can define which directories to expose to other containers when this data volume container is added using the --volumes-from attribute. In our data volume container, we first need to add a directory for MySQL data. Let's take a look inside the MySQL image we will be using to see which directory is used for data storage, and expose that directory in our data volume container so that we can own it:

    RUN mkdir -p /var/lib/mysql
    VOLUME ["/var/lib/mysql"]

We also want our WordPress installation in this container, including all .php files and graphic images. Once again, we go to the image we will be using and find out which directory will be used. In this case, it's /var/www/html. When you add this to the Dockerfile, don't add new lines; just append the lines with the MySQL data directory:

    RUN mkdir -p /var/lib/mysql && mkdir -p /var/www/html
    VOLUME ["/var/lib/mysql", "/var/www/html"]

The Dockerfile

The following is a simple Dockerfile for the data image:

    FROM busybox:latest
    MAINTAINER Oskar Hane <oh@oskarhane.com>
    RUN mkdir -p /var/lib/mysql && mkdir -p /var/www/html
    VOLUME ["/var/lib/mysql", "/var/www/html"]

And that's it! When publishing images to the Docker Registry Hub, it's good to include a MAINTAINER instruction in the Dockerfile so that you can be contacted if someone wants to reach you for some reason.

Host on GitHub

Using our knowledge of how to host Docker image sources on GitHub and how to publish images on the Docker Registry Hub, it'll be no problem creating our data volume image. Let's create a branch and a Dockerfile and add the content for our data volume image:

    git checkout -b data
    vi Dockerfile
    git add Dockerfile

On line number 2 in the preceding code, you can use the text editor of your choice; I just happen to find that vi suits my needs. The content you should add to the Dockerfile is this:

    FROM busybox:latest
    MAINTAINER Oskar Hane <oh@oskarhane.com>
    RUN mkdir /var/lib/mysql && mkdir /var/www/html
    VOLUME ["/var/lib/mysql", "/var/www/html"]

Replace the maintainer information with your name and e-mail.
You can—and should—always ensure that it works before committing and pushing to GitHub. To do so, you need to build a Docker image from your Dockerfile:

    docker build -t data-test .

Make sure you notice the dot at the end of the line; it means that Docker should look for a Dockerfile in the current directory. Docker will try to build an image from the instructions in our Dockerfile. It should be pretty fast, since it's a small base image with nothing but a couple of VOLUME instructions on top of it.

When everything works as we want, it's time to commit the changes and push them to our GitHub repository:

    git commit -m "Dockerfile for data volume added."
    git push origin data

When you have pushed it to the repository, head over to GitHub to verify that your new branch is present there.

Publishing on the Docker Hub Registry

Now that we have our new branch on GitHub, we can go to the Docker Hub Registry and create a new automated build, named data, with our GitHub data branch as its source. Wait for the build to finish, and then try to pull the image with your Docker daemon to verify that it's there and that it's working.

Amazing! Check out the size of the image; it's just under 2.5 MB. This is perfect, since we just want to store data in it. A container on top of this image can, of course, be as big as your hard drive allows; this is just the size of the image. The image is read-only, remember?

Running a data volume container

Data volume containers are special; they can be stopped and still fulfill their purpose. Personally, I like to see all containers in use when executing the docker ps command, since I like to delete stopped containers once in a while. This is totally up to you. If you're okay with keeping the container stopped, you can start it using this command:

    docker run -d oskarhane/data true

The true argument is just there to provide a valid command, and the -d argument places the container in detached mode, running in the background. If you want to keep the container running, you need to place a service in the foreground, like this:

    docker run -d oskarhane/data tail -f /dev/null

The tail -f /dev/null command never ends, so the container will keep running until we stop it. Resource-wise, the tail command is pretty harmless.

Passing parameters to containers

We have already seen how to give containers parameters or environment variables when starting the official MySQL container:

    docker run --name mysql-one -e MYSQL_ROOT_PASSWORD=pw -d mysql

The -e MYSQL_ROOT_PASSWORD=pw option is an example of how you can do it. It means that the MYSQL_ROOT_PASSWORD environment variable inside the container has pw as its value. This is a very convenient way to have configurable containers, where you can have a setup script as the ENTRYPOINT or a foreground script configuring passwords; hosts; test, staging, or production environments; and other settings that the container needs.

Creating a parameterized image

Just to get the hang of this very useful feature, let's create a small Docker image that converts a string to uppercase or lowercase, depending on the state of an environment variable. The Docker image will be based on the latest Debian distribution and will have only an ENTRYPOINT command.
This is the Dockerfile:

    FROM debian:latest
    ADD ./case.sh /root/case.sh
    RUN chmod +x /root/case.sh
    ENTRYPOINT /root/case.sh

This takes the case.sh file from our current directory, adds it to the container, makes it executable, and assigns it as the ENTRYPOINT. The case.sh file may look something like this:

    #!/bin/bash

    if [ -z "$STR" ]; then
        echo "No STR string specified."
        exit 0
    fi

    if [ -z "$TO_CASE" ]; then
        echo "No TO_CASE specified."
        exit 0
    fi

    if [ "$TO_CASE" = "upper" ]; then
        echo "${STR^^*}"
        exit 0
    fi

    if [ "$TO_CASE" = "lower" ]; then
        echo "${STR,,*}"
        exit 0
    fi

    echo "TO_CASE was not upper or lower"

This script checks whether the $STR and $TO_CASE environment variables are set, and then checks whether $TO_CASE is upper or lower; if that check fails, an error message saying that we only handle upper and lower is displayed. If $TO_CASE is set to upper or lower, the content of the environment variable $STR is transformed to uppercase or lowercase respectively, and then printed to stdout.

Let's try this! Here are some commands we can try (assuming the image has been built and tagged as case):

    docker run -i case
    docker run -i -e STR="My String" case
    docker run -i -e STR="My String" -e TO_CASE=camel case
    docker run -i -e STR="My String" -e TO_CASE=upper case
    docker run -i -e STR="My String" -e TO_CASE=lower case

This works as expected, at least for this purpose. Now we have created a container that takes parameters and acts upon them.

Summary

In this article, you learned that you can keep your data out of your service containers using data volumes. Data volumes can be directories, files from the host's filesystem, or data volume containers. We explored how we can pass parameters to containers and how to read them from inside the ENTRYPOINT. Parameters are a great way to configure containers, making it easier to create more generalized Docker images. We created a data volume container and published it to the Docker Hub Registry.

Further resources on this subject: Supporting hypervisors by OpenNebula, Securing vCloud Using the vCloud Networking and Security App Firewall, and Platform as a Service and CloudBees.

Work Item Querying

Packt | 07 Apr 2015 | 9 min read
In this article by Dipti Chhatrapati, author of the book Reporting in TFS, we will see that work items are the primary element project managers and team leaders focus on to track and identify the pending work to be completed. A team member uses work items to track their personal work queue. In order to get the current status of the project via work items, it's essential to query work items based on the requirements. This article will cover the following topics:

- Team project scenario
- Work item queries
- Search box queries
- Flat queries
- Direct link queries
- Tree queries

Team project scenario

Here, we consider a sports item website that helps users buy sport items from an item gallery based on their category. A user has to register for membership in order to buy sport products such as footballs, tennis rackets, cricket bats, and so on. Moreover, a registered user can also view or add sport-related articles or news, which will be visible to everyone, whether anonymous or registered. This project is mapped to TFS and has a repository created in TFS Server, with work items such as user stories, tasks, bugs, and test cases to plan and track the project's work. We have the following TFS configuration settings for the team project:

- Team Foundation Server: DIPSTFS
- Website project: SportsWeb
- Team project: SportsWebTeamProject
- Team Foundation Server URL: http://dipstfs:8080/tfs
- Team project collection URL: http://dipstfs:8080/tfs/DefaultCollection
- Team project URL: http://dipstfs:8080/tfs/DefaultCollection/SportsWebTeamProject
- Team project administrators: DIPSTFS\DipsAdministrator
- Team project members: DIPSTFS\Dipti Chhatrapati, DIPSTFS\Bjoern H Rapp, DIPSTFS\Edric Taylor, DIPSTFS\John Smith, DIPSTFS\Nelson Hall, DIPSTFS\Scott Harley

Work item queries

Work item queries smooth the process of identifying the status of the team project, which helps in creating custom reports in TFS. We can query work items via a search box or a query editor in Team Web Access. For more information on work item queries, have a look at the following links:

- http://msdn.microsoft.com/en-us/library/ms181308(v=vs.110).aspx
- http://msdn.microsoft.com/en-us/library/dd286638.aspx

There are three types of queries: flat queries, direct link queries, and tree queries.

Search box queries

We can find a work item using the search box available in the team project web portal. You can type keywords into the search box located at the top right of the team project web portal site; for example, typing master returns the matching work items. The search box context menu also has the ability to find work items based on assignment, status, created by, or work item type. The search box finds items using shortcut filters or by specifying keywords or phrases, specific fields and field values, assignment or date modifications, or the equals, contains, and not operators. For more information on search box filtering, have a look at http://msdn.microsoft.com/en-us/library/cc668120.aspx.
Flat queries

A flat list of work items is used when you want to perform the following tasks:

- Finding a work item with an unknown ID
- Checking the status or other columns of work items
- Finding work items that you want to link to other work items
- Exporting work items to Microsoft Office, Microsoft Excel, or Office Project for bulk updates to column fields
- Generating a report about a set of work items

As a general practice, to find work items easily, a team member can create shared queries, which are predefined queries shared across the team. They can be created, modified, and saved as new queries too. The following steps demonstrate how to open a flat query list and create a new query:

1. In the team project web portal, expand the shared query list located on the left-hand side and click on the My Tasks query. The resulting work items generated by the My Tasks query are shown in the work item pane.
2. As there are now three active tasks and two new tasks, we will create the My Active Tasks flat query. To do so, click on Editor.
3. Add a clause to filter work items by the Active state.
4. Now click on the Save Query as… icon to save the query as My Active Tasks.
5. Enter the query name and folder as appropriate. Here, we will save the query in the Shared Queries folder; click on OK.
6. Click on Results to view the work items returned by the My Active Tasks query.

Now let's look at how to create a query that returns the work item details of different sprints/iterations. For example, you have a number of sprints in the Release 1 iteration and another release to test the application, named Test Release 1, which you can find on the Team Web Access site's settings page under the Iterations tab. In order to fetch the work item data of all the sprints, so that you know which task is allocated to which team member in which sprint, perform the following steps:

1. Go to the Backlogs tab and click on Create query.
2. Specify the query name and the folder location to store the query, then click on OK.
3. Click on the link to the created query, which redirects you to it.
4. Click on Flat list of work items and remove all the conditions except the iteration path.
5. Now save the query and run it. Add columns such as Work Item Type, State, Iteration Path, Title, and Assigned To as appropriate. As a result, this query displays the work items available under the team project for the different sprints or releases.
6. To filter work items based on a particular sprint/release/iteration, change the iteration path condition's Value to Sprint 1.
7. Finally, save and run the query, which returns the work items available under Sprint 1 of the Release 1 iteration.

For more information on flat queries, have a look at http://msdn.microsoft.com/en-us/library/ms181308(v=vs.110).aspx.

Direct link queries

There are work items that are dependent on other work items, such as tasks, bugs, and issues, and they can be tracked using direct links. Direct links help determine risks and dependencies in order to collaborate effectively among teams.
Direct link queries help perform the following tasks:

- Creating a custom view of linked work items
- Tracking dependencies across team projects and managing the commitments made to other project teams
- Assessing changes to work items that you do not own but that your work items depend on

The following steps demonstrate how to generate a linked query list:

1. Open the My Tasks list from Shared Queries.
2. Click on Editor.
3. Click on Work items and direct links.
4. Specify the clause for the work item type Task in Filters for linked work items.
5. Filter the first-level work items by choosing one of the following options:
   - Only return work items that have the specified links: returns only the top-level work items that have links to other work items.
   - Return all top level work items: returns all the work items, whether they have linked work items or not. This option also returns the second-level work items that are linked to the first-level work items.
   - Only return work items that do not have the specified links: returns only the top-level work items that are not linked to any work items.
6. Run the query, save it as My Linked Tasks, and click on OK.
7. Click on Results to view the linked tasks as configured previously.

For more information on direct link queries, have a look at http://msdn.microsoft.com/en-us/library/dd286501(v=vs.110).aspx.

Tree queries

To view nested work items, tree queries are used by selecting the Tree of Work Items query type. Tree queries are used to execute the following tasks:

- Viewing the hierarchy
- Finding parent or child work items
- Changing the tree hierarchy
- Exporting the tree view to Microsoft Excel, either for bulk updates to column fields or to change the tree hierarchy

The following steps demonstrate how to generate a tree query list:

1. Open the My Tasks list from Shared Queries.
2. Click on Editor.
3. Click on Tree of work items.
4. Define the filter criteria for both parent and child work items.
5. Specify the clause for the work item type Task in Filters for linked work items. Also, select Match top-level work items first.
6. Filter the linked work items by choosing one of the options: to find linked children, select Match top-level work items first; to find linked parents, select Match linked work items first.
7. Run the query, save it as My Tree Tasks, and click on OK.
8. Click on Results to view the linked tasks as configured previously.

For more information on tree queries, have a look at http://msdn.microsoft.com/en-us/library/dd286633(v=vs.110).aspx.

Summary

In this article, we reviewed the team project scenario and walked through the types of work item queries that produce the work items we need in order to know the status of work progress.

Further resources on this subject: Creating a basic JavaScript plugin, Building Financial Functions into Excel 2010, and Team Foundation Server 2012.

Zoom and Turn

Packt | 07 Apr 2015 | 14 min read
In this article by Charlotte Olsson and Christina Hoyer, the authors of the book Prezi Cookbook, we will cover the following recipes:

- Zoom in
- Zoom out
- Zooming with frames
- Zooming out with frames
- Zooming in with frames
- Turns
- Turning an element
- Turning a frame
- Anatomy of a turn
- Combining turns for elements and frames

Many things about Prezi are distinctive, but two really special and characteristic features are the zoom and the turn. A good way to understand zooming is to compare it with reading letters from your lawyer or your bank. When you switch from the general information to the small print at the bottom of the page, you zoom in by moving the paper closer to your eyes. When you want to read the general information again, you zoom out by moving the paper further away. The zoom feature allows us to zoom in on the canvas to show even the smallest detail, and to zoom out to show larger elements or beautiful and informative overviews. Prezi's turning feature makes it possible for us to change the direction of our travel on the canvas as we move forward in the presentation.

See also

Zooming is easier to work with if you understand how to create and edit your prezi's path and steps.

Zoom in

Zooming occurs between two steps in a prezi. Your work with zooms will be easier (and better) if you understand how steps work. We invite you to follow along with this first recipe. It will quickly recap how steps work, and by following along, you will be able to create your first zoom. If you are unsure about steps but prefer to skip this recap, think of a step as either an element or a frame that you have decided to show when you are in Present mode.

Getting ready

Because zooming happens between two steps, we begin by creating two steps. The content of these steps can be images, texts, frames, or any other element that has been added to the path. We will be working with images. Perform the following steps:

1. Open a new blank prezi.
2. Delete any existing frames.
3. Insert an image, sizing the image as you please. We inserted a red car.
4. Insert a second image. Make this image smaller than the first image. We inserted a green car.

Our prezi now has two cars on the canvas; the green car is smaller than the red car.

How to do it…

Now you are just about ready to zoom:

1. Switch to the Edit Path mode.
2. First, click on the bigger image to add it as a step to the path lane.
3. Next, click on the smaller image to add it as a step to the path lane.
4. Click on Present to see how Prezi first shows step 1 and then zooms in to show step 2.

There! That was your first zoom. Pretty easy, eh? Both cars have now been added as steps, and steps are shown in the path lane as thumbnails.

There's more…

To understand the zoom, compare the two cars on the canvas with their thumbnails in the path lane, and remember that each thumbnail represents a step. On the canvas, there is a difference in the sizes of the two cars. But what is going on in the path lane? Here, the thumbnails for the red and the green cars show the cars at identical sizes. Hmmm! Does this mean that when we switch to Present mode, Prezi will show the two cars at the same size? Yes, that is exactly what it means! What a thumbnail shows is exactly how the step that it represents will be shown in Present mode.
For this prezi, it means that when we click on Present, the red car will fill the screen entirely. When we move forward to step 2, the green car will also fill the screen entirely. This is how steps function: a step always fills the screen. And that is the anatomy of the zoom! Zooming happens because a step always fills the screen in Present mode. Consequently, if two steps have different sizes on the canvas, Prezi needs to zoom in or out to allow whatever step is next to fill the screen. Remember the equation for steps: 1 step = 1 full screen.

Zoom out

Zooming out means going from a smaller section of the canvas to a relatively bigger section of the canvas in Present mode. Zooming out is a great tool that is typically used for visual illustrations on the canvas, when the content of the presentation shifts from a detailed level to some degree of overview of the canvas. The biggest zoom-outs in Prezi are path steps that are overviews of the entire canvas, which many presenters use to open or close their presentations.

Getting ready

1. Open a new blank prezi.
2. Delete any existing frames.
3. Insert an image, sizing it as you please. We inserted a green car.
4. Insert a second image. Make this image larger than the first image. We inserted a red car.

We want our presentation to begin by showing the green car and then zoom out so that the next step shows the red car.

How to do it…

1. Switch to the Edit Path mode.
2. First, click on the smaller image to add it as a step to the path lane.
3. Then, select the larger image to add it as a step to the path lane.
4. Click on Present to see how Prezi first shows step 1 and then zooms out to show the bigger step 2.

Zooming with frames

When we work in Prezi, we often need to zoom in or out to show a section or a specific area of the canvas, rather than a single element. Sometimes the section that we want to show is small; sometimes it is larger. This is easily done with frames. Frames are a fantastic tool because they put you in the driver's seat. By carefully using frames, it is possible to target the exact area of the canvas that we want to show.

In the following text and recipes, we will be using a red bracket frame for most purposes, because we want the frame to be clearly visible to you. In normal Prezi work, we typically use invisible frames for zooming, because they do not interfere with the style, colors, or general design of the prezi.

Zooming out with frames

It is easy to zoom out to any section on the canvas using frames. All you have to do is frame the section that you want to show. Then set that frame as a step, and you are ready to go to that frame in Present mode, where Prezi will show the chosen (framed) section of the canvas.

Getting ready

Perform the following steps on a new prezi:

1. Open a new blank prezi.
2. Delete any existing frames.
3. Insert an image, sizing it as you please.

How to do it…

1. In the top-left corner, choose a frame type from the Frame drop-down menu (you may use any type).
2. Click on the frame icon to insert the frame into the canvas.
Insert and adjust the frame so that it encloses the image and leaves a nice amount of space around it. Switch to the Edit Path mode. First, select the image to add it as a step to the path lane. Then select the frame around the image to add it as step 2 to the path lane. Click on Present to see how Prezi shows step 1 (the image) and then zooms out to step 2 (the frame). The frame around the car, as shown in the following screenshot, enables us to zoom out: There's more… Overviews are used a lot in most Prezi presentations. The overview can be of a portion of the information on the canvas, such as a chapter overview, or it can include your entire Prezi. Overviews are great because they help your audience get just that—an overview! It is easy to create an overview. Just frame the section of the canvas that you want to show, add it to the path, and that's it! The following is a screenshot showing an overview of all the images we have used so far in this article: Zooming in with frames It is easy to zoom in or out to any section on the canvas using frames. Just frame the section that you want to show and set that frame as a step. This recipe demonstrates how to zoom in. We will be zooming into a detail of a car, but you can use these steps to perform any movement from a larger section to a smaller section on the canvas. Getting ready Perform the following steps on a new prezi: Open a new blank prezi. Insert an image. Delete any existing frames. How to do it… In the top-left corner, choose a frame type in the Frame drop-down menu (you may use any type). Click on the frame icon to insert the frame on the canvas. Drag the frame onto a detail of the image. Resize the frame to your liking by pulling any of the four corner markers of the frame. Switch to the Edit Path mode. Now select the image to add it as a step to the path lane. Then select your new frame to add it as step 2 to the path lane. Click on Present to see how Prezi shows step 1 (the full image) and then zooms in to show step 2 (the frame). Take a look at the following screenshot, where the full image of the car is step 1 and the frame is step 2: When you click on Present, the prezi will show the car (step 1) and then zoom in to show the frame (step 2), allowing you to see the wheel closely. Turns Prezi allows us to create turning effects. The turn feature makes it possible to change directions as we move forward in the presentation. Turns have the potential of adding great dynamics to your Prezi presentation. If used correctly, turns can be a powerful tool that actively support your message. The following recipes will show you how to easily create turns for elements and frames. Towards the end of the article, we will also show you how combining frames and turns create interesting effects. In Appendix B, Transitions, we will show you how to integrate zooms and turns with your overall design. How it works… When you place an element on the canvas, it is not turned. It is in its original or "right-way-up" position. Now suppose you switch to Present mode, and begin moving forward through the steps in your presentation. When a step is shown, the element that is that is this step will always be shown at its original (right-way-up) position, no matter how much you turned it on the canvas. So, if the elements actually do not turn, how does Prezi create this turning effect? Well, as we are about to see, it is actually the canvas that is being turned. Read on! 
There's more… When we refer to elements that can be turned, it is important to keep in mind that this can be any element that you can put on the canvas. Images, videos, PDF files, text elements, and frames are all elements that can be turned. Confused? Don't be! Try it out on your canvas. It's pretty easy, and you'll quickly get the hang of it. Turning an element Turns create a feel of action that is great for grabbing the attention of your audience. Fortunately, you can easily turn any selected element on the canvas. For this recipe, we will be using an image. Getting ready Perform the following steps on a new prezi: Open a new blank prezi. Delete any existing frames. Insert an image. Notice how the image looks on the canvas in its right-way-up position. How to do it… Click on the image to select it. Hover over any one of the square-shaped corner markers. This activates the turning tool (the circle handle). Grab the turning tool and drag it up or down to turn the image. Place the image in its final position by releasing your mouse button. The turning handle is shown in the following screenshot. Use it to drag up or down to turn the selected element: Turning a frame Any element on the canvas can be turned. This applies to frames as well. Practice by experimenting, and you will gradually develop a good sense of how turned frames work on the canvas. Getting ready Perform the following steps on a new prezi: Open a new blank prezi. Insert any frame (except the circle frame, which makes it difficult to notice the turn). How to do it… Click on the frame to select it. Hover over any one of the square-shaped corner markers. This activates the turning tool (the circle handle). Grab the turning tool and drag it up or down to turn the frame. Place it in its chosen final position by releasing your mouse button. In the following screenshot, use the turning handle to drag up or down to turn the frame: Anatomy of a turn The following image provides an overview of the zooms and turns we discussed in this article: Overview of elements and frames showing right-way-up and turned positions To study the anatomy of turns, let's take a look at the preceding screenshot:
Step 1 (Image, right way up on the canvas): in the path lane and in Present mode, the image is shown right way up and fills the screen.
Step 2 (Image, turned on the canvas): the image is still shown right way up, so the canvas must turn; the image fills the screen.
Step 3 (Frame, right way up on the canvas): the frame is shown right way up and fills the screen.
Step 4 (Frame, turned on the canvas): the frame is still shown right way up, so the canvas must turn; the frame fills the screen.
There's more… If you need to edit a frame without affecting the content, you can do so by selecting only the frame. This is done by holding down the Alt key while using your mouse to select the frame. Once the frame is selected, you can edit its size and position (including turning) as you please. For more keyboard shortcuts, please refer to Appendix C, Keyboard Shortcuts. Did you notice that other elements sometimes get highlighted when you turn an element? This reflects that their angle is similar to the element you are editing. 
Combining turns for elements and frames Take a look at the following screenshot, where turned elements that are steps make the canvas turn when in Present mode:
Step 1 (Frame): the frame is turned on the canvas; in the path lane and in Present mode it is shown right way up, so the canvas must turn, and the frame fills the screen.
Step 2 (Image): the image is turned on the canvas; in the path lane and in Present mode it is shown right way up, so the canvas must turn, and the image fills the screen.
Zooms and turns can be combined in numerous ways, and the best way to get the feel of them is by experimenting on the canvas. Go for it! Summary In this article, we took a hands-on approach to the "how-to" of zooms and turns. When zooms and turns are used correctly, they become powerful tools that greatly enhance your prezi. We also showed you how to create them, how to combine them with each other, and how they are applied to all the elements we can insert onto the canvas. Resources for Article: Further resources on this subject: The Fastest Way to Go from an Idea to a Prezi [article] Using Prezi - The Online Presentation Software Tool [article] Getting Started with Impressive Presentations [article]

Adding and Editing Content in Your Web Pages

Packt
06 Apr 2015
12 min read
This article by Miko Coffey, the author of the book, Building Business Websites with Squarespace 7, delves into the processes of adjusting images, adding content to sidebars or footers, and adding links. (For more resources related to this topic, see here.) Adjusting images in Squarespace We've learned how to adjust the size of images in relation to other elements on the page, but so far, the images themselves have remained intact, showing the full image. However, you can actually crop or zoom images so they only show a portion of the photo on the screen without having to leave the Squarespace interface. You can also apply effects to images using the built-in Aviary Image Editor, such as rotating, enhancing color, boosting contrast, whitening teeth, removing blemishes, or hundreds of other adjustments, which means you don't need fancy image editing software to perform even fairly advanced image adjustments. Cropping and zooming images with LayoutEngine If you only want to crop your image, you don't need to use the Aviary Image Editor: you can crop images using LayoutEngine in the Squarespace Content Editor. To crop an image, you perform the same steps as those to adjust the height of a Spacer Block: just click and drag the dot to change the part of the image that is shown. As you drag the dot up or down, you will notice that: Dragging the dot up will chop off the top and bottom of your image Dragging the dot down will zoom in your image, cutting off the sides and making the image appear larger When dragging the dot very near the original dimensions of your image, you will feel and see the cursor pull/snap to the original size Cropping an image in an Image Block in this manner does not remove parts from the original image; it merely adjusts the part of the image that will be shown in the Image Block on the page. You can always change your mind later. Adjusting the Focal Point of images You'll notice that all of the cropping and zooming of images is based on the center of the image. What if your image has elements near the edges that you want to show instead of weighting things towards the center? With Squarespace, you can influence which part of the image displays by adjusting the Focal Point of the image. The Focal Point identifies the most important part of the image to instruct the system to try to use this point as the basis for cropping or zooming. However, if your Image Block is an extreme shape, such as a long skinny rectangle, it may not be possible to fit all of your desired area into the cropped or zoomed image space. Adjusting the Focal Point can also be useful for Gallery images, as certain templates display images in a square format or other formats that may not match the dimensions of your images. You can also adjust the Focal Point of any Thumbnail Images that you have added in Page Settings to select which part to show as the thumbnail or header banner. To adjust an image's Focal Point, follow these steps: Double-click on the image to open the Edit Image overlay window. Hover your mouse over the image thumbnail, and you will see a translucent circle appear at the center of the thumbnail. This is the Focal Point. Click and drag the circle until it sits on top of the part of the image you want to include, as shown in the following screenshot: Using the Aviary Image Editor You can also use the Aviary Image Editor to crop or zoom into your images as well as many more adjustments that are too numerous to list here. 
It's important to remember that all adjustments carried out in the Aviary Image Editor are permanent: there is no way to go back to a previous version of your image. Therefore, it's better to use LayoutEngine for cropping and zooming and reserve Aviary for other adjustments that you know you want to make permanently, such as rotating a portrait image that was taken sideways to display vertically. Because edits performed in the Aviary Image Editor are permanent, use it with caution and always keep a backup original version of the image on your computer just in case. Here's how to edit an image with the Aviary Image Editor: Double-click on the image to open the Edit Image overlay window. Click on the Edit button below the image thumbnail. This will open the Aviary window, as shown in the following screenshot: Select the type of adjustment you want to perform from the menu at the top. Use the controls to perform the adjustment and click on Apply. The window will show you the effect of the adjustment on the image. Perform any other adjustments in the same manner. You can go back to previous steps using the arrows in the bottom-left section of the editor window. Once you have performed all desired adjustments, click on Save to commit the adjustments to the image permanently. The Aviary window will now close. In the Edit Image window, click on Save to store the Aviary adjustments. The Edit Image window will now close. In the Content Editor window, click on the Save button to refresh the page with the newly edited version of the image. Adding content to sidebars or footers Until this point, all of our content additions and edits have been performed on a single page. However, it's likely that you will want to have certain blocks of content appear on multiple or all pages, such as a copyright notice in your page footer or an About the Author text snippet in the sidebar of all blog posts. You add content to footers or sidebars using the Content Editor in much the same way as adding the page content. However, there are a few restrictions. Certain templates only allow certain types of blocks in footers or sidebars, and some templates have restrictions on positioning elements as well—for example, it's unlikely that you will be able to wrap text around an image in a sidebar due to space limitations. If you are unable to get the system to accept an addition or repositioning move that you are trying to perform to a block in a sidebar or footer, it usually indicates that you are trying to perform something that is prohibited. Adding or editing content in a footer Follow these steps to add or edit content in a footer: In the preview screen, scroll to the bottom of your page and hover over the footer area to activate the Annotations there, and click on the Edit button next to the Footer Content label, as shown in the following screenshot: This will open the Content Editor window. You will notice that Insert Points appear just like before, and you can click on an Insert Point to add a block, or click within an existing Text Block to edit it. You can move blocks in a footer in the same way as those in a page body. Most templates have a footer, but not all of them do. Some templates hide the footer on certain page types, so if you can't see your footer, try looking at a standard page, or double-check whether your template offers one. 
Adding or editing content in a sidebar Not all templates have a sidebar, but if yours does, here's how you can add or edit content in your sidebar: While in the Pages menu, navigate to a page that you know has a sidebar in your template, such as a Blog page. You should see the template's demo content preloaded into your sidebar. Hover your mouse over the sidebar area to activate the Annotations, and click on the Edit button that appears at the top of the sidebar area. Make sure you click on the correct Annotation. Other Annotations may be activated on the page, so don't get confused and click on anything other than the sidebar Annotations, as shown in the following screenshot: Once the Content Editor window opens, you can click on an Insert Point to add a block or click within an existing Text Block to edit it. You can move blocks in a sidebar in the same way as those in a page body. Enabling a sidebar If you do not see the sidebar, but you know that your template allows one, and you know you are on the correct page type (for example, a Blog post), then it's possible that your sidebar is not enabled. Depending on the template, you enable your sidebar in one of two ways. The first method is in the Style Editor, as follows: First, ensure you are looking at a Blog page (or any page that can have a sidebar in your template). From the Home menu in the side panel, navigate to Design | Style Editor. In the Style Editor menu, scroll down until you see a set of controls related to Blog Styles. Find the control for the sidebar, and select the position you want. The following screenshot shows you an example of this: Click on Save to commit your changes. If you don't see the sidebar control in the Style Editor, you may find it in the Blog or Page Settings instead, as described here: First, ensure you are looking at the Blog page (or any page that can have a sidebar in your template), and then click on the Settings button in the Annotations or the cog icon in the Pages menu to open the Settings window. Look for a menu item called Page Layout and select the sidebar position, as shown in the following screenshot: On smaller screens, many templates use fluid layout to stack the sidebar below the main content area instead of showing it on the left- or right-hand side. If you can't see your sidebar and are viewing the website on a tablet or another small/low-resolution screen, scroll down to the bottom of the page and you will most likely see your sidebar content there, just above the footer. Adding links that point to web pages or files The final type of basic content that we'll cover in this article is adding hyperlinks to your pages. You can use these links to point to external websites, other pages on your own website, or files that visitors can either view within the browser or download to their computers. You can assign a link to any word or phrase in a Text Block, or you can assign a link to an image. You can also use a special type of block called a Button to make links really stand out and encourage users to click. When creating links in any of these scenarios, you will be presented with three main options: External: You can paste or type the full web address of the external website you want the link to point to. You can also choose to have this website open in a new window to allow users to keep your site open instead of navigating away entirely. 
The following screenshot shows the External option: Files: You can either upload a file directly, or link to a file that you have already uploaded earlier. This screenshot shows the Files option: Content: You can link to any page, category, or tag that you have created on your site. Linking to a category or tag will display a list of all items that have been labeled with that tag or category. Here's an example of the Content option: Assigning a link to word(s) in a Text Block Follow these steps to assign a link to a word or phrase in a Text Block: Highlight the word(s), and then click on the link icon (two interlocked ovals) in the text editor menu. Select the type of link you want to add, input the necessary settings, and then click anywhere outside the Edit Link window. This will close the Edit Link window, and you will see that the word is now a different color to indicate that the link has been applied. Click on the Save button at the top of the page. You can change or remove the link at any time by clicking on the word. A floating window will appear to show you what the link currently points to, along with the options to edit or remove the link. This is shown in the following screenshot: Assigning a link to an image Here's how you can assign a link to an image: Double-click on the image to open the Edit Image window. Under Clickthrough URL, select the type of link that you want to add and input the necessary settings. Click on Save. Creating a Button on your page You can create a Button on your page by following these steps: In the Content Editor window, find the point where you want to insert the button on the page, and click on the Insert Point to open the Add Block menu. Under the Filters & Lists category, choose Button. Type the text that you want to show on the button, select the type of link, and select the button size and alignment you want. The following screenshot shows the Edit Button window: This is how the button appears on the page: Summary In this article, you have acquired skills you need to add and edit basic web content, and you have learned how to move things around to create finished web pages with your desired page layout. Visit www.square-help.com/inspiration if you'd like to see some examples of different page layouts to give you ideas for making your own pages. Here, you'll see how you can use LayoutEngine to create sophisticated layouts for a range of page types. Resources for Article: Further resources on this subject: Welcoming your Visitors: Creating Attractive Home Pages and Overview Pages [article] Selecting Elements [article] Creating Blog Content in WordPress [article]

Getting Started with Odoo Development

Packt
06 Apr 2015
14 min read
In this article by Daniel Reis, author of the book Odoo Development Essentials, we will see how to get started with Odoo. Odoo is a powerful open source platform for business applications. A suite of closely integrated applications was built on it, covering all business areas from CRM and Sales to Accounting and Stocks. Odoo has a dynamic and growing community around it, constantly adding features, connectors, and additional business apps. Many can be found at Odoo.com. In this article, we will guide you to install Odoo from the source code and create your first Odoo application. Inspired by the todomvc.com project, we will build a simple to-do application. It should allow us to add new tasks, mark them as completed, and finally, clear the task list from all already completed tasks. (For more resources related to this topic, see here.) Installing Odoo from source We will use a Debian/Ubuntu system for our Odoo server, so you will need to have it installed and available to work on. If you don't have one, you might want to set up a virtual machine with a recent version of Ubuntu Server before proceeding. For a development environment, we will install it directly from Odoo's Git repository. This will end up giving more control on versions and updates. We will need to make sure Git is installed. In the terminal, type the following command: $ sudo apt-get update && sudo apt-get upgrade # Update system $ sudo apt-get install git # Install Git To keep things tidy, we will keep all our work in the /odoo-dev directory inside our home directory: $ mkdir ~/odoo-dev # Create a directory to work in $ cd ~/odoo-dev # Go into our work directory Now, we can use this script to show how to install Odoo from source code in a Debian system: $ git clone https://github.com/odoo/odoo.git -b 8.0 --depth=1 $ ./odoo/odoo.py setup_deps # Installs Odoo system dependencies $ ./odoo/odoo.py setup_pg   # Installs PostgreSQL & db superuser Quick start an Odoo instance In Odoo 8.0, we can create a directory and quick start a server instance for it. We can start by creating the directory called todo-app for our instance as shown here: $ mkdir ~/odoo-dev/todo-app $ cd ~/odoo-dev/todo-app Now we can create our todo_minimal module in it and initialize the Odoo instance: $ ~/odoo-dev/odoo/odoo.py scaffold todo_minimal $ ~/odoo-dev/odoo/odoo.py start -i todo_minimal The scaffold command creates a module directory using a predefined template. The start command creates a database with the current directory name and automatically adds it to the addons path so that its modules are available to be installed. Additionally, we used the -i option to also install our todo_minimal module. It will take a moment to initialize the database, and eventually we will see an INFO log message Modules loaded. Then, the server will be ready to listen to client requests. By default, the database is initialized with demonstration data, which is useful for development databases. Open http://<server-name>:8069 in your browser to be presented with the login screen. The default administrator account is admin with the password admin. Whenever you want to stop the Odoo server instance and return to the command line, press CTRL + C. If you are hosting Odoo in a virtual machine, you might need to do some network configuration to be able to use it as a server. The simplest solution is to change the VM network type from NAT to Bridged. Hopefully, this can help you find the appropriate solution in your virtualization software documentation. 
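The scaffold command also generates a manifest file, __openerp__.py, that describes the module to Odoo. The article does not reproduce the generated file, so the following is only a rough sketch of a minimal manifest; the exact keys and values are illustrative assumptions, except for the mail dependency and the data entries, which are discussed later in this article:

# __openerp__.py (minimal sketch, not the exact scaffold output)
{
    'name': 'Minimal To-do Tasks',
    'summary': 'A simple to-do application',
    'version': '1.0',
    'depends': ['mail'],          # provides the Communication top menu used below
    'data': ['templates.xml'],    # data files loaded on install/upgrade
}

Whenever we add new data files, such as the security files covered at the end of this article, they must be listed under the data key so that Odoo loads them when the module is installed or upgraded.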
Creating the application models Now that we have an Odoo instance and a new module to work with, let's start by creating the data model. Models describe business objects, such as an opportunity, a sales order, or a partner (customer, supplier, and so on). A model has data fields and can also define specific business logic. Odoo models are implemented using a Python class derived from a template class. They translate directly to database objects, and Odoo automatically takes care of that when installing or upgrading the module. Let's edit the models.py file in the todo_minimal module directory so that it contains this: # -*- coding: utf-8 -*- from openerp import models, fields, api   class TodoTask(models.Model):    _name = 'todo.task'    name = fields.Char()    is_done = fields.Boolean()    active = fields.Boolean(default=True) Our to-do tasks will have a name title text, a done flag, and an active flag. The active field has a special meaning for Odoo; by default, records with a False value in it won't be visible to the user. We will use it to clear the tasks out of sight without actually deleting them. Upgrading a module For our changes to take effect, the module has to be upgraded. The simplest and fastest way to make all our changes to a module effective is to go to the terminal window where you have Odoo running, stop it (CTRL + C), and then restart it requesting the module upgrade. To start upgrading the server, the todo_minimal module in the todo-app database, use the following command: $ cd ~/odoo-dev/todo-app # we should be in the right directory $ ./odoo.py start -u todo_minimal The -u option performs an upgrade on a given list of modules. In this case, we upgrade just the todo_minimal module. Developing a module is an iterative process. You should make your changes in gradual steps and frequently install them with a module upgrade. Doing so will make it easier to detect mistakes sooner, and narrowing down the culprit in case the error message is not clear enough. And this can be very frequent when starting with Odoo development. Adding menu options Now that we have a model to store our data, let's make it available on the user interface. All we need is to add a menu option to open the to-do task model, so it can be used. This is done using an XML data file. Let's reuse the templates.xml data file and edit it so that it look like this: <openerp> <data>    <act_window id="todo_task_action"                name="To-do Task Action"                    res_model="todo.task" view_mode="tree,form" />        <menuitem id="todo_task_menu"                 name="To-do Tasks"                  action="todo_task_action"                  parent="mail.mail_feeds"                  sequence="20" /> </data> </openerp> Here, we have two records: a menu option and a window action. The Communication top menu to the user interface was added by the mail module dependency. We can know the identifier of the specific menu option where we want to add our own menu option by inspecting that module, it is mail_feeds. Also, our menu option executes the todo_task_action action we created. And that window action opens a tree view for the todo.task model. If we upgrade the module now and try the menu option just added, it will open an automatically generated view for our model, allowing to add and edit records. 
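To have a few records to look at right away, we could also preload demonstration data. This is not part of the book's module, but a hypothetical todo_demo.xml data file, referenced under the demo key of the manifest, might look like this:

<openerp>
    <data>
        <record id="demo_todo_task_1" model="todo.task">
            <field name="name">Install Odoo</field>
        </record>
        <record id="demo_todo_task_2" model="todo.task">
            <field name="name">Write the to-do module</field>
            <field name="is_done" eval="True"/>
        </record>
    </data>
</openerp>

Demonstration data is only loaded into databases created with demo data enabled, which is the default for the development database we initialized earlier.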
Views should be defined for models to be exposed to the users, but Odoo is nice enough to do that automatically if we don't, so we can work with our model right away, without having any form or list views defined yet. So far so good. Let's improve our user interface now. Creating Views Odoo supports several types of views, but the more important ones are list (also known as "tree"), form, and search views. For our simple module, we will just add a list view. Edit the templates.xml file to add the following <record> element just after the <data> opening tag at the top:

    <record id="todo_task_tree" model="ir.ui.view">
        <field name="name">To-do Task Tree</field>
        <field name="model">todo.task</field>
        <field name="arch" type="xml">
            <tree editable="top" colors="gray:is_done==True">
                <field name="name" />
                <field name="is_done" />
            </tree>
        </field>
    </record>

This creates a tree view for the todo.task model with two columns: the title name and the is_done flag. Additionally, it has a color rule to display the tasks done in gray. Adding business logic We want to add business logic to be able to clear the already completed tasks. Our plan is to add an option to the More button, shown at the top of the list when we select lines. We will use a very simple wizard for this, opening a confirmation dialog, where we can execute a method to inactivate the done tasks. Wizards use a special type of model for temporary data: a Transient model. We will now add it to the models.py file as follows:

class TodoTaskClear(models.TransientModel):
    _name = 'todo.task.clear'

    @api.multi
    def do_clear_done(self):
        Task = self.env['todo.task']
        done_recs = Task.search([('is_done', '=', True)])
        done_recs.write({'active': False})
        return True

Transient models work just like regular models, but their data is temporary and will eventually be purged from the database. In this case, we don't need any fields, since no additional input is going to be requested from the user. The model just has a method that will be called when the confirmation button is pressed. It lists all tasks that are done and then sets their active flag to False. Next, we need to add the corresponding user interface. In the templates.xml file, add the following code:

    <record id="todo_task_clear_dialog" model="ir.ui.view">
        <field name="name">To-do Clear Wizard</field>
        <field name="model">todo.task.clear</field>
        <field name="arch" type="xml">
            <form>
                All done tasks will be cleared, even if unselected.<br/>Continue?
                <footer>
                    <button type="object" name="do_clear_done"
                            string="Clear" class="oe_highlight" />
                    or <button special="cancel" string="Cancel"/>
                </footer>
            </form>
        </field>
    </record>

    <!-- More button Action -->
    <act_window id="todo_task_clear_action"
        name="Clear Done"
        src_model="todo.task"
        res_model="todo.task.clear"
        view_mode="form"
        target="new" multi="True" />

The first record defines the form for the dialog window. It has a confirmation text and two buttons in the footer: Clear and Cancel. The Clear button, when pressed, calls the do_clear_done() method defined earlier. The second record is an action that adds the corresponding option to the More button for the to-do tasks model. 
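Note that the to-do task list itself still relies on the form view that Odoo generates automatically. If we wanted to control that layout ourselves, a hand-written form view could be added to templates.xml as well. The following is only a sketch of what such a view might look like; it is not part of the book's module:

    <record id="todo_task_form" model="ir.ui.view">
        <field name="name">To-do Task Form</field>
        <field name="model">todo.task</field>
        <field name="arch" type="xml">
            <form string="To-do Task">
                <group>
                    <field name="name"/>
                    <field name="is_done"/>
                    <field name="active"/>
                </group>
            </form>
        </field>
    </record>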
Configuring security Finally, we need to set the default security configurations for our module. These configurations are usually stored inside the security/ directory. We need to add them to the __openerp__.py manifest file. Change the data attribute to the following: 'data': [  'security/ir.model.access.csv',    'security/todo_access_rules.xml',    'templates.xml'], The access control lists are defined for models and user groups in the ir.model.access.csv file. There is a pre-generated template. Edit it to look like this: id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink access_todo_task_user,To-do Task User Access,model_todo_task,base.group_user,1,1,1,1 This gives full access to all users in the base group named Employees. However, we want each user to see only their own to-do tasks. For that, we need a record rule setting a filter on the records the base group can see. Inside the security/ directory, add a todo_access_rules.xml file to define the record rule: <openerp> <data>    <record id="todo_task_user_rule" model="ir.rule">        <field name="name">ToDo Tasks only for owner</field>        <field name="model_id" ref="model_todo_task"/>        <field name="domain_force">[('create_uid','=',user.id)]              </field>        <field name="groups"                eval="[(4, ref('base.group_user'))]"/>    </record> </data> </openerp> This is all we need to set up the module security. Summary We created a new module from start, covering the most frequently used elements in a module: models, user interface views, business logic in model methods, and access security. In the process, we got familiar with the module development process, involving module upgrades and application server restarts to make the gradual changes effective in Odoo. Resources for Article: Further resources on this subject: Making Goods with Manufacturing Resource Planning [article] Machine Learning in IPython with scikit-learn [article] Administrating Solr [article]

Working with Blender

Packt
06 Apr 2015
15 min read
In this article by Jos Dirksen, author of Learning Three.js – the JavaScript 3D Library for WebGL - Second Edition, we will learn about Blender and also about how to load models in Three.js using different formats. (For more resources related to this topic, see here.) Before we get started with the configuration, we'll show the result that we'll be aiming for. In the following screenshot, you can see a simple Blender model that we exported with the Three.js plugin and imported in Three.js with THREE.JSONLoader: Installing the Three.js exporter in Blender To get Blender to export Three.js models, we first need to add the Three.js exporter to Blender. The following steps are for Mac OS X but are pretty much the same on Windows and Linux. You can download Blender from www.blender.org and follow the platform-specific installation instructions. After installation, you can add the Three.js plugin. First, locate the addons directory from your Blender installation using a terminal window: On my Mac, it's located here: ./blender.app/Contents/MacOS/2.70/scripts/addons. For Windows, this directory can be found at the following location: C:UsersUSERNAMEAppDataRoamingBlender FoundationBlender2.7Xscriptsaddons. And for Linux, you can find this directory here: /home/USERNAME/.config/blender/2.7X/scripts/addons. Next, you need to get the Three.js distribution and unpack it locally. In this distribution, you can find the following folder: utils/exporters/blender/2.65/scripts/addons/. In this directory, there is a single subdirectory with the name io_mesh_threejs. Copy this directory to the addons folder of your Blender installation. Now, all we need to do is start Blender and enable the exporter. In Blender, open Blender User Preferences (File | User Preferences). In the window that opens, select the Addons tab, and in the search box, type three. This will show the following screen: At this point, the Three.js plugin is found, but it is still disabled. Check the small checkbox to the right, and the Three.js exporter will be enabled. As a final check to see whether everything is working correctly, open the File | Export menu option, and you'll see Three.js listed as an export option. This is shown in the following screenshot: With the plugin installed, we can load our first model. Loading and exporting a model from Blender As an example, we've added a simple Blender model named misc_chair01.blend in the assets/models folder, which you can find in the sources for this article. In this section, we'll load this model and show the minimal steps it takes to export this model to Three.js. First, we need to load this model in Blender. Use File | Open and navigate to the folder containing the misc_chair01.blend file. Select this file and click on Open. This will show you a screen that looks somewhat like this: Exporting this model to the Three.js JSON format is pretty straightforward. From the File menu, open Export | Three.js, type in the name of the export file, and select Export Three.js. This will create a JSON file in a format Three.js understands. A part of the contents of this file is shown next: {   "metadata" : {    "formatVersion" : 3.1,    "generatedBy"   : "Blender 2.7 Exporter",    "vertices"     : 208,    "faces"         : 124,    "normals"       : 115,    "colors"       : 0,    "uvs"          : [270,151],    "materials"     : 1,    "morphTargets" : 0,    "bones"         : 0 }, ... However, we aren't completely done. In the previous screenshot, you can see that the chair contains a wooden texture. 
If you look through the JSON export, you can see that the export for the chair also specifies a material, as follows: "materials": [{ "DbgColor": 15658734, "DbgIndex": 0, "DbgName": "misc_chair01", "blending": "NormalBlending", "colorAmbient": [0.53132, 0.25074, 0.147919], "colorDiffuse": [0.53132, 0.25074, 0.147919], "colorSpecular": [0.0, 0.0, 0.0], "depthTest": true, "depthWrite": true, "mapDiffuse": "misc_chair01_col.jpg", "mapDiffuseWrap": ["repeat", "repeat"], "shading": "Lambert", "specularCoef": 50, "transparency": 1.0, "transparent": false, "vertexColors": false }], This material specifies a texture, misc_chair01_col.jpg, for the mapDiffuse property. So, besides exporting the model, we also need to make sure the texture file is also available to Three.js. Luckily, we can save this texture directly from Blender. In Blender, open the UV/Image Editor view. You can select this view from the drop-down menu on the left-hand side of the File menu option. This will replace the top menu with the following: Make sure the texture you want to export is selected, misc_chair_01_col.jpg in our case (you can select a different one using the small image icon). Next, click on the Image menu and use the Save as Image menu option to save the image. Save it in the same folder where you saved the model using the name specified in the JSON export file. At this point, we're ready to load the model into Three.js. The code to load this into Three.js at this point looks like this: var loader = new THREE.JSONLoader(); loader.load('../assets/models/misc_chair01.js', function (geometry, mat) { mesh = new THREE.Mesh(geometry, mat[0]);   mesh.scale.x = 15; mesh.scale.y = 15; mesh.scale.z = 15;   scene.add(mesh);   }, '../assets/models/'); We've already seen JSONLoader before, but this time, we use the load function instead of the parse function. In this function, we specify the URL we want to load (points to the exported JSON file), a callback that is called when the object is loaded, and the location, ../assets/models/, where the texture can be found (relative to the page). This callback takes two parameters: geometry and mat. The geometry parameter contains the model, and the mat parameter contains an array of material objects. We know that there is only one material, so when we create THREE.Mesh, we directly reference that material. If you open the 05-blender-from-json.html example, you can see the chair we just exported from Blender. Using the Three.js exporter isn't the only way of loading models from Blender into Three.js. Three.js understands a number of 3D file formats, and Blender can export in a couple of those formats. Using the Three.js format, however, is very easy, and if things go wrong, they are often quickly found. In the following section, we'll look at a couple of the formats Three.js supports and also show a Blender-based example for the OBJ and MTL file formats. Importing from 3D file formats At the beginning of this article, we listed a number of formats that are supported by Three.js. In this section, we'll quickly walk through a couple of examples for those formats. Note that for all these formats, an additional JavaScript file needs to be included. You can find all these files in the Three.js distribution in the examples/js/loaders directory. The OBJ and MTL formats OBJ and MTL are companion formats and often used together. The OBJ file defines the geometry, and the MTL file defines the materials that are used. Both OBJ and MTL are text-based formats. 
A part of an OBJ file looks like this: v -0.032442 0.010796 0.025935 v -0.028519 0.013697 0.026201 v -0.029086 0.014533 0.021409 usemtl Material s 1 f 2731 2735 2736 2732 f 2732 2736 3043 3044 The MTL file defines materials like this: newmtl Material Ns 56.862745 Ka 0.000000 0.000000 0.000000 Kd 0.360725 0.227524 0.127497 Ks 0.010000 0.010000 0.010000 Ni 1.000000 d 1.000000 illum 2 The OBJ and MTL formats by Three.js are understood well and are also supported by Blender. So, as an alternative, you could choose to export models from Blender in the OBJ/MTL format instead of the Three.js JSON format. Three.js has two different loaders you can use. If you only want to load the geometry, you can use OBJLoader. We used this loader for our example (06-load-obj.html). The following screenshot shows this example: To import this in Three.js, you have to add the OBJLoader JavaScript file: <script type="text/javascript" src="../libs/OBJLoader.js"> </script> Import the model like this: var loader = new THREE.OBJLoader(); loader.load('../assets/models/pinecone.obj', function (loadedMesh) { var material = new THREE.MeshLambertMaterial({color: 0x5C3A21});   // loadedMesh is a group of meshes. For // each mesh set the material, and compute the information // three.js needs for rendering. loadedMesh.children.forEach(function (child) {    child.material = material;    child.geometry.computeFaceNormals();    child.geometry.computeVertexNormals(); });   mesh = loadedMesh; loadedMesh.scale.set(100, 100, 100); loadedMesh.rotation.x = -0.3; scene.add(loadedMesh); }); In this code, we use OBJLoader to load the model from a URL. Once the model is loaded, the callback we provide is called, and we add the model to the scene. Usually, a good first step is to print out the response from the callback to the console to understand how the loaded object is built up. Often with these loaders, the geometry or mesh is returned as a hierarchy of groups. Understanding this makes it much easier to place and apply the correct material and take any other additional steps. Also, look at the position of a couple of vertices to determine whether you need to scale the model up or down and where to position the camera. In this example, we've also made the calls to computeFaceNormals and computeVertexNormals. This is required to ensure that the material used (THREE.MeshLambertMaterial) is rendered correctly. The next example (07-load-obj-mtl.html) uses OBJMTLLoader to load a model and directly assign a material. 
The following screenshot shows this example: First, we need to add the correct loaders to the page: <script type="text/javascript" src="../libs/OBJLoader.js"> </script> <script type="text/javascript" src="../libs/MTLLoader.js"> </script> <script type="text/javascript" src="../libs/OBJMTLLoader.js"> </script> We can load the model from the OBJ and MTL files like this: var loader = new THREE.OBJMTLLoader(); loader.load('../assets/models/butterfly.obj', '../assets/ models/butterfly.mtl', function(object) { // configure the wings var wing2 = object.children[5].children[0]; var wing1 = object.children[4].children[0];   wing1.material.opacity = 0.6; wing1.material.transparent = true; wing1.material.depthTest = false; wing1.material.side = THREE.DoubleSide;   wing2.material.opacity = 0.6; wing2.material.depthTest = false; wing2.material.transparent = true; wing2.material.side = THREE.DoubleSide;   object.scale.set(140, 140, 140); mesh = object; scene.add(mesh);   mesh.rotation.x = 0.2; mesh.rotation.y = -1.3; }); The first thing to mention before we look at the code is that if you receive an OBJ file, an MTL file, and the required texture files, you'll have to check how the MTL file references the textures. These should be referenced relative to the MTL file and not as an absolute path. The code itself isn't that different from the one we saw for THREE.ObjLoader. We specify the location of the OBJ file, the location of the MTL file, and the function to call when the model is loaded. The model we've used as an example in this case is a complex model. So, we set some specific properties in the callback to fix some rendering issues, as follows: The opacity in the source files was set incorrectly, which caused the wings to be invisible. So, to fix that, we set the opacity and transparent properties ourselves. By default, Three.js only renders one side of an object. Since we look at the wings from two sides, we need to set the side property to the THREE.DoubleSide value. The wings caused some unwanted artifacts when they needed to be rendered on top of each other. We've fixed that by setting the depthTest property to false. This has a slight impact on performance but can often solve some strange rendering artifacts. But, as you can see, you can easily load complex models directly into Three.js and render them in real time in your browser. You might need to fine-tune some material properties though. Loading a Collada model Collada models (extension is .dae) are another very common format for defining scenes and models (and animations as well). In a Collada model, it is not just the geometry that is defined, but also the materials. It's even possible to define light sources. To load Collada models, you have to take pretty much the same steps as for the OBJ and MTL models. You start by including the correct loader: <script type="text/javascript" src="../libs/ColladaLoader.js"> </script> For this example, we'll load the following model: Loading a truck model is once again pretty simple: var mesh; loader.load("../assets/models/dae/Truck_dae.dae", function   (result) { mesh = result.scene.children[0].children[0].clone(); mesh.scale.set(4, 4, 4); scene.add(mesh); }); The main difference here is the result of the object that is returned to the callback. The result object has the following structure: var result = {   scene: scene, morphs: morphs, skins: skins, animations: animData, dae: {    ... } }; In this article, we're interested in the objects that are in the scene parameter. 
I first printed out the scene to the console to look where the mesh was that I was interested in, which was result.scene.children[0].children[0]. All that was left to do was scale it to a reasonable size and add it to the scene. A final note on this specific example—when I loaded this model for the first time, the materials didn't render correctly. The reason was that the textures used the .tga format, which isn't supported in WebGL. To fix this, I had to convert the .tga files to .png and edit the XML of the .dae model to point to these .png files. As you can see, for most complex models, including materials, you often have to take some additional steps to get the desired results. By looking closely at how the materials are configured (using console.log()) or replacing them with test materials, problems are often easy to spot. Loading the STL, CTM, VTK, AWD, Assimp, VRML, and Babylon models We're going to quickly skim over these file formats as they all follow the same principles: Include [NameOfFormat]Loader.js in your web page. Use [NameOfFormat]Loader.load() to load a URL. Check what the response format for the callback looks like and render the result. We have included an example for all these formats: Name Example Screenshot STL 08-load-STL.html CTM 09-load-CTM.html VTK 10-load-vtk.html AWD 11-load-awd.html Assimp 12-load-assimp.html VRML 13-load-vrml.html Babylon The Babylon loader is slightly different from the other loaders in this table. With this loader, you don't load a single THREE.Mesh or THREE.Geometry instance, but with this loader, you load a complete scene, including lights.   14-load-babylon.html If you look at the source code for these examples, you might see that for some of them, we need to change some material properties or do some scaling before the model is rendered correctly. The reason we need to do this is because of the way the model is created in its external application, giving it different dimensions and grouping than we normally use in Three.js. Summary In this article, we've almost shown all the supported file formats. Using models from external sources isn't that hard to do in Three.js. Especially for simple models, you only have to take a few simple steps. When working with external models, or creating them using grouping and merging, it is good to keep a couple of things in mind. The first thing you need to remember is that when you group objects, they still remain available as individual objects. Transformations applied to the parent also affect the children, but you can still transform the children individually. Besides grouping, you can also merge geometries together. With this approach, you lose the individual geometries and get a single new geometry. This is especially useful when you're dealing with thousands of geometries you need to render and you're running into performance issues. Three.js supports a large number of external formats. When using these format loaders, it's a good idea to look through the source code and log out the information received in the callback. This will help you to understand the steps you need to take to get the correct mesh and set it to the correct position and scale. Often, when the model doesn't show correctly, this is caused by its material settings. It could be that incompatible texture formats are used, opacity is incorrectly defined, or the format contains incorrect links to the texture images. 
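As a concrete illustration of the shared pattern just described (include the loader script, call load(), inspect the callback result), the STL case might look roughly like the following sketch. It assumes STLLoader.js has been included like the other loaders; the file path, material, and scale values are placeholders rather than code from the book's examples:

// minimal sketch of the generic loader pattern, shown here for STL
var loader = new THREE.STLLoader();
loader.load('../assets/models/some_model.stl', function (geometry) {
    // STLLoader hands back a geometry; we supply our own material
    var material = new THREE.MeshLambertMaterial({color: 0xaaaaaa});
    var mesh = new THREE.Mesh(geometry, material);
    // scale and position depend on how the model was exported
    mesh.scale.set(0.5, 0.5, 0.5);
    scene.add(mesh);
});

Logging the callback argument to the console, as suggested above, is the quickest way to see whether a given loader returns a geometry, a mesh, or a whole scene.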
It is usually a good idea to use a test material to determine whether the model itself is loaded correctly and log the loaded material to the JavaScript console to check for unexpected values. It is also possible to export meshes and scenes, but remember that GeometryExporter, SceneExporter, and SceneLoader of Three.js are still work in progress. Resources for Article: Further resources on this subject: Creating the maze and animating the cube [article] Mesh animation [article] Working with the Basic Components That Make Up a Three.js Scene [article]

Security in Microsoft Azure

Packt
06 Apr 2015
9 min read
In this article, we highlight some security points of interest, according to the ones explained in the book Microsoft Azure Security, by Roberto Freato. Microsoft Azure is a comprehensive set of services, which enable Cloud computing solutions for enterprises and small businesses. It supports a variety of tools and languages, providing users with building blocks that can be composed as needed. Azure is actually one of the biggest players in the Cloud computing market, solving scalability issues, speeding up the entire management process, and integrating with the existing development tool ecosystem. (For more resources related to this topic, see here.) Standards and Azure It is probably well known that the most widely accepted principles of IT security are confidentiality, integrity, and availability. Despite many security experts defining even more indicators/principles related to IT security, most security controls are focused on these principles, since the vulnerabilities are often expressed as a breach of one (or many) of these three. These three principles are also known as the CIA triangle: Confidentiality: It is about disclosure. A breach of confidentiality means that somewhere, some critical and confidential information has been disclosed unexpectedly. Integrity: It is about the state of information. A breach of integrity means that information has been corrupted or, alternatively, the meaning of the information has been altered unexpectedly. Availability: It is about interruption. A breach of availability means that information access is denied unexpectedly. Ensuring confidentiality, integrity, and availability means that information flows are always monitored and the necessary controls are enforced. To conclude, this is the purpose of a Security Management System, which, when talking about IT, becomes Information Security Management System (ISMS). The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) often work together to build international standards for specific technical fields. They released the ISO/IEC 27000 series to provide a family of standards for ISMS, starting from definitions (ISO/IEC 27000), up to governance (ISO/IEC 27014), and even more. Two standards of particular interests are the ISO/IEC 27001 and the ISO/IEC 27002. Microsoft manages the Azure infrastructure, At the most, users can manage the operating system inside a Virtual Machine (VM), but they do not need to administer, edit, or influence the under-the-hood infrastructure. They should not be able to do this at all. Therefore, Azure is a shared environment. This means that a customer's VM can run on the physical server of another customer and, for any given Azure service, two customers can even share the same VM (in some Platform as a Service (PaaS) and Software as a Service (SaaS) scenarios). The Microsoft Azure Trust Center (http://azure.microsoft.com/en-us/support/trust-center/) highlights the attention given to the Cloud infrastructure, in terms of what Microsoft does to enforce security, privacy, and compliance. Identity and Access Management It is very common that different people within the same organization would access and use the same Azure resources. In this case, a few scenarios arise: with the current portal, we can add several co-administrators; with the Preview portal, we can define fine-grained ACLs with the Role-Based Access Control (RBAC) features it implements. 
By default, we can add external users into the Azure Active Directory (AD), by inviting them through their e-mail address, which must be either a Microsoft account or an Azure AD account. In the Preview portal, the following hierarchy is, as follows: Subscription: It is the permission given at the subscription level, which is valid for each object within the subscription (that is, a Reader subscription can view everything within the subscription). Resource group: This is a fairly new concept of Azure. A resource group is (as the name suggests) a group or resources logically connected, as the collection of resources used for the same web project (a web hosting plan, an SQL server, and so on). Permission given at this level is valid for each object within the resource group. Individual resource: It is the permission given to an individual resource, and is valid only for that resource (that is, giving read-only access to a client to view the Application Insights of a website). Despite it resembles from its name, Azure AD is just an Identity and Access Management (IAM) service, managed and hosted by Microsoft in Azure. We should not even try to make a comparison, because they have different scopes and features. It is true that we can link Azure AD with an on-premise AD, but only for the purpose of extending its functionalities to work with Internet-based applications. Azure AD can be considered a SaaS for IAM before its relationship with Azure Services. A company that is offers its SaaS solution to clients, can also use Azure AD as the Identity Provider, relying on the several existing users of Office 365 (which relies on Azure AD for authentication) or Azure AD itself. Access Control Service (ACS) has been famous for a while for its capability to act as an identity bridge between applications and social identities. In the last few years, if developers wanted to integrate Facebook, Google, Yahoo, and Microsoft accounts (Live ID), they would have probably used ACS. Using Platform as a Service Although there are several ways to host custom code on Azure, the two most important building blocks are Websites and Cloud services. The first is actually a PaaS built on top of the second (a PaaS too), and uses an open source engine named Project Kudu (https://github.com/projectkudu/kudu). Kudu is an open source engine, which works with IIS and manages automatic or manual deployments of Azure Websites in a sandboxed environment. Kudu can also run outside Azure, but it is primarily supported to enable Website services. An Azure Cloud service is a container of roles: a role is the representation of a unit of execution and it can be a worker role (an arbitrary application) or a web role (an IIS application). Each role within a Cloud service can be deployed to several VMs (instances) at the same time, to provide scalability and load-balancing. From the security perspective, we need to pay attention to these aspects: Remote endpoints Remote desktops Startup tasks Microsoft Antimalware Network communication Azure Websites are some of the most advanced PaaS in the Cloud computing market, providing users with a lock-in free solution to run applications built in various languages/platforms. 
From the security perspective, we need to pay attention to these aspects: Credentials Connection modes Settings and connection strings Backups Extensions Azure services have grown much faster (with regard to the number of services and the surface area) than in the past, at an amazing and ever-increasing rate: consequently, we have several options to store any kind of data (relational, NoSQL, binary, JSON, and so on). Azure Storage is the base service for almost everything on the platform. Storage security is implemented in two different ways: Account Keys Shared Access Signatures While looking at the specifications of many Azure services, we often see the scalability targets section. For a given service, Azure provides users with a set of upper limits, in terms of capacity, bandwidth, and throughput, to let them design their Cloud solutions better. Working with SQL Database is straightforward. However, a few security best practices must be implemented to improve security: Setting up firewall rules Setting up users and roles Connection settings Modern software architectures often rely on an in-memory caching system to save frequently accessed data that do not change too often. Some extreme scenarios require us to use an in-memory cache as the primary data store for sensitive data, pursuing design patterns oriented to eventual persistence and consistency. Azure Managed Cache is the evolution of the former AppFabric Cache for Windows servers, and it is a managed in-memory cache service. Redis is an open source, high-performance data store written in ANSI C: since its name stands for Remote Dictionary Server, it is a key-value data store with optional durability. Azure Key Vault is a new and promising service that is used to store cryptographic keys and application secrets. There is an official library to operate against Key Vault from .NET, using Azure AD authentication to get secrets or use the keys. Before using it, it is necessary to set appropriate permissions on the Key Vault for external access, using the Set-AzureKeyVaultAccessPolicy command. Using Infrastructure as a Service Customers choosing Infrastructure as a Service (IaaS) usually have existing project constraints that do not adapt well to PaaS. We can think about a complex installation of an enterprise-level software suite, such as an ERP or a SharePoint farm. This is one of the cases where a service such as an Azure Website probably cannot fit. The two main services where the security requirements should be correctly understood and addressed are: Azure Virtual Machines Azure Virtual Networks VMs are the most configurable execution environments for applications that Azure provides. With VMs, we can run arbitrary workloads and run custom tools and applications, but we need to manage and maintain them directly, including their security. From the security perspective, we need to pay attention to these aspects: VM creation Endpoints and ACLs Networking and isolation Microsoft Antimalware Operating system firewalls Auditing and best practices Azure Backup helps protect servers or clients against data loss, providing a backup solution in a secondary location. While performing a backup, in fact, one of the primary requirements is the location of the backup: avoid backing up sensitive or critical data to a physical location that is strictly connected to the primary source of the data itself. If a disaster involves the facility where the source is located, there is a higher probability of losing the data (including the backup).
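Returning to Azure Key Vault, the access policy mentioned above can be granted with the Azure PowerShell module. The following is only a minimal sketch: the vault name and the service principal of the client application are invented for the example, and the permission lists should be adapted to what your application actually needs.
Set-AzureKeyVaultAccessPolicy -VaultName "ContosoVault" -ServicePrincipalName "https://myapp.contoso.com" -PermissionsToKeys decrypt,sign -PermissionsToSecrets get
Only after a policy of this kind is in place can the .NET client library retrieve secrets or use the keys on behalf of the application.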
Summary In this article, we covered the various security-related aspects of Microsoft Azure services, such as PaaS, IaaS, and IAM. Resources for Article: Further resources on this subject: Web API and Client Integration [article] Setting up of Software Infrastructure on the Cloud [article] Windows Phone 8 Applications [article]

High Availability, Protection, and Recovery using Microsoft Azure

Packt
02 Apr 2015
23 min read
Microsoft Azure can be used to protect your on-premise assets such as virtual machines, applications, and data. In this article by Marcel van den Berg, the author of Managing Microsoft Hybrid Clouds, you will learn how to use Microsoft Azure to store backup data, replicate data, and even for orchestration of a failover and failback of a complete data center. We will focus on the following topics: High Availability in Microsoft Azure Introduction to geo-replication Disaster recovery using Azure Site Recovery (For more resources related to this topic, see here.) High availability in Microsoft Azure One of the most important limitations of Microsoft Azure is the lack of an SLA for single-instance virtual machines. If a virtual machine is not part of an availability set, that instance is not covered by any kind of SLA. The reason for this is that when Microsoft needs to perform maintenance on Azure hosts, in many cases, a reboot is required. Reboot means the virtual machines on that host will be unavailable for a while. So, in order to accomplish High Availability for your application, you should have at least two instances of the application running at any point in time. Microsoft is working on some sort of hot patching which enables virtual machines to remain active on hosts being patched. Details are not available at the moment of writing. High Availability is a crucial feature that must be an integral part of an architectural design, rather than something that can be "bolted on" to an application afterwards. Designing for High Availability involves leveraging both the development platform as well as available infrastructure in order to ensure an application's responsiveness and overall reliability. The Microsoft Azure Cloud platform offers software developers PaaS extensibility features and network administrators IaaS computing resources that enable availability to be built into an application's design from the beginning. The good news is that organizations with mission-critical applications can now leverage core features within the Microsoft Azure platform in order to deploy highly available, scalable, and fault-tolerant cloud services that have been shown to be more cost-effective than traditional approaches that leverage on-premises systems. Microsoft Failover Clustering support Windows Server Failover Clustering (WSFC) is not supported on Azure. However, Microsoft does support SQL Server AlwaysOn Availability Groups. For AlwaysOn Availability Groups, there is currently no support for availability group listeners in Azure. Also, you must work around a DHCP limitation in Azure when creating WSFC clusters in Azure. After you create a WSFC cluster using two Azure virtual machines, the cluster name cannot start because it cannot acquire a unique virtual IP address from the DHCP service. Instead, the IP address assigned to the cluster name is a duplicate address of one of the nodes. This has a cascading effect that ultimately causes the cluster quorum to fail, because the nodes cannot properly connect to one another. So if your application uses Failover Clustering, it is likely that you will not move it over to Azure. It might run, but Microsoft will not assist you when you encounter issues. Load balancing Besides clustering, we can also create highly available nodes using load balancing. Load balancing is useful for stateless servers. These are servers that are identical to each other and do not have a unique configuration or data. 
When two or more virtual machines deliver the same application logic, you will need a mechanism that is able to redirect network traffic to those virtual machines. The Azure load balancer does exactly this: it analyzes incoming network traffic, determines the type of traffic, and reroutes it to a service. Note that the Windows Network Load Balancing (NLB) feature in Windows Server is not supported on Microsoft Azure. The Azure load balancer is provided as a cloud service. In fact, this cloud service is running on virtual appliances managed by Microsoft. These are completely software-defined. The moment an administrator adds an endpoint, a set of load balancers is instructed to pass incoming network traffic on a certain port to a port on a virtual machine. If a load balancer fails, another one will take over. Azure load balancing is performed at layer 4 of the OSI model. This means the load balancer is not aware of the application content of the network packets; it just distributes packets based on network ports. To load balance over multiple virtual machines, you can create a load-balanced set by performing the following steps: In Azure Management Portal, select the virtual machine whose service should be load balanced. Select Endpoints in the upper menu. Click on Add. Select Add a stand-alone endpoint and click on the right arrow. Enter a name, select a protocol, and set the public and private ports. Enable create a load-balanced set and click on the right arrow. Next, fill in a name for the load-balanced set. Fill in the probe port, the probe interval, and the number of probes. This information is used by the load balancer to check whether the service is available. It will connect to the probe port at the specified interval. If the specified number of probes all fail to connect, the load balancer will no longer distribute traffic to this virtual machine. Click on the check mark. The load balancing mechanism available is based on a hash. Microsoft Azure Load Balancer uses a five tuple (source IP, source port, destination IP, destination port, and protocol type) to calculate the hash that is used to map traffic to the available servers. A second load balancing mode was introduced in October 2014. It is called Source IP Affinity (also known as session affinity or client IP affinity). When using Source IP Affinity, connections initiated from the same client computer go to the same DIP endpoint. These load balancers provide high availability inside a single data center. If a virtual machine that is part of a cluster of instances fails, the load balancer will notice this and remove that virtual machine's IP address from its table. However, load balancers will not protect against the failure of a complete data center. The domains that are used to direct clients to an application will route to a particular virtual IP that is bound to an Azure data center. To access an application even if an Azure region has failed, you can use Azure Traffic Manager. This service can be used for several purposes: To fail over to a different Azure region if a disaster occurs To provide the best user experience by directing network traffic to the Azure region closest to the location of the user To reroute traffic to another Azure region whenever there's any planned maintenance The main task of Traffic Manager is to map a DNS query to an IP address that is the access point of a service. This job can be compared, for example, to that of someone working at the X-ray machines at an airport.
I'm guessing that you have all seen those multiple rows of X-ray machines. Each queue at an X-ray machine is different at any moment. An officer standing at the entry of the area distributes people over the available X-ray machines such that all queues remain equal in length. Traffic Manager provides you with a choice of load-balancing methods, including performance, failover, and round-robin. Performance load balancing measures the latency between the client and the cloud service endpoint. Traffic Manager is not aware of the actual load on the virtual machines servicing applications. As Traffic Manager resolves endpoints of Azure cloud services only, it cannot be used for load balancing between an Azure region and a non-Azure region (for example, Amazon EC2) or between on-premises and Azure services. It will perform health checks on a regular basis. This is done by querying the endpoints of the services. If an endpoint does not respond, Traffic Manager will stop distributing network traffic to that endpoint for as long as the state of the endpoint is unavailable. Traffic Manager is available in all Azure regions. Microsoft charges for using this service based on the number of DNS queries that are received by Traffic Manager. As the service is attached to an Azure subscription, you will be required to contact Azure support to transfer Traffic Manager to a different subscription. The differences between Azure's built-in load balancer and Traffic Manager can be summarized as follows: the load balancer's distribution targets must reside in the same region, while Traffic Manager can distribute across regions; the load balancer balances traffic based on the 5 tuple hash and Source IP Affinity, while Traffic Manager offers the performance, failover, and round-robin methods; and the load balancer works at OSI layer 4 on TCP/UDP ports, while Traffic Manager works at the level of DNS queries. Third-party load balancers In certain configurations, the default Azure load balancer might not be sufficient. There are several vendors supporting or starting to support Azure. One of them is Kemp Technologies. Kemp Technologies offers a free load balancer for Microsoft Azure. The Virtual LoadMaster (VLM) provides layer 7 application delivery. The virtual appliance has some limitations compared to the commercially available unit. The maximum bandwidth is limited to 100 Mbps and High Availability is not offered. This means the Kemp LoadMaster for Azure free edition is a single point of failure. Also, the number of SSL transactions per second is limited. One of the use cases in which a third-party load balancer is required is when we use Microsoft Remote Desktop Gateway. As you might know, Citrix has been supporting the use of Citrix XenApp and Citrix XenDesktop running on Azure since 2013. This means service providers can offer cloud-based desktops and applications using these Citrix solutions. To make this a working configuration, session affinity is required. Session affinity makes sure that network traffic is always routed over the same server. Windows Server 2012 Remote Desktop Gateway uses two HTTP channels, one for input and one for output, which must be routed over the same Remote Desktop Gateway. The Azure load balancer is only able to do round-robin load balancing, which does not guarantee that both channels use the same server. However, hardware and software load balancers that support IP affinity, cookie-based affinity, or SSL ID-based affinity (and thus ensure that both HTTP connections are routed to the same server) can be used with Remote Desktop Gateway. Another use case is load balancing of Active Directory Federation Services (ADFS).
Microsoft Azure can be used as a backup for on-premises Active Directory (AD). Suppose your organization is using Office 365. To provide single sign-on, a federation has been set up between the Office 365 directory and your on-premises AD. If your on-premises ADFS fails, external users would not be able to authenticate. By using Microsoft Azure for ADFS, you can provide high availability for authentication. Kemp LoadMaster for Azure can be used to properly load balance network traffic to ADFS. To install Kemp LoadMaster, perform the following steps: Download the Publish Profile settings file from https://windows.azure.com/download/publishprofile.aspx. Use PowerShell for Azure with the Import-AzurePublishSettingsFile command. Upload the KEMP-supplied VHD file to your Microsoft Azure storage account. Publish the VHD as an image. The VHD will be available as an image. The image can be used to create virtual machines. The complete steps are described in the documentation provided by Kemp. Geo-replication of data Microsoft Azure has geo-replication of Azure Storage enabled by default. This means all of your data is not only stored at three different locations in the primary region, but also replicated and stored at three different locations at the paired region. However, this data cannot be accessed by the customer. Microsoft has to declare a data center or storage stamp as lost before Microsoft will fail over to the secondary location. In the rare circumstance where a failed storage stamp cannot be recovered, you will experience many hours of downtime. So, you have to make sure you have your own disaster recovery procedures in place. Zone Redundant Storage Microsoft offers a third option you can use to store data. Zone Redundant Storage (ZRS) is a mix of two options for data redundancy and allows data to be replicated to a secondary data center / facility located in the same region or to a paired region. Instead of storing six copies of data like geo-replicated storage does, only three copies of data are stored. So, ZRS is a mix of locally redundant storage and geo-replicated storage. The cost for ZRS is about 66 percent of the cost for GRS. Snapshots of the Microsoft Azure disk Server virtualization solutions such as Hyper-V and VMware vSphere offer the ability to save the state of a running virtual machine. This can be useful when you're making changes to the virtual machine but want to have the ability to reverse those changes if something goes wrong. This feature is called a snapshot. Basically, a virtual disk is saved by marking it as read-only. All writes to the disk after a snapshot has been initiated are stored on a temporary virtual disk. When a snapshot is deleted, those changes are committed from the delta disk to the initial disk. While the Microsoft Azure Management Portal does not have a feature to create snapshots, there is an ability to make point-in-time copies of virtual disks attached to virtual machines. Microsoft Azure Storage has a versioning capability. Under the hood, this works differently from snapshots in Hyper-V: it creates a snapshot blob of the base blob. Snapshots are by no means a replacement for a backup, but it is nice to know that you can save the state and quickly revert if required. Introduction to geo-replication By default, Microsoft replicates all data stored on Microsoft Azure Storage to the secondary location located in the paired region. Customers are able to enable or disable the replication.
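As an example, replication can be switched on or off for a storage account with the Azure PowerShell module; the following one-liner is only a sketch (the account name is invented, and newer versions of the module expose the same choice through a -Type parameter instead):
Set-AzureStorageAccount -StorageAccountName "mystorageaccount" -GeoReplicationEnabled $true
Passing $false instead turns the account back into locally redundant storage.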
When enabled, customers are charged. When Geo Redundant Storage has been enabled on a storage account, all data is asynchronously replicated. At the secondary location, data is stored on three different storage nodes. So even when two nodes fail, the data is still accessible. However, before the read access Geo-Redundant feature was available, customers had no way to actually access replicated data. The replicated data could only be used by Microsoft when the primary storage could not be recovered again. Microsoft will try everything to restore data in the primary location and avoid a so-called geo-failover process. A geo-failover process means that a storage account's secondary location (the replicated data) will be configured as the new primary location. The problem is that a geo-failover process cannot be done per storage account, but needs to be done at the storage stamp level. A storage stamp has multiple racks of storage nodes. You can imagine how much data and how many customers are involved when a storage stamp needs to fail over. Failover will have an effect on the availability of applications. Also, because of the asynchronous replication, some data will be lost when a failover is performed. Microsoft is working on an API that allows customers to fail over a storage account themselves. When geo-redundant replication is enabled, you will only benefit from it when Microsoft has a major issue. Geo-redundant storage is not a replacement for a backup, nor for a disaster recovery solution. Microsoft states that the Recovery Point Objective (RPO) for Geo Redundant Storage will be about 15 minutes. That means if a failover is required, customers can lose about 15 minutes of data. Microsoft does not provide an SLA on how long geo-replication will take. Microsoft does not give an indication for the Recovery Time Objective (RTO). The RTO indicates the time required by Microsoft to make data available again after a major failure that requires a failover. Microsoft once had to deal with a failure of storage stamps. They did not perform a failover, but it took many hours to restore the storage service to a normal level. In 2013, Microsoft introduced a new feature called Read Access Geo Redundant Storage (RA-GRS). This feature allows customers to perform reads on the replicated data. This increases the read availability from 99.9 percent when GRS is used to above 99.99 percent when RA-GRS is enabled. Microsoft charges more when RA-GRS is enabled. RA-GRS is an interesting addition for applications that are primarily meant for read-only purposes. When the primary location is not available and Microsoft has not done a failover, writes are not possible. The availability of the Azure Virtual Machine service is not increased by enabling RA-GRS. While the VHD data is replicated and can be read, the virtual machine itself is not replicated. Perhaps this will be a feature in the future. Disaster recovery using Azure Site Recovery Disaster recovery has always been one of the top priorities for organizations. IT has become a very important, if not mission-critical, factor for doing business. A failure of IT could result in loss of money, customers, orders, and brand value.
There are many situations that can disrupt IT, such as: Hurricanes Floods Earthquakes Disasters such as a failure of a nuclear power plant Fire Human error Outbreak of a virus Hardware or software failure While these threats are clear and the risk of being hit by such a threat can be calculated, many organizations do not have proper protection against those threats. In three different situations, disaster recovery solutions can help an organization to continue doing business: Avoiding a possible failure of IT infrastructure by moving servers to a different location. Avoiding a disaster situation, such as hurricanes or floods, since such situations are generally well known in advance due to weather forecasting capabilities. Recovering as quickly as possible when a disaster has hit the data center. Disaster recovery is done when a disaster unexpectedly hits the data center, such as a fire, hardware error, or human error. Some reasons for not having a proper disaster recovery plan are complexity, lack of time, and ignorance; however, in most cases, a lack of budget and the belief that disaster recovery is expensive are the main reasons. Almost all organizations that have been hit by a major disaster causing unacceptable periods of downtime started to implement a disaster recovery plan, including technology, immediately after they recovered. However, in many cases, this insight came too late. According to Gartner, 43 percent of companies experiencing disasters never reopen and 29 percent close within 2 years. Server virtualization has made disaster recovery a lot easier and more cost-effective. Verifying that your DR procedure actually works as designed and matches RTO and RPO is much easier using virtual machines. Since Windows Server 2012, Hyper-V has a feature for asynchronous replication of virtual machine virtual disks to another location. This feature, Hyper-V Replica, is very easy to enable and configure. It does not cost extra. Hyper-V Replica is storage agnostic, which means the storage type at the primary site can be different from the storage type used at the secondary site. So, Hyper-V Replica works perfectly when your virtual machines are hosted on, for example, EMC storage while an HP solution is used at the secondary site. While replication is a must for DR, another very useful feature in DR is automation. As an administrator, you really appreciate the option to click on a button after deciding to perform a failover and sit back and relax. Recovery is mostly a stressful job when your primary location is flooded or burned, and lots of things can go wrong if recovery is done manually. This is why Microsoft designed Azure Site Recovery. Azure Site Recovery is able to assist in disaster recovery in several scenarios: A customer has two data centers both running Hyper-V managed by System Center Virtual Machine Manager. Hyper-V Replica is used to replicate data at the virtual machine level. A customer has two data centers both running Hyper-V managed by System Center Virtual Machine Manager. NetApp storage is used to replicate between two sites at the storage level. A customer has a single data center running Hyper-V managed by System Center Virtual Machine Manager. A customer has two data centers both running VMware vSphere. In this case, InMage Scout software is used to replicate between the two data centers. Azure is not used for orchestration. A customer has a single data center not managed by System Center Virtual Machine Manager.
In the second scenario, Microsoft Azure is used as a secondary data center if a disaster makes the primary data center unavailable. Microsoft also announced support for a scenario where vSphere is used on-premises and Azure Site Recovery can be used to replicate data to Azure. To enable this, InMage software will be used. Details were not available at the time this article was written. In the first two scenarios described, Site Recovery is used to orchestrate the failover and failback to the secondary location. The management is done using the Azure Management Portal. This is available using any browser supporting HTML5, so a failover can be initiated even from a tablet or smartphone. Using Azure as a secondary data center for disaster recovery Azure Site Recovery went into preview in June 2014. For organizations using Hyper-V, there is no direct need to have a secondary data center as Azure can be used as a target for Hyper-V Replica. Some of the characteristics of the service are as follows: Allows nondisruptive disaster recovery failover testing Automated reconfiguration of the network configuration of guests Storage agnostic: supports any type of on-premises storage supported by Hyper-V Support for VSS to enable application consistency Protects more than 1,000 virtual machines (Microsoft tested with 2,000 virtual machines and this went well) To be able to use Site Recovery, customers do not have to use System Center Virtual Machine Manager; Site Recovery can also be used without it installed. When SCVMM is used, Site Recovery will use information such as the virtual networks provided by SCVMM to map networks available in Microsoft Azure. Site Recovery does not support the ability to send a copy of the virtual hard disks on removable media to an Azure data center to avoid performing the initial replication over the WAN (seeding). Customers will need to transfer all the replication data over the network. ExpressRoute will help to get a much better throughput compared to a site-to-site VPN over the Internet. Failover to Azure can be as simple as clicking on a single button. Site Recovery will then create new virtual machines in Azure and start the virtual machines in the order defined in the recovery plan. A recovery plan is a workflow that defines the startup sequence of virtual machines. It is possible to stop the recovery plan to allow a manual check, for example. If all is okay, the recovery plan will continue doing its job. Multiple recovery plans can be created. Microsoft Volume Shadow Copy Services (VSS) is supported. This allows application consistency. Replication of data can be configured at intervals of 15 seconds, 5 minutes, or 15 minutes. Replication is performed asynchronously. For recovery, 24 recovery points are available. These are like snapshots or point-in-time copies. If the most recent replica cannot be used (for example, because of damaged data), another replica can be used for restore. You can configure extended replication. In extended replication, your Replica server forwards changes that occur on the primary virtual machines to a third server (the extended Replica server). After a planned or unplanned failover from the primary server to the Replica server, the extended Replica server provides further business continuity protection. As with ordinary replication, you configure extended replication by using Hyper-V Manager, Windows PowerShell (using the -Extended option), or WMI. At the moment, only the VHD virtual disk format is supported.
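For reference, ordinary Hyper-V Replica replication is enabled per virtual machine, for example with PowerShell; the following is only a sketch (the VM and server names are invented, and the Replica server must already be configured to accept replication on the chosen port):
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "replica.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQL01"
The extended replication mentioned above is configured in a similar way, this time on the Replica server itself.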
Generation 2 virtual machines that can be created on Hyper-V are not supported by Site Recovery. Generation 2 virtual machines have a simplified virtual hardware model and support Unified Extensible Firmware Interface (UEFI) firmware instead of BIOS-based firmware. Also, boot from PXE, SCSI hard disk, SCSI DVD, and Secure Boot are supported in Generation 2 virtual machines. However, on March 19, Microsoft responded to numerous customer requests for support of Site Recovery for Generation 2 virtual machines: Site Recovery will soon support Gen 2 VMs. On failover, the VM will be converted to a Gen 1 VM. On failback, the VM will be converted to Gen 2. This conversion is done until the Azure platform natively supports Gen 2 VMs. Customers using Site Recovery are charged only for consumption of storage as long as they do not perform a failover or failover test. Failback is also supported. After running in Microsoft Azure for a while, customers are likely to move their virtual machines back to the on-premises primary data center. Site Recovery will replicate back only the changed data. Note that customer data is not stored in Microsoft Azure when Hyper-V Recovery Manager is used. Azure is used to coordinate the failover and recovery. To be able to do this, it stores information on network mappings, runbooks, and names of virtual machines and virtual networks. All data sent to Azure is encrypted. By using Azure Site Recovery, we can perform service orchestration in terms of replication, planned failover, unplanned failover, and test failover. The entire engine is powered by Azure Site Recovery Manager. Let's have a closer look at the main features of Azure Site Recovery. It enables three main scenarios: Test Failover or DR Drills: Enable support for application testing by creating test virtual machines and networks as specified by the user. Without impacting production workloads or their protection, HRM can quickly enable periodic workload testing. Planned Failovers (PFO): For compliance or in the event of a planned outage, customers can use planned failovers: virtual machines are shut down, final changes are replicated to ensure zero data loss, and then virtual machines are brought up in order on the recovery site as specified by the RP. More importantly, failback is a single-click gesture that executes a planned failover in the reverse direction. Unplanned Failovers (UFO): In the event of an unplanned outage or a natural disaster, HRM opportunistically attempts to shut down the primary machines if some of the virtual machines are still running when the disaster strikes. It then automates their recovery on the secondary site as specified by the RP. If your secondary site uses a different IP subnet, Site Recovery is able to change the IP configuration of your virtual machines during the failover. Part of the Site Recovery installation is the installation of a VMM provider. This component communicates with Microsoft Azure. Site Recovery can be used even if you have a single VMM to manage both primary and secondary sites. Site Recovery does not rely on the availability of any component in the primary site when performing a failover. So it doesn't matter if the complete site, including the link to Azure, has been destroyed, as Site Recovery will be able to perform the coordinated failover. Azure Site Recovery to customer-owned sites is billed per protected virtual machine per month. The costs are approximately €12 per month. Microsoft bills for the average consumption of virtual machines per month.
So if you are protecting 20 virtual machines in the first half of the month and 0 in the second half, you will be charged for 10 virtual machines for that month. When Azure is used as a target, Microsoft will only charge for consumption of storage during replication. The costs for this scenario are €40.22/month per instance protected. As soon as you perform a test failover or an actual failover, Microsoft will charge for the virtual machine CPU and memory consumption. Summary This article covered the concepts of High Availability in Microsoft Azure and disaster recovery using Azure Site Recovery, and also gave an introduction to the concept of geo-replication. Resources for Article: Further resources on this subject: Windows Azure Mobile Services - Implementing Push Notifications using [article] Configuring organization network services [article] Integration with System Center Operations Manager 2012 SP1 [article]

Getting Ready with CoffeeScript

Packt
02 Apr 2015
20 min read
In this article by Mike Hatfield, author of the book, CoffeeScript Application Development Cookbook, we will see that JavaScript, though very successful, can be a difficult language to work with. JavaScript was designed by Brendan Eich in a mere 10 days in 1995 while working at Netscape. As a result, some might claim that JavaScript is not as well rounded as some other languages, a point well illustrated by Douglas Crockford in his book titled JavaScript: The Good Parts, O'Reilly Media. These pitfalls found in the JavaScript language led Jeremy Ashkenas to create CoffeeScript, a language that attempts to expose the good parts of JavaScript in a simple way. CoffeeScript compiles into JavaScript and helps us avoid the bad parts of JavaScript. (For more resources related to this topic, see here.) There are many reasons to use CoffeeScript as your development language of choice. Some of these reasons include: CoffeeScript helps protect us from the bad parts of JavaScript by creating function closures that isolate our code from the global namespace by reducing the curly braces and semicolon clutter and by helping tame JavaScript's notorious this keyword CoffeeScript helps us be more productive by providing features such as list comprehensions, classes with inheritance, and many others Properly written CoffeeScript also helps us write code that is more readable and can be more easily maintained As Jeremy Ashkenas says: "CoffeeScript is just JavaScript." We can use CoffeeScript when working with the large ecosystem of JavaScript libraries and frameworks on all aspects of our applications, including those listed in the following table: Part Some options User interfaces UI frameworks including jQuery, Backbone.js, AngularJS, and Kendo UI Databases Node.js drivers to access SQLite, Redis, MongoDB, and CouchDB Internal/external services Node.js with Node Package Manager (NPM) packages to create internal services and interfacing with external services Testing Unit and end-to-end testing with Jasmine, Qunit, integration testing with Zombie, and mocking with Persona Hosting Easy API and application hosting with Heroku and Windows Azure Tooling Create scripts to automate routine tasks and using Grunt Configuring your environment and tools One significant aspect to being a productive CoffeeScript developer is having a proper development environment. This environment typically consists of the following: Node.js and the NPM CoffeeScript Code editor Debugger In this recipe, we will look at installing and configuring the base components and tools necessary to develop CoffeeScript applications. Getting ready In this section, we will install the software necessary to develop applications with CoffeeScript. One of the appealing aspects of developing applications using CoffeeScript is that it is well supported on Mac, Windows, and Linux machines. To get started, you need only a PC and an Internet connection. How to do it... CoffeeScript runs on top of Node.js—the event-driven, non-blocking I/O platform built on Chrome's JavaScript runtime. If you do not have Node.js installed, you can download an installation package for your Mac OS X, Linux, and Windows machines from the start page of the Node.js website (http://nodejs.org/). To begin, install Node.js using an official prebuilt installer; it will also install the NPM. Next, we will use NPM to install CoffeeScript. 
Open a terminal or command window and enter the following command: npm install -g coffee-script This will install the necessary files needed to work with CoffeeScript, including the coffee command that provides an interactive Read Evaluate Print Loop (REPL), a command to execute CoffeeScript files, and a compiler to generate JavaScript. It is important to use the -g option when installing CoffeeScript, as this installs the CoffeeScript package as a global NPM module. This will add the necessary commands to our path. On some Windows machines, you might need to add the NPM binary directory to your path. You can do this by editing the environment variables and appending ;%APPDATA%\npm to the end of the system's PATH variable. Configuring Sublime Text What you use to edit code can be a very personal choice, as you, like countless others, might use the tools dictated by your team or manager. Fortunately, most popular editing tools either support CoffeeScript out of the box or can be easily extended by installing add-ons, packages, or extensions. In this recipe, we will look at adding CoffeeScript support to Sublime Text and Visual Studio. Getting ready This section assumes that you have Sublime Text or Visual Studio installed. Sublime Text is a very popular text editor that is geared to working with code and projects. You can download a fully functional evaluation version from http://www.sublimetext.com. If you find it useful and decide to continue to use it, you will be encouraged to purchase a license, but there is currently no enforced time limit. How to do it... Sublime Text does not support CoffeeScript out of the box. Thankfully, a package manager exists for Sublime Text; this package manager provides access to hundreds of extension packages, including ones that provide helpful and productive tools to work with CoffeeScript. Sublime Text does not come with this package manager, but it can be easily added by following the instructions on the Package Control website at https://sublime.wbond.net/installation. With Package Control installed, you can easily install the CoffeeScript packages that are available using the Package Control option under the Preferences menu. Select the Install Package option. You can also access this command by pressing Ctrl + Shift + P, and in the command list that appears, start typing install. This will help you find the Install Package command quickly. To install the CoffeeScript package, open the Install Package window and enter CoffeeScript. This will display the CoffeeScript-related packages. We will use the Better CoffeeScript package: As you can see, the CoffeeScript package includes syntax highlighting, commands, shortcuts, snippets, and compilation. How it works... In this section, we will explain the different keyboard shortcuts and code snippets available with the Better CoffeeScript package for Sublime. Commands You can run the desired command by entering the command into the Sublime command palette or by pressing the related keyboard shortcut. Remember to press Ctrl + Shift + P to display the command palette window. Some useful CoffeeScript commands include the following: Command Keyboard shortcut Description Coffee: Check Syntax Alt + Shift + S This checks the syntax of the file you are editing or the currently selected code. The result will display in the status bar at the bottom. Coffee: Compile File Alt + Shift + C This compiles the file being edited into JavaScript.
Coffee: Run Script Alt + Shift + R This executes the selected code and displays a buffer of the output. The keyboard shortcuts are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by choosing CoffeeScript in the list of file types in the bottom-left corner of the screen. Snippets Snippets allow you to use short tokens that are recognized by Sublime Text. When you enter the code and press the Tab key, Sublime Text will automatically expand the snippet into the full form. Some useful CoffeeScript code snippets include the following: Token Expands to log[Tab] console.log cla class Name constructor: (arguments) ->    # ... forin for i in array # ... if if condition # ... ifel if condition # ... else # ... swi switch object when value    # ... try try # ... catch e # ... The snippets are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by selecting CoffeeScript in the list of file types in the bottom-left corner of the screen. Configuring Visual Studio In this recipe, we will demonstrate how to add CoffeeScript support to Visual Studio. Getting ready If you are on the Windows platform, you can use Microsoft's Visual Studio software. You can download Microsoft's free Express edition (Express 2013 for Web) from http://www.microsoft.com/express. How to do it... If you are a Visual Studio user, Version 2010 and above can work quite effectively with CoffeeScript through the use of Visual Studio extensions. If you are doing any form of web development with Visual Studio, the Web Essentials extension is a must-have. To install Web Essentials, perform the following steps: Launch Visual Studio. Click on the Tools menu and select the Extensions and Updates menu option. This will display the Extensions and Updates window (shown in the next screenshot). Select Online in the tree on the left-hand side to display the most popular downloads. Select Web Essentials 2012 from the list of available packages and then click on the Download button. This will download the package and install it automatically. Once the installation is finished, restart Visual Studio by clicking on the Restart Now button. You will likely find Web Essentials 2012 ranked highly in the list of Most Popular packages. If you do not see it, you can search for Web Essentials using the Search box in the top-right corner of the window. Once installed, the Web Essentials package provides many web development productivity features, including CSS helpers, tools to work with Less CSS, enhancements to work with JavaScript, and, of course, a set of CoffeeScript helpers. To add a new CoffeeScript file to your project, you can navigate to File | New Item or press Ctrl + Shift + A. This will display the Add New Item dialog, as seen in the following screenshot. Under the Web templates, you will see a new CoffeeScript File option. Select this option and give it a filename, as shown here: When we have our CoffeeScript file open, Web Essentials will display the file in a split-screen editor. We can edit our code in the left-hand pane, while Web Essentials displays a live preview of the JavaScript code that will be generated for us. The Web Essentials CoffeeScript compiler will create two JavaScript files each time we save our CoffeeScript file: a basic JavaScript file and a minified version. 
For example, if we save a CoffeeScript file named employee.coffee, the compiler will create employee.js and employee.min.js files. Though I have only described two editors to work with CoffeeScript files, there are CoffeeScript packages and plugins for most popular text editors, including Emacs, Vim, TextMate, and WebMatrix. A quick dive into CoffeeScript In this recipe, we will take a quick look at the CoffeeScript language and command line. How to do it... CoffeeScript is a highly expressive programming language that does away with much of the ceremony required by JavaScript. It uses whitespace to define blocks of code and provides shortcuts for many of the programming constructs found in JavaScript. For example, we can declare variables and functions without the var keyword:
firstName = 'Mike'
We can define functions using the following syntax:
multiply = (a, b) -> a * b
Here, we defined a function named multiply. It takes two arguments, a and b. Inside the function, we multiplied the two values. Note that there is no return statement. CoffeeScript will always return the value of the last expression that is evaluated inside a function. The preceding function is equivalent to the following JavaScript snippet:
var multiply = function(a, b) { return a * b; };
It's worth noting that the CoffeeScript code is only 28 characters long, whereas the JavaScript code is 50 characters long; that's 44 percent less code. We can call our multiply function in the following way:
result = multiply 4, 7
In CoffeeScript, using parentheses is optional when calling a function with parameters, as you can see in our function call. However, note that parentheses are required when executing a function without parameters, as shown in the following example:
displayGreeting = -> console.log 'Hello, world!'
displayGreeting()
In this example, we must call the displayGreeting() function with parentheses. You might also wish to use parentheses to make your code more readable. Just because they are optional, it doesn't mean you should sacrifice the readability of your code to save a couple of keystrokes. For example, in the following code, we used parentheses even though they are not required:
$('div.menu-item').removeClass 'selected'
Like functions, we can define JavaScript literal objects without the need for curly braces, as seen in the following employee object:
employee =
  firstName: 'Mike'
  lastName: 'Hatfield'
  salesYtd: 13204.65
Notice that in our object definition, we also did not need to use a comma to separate our properties. CoffeeScript supports the common if conditional as well as an unless conditional inspired by the Ruby language. Like Ruby, CoffeeScript also provides English keywords for logical operations such as is, isnt, or, and and. The following example demonstrates the use of these keywords:
isEven = (value) ->
  if value % 2 is 0
    'is'
  else
    'is not'

console.log '3 ' + isEven(3) + ' even'
In the preceding code, we have an if statement to determine whether a value is even or not. If the value is even, the remainder of value % 2 will be 0. We used the is keyword to make this determination. JavaScript has a nasty behavior when determining equality between two values. In other languages, the double equal sign is used, such as value == 0. In JavaScript, the double equal operator will use type coercion when making this determination. This means that 0 == '0' is true; in fact, 0 == '' is also true. CoffeeScript avoids this using JavaScript's triple equals (===) operator.
This evaluation compares value and type such that 0 === '0' will be false. We can use if and unless as expression modifiers as well. They allow us to tack if and unless at the end of a statement to make simple one-liners. For example, we can do something like the following:
console.log 'Value is even' if value % 2 is 0
Alternatively, we can have something like this:
console.log 'Value is odd' unless value % 2 is 0
We can also use the if...then combination for a one-liner if statement, as shown in the following code:
if value % 2 is 0 then console.log 'Value is even'
CoffeeScript has a switch control statement that performs certain actions based on a list of possible values. The following lines of code show a simple switch statement with four branching conditions:
switch task
  when 1
    console.log 'Case 1'
  when 2
    console.log 'Case 2'
  when 3, 4, 5
    console.log 'Case 3, 4, 5'
  else
    console.log 'Default case'
In this sample, if the value of a task is 1, case 1 will be displayed. If the value of a task is 3, 4, or 5, then case 3, 4, or 5 is displayed, respectively. If there are no matching values, we can use an optional else condition to handle any exceptions. If your switch statements have short operations, you can turn them into one-liners, as shown in the following code:
switch value
  when 1 then console.log 'Case 1'
  when 2 then console.log 'Case 2'
  when 3, 4, 5 then console.log 'Case 3, 4, 5'
  else console.log 'Default case'
CoffeeScript provides a number of syntactic shortcuts to help us be more productive while writing more expressive code. Some people have claimed that this can sometimes make our applications more difficult to read, which will, in turn, make our code less maintainable. The key to highly readable and maintainable code is to use a consistent style when coding. I recommend that you follow the guidance provided by Polar in their CoffeeScript style guide at http://github.com/polarmobile/coffeescript-style-guide. There's more... With CoffeeScript installed, you can use the coffee command-line utility to execute CoffeeScript files, compile CoffeeScript files into JavaScript, or run an interactive CoffeeScript command shell. In this section, we will look at the various options available when using the CoffeeScript command-line utility. We can see a list of available commands by executing the following command in a command or terminal window:
coffee --help
This will produce the following output: As you can see, the coffee command-line utility provides a number of options. Of these, the most common ones include the following:
No option, no argument: coffee. This launches the interactive REPL shell.
No option, with a filename: coffee sample.coffee. This command will execute the CoffeeScript file.
-c, --compile, with a filename: coffee -c sample.coffee. This command will compile the CoffeeScript file into a JavaScript file with the same base name, sample.js in our example.
-i, --interactive: coffee -i. This command will also launch the interactive REPL shell.
-m, --map, with a filename: coffee -m sample.coffee. This command generates a source map with the same base name, sample.js.map in our example.
-p, --print, with a filename: coffee -p sample.coffee. This command will display the compiled output or compile errors to the terminal window.
-v, --version: coffee -v. This command will display the current version of CoffeeScript.
-w, --watch, with a filename: coffee -w -c sample.coffee. This command will watch for file changes, and with each change, the requested action will be performed.
In our example, our sample.coffee file will be compiled each time we save it. The CoffeeScript REPL As we have seen, CoffeeScript has an interactive shell that allows us to execute CoffeeScript commands. In this section, we will learn how to use the REPL shell. The REPL shell can be an excellent way to get familiar with CoffeeScript. To launch the CoffeeScript REPL, open a command window and execute the coffee command. This will start the interactive shell and display the following prompt: In the coffee> prompt, we can assign values to variables, create functions, and evaluate results. When we enter an expression and press the return key, it is immediately evaluated and the value is displayed. For example, if we enter the expression x = 4 and press return, we would see what is shown in the following screenshot: This did two things. First, it created a new variable named x and assigned the value of 4 to it. Second, it displayed the result of the command. Next, enter timesSeven = (value) -> value * 7 and press return: You can see that the result of this line was the creation of a new function named timesSeven(). We can call our new function now: By default, the REPL shell will evaluate each expression when you press the return key. What if we want to create a function or expression that spans multiple lines? We can enter the REPL multiline mode by pressing Ctrl + V. This will change our coffee> prompt to a ------> prompt. This allows us to enter an expression that spans multiple lines, such as the following function: When we are finished with our multiline expression, press Ctrl + V again to have the expression evaluated. We can then call our new function: The CoffeeScript REPL offers some handy helpers such as expression history and tab completion. Pressing the up arrow key on your keyboard will cycle through the expressions we previously entered. Using the Tab key will autocomplete our function or variable name. For example, with the isEvenOrOdd() function, we can enter isEven and press Tab to have the REPL complete the function name for us. Debugging CoffeeScript using source maps If you have spent any time in the JavaScript community, you would have, no doubt, seen some discussions or rants regarding the weak debugging story for CoffeeScript. In fact, this is often a top argument some give for not using CoffeeScript at all. In this recipe, we will examine how to debug our CoffeeScript application using source maps. Getting ready The problem in debugging CoffeeScript stems from the fact that CoffeeScript compiles into JavaScript, which is what the browser executes. If an error arises, the line that has caused the error sometimes cannot be traced back to the CoffeeScript source file very easily. Also, the error message is sometimes confusing, making troubleshooting that much more difficult. Recent developments in the web development community have helped improve the debugging experience for CoffeeScript by making use of a concept known as a source map. In this section, we will demonstrate how to generate and use source maps to help make our CoffeeScript debugging easier. To use source maps, you need only a base installation of CoffeeScript. How to do it... You can generate a source map for your CoffeeScript code using the -m option on the CoffeeScript command: coffee -m -c employee.coffee How it works...
This creates a JavaScript file called employee.js and a source map called employee.js.map. Source maps provide information used by browsers such as Google Chrome that tell the browser how to map a line from the compiled JavaScript code back to its origin in the CoffeeScript file. Source maps allow you to place breakpoints in your CoffeeScript file, analyze variables, and execute functions in your CoffeeScript module. If you look at the last line of the generated employee.js file, you will see the reference to the source map:
//# sourceMappingURL=employee.js.map
Google Chrome uses this JavaScript comment to load the source map. The following screenshot demonstrates an active breakpoint and console in Google Chrome: Debugging CoffeeScript using Node Inspector Source maps and Chrome's developer tools can help troubleshoot our CoffeeScript that is destined for the Web. In this recipe, we will demonstrate how to debug CoffeeScript that is designed to run on the server. Getting ready Begin by installing the Node Inspector NPM module with the following command:
npm install -g node-inspector
How to do it... To use Node Inspector, we will use the coffee command to compile the CoffeeScript code we wish to debug and generate the source map. In our example, we will use the following simple source code in a file named counting.coffee:
for i in [1..10]
  if i % 2 is 0
    console.log "#{i} is even!"
  else
    console.log "#{i} is odd!"
To use Node Inspector, we will compile our file and use the source map parameter with the following command:
coffee -c -m counting.coffee
Next, we will launch Node Inspector with the following command:
node-debug counting.js
How it works... When we run Node Inspector, it does two things. First, it launches the Node debugger. This is a debugging service that allows us to step through code, hit breakpoints, and evaluate variables. This is a built-in service that comes with Node. Second, it launches an HTTP handler and opens a browser that allows us to use Chrome's built-in debugging tools to use breakpoints, step over and into code, and evaluate variables. Node Inspector works well using source maps. This allows us to see our native CoffeeScript code and is an effective tool to debug server-side code. The following screenshot displays our Chrome window with an active breakpoint. In the local variables tool window on the right-hand side, you can see that the current value of i is 2: The highlighted line in the preceding screenshot depicts the log message. Summary This article introduced CoffeeScript and laid the foundation to use CoffeeScript to develop all aspects of modern cloud-based applications. Resources for Article: Further resources on this subject: Writing Your First Lines of CoffeeScript [article] Why CoffeeScript? [article] ASP.Net Site Performance: Improving JavaScript Loading [article]

The BSP Layer

Packt
02 Apr 2015
14 min read
In this article by Alex González, author of the book Embedded Linux Projects Using Yocto Project Cookbook, we will see how embedded Linux projects require both custom hardware and software. An early task in the development process is to test different hardware reference boards and select one to base our design on. We have chosen the Wandboard, a Freescale i.MX6-based platform, as it is an affordable and open board, which makes it perfect for our needs.

On an embedded project, it is usually a good idea to start working on the software as soon as possible, probably before the hardware prototypes are ready, so that it is possible to start working directly with the reference design. But at some point, the hardware prototypes will be ready and changes will need to be introduced into Yocto to support the new hardware. This article will explain how to create a BSP layer to contain those hardware-specific changes, as well as show how to work with the U-Boot bootloader and the Linux kernel, components which are likely to take most of the customization work.

(For more resources related to this topic, see here.)

Creating a custom BSP layer

These hardware-specific changes are kept in a separate Yocto layer, called a Board Support Package (BSP) layer. This separation is best for future updates and patches to the system. A BSP layer can support any number of new machines and any new software feature that is linked to the hardware itself.

How to do it...

By convention, Yocto layer names start with meta, short for metadata. A BSP layer may then add a bsp keyword, and finally a unique name. We will call our layer meta-bsp-custom.

There are several ways to create a new layer:

Manually, once you know what is required
By copying the meta-skeleton layer included in Poky
By using the yocto-layer command-line tool

You can have a look at the meta-skeleton layer in Poky and see that it includes the following elements:

A layer.conf file, where the layer configuration variables are set
A COPYING.MIT license file
Several directories named with the recipes prefix, containing example recipes for BusyBox, the Linux kernel, an example kernel module, an example service, user management, and multilib

How it works...

We will cover some of the use cases that appear in the available examples, so for our needs, we will use the yocto-layer tool, which allows us to create a minimal layer.

Open a new terminal and change to the fsl-community-bsp directory. Then set up the environment as follows:

$ source setup-environment wandboard-quad

Note that once the build directory has been created, the MACHINE variable has already been configured in the conf/local.conf file and can be omitted from the command line.

Change to the sources directory and run:

$ yocto-layer create bsp-custom

Note that the yocto-layer tool will add the meta prefix to your layer, so you don't need to. It will prompt a few questions:

The layer priority, which is used to decide the layer precedence in cases where the same recipe (with the same name) exists in several layers simultaneously. It is also used to decide in what order bbappends are applied if several layers append the same recipe. Leave the default value of 6. This will be stored in the layer's conf/layer.conf file as BBFILE_PRIORITY.
Whether to create example recipes and append files. Let's leave the default no for the time being.

Our new layer has the following structure:

meta-bsp-custom/
   conf/layer.conf
   COPYING.MIT
   README
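The generated conf/layer.conf is worth a quick look before moving on. Its exact contents depend on the yocto-layer version used, but a minimal layer configuration typically looks something like the following sketch; treat it as illustrative rather than a verbatim copy of the generated file:

# meta-bsp-custom/conf/layer.conf - illustrative sketch
# We have a conf directory, add it to BBPATH
BBPATH .= ":${LAYERDIR}"

# Tell BitBake where to look for recipes and append files in this layer
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
            ${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "bsp-custom"
BBFILE_PATTERN_bsp-custom = "^${LAYERDIR}/"
BBFILE_PRIORITY_bsp-custom = "6"

The BBPATH and BBFILES lines are discussed in more detail in the next section.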
There's more...

The first thing to do is to add this new layer to your project's conf/bblayers.conf file. It is a good idea to add it to your template conf directory's bblayers.conf.sample file too, so that it is correctly appended when creating new projects. The highlighted line in the following code shows the addition of the layer to the conf/bblayers.conf file:

LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BSPDIR := "${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../..')}"

BBFILES ?= ""
BBLAYERS = " \
  ${BSPDIR}/sources/poky/meta \
  ${BSPDIR}/sources/poky/meta-yocto \
  ${BSPDIR}/sources/meta-openembedded/meta-oe \
  ${BSPDIR}/sources/meta-openembedded/meta-multimedia \
  ${BSPDIR}/sources/meta-fsl-arm \
  ${BSPDIR}/sources/meta-fsl-arm-extra \
  ${BSPDIR}/sources/meta-fsl-demos \
  ${BSPDIR}/sources/meta-bsp-custom \
"

Now, BitBake will parse the bblayers.conf file and find the conf/layer.conf file from your layer. In it, we find the following line:

BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
        ${LAYERDIR}/recipes-*/*/*.bbappend"

It tells BitBake which directories to parse for recipes and append files. You need to make sure your directory and file hierarchy in this new layer matches the given pattern, or you will need to modify it.

BitBake will also find the following:

BBPATH .= ":${LAYERDIR}"

The BBPATH variable is used to locate the bbclass files and the configuration and files included with the include and require directives. The search finishes with the first match, so it is best to keep filenames unique.

Some other variables we might consider defining in our conf/layer.conf file are:

LAYERDEPENDS_bsp-custom = "fsl-arm"
LAYERVERSION_bsp-custom = "1"

The LAYERDEPENDS literal is a space-separated list of other layers your layer depends on, and the LAYERVERSION literal specifies the version of your layer in case other layers want to add a dependency to a specific version.

The COPYING.MIT file specifies the license for the metadata contained in the layer. The Yocto project is licensed under the MIT license, which is also compatible with the General Public License (GPL). This license applies only to the metadata, as every package included in your build will have its own license.

The README file will need to be modified for your specific layer. It is usual to describe the layer and provide any other layer dependencies and usage instructions.

Adding a new machine

When customizing your BSP, it is usually a good idea to introduce a new machine for your hardware. These are kept under the conf/machine directory in your BSP layer. The usual thing to do is to base it on the reference design. For example, wandboard-quad has the following machine configuration file:

include include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_config"

KERNEL_DEVICETREE = "imx6q-wandboard.dtb"

MACHINE_FEATURES += "bluetooth wifi"

MACHINE_EXTRA_RRECOMMENDS += " bcm4329-nvram-config bcm4330-nvram-config "

A machine based on the Wandboard design could define its own machine configuration file, wandboard-quad-custom.conf, as follows:

include conf/machine/include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_custom_config"

KERNEL_DEVICETREE = "imx6q-wandboard-custom.dtb"

MACHINE_FEATURES += "wifi"

The wandboard.inc file now resides on a different layer, so in order for BitBake to find it, we need to specify the full path from the BBPATH variable in the corresponding layer, which is why the include line above uses conf/machine/include/wandboard.inc. A quick way to check that the new machine is picked up by the build system is sketched below.
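As a sanity check that BitBake can find the new machine, you can select it and start a small build; if the machine configuration file cannot be located, the build fails immediately with a configuration error. The following is a hedged sketch, assuming the machine file is named wandboard-quad-custom.conf as above and that your setup-environment script accepts a MACHINE variable (alternatively, set MACHINE in the build directory's conf/local.conf):

# From the fsl-community-bsp directory; the build directory name is arbitrary
$ MACHINE=wandboard-quad-custom source setup-environment build-custom
$ bitbake core-image-minimal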
This machine defines its own U-Boot configuration file and Linux kernel device tree in addition to defining its own set of machine features.

Adding a custom device tree to the Linux kernel

To add this device tree file to the Linux kernel, we need to add the device tree file to the arch/arm/boot/dts directory under the Linux kernel source and also modify the Linux build system's arch/arm/boot/dts/Makefile file to build it as follows:

 dtb-$(CONFIG_ARCH_MXC) += \
+	imx6q-wandboard-custom.dtb \

This code uses diff formatting, where the lines with a minus prefix are removed, the ones with a plus sign are added, and the ones without a prefix are left as reference.

Once the patch is prepared, it can be added to the meta-bsp-custom/recipes-kernel/linux/linux-wandboard-3.10.17/ directory and the Linux kernel recipe appended by adding a meta-bsp-custom/recipes-kernel/linux/linux-wandboard_3.10.17.bbappend file with the following content:

SRC_URI_append = " file://0001-ARM-dts-Add-wandboard-custom-dts-file.patch"

Adding a custom U-Boot machine

In the same way, the U-Boot source may be patched to add a new custom machine. Bootloader modifications are not as likely to be needed as kernel modifications though, and most custom platforms will leave the bootloader unchanged. The patch would be added to the meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc-v2014.10/ directory and the U-Boot recipe appended with a meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc_2014.10.bbappend file with the following content:

SRC_URI_append = " file://0001-boards-Add-wandboard-custom.patch"

Adding a custom formfactor file

Custom platforms can also define their own formfactor file with information that the build system cannot obtain from other sources, such as defining whether a touchscreen is available or defining the screen orientation. These are defined in the recipes-bsp/formfactor/ directory in our meta-bsp-custom layer. For our new machine, we could define a meta-bsp-custom/recipes-bsp/formfactor/formfactor_0.0.bbappend file to include a formfactor file as follows:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

And the machine-specific meta-bsp-custom/recipes-bsp/formfactor/formfactor/wandboard-quad-custom/machconfig file would be as follows:

HAVE_TOUCHSCREEN=1

Debugging the Linux kernel booting process

We have seen the most general techniques for debugging the Linux kernel. However, some special scenarios require the use of different methods. One of the most common scenarios in embedded Linux development is the debugging of the booting process. This section will explain some of the techniques used to debug the kernel's booting process.

How to do it...

A kernel crashing on boot usually provides no output whatsoever on the console. As daunting as that may seem, there are techniques we can use to extract debug information. Early crashes usually happen before the serial console has been initialized, so even if there were log messages, we would not see them. The first thing we will show is how to enable early log messages that do not need the serial driver. In case that is not enough, we will also show techniques to access the log buffer in memory.

How it works...

Debugging boot problems has two distinct phases, before and after the serial console is initialized. After the serial console is initialized and we can see serial output from the kernel, debugging can use the techniques described earlier.
Before the serial is initialized, however, there is a basic UART support in ARM kernels that allows you to use the serial from early boot. This support is compiled in with the CONFIG_DEBUG_LL configuration variable. This adds supports for a debug-only series of assembly functions that allow you to output data to a UART. The low-level support is platform specific, and for the i.MX6, it can be found under arch/arm/include/debug/imx.S. The code allows for this low-level UART to be configured through the CONFIG_DEBUG_IMX_UART_PORT configuration variable. We can use this support directly by using the printascii function as follows: extern void printascii(const char *); printascii("Literal stringn"); However, much more preferred would be to use the early_print function, which makes use of the function explained previously and accepts formatted input in printf style; for example: early_print("%08xt%sn", p->nr, p->name); Dumping the kernel's printk buffer from the bootloader Another useful technique to debug Linux kernel crashes at boot is to analyze the kernel log after the crash. This is only possible if the RAM memory is persistent across reboots and does not get initialized by the bootloader. As U-Boot keeps the memory intact, we can use this method to peek at the kernel login memory in search of clues. Looking at the kernel source, we can see how the log ring buffer is set up in kernel/printk/printk.c and also note that it is stored in __log_buf. To find the location of the kernel buffer, we will use the System.map file created by the Linux build process, which maps symbols with virtual addresses using the following command: $grep __log_buf System.map 80f450c0 b __log_buf To convert the virtual address to physical address, we look at how __virt_to_phys() is defined for ARM: x - PAGE_OFFSET + PHYS_OFFSET The PAGE_OFFSET variable is defined in the kernel configuration as: config PAGE_OFFSET        hex        default 0x40000000 if VMSPLIT_1G        default 0x80000000 if VMSPLIT_2G        default 0xC0000000 Some of the ARM platforms, like the i.MX6, will dynamically patch the __virt_to_phys() translation at runtime, so PHYS_OFFSET will depend on where the kernel is loaded into memory. As this can vary, the calculation we just saw is platform specific. For the Wandboard, the physical address for 0x80f450c0 is 0x10f450c0. We can then force a reboot using a magic SysRq key, which needs to be enabled in the kernel configuration with CONFIG_MAGIC_SYSRQ, but is enabled in the Wandboard by default: $ echo b > /proc/sysrq-trigger We then dump that memory address from U-Boot as follows: > md.l 0x10f450c0 10f450c0: 00000000 00000000 00210038 c6000000   ........8.!..... 10f450d0: 746f6f42 20676e69 756e694c 6e6f2078   Booting Linux on 10f450e0: 79687020 61636973 5043206c 78302055     physical CPU 0x 10f450f0: 00000030 00000000 00000000 00000000   0............... 10f45100: 009600a8 a6000000 756e694c 65762078   ........Linux ve 10f45110: 6f697372 2e33206e 312e3031 2e312d37   rsion 3.10.17-1. 10f45120: 2d322e30 646e6177 72616f62 62672b64   0.2-wandboard+gb 10f45130: 36643865 62323738 20626535 656c6128   e8d6872b5eb (ale 10f45140: 6f6c4078 696c2d67 2d78756e 612d7068   x@log-linux-hp-a 10f45150: 7a6e6f67 20296c61 63636728 72657620   gonzal) (gcc ver 10f45160: 6e6f6973 392e3420 2820312e 29434347   sion 4.9.1 (GCC) 10f45170: 23202920 4d532031 52502050 504d4545     ) #1 SMP PREEMP 10f45180: 75532054 6546206e 35312062 3a323120   T Sun Feb 15 12: 10f45190: 333a3733 45432037 30322054 00003531   37:37 CET 2015.. 
10f451a0: 00000000 00000000 00400050 82000000   ........P.@..... 10f451b0: 3a555043 4d524120 50203776 65636f72   CPU: ARMv7 Proce There's more... Another method is to store the kernel log messages and kernel panics or oops into persistent storage. The Linux kernel's persistent store support (CONFIG_PSTORE) allows you to log in to the persistent memory kept across reboots. To log panic and oops messages into persistent memory, we need to configure the kernel with the CONFIG_PSTORE_RAM configuration variable, and to log kernel messages, we need to configure the kernel with CONFIG_PSTORE_CONSOLE. We then need to configure the location of the persistent storage on an unused memory location, but keep the last 1 MB of memory free. For example, we could pass the following kernel command-line arguments to reserve a 128 KB region starting at 0x30000000: ramoops.mem_address=0x30000000 ramoops.mem_size=0x200000 We would then mount the persistent storage by adding it to /etc/fstab so that it is available on the next boot as well: /etc/fstab: pstore /pstore pstore defaults 0 0 We then mount it as follows: # mkdir /pstore # mount /pstore Next, we force a reboot with the magic SysRq key: # echo b > /proc/sysrq-trigger On reboot, we will see a file inside /pstore: -r--r--r-- 1 root root 4084 Sep 16 16:24 console-ramoops This will have contents such as the following: SysRq : Resetting CPU3: stopping CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.14.0-rc4-1.0.0-wandboard-37774-g1eae [<80014a30>] (unwind_backtrace) from [<800116cc>] (show_stack+0x10/0x14) [<800116cc>] (show_stack) from [<806091f4>] (dump_stack+0x7c/0xbc) [<806091f4>] (dump_stack) from [<80013990>] (handle_IPI+0x144/0x158) [<80013990>] (handle_IPI) from [<800085c4>] (gic_handle_irq+0x58/0x5c) [<800085c4>] (gic_handle_irq) from [<80012200>] (__irq_svc+0x40/0x70) Exception stack(0xee4c1f50 to 0xee4c1f98) We should move it out of /pstore or remove it completely so that it doesn't occupy memory. Summary This article guides you through the customization of the BSP for your own product. It then explains how to debug the Linux kernel booting process. Resources for Article: Further resources on this subject: Baking Bits with Yocto Project [article] An Introduction to the Terminal [article] Linux Shell Scripting – various recipes to help you [article]

Optimizing JavaScript for iOS Hybrid Apps

Packt
01 Apr 2015
17 min read
In this article by Chad R. Adams, author of the book, Mastering JavaScript High Performance, we are going to take a look at the process of optimizing JavaScript for iOS web apps (also known as hybrid apps). We will take a look at some common ways of debugging and optimizing JavaScript and page performance, both in a device's web browser and a standalone app's web view. Also, we'll take a look at the Apple Web Inspector and see how to use it for iOS development. Finally, we will also gain a bit of understanding about building a hybrid app and learn the tools that help to better build JavaScript-focused apps for iOS. Moreover, we'll learn about a class that might help us further in this. We are going to learn about the following topics in the article: Getting ready for iOS development iOS hybrid development (For more resources related to this topic, see here.) Getting ready for iOS development Before starting this article with Xcode examples and using iOS Simulator, I will be displaying some native code and will use tools that haven't been covered in this course. Mobile app developments, regardless of platform, are books within themselves. When covering the build of the iOS project, I will be briefly going over the process of setting up a project and writing non-JavaScript code to get our JavaScript files into a hybrid iOS WebView for development. This is essential due to the way iOS secures its HTML5-based apps. Apps on iOS that use HTML5 can be debugged, either from a server or from an app directly, as long as the app's project is built and deployed in its debug setting on a host system (meaning the developers machine). Readers of this article are not expected to know how to build a native app from the beginning to the end. And that's completely acceptable, as you can copy-and-paste, and follow along as I go. But I will show you the code to get us to the point of testing JavaScript code, and the code used will be the smallest and the fastest possible to render your content. All of these code samples will be hosted as an Xcode project solution of some type on Packt Publishing's website, but they will also be shown here if you want to follow along, without relying on code samples. Now with that said, lets get started… iOS hybrid development Xcode is the IDE provided by Apple to develop apps for both iOS devices and desktop devices for Macintosh systems. As a JavaScript editor, it has pretty basic functions, but Xcode should be mainly used in addition to a project's toolset for JavaScript developers. It provides basic code hinting for JavaScript, HTML, and CSS, but not more than that. To install Xcode, we will need to start the installation process from the Mac App Store. Apple, in recent years, has moved its IDE to the Mac App Store for faster updates to developers and subsequently app updates for iOS and Mac applications. Installation is easy; simply log in with your Apple ID in the Mac App Store and download Xcode; you can search for it at the top or, if you look in the right rail among popular free downloads, you can find a link to the Xcode Mac App Store page. Once you reach this, click Install as shown in the following screenshot: It's important to know that, for the sake of simplicity in this article, we will not deploy an app to a device; so if you are curious about it, you will need to be actively enrolled in Apple's Developer Program. The cost is 99 dollars a year, or 299 dollars for an enterprise license that allows deployment of an app outside the control of the iOS App Store. 
If you're curious to learn more about deploying to a device, the code in this article will run on the device assuming that your certificates are set up on your end. For more information on this, check out Apple's iOS Developer Center documentation online at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40012582. Once it's installed, we can open up Xcode and look at the iOS Simulator; we can do this by clicking XCode, followed by Open Developer Tool, and then clicking on iOS Simulator. Upon first opening iOS Simulator, we will see what appears to be a simulation of an iOS device, shown in the next screenshot. Note that this is a simulation, not a real iOS device (even if it feels pretty close). A neat trick for JavaScript developers working with local HTML files outside an app is that they can quickly drag-and-drop an HTML file. Due to this, the simulator will open the mobile version of Safari, the built-in browser for iPhone and iPads, and render the page as it would do on an iOS device; this is pretty helpful when testing pages before deploying them to a web server. Setting up a simple iOS hybrid app JavaScript performance on a built-in hybrid application can be much slower than the same page run on the mobile version of Safari. To test this, we are going to build a very simple web browser using Apple's new programming language Swift. Swift is an iOS-ready language that JavaScript developers should feel at home with. Swift itself follows a syntax similar to JavaScript but, unlike JavaScript, variables and objects can be given types allowing for stronger, more accurate coding. In that regard, Swift follows syntax similar to what can be seen in the ECMAScript 6 and TypeScript styles of coding practice. If you are checking these newer languages out, I encourage you to check out Swift as well. Now let's create a simple web view, also known as a UIWebView, which is the class used to create a web view in an iOS app. First, let's create a new iPhone project; we are using an iPhone to keep our app simple. Open Xcode and select the Create new XCode project project; then, as shown in the following screenshot, select the Single View Application option and click the Next button. On the next view of the wizard, set the product name as JS_Performance, the language to Swift, and the device to iPhone; the organization name should autofill with your name based on your account name in the OS. The organization identifier is a reverse domain name unique identifier for our app; this can be whatever you deem appropriate. For instructional purposes, here's my setup: Once your project names are set, click the Next button and save to a folder of your choice with Git repository left unchecked. When that's done, select Main.storyboard under your Project Navigator, which is found in the left panel. We should be in the storyboard view now. Let's open the Object Library, which can be found in the lower-right panel in the subtab with an icon of a square inside a circle. Search for Web View in the Object Library in the bottom-right search bar, and then drag that to the square view that represents our iOS view. We need to consider two more things before we link up an HTML page using Swift; we need to set constraints as native iOS objects will be stretched to fit various iOS device windows. 
To fill the space, you can add the constraints by selecting the UIWebView object and pressing Command + Option + Shift + = on your Mac keyboard. Now you should see a blue border appear briefly around your UIWebView. Lastly, we need to connect our UIWebView to our Swift code; for this, we need to open the Assistant Editor by pressing Command + Option + Return on the keyboard. We should see ViewController.swift open up in a side panel next to our Storyboard. To link this as a code variable, right-click (or option-click the UIWebView object) and, with the button held down, drag the UIWebView to line number 12 in the ViewController.swift code in our Assistant Editor. This is shown in the following diagram: Once that's done, a popup will appear. Now leave everything the same as it comes up, but set the name to webview; this will be the variable referencing our UIWebView. With that done, save your Main.storyboard file and navigate to your ViewController.swift file. Now take a look at the Swift code shown in the following screenshot, and copy it into the project; the important part is on line 19, which contains the filename and type loaded into the web view; which in this case, this is index.html. Now obviously, we don't have an index.html file, so let's create one. Go to File and then select New followed by the New File option. Next, under iOS select Empty Application and click Next to complete the wizard. Save the file as index.html and click Create. Now open the index.html file, and type the following code into the HTML page: <br />Hello <strong>iOS</strong> Now click Run (the play button in the main iOS task bar), and we should see our HTML page running inside our own app, as shown here: That's nice work! We built an iOS app with Swift (even if it's a simple app). Let's create a structured HTML page; we will override our Hello iOS text with the HTML shown in the following screenshot: Here, we use the standard console.time function and print a message to our UIWebView page when finished; if we hit Run in Xcode, we will see the Loop Completed message on load. But how do we get our performance information? How can we get our console.timeEnd function code on line 14 on our HTML page? Using Safari web inspector for JavaScript performance Apple does provide a Web Inspector for UIWebViews, and it's the same inspector for desktop Safari. It's easy to use, but has an issue: the inspector only works on iOS Simulators and devices that have started from an Xcode project. This limitation is due to security concerns for hybrid apps that may contain sensitive JavaScript code that could be exploited if visible. Let's check our project's embedded HTML page console. First, open desktop Safari on your Mac and enable developer mode. Launch the Preferences option. Under the Advanced tab, ensure that the Show develop menu in menu bar option is checked, as shown in the following screenshot: Next, let's rerun our Xcode project, start up iOS Simulator and then rerun our page. Once our app is running with the Loop Completed result showing, open desktop Safari and click Develop, then iOS Simulator, followed by index.html. If you look closely, you will see iOS simulator's UIWebView highlighted in blue when you place the mouse over index.html; a visible page is seen as shown in the following screenshot: Once we release the mouse on index.html, we Safari's Web Inspector window appears featuring our hybrid iOS app's DOM and console information. 
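The timing page used in this walkthrough only appears as screenshots; if you prefer to type it in rather than copy it from a screenshot, a minimal reconstruction along the following lines behaves as described (the loop bound, message text, and element id are assumptions rather than the book's exact values):

<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>JS Performance</title></head>
<body>
  <div id="result">Running...</div>
  <script>
    // Time a simple loop and report completion on the page;
    // the duration shows up in the Web Inspector console.
    console.time('loop');
    var total = 0;
    for (var i = 0; i < 10000; i++) {
      total += i;
    }
    console.timeEnd('loop');
    document.getElementById('result').innerHTML = 'Loop Completed';
  </script>
</body>
</html>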
The Safari's Web Inspector is pretty similar to Chrome's Developer tools in terms of feature sets; the panels used in the Developer tools also exist as icons in Web Inspector. Now let's select the Console panel in Web Inspector. Here, we can see our full console window including our Timer console.time function test included in the for loop. As we can see in the following screenshot, the loop took 0.081 milliseconds to process inside iOS. Comparing UIWebView with Mobile Safari What if we wanted to take our code and move it to Mobile Safari to test? This is easy enough; as mentioned earlier in the article, we can drag-and-drop the index.html file into our iOS Simulator, and then the OS will open the mobile version of Safari and load the page for us. With that ready, we will need to reconnect Safari Web Inspector to the iOS Simulator and reload the page. Once that's done, we can see that our console.time function is a bit faster; this time it's roughly 0.07 milliseconds, which is a full .01 milliseconds faster than UIWebView, as shown here: For a small app, this is minimal in terms of a difference in performance. But, as an application gets larger, the delay in these JavaScript processes gets longer and longer. We can also debug the app using the debugging inspector in the Safari's Web Inspector tool. Click Debugger in the top menu panel in Safari's Web Inspector. We can add a break point to our embedded script by clicking a line number and then refreshing the page with Command + R. In the following screenshot, we can see the break occurring on page load, and we can see our scope variable displayed for reference in the right panel: We can also check page load times using the timeline inspector. Click Timelines at the top of the Web Inspector and now we will see a timeline similar to the Resources tab found in Chrome's Developer tools. Let's refresh our page with Command + R on our keyboard; the timeline then processes the page. Notice that after a few seconds, the timeline in the Web Inspector stops when the page fully loads, and all JavaScript processes stop. This is a nice feature when you're working with the Safari Web Inspector as opposed to Chrome's Developer tools. Common ways to improve hybrid performance With hybrid apps, we can use all the techniques for improving performance using a build system such as Grunt.js or Gulp.js with NPM, using JSLint to better optimize our code, writing code in an IDE to create better structure for our apps, and helping to check for any excess code or unused variables in our code. We can use best performance practices such as using strings to apply an HTML page (like the innerHTML property) rather than creating objects for them and applying them to the page that way, and so on. Sadly, the fact that hybrid apps do not perform as well as native apps still holds true. Now, don't let that dismay you as hybrid apps do have a lot of good features! 
Some of these are as follows: They are (typically) faster to build than using native code They are easier to customize They allow for rapid prototyping concepts for apps They are easier to hand off to other JavaScript developers rather than finding a native developer They are portable; they can be reused for another platform (with some modification) for Android devices, Windows Modern apps, Windows Phone apps, Chrome OS, and even Firefox OS They can interact with native code using helper libraries such as Cordova At some point, however, application performance will be limited to the hardware of the device, and it's recommended you move to native code. But, how do we know when to move? Well, this can be done using Color Blended Layers. The Color Blended Layers option applies an overlay that highlights slow-performing areas on the device display, for example, green for good performance and red for slow performance; the darker the color is, the more impactful will be the performance result. Rerun your app using Xcode and, in the Mac OS toolbar for iOS Simulator, select Debug and then Color Blended Layers. Once we do that, we can see that our iOS Simulator shows a green overlay; this shows us how much memory iOS is using to process our rendered view, both native and non-native code, as shown here: Currently, we can see a mostly green overlay with the exception of the status bar elements, which take up more render memory as they overlay the web view and have to be redrawn over that object repeatedly. Let's make a copy of our project and call it JS_Performance_CBL, and let's update our index.html code with this code sample, as shown in the following screenshot: Here, we have a simple page with an empty div; we also have a button with an onclick function called start. Our start function will update the height continuously using the setInterval function, increasing the height every millisecond. Our empty div also has a background gradient assigned to it with an inline style tag. CSS background gradients are typically a huge performance drain on mobile devices as they can potentially re-render themselves over and over as the DOM updates itself. Some other issues include listener events; some earlier or lower-end devices do not have enough RAM to apply an event listener to a page. Typically, it's a good practice to apply onclick attributes to HTML either inline or through JavaScript. Going back to the gradient example, let's run this in iOS Simulator and enable Color Blended Layers after clicking our HTML button to trigger the JavaScript animation. As expected, our div element that we've expanded now has a red overlay indicating that this is a confirmed performance issue, which is unavoidable. To correct this, we would need to remove the CSS gradient background, and it would show as green again. However, if we had to include a gradient in accordance with a design spec, a native version would be required. When faced with UI issues such as these, it's important to understand tools beyond normal developer tools and Web Inspectors, and take advantage of the mobile platform tools that provide better analysis of our code. Now, before we wrap this article, let's take note of something specific for iOS web views. 
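The JS_Performance_CBL page just described also only appears as a screenshot. A hedged reconstruction based on that description looks roughly like the following; the gradient colors, starting height, and interval delay are assumptions:

<!DOCTYPE html>
<html>
<body>
  <!-- Inline CSS gradient: expensive to repaint as the div grows -->
  <div id="box" style="width: 100%; height: 10px;
       background: linear-gradient(to bottom, #1e5799, #7db9e8);"></div>
  <button onclick="start()">Start</button>
  <script>
    // Grow the div every millisecond so the gradient repaints constantly.
    function start() {
      var box = document.getElementById('box');
      var height = 10;
      setInterval(function () {
        height += 1;
        box.style.height = height + 'px';
      }, 1);
    }
  </script>
</body>
</html>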
The WKWebView framework At the time of writing, Apple has announced the WebKit framework, a first-party iOS library intended to replace UIWebView with more advanced and better performing web views; this was done with the intent of replacing apps that rely on HTML5 and JavaScript with better performing apps as a whole. The WebKit framework, also known in developer circles as WKWebView, is a newer web view that can be added to a project. WKWebView is also the base class name for this framework. This framework includes many features that native iOS developers can take advantage of. These include listening for function calls that can trigger native Objective-C or Swift code. For JavaScript developers like us, it includes a faster JavaScript runtime called Nitro, which has been included with Mobile Safari since iOS6. Hybrid apps have always run worse that native code. But with the Nitro JavaScript runtime, HTML5 has equal footing with native apps in terms of performance, assuming that our view doesn't consume too much render memory as shown in our color blended layers example. WKWebView does have limitations though; it can only be used for iOS8 or higher and it doesn't have built-in Storyboard or XIB support like UIWebView. So, using this framework may be an issue if you're new to iOS development. Storyboards are simply XML files coded in a specific way for iOS user interfaces to be rendered, while XIB files are the precursors to Storyboard. XIB files allow for only one view whereas Storyboards allow multiple views and can link between them too. If you are working on an iOS app, I encourage you to reach out to your iOS developer lead and encourage the use of WKWebView in your projects. For more information, check out Apple's documentation of WKWebView at their developer site at https://developer.apple.com/library/IOs/documentation/WebKit/Reference/WKWebView_Ref/index.html. Summary In this article, we learned the basics of creating a hybrid-application for iOS using HTML5 and JavaScript; we learned about connecting the Safari Web Inspector to our HTML page while running an application in iOS Simulator. We also looked at Color Blended Layers for iOS Simulator, and saw how to test for performance from our JavaScript code when it's applied to device-rendering performance issues. Now we are down to the wire. As for all JavaScript web apps before they go live to a production site, we need to smoke-test our JavaScript and web app code and see if we need to perform any final improvements before final deployment. Resources for Article: Further resources on this subject: GUI Components in Qt 5 [article] The architecture of JavaScriptMVC [article] JavaScript Promises – Why Should I Care? [article]

WooCommerce Basics

Packt
01 Apr 2015
16 min read
In this article by Patrick Rauland, author of the book WooCommerce Cookbook, we will focus on the following topics: Installing WooCommerce Installing official WooThemes plugins Manually creating WooCommerce pages Creating a WooCommerce plugin (For more resources related to this topic, see here.) A few years ago, building an online store used to be an incredibly complex task. You had to install bulky software onto your own website and pay expensive developers a significant sum of money to customize even the simplest elements of your store. Luckily, nowadays, adding e-commerce functionality to your existing WordPress-powered website can be done by installing a single plugin. In this article, we'll go over the settings that you'll need to configure before launching your online store with WooCommerce. Most of the recipes in this article are simple to execute. We do, however, add a relatively complex recipe near the end of the article to show you how to create a plugin specifically for WooCommerce. If you're going to be customizing WooCommerce with code, it's definitely worth looking at that recipe to know the best way to customize WooCommerce without affecting other parts of your site. The recipes in this article form the very basics of setting up a store, installing plugins that enhance WooCommerce, and managing those plugins. There are recipes for official WooCommerce plugins written using WooThemes as well as a recipe for unofficial plugins. Feel free to select either one. In general, the official plugins are better supported, more up to date, and have more functionality than unofficial plugins. You could always try an unofficial plugin to see whether it meets your needs, and if it doesn't, then use an official plugin that is much more likely to meet your needs. At the end of this article, your store will be fully functional and ready to display products. Installing WooCommerce WooCommerce is a WordPress plugin, which means that you need to have WordPress running on your own server to add WooCommerce. The first step is to install WooCommerce. You could do this on an established website or a brand new website—it doesn't matter. Since e-commerce is more complex than your average plugin, there's more to the installation process than just installing the plugin. Getting ready Make sure you have the permissions necessary to install plugins on your WordPress site. The easiest way to have the correct permissions is to make sure your account on your WordPress site has the admin role. How to do it… There are two parts to this recipe. The first part is installing the plugin and the second step is adding the required pages to the site. Let's have a look at the following steps for further clarity: Log in to your WordPress site. Click on the Plugins menu. Click on the Add New menu item. These steps have been demonstrated visually in the following screenshot: Search for WooCommerce. Click on the Install Now button, as shown in the following screenshot: Once the plugin has been installed, click on the Activate Plugin button. You now have WooCommerce activated on your site, which means we're half way there. E-commerce platforms need to have certain pages (such as a cart page, a checkout page, an account page, and so on) to function. We need to add those to your site. Click on the Install WooCommerce Pages button, which appears after you've activated WooCommerce. 
This is demonstrated in the following screenshot: How it works… WordPress has an infrastructure that allows any WordPress site to install a plugin hosted on WordPress.org. This is a secure process that is managed by WordPress.org. Installing the WooCommerce pages allows all of the e-commerce functionality to run. Without installing the pages, WooCommerce won't know which page is the cart page or the checkout page. Once these pages are set up, we're ready to have a basic store up and running. If WordPress prompts you for FTP credentials when installing the plugin, that's likely to be a permissions issue with your web host. It is a huge pain if you have to enter FTP credentials every time you want to install or update a plugin, and it's something you should take care of. You can send this link to your web host provider so they know how to change their permissions. You can refer to http://www.chrisabernethy.com/why-wordpress-asks-connection-info/ for more information to resolve this WordPress issue. Installing official WooThemes plugins WooThemes doesn't just create the WooCommerce plugin. They also create standalone plugins and hundreds of extensions that add extra functionality to WooCommerce. The beauty of this system is that WooCommerce is very easy to use because users only add extra complexity when they need it. If you only need simple shipping options, you don't ever have to see the complex shipping settings. On the WooThemes website, you may browse for WooCommerce extensions, purchase them, and download and install them on your site. WooThemes has made the whole process very easy to maintain. They have built an updater similar to the one in WordPress, which, once configured, will allow a user to update a plugin with one click instead of having to through the whole plugin upload process again. Getting ready Make sure you have the necessary permissions to install plugins on your WordPress site. You also need to have a WooThemes product. There are several free WooThemes products including Pay with Amazon which you can find at http://www.woothemes.com/products/pay-with-amazon/. How to do it… There are two parts to this recipe. The first part is installing the plugin and the second step is adding your license for future updates. Follow these steps: Log in to http://www.woothemes.com. Click on the Downloads menu: Find the product you wish to download and click on the Download link for the product. You will see that you get a ZIP file. On your WordPress site, go the Plugins menu and click on Add New. Click on Upload Plugin. Select the file you just downloaded and click on the Install Now button. After the plugin has finished installing, click on the Activate Plugin link. You now have WooCommerce as well as a WooCommerce extension activated on your site. They're both functioning and will continue to function. You will, however, want to perform a few more steps to make sure it's easy to update your extensions: Once you have an extension activated on your site, you'll see a link in the WordPress admin: Install the WooThemes Updater plugin. Click on that link: The updater will be installed automatically. Once it is installed, you need to activate the updater. After activation, you'll see a new link in the WordPress admin: activate your product licenses. Click that link to go straight to the page where you can enter your licenses. You could also navigate to that page manually by going to Dashboard | WooThemes Helper from the menu. 
Keep your WordPress site open in one tab and log back in to your WooThemes account in another browser tab. On the WooThemes browser tab, go to My Licenses and you'll see a list of your products with a license key under the heading KEY: Copy the key, go back to your WordPress site, and enter it in the Licenses field. Click on the Activate Products button at the bottom of the page. The activation process can take a few seconds to complete. If you've successfully put in your key, you should see a message at the top of the screen saying so. How it works… A plugin that's not hosted on WordPress.org can't update without someone manually reuploading it. The WooThemes updater was built to make this process easier so you can press the update button and have your website do all the heavy lifting. Some websites sell official WooCommerce plugins without a license key. These sales aren't licensed and you won't be getting updates, bug fixes, or access to the support desk. With a regular website, it's important to stay up to date. However, with e-commerce, it's even more important since you'll be handling very sensitive payment information. That's why I wouldn't ever recommend using a plugin that can't update. Manually creating WooCommerce pages Every e-commerce platform will need to have some way of creating extra pages for e-commerce functionality, such as a cart page, a checkout page, an account page, and so on. WooCommerce prompts to helps you create these pages for you when you first install the plugin. So if you installed it correctly, you shouldn't have to do this. But if you were trying multiple e-commerce systems and for some reason deleted some pages, you may have to recreate those pages. How to do it… There's a very useful Tools menu in WooCommerce. It's a bit hard to find since you won't be needing it everyday, but it has some pretty useful tools if you ever need to do some troubleshooting. One of these tools is the one that allows you to recreate your WooCommerce pages. Let's have a look at how to use that tool: Log in to the WordPress admin. Click on WooCommerce | System Status: Click on Tools: Click on the Install Pages button: How it works… WooCommerce keeps track of which pages run e-commerce functionality. When you click on the Install Pages button, it checks which pages exist and if they don't exist, it will automatically create them for you. You could create them by creating new WordPress pages and then manually assigning each page with specific e-commerce functionality. You may want to do this if you already have a cart page and don't want to recreate a new cart page but just copy the content from the old page to the new page. All you want to do is tell WooCommerce which page should have the cart functionality. Let's have a look at the following manual settings: The Cart, Checkout, and Terms & Conditions page can all be set by going to WooCommerce | Settings | Checkout The My Account page can be set by going to WooCommerce | Settings | Accounts There's more... You can manually set some pages, such as the Cart and Checkout page, but you can't set subpages. WooCommerce uses a WordPress functionality called end points to create these subpages. Pages such as the Order Received page, which is displayed right after payment, can't be manually created. These endpoints are created on the fly based on the parent page. The Order Received page is part of the checkout process, so it's based on the Checkout page. 
Any content on the Checkout page will appear on both the Checkout page and on the Order Received page. You can't add content to the parent page without it affecting the subpage, but you can change the subpage URLs. The checkout endpoints can be configured by going to WooCommerce | Settings | Checkout | Checkout Endpoints. Creating a WooCommerce plugin Unlike a lot of hosted e-commerce solutions, WooCommerce is entirely customizable. That's one of the huge advantages for anyone who builds on open source software. If you don't like it, you can change it. At some point, you'll probably want to change something that's not on a settings page, and that's when you may want to dig into the code. Even if you don't know how to code, you may want to look this over so that when you work with a developer, you would know they're doing it the right way. Getting ready In addition to having admin access to a WordPress site, you'll also need FTP credentials so you can upload a plugin. You'll also need a text editor. Popular code editors include Sublime Text, Coda, Dreamweaver, and Atom. I personally use Atom. You could also use Notepad on a Windows machine or Text Edit on a Mac in a pinch. How to do it… We're going to be creating a plugin that interacts with WooCommerce. It will take the existing WooCommerce functionality and change it. These are the WooCommerce basics. If you build a plugin like this correctly, when WooCommerce isn't active, it won't do anything at all and won't slow down your website. Let's create a plugin by performing the following steps: Open your text editor and create a new file. Save the file as woocommerce-demo-plugin.php. In that file, add the opening PHP tag, which looks like this: <?php. On the next line, add a plugin header. This allows WordPress to recognize the file as a plugin so that it can be activated. It looks something like the following: /** * Plugin Name: WooCommerce Demo Plugin * Plugin URI: https://gist.github.com/BFTrick/3ab411e7cec43eff9769 * Description: A WooCommerce demo plugin * Author: Patrick Rauland * Author URI: http://speakinginbytes.com/ * Version: 1.0 * * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see <http://www.gnu.org/licenses/>. * */ Now that WordPress knows that your file is a plugin, it's time to add some functionality to this. The first thing a good developer does is makes sure their plugin won't conflict with another plugin. To do that, we make sure an existing class doesn't have the same name as our class. I'll be using the WC_Demo_Plugin class, but you can use any class name you want. Add the following code beneath the plugin header: if ( class_exists( 'WC_Demo_Plugin' ) ) {    return; }   class WC_Demo_Plugin {   } Our class doesn't do anything yet, but at least we've written it in such a way that it won't break another plugin. There's another good practice we should add to our plugin before we add the functionality, and that's some logic to make sure another plugin won't misuse our plugin. 
In the vast majority of use cases, you want to make sure there can't be two instances of your code running. In computer science, this is called the Singleton pattern. This can be controlled by tracking the instances of the plugin with a variable. Right after the WC_Demo_Plugin { line, add the following: protected static $instance = null;     /** * Return an instance of this class. * * @return object A single instance of this class. * @since 1.0 */ public static function get_instance() {    // If the single instance hasn't been set, set it now.    if ( null == self::$instance ) {        self::$instance = new self;    }      return self::$instance; } And get the plugin started by adding this right before the endif; line: add_action( 'plugins_loaded', array( 'WC_Demo_Plugin', 'get_instance' ), 0 ); At this point, we've made sure our plugin doesn't break other plugins and we've also dummy-proofed our own plugin so that we or other developers don't misuse it. Let's add just a bit more logic so that we don't run our logic unless WooCommerce is already loaded. This will make sure that we don't accidentally break something if we turn WooCommerce off temporarily. Right after the protected static $instance = null; line, add the following: /** * Initialize the plugin. * * @since 1.0 */ private function __construct() {    if ( class_exists( 'WooCommerce' ) ) {      } } And now our plugin only runs when WooCommerce is loaded. I'm guessing that at this point, you finally want it to do something, right? After we make sure WooCommerce is running, let's add some functionality. Right after the if ( class_exists( 'WooCommerce' ) ) { line, add the following code so that we add an admin notice: // print an admin notice to the screen. add_action( 'admin_notices', array( $this, 'my_admin_notice' ) ); This code will call a method called my_admin_notice, but we haven't written that yet, so it's not doing anything. Let's write that method. Have a look at the __construct method, which should now look like this: /** * Initialize the plugin. * * @since 1.0 */ private function __construct() {    if ( class_exists( 'WooCommerce' ) ) {          // print an admin notice to the screen.        add_action( 'admin_notices', array( $this, 'display_admin_notice' ) );      } } Add the following after the preceding __construct method: /** * Print an admin notice * * @since 1.0 */ public function display_admin_notice() {    ?>    <div class="updated">        <p><?php _e( 'The WooCommerce dummy plugin notice.', 'woocommerce-demo-plugin' ); ?></p>    </div>    <?php } This will print an admin notice on every single admin page. This notice includes all the messages you typically see in the WordPress admin. You could replace this admin notice method with just about any other hook in WooCommerce to provide additional customizations in other areas of WooCommerce, whether it be for shipping, the product page, the checkout process, or any other area. This plugin is the easiest way to get started with WooCommerce customizations. If you'd like to see the full code sample, you can see it at https://gist.github.com/BFTrick/3ab411e7cec43eff9769. Now that the plugin is complete, you need to upload it to your plugins folder. You can do this via the WordPress admin or more commonly via FTP. Once the plugin has been uploaded to your site, you'll need to activate the plugin just like any other WordPress plugin. The end result is a notice in the WordPress admin letting us know we did everything successfully. 
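To illustrate that point about swapping in other hooks, here is a hedged sketch of how the same skeleton could act on the product page instead of the admin, by printing a short note after the add to cart button. The woocommerce_after_add_to_cart_button action is a standard WooCommerce hook; the method name and message below are made up for this example:

// Registered inside __construct(), alongside the admin_notices hook:
add_action( 'woocommerce_after_add_to_cart_button', array( $this, 'display_product_note' ) );

/**
 * Print a short note under the add to cart button on product pages.
 *
 * @since 1.0
 */
public function display_product_note() {
    echo '<p class="wc-demo-note">';
    echo esc_html__( 'Ships within 2 business days.', 'woocommerce-demo-plugin' );
    echo '</p>';
}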
Whenever possible, use object-oriented code. That means using objects (like the WC_Demo_Plugin class) to encapsulate your code. It will prevent a lot of naming conflicts down the road. If you see some procedural code online, you can usually convert it to object-oriented code pretty easily. Summary In this article, you have learned the basic steps in installing WooCommerce, installing WooThemes plugins, manually creating WooCommerce pages, and creating a WooCommerce plugin. Resources for Article: Further resources on this subject: Creating Blog Content in WordPress [article] Tips and Tricks [article] Setting Up WooCommerce [article]

An introduction to testing AngularJS directives

Packt
01 Apr 2015
14 min read
In this article by Simon Bailey, the author of AngularJS Testing Cookbook, we will cover the following recipes: Starting with testing directives Setting up templateUrl Searching elements using selectors Accessing basic HTML content Accessing repeater content (For more resources related to this topic, see here.) Directives are the cornerstone of AngularJS and can range in complexity providing the foundation to many aspects of an application. Therefore, directives require comprehensive tests to ensure they are interacting with the DOM as intended. This article will guide you through some of the rudimentary steps required to embark on your journey to test directives. The focal point of many of the recipes revolves around targeting specific HTML elements and how they respond to interaction. You will learn how to test changes on scope based on a range of influences and finally begin addressing testing directives using Protractor. Starting with testing directives Testing a directive involves three key steps that we will address in this recipe to serve as a foundation for the duration of this article: Create an element. Compile the element and link to a scope object. Simulate the scope life cycle. Getting ready For this recipe, you simply need a directive that applies a scope value to the element in the DOM. For example: angular.module('chapter5', []) .directive('writers', function() {    return {      restrict: 'E',      link: function(scope, element) {        element.text('Graffiti artist: ' + scope.artist);      }    }; }); How to do it… First, create three variables accessible across all tests:     One for the element: var element;     One for scope: var scope;     One for some dummy data to assign to a scope value: var artist = 'Amara Por Dios'; Next, ensure that you load your module: beforeEach(module('chapter5')); Create a beforeEach function to inject the necessary dependencies and create a new scope instance and assign the artist to a scope: beforeEach(inject(function ($rootScope, $compile) { scope = $rootScope.$new(); scope.artist = artist; })); Next, within the beforeEach function, add the following code to create an Angular element providing the directive HTML string: element = angular.element('<writers></writers>'); Compile the element providing our scope object: $compile(element)(scope); Now, call $digest on scope to simulate the scope life cycle: scope.$digest(); Finally, to confirm whether these steps work as expected, write a simple test that uses the text() method available on the Angular element. The text() method will return the text contents of the element, which we then match against our artist value: it('should display correct text in the DOM', function() { expect(element.text()).toBe('Graffiti artist: ' + artist); }); Here is what your code should look like to run the final test: var scope; var element; var artist;   beforeEach(module('chapter5'));   beforeEach(function() { artist = 'Amara Por Dios'; });   beforeEach(inject(function($compile) { element = angular.element('<writers></writers>'); scope.artist = artist; $compile(element)(scope); scope.$digest(); }));   it('should display correct text in the DOM', function() {    expect(element.text()).toBe('Graffiti artist: ' + artist); }); How it works… In step 4, the directive HTML tag is provided as a string to the angular.element function. 
How it works…

In step 4, the directive HTML tag is provided as a string to the angular.element() function. This function wraps a raw DOM element or an HTML string as a jQuery element if jQuery is available; otherwise, it falls back to Angular's jqLite, which is a subset of jQuery. The wrapper exposes a range of useful jQuery methods to interact with the element and its content (for a full list of the available methods, visit https://docs.angularjs.org/api/ng/function/angular.element).

In step 5, the element is compiled into a template using the $compile service. The $compile service compiles HTML strings into a template and produces a template function, which can then be used to link the scope and the template together; step 5 demonstrates exactly this, linking the scope object created in step 3. The final step in getting our directive into a testable state is step 6, where we call $digest to simulate the scope life cycle. This is normally driven by AngularJS itself in the browser and therefore needs to be called explicitly in a unit-test environment such as this, as opposed to end-to-end tests using Protractor.

There's more…

One beforeEach() method containing the logic covered in this recipe can be used as a reference to work from for the rest of this article:

    beforeEach(inject(function($rootScope, $compile) {
      // Create scope
      scope = $rootScope.$new();
      // Replace with the appropriate HTML string
      element = angular.element('<deejay></deejay>');
      // Replace with test scope data
      scope.deejay = deejay;
      // Compile
      $compile(element)(scope);
      // Digest
      scope.$digest();
    }));

See also

The Setting up templateUrl recipe
The Searching elements using selectors recipe
The Accessing basic HTML content recipe
The Accessing repeater content recipe

Setting up templateUrl

It's fairly common to separate template content into an HTML file that is requested on demand when the directive is invoked, using the templateUrl property. However, when testing directives that make use of templateUrl, we need to load and preprocess the HTML files into AngularJS templates. Luckily, the AngularJS team preempted our dilemma and provided a solution using Karma and the karma-ng-html2js-preprocessor plugin. This recipe shows you how to use Karma to test a directive that uses the templateUrl property.

Getting ready

For this recipe, you will need to ensure the following:

You have installed Karma.
You have installed the karma-ng-html2js-preprocessor plugin by following the instructions at https://github.com/karma-runner/karma-ng-html2js-preprocessor/blob/master/README.md#installation.
You have configured the karma-ng-html2js-preprocessor plugin by following the instructions at https://github.com/karma-runner/karma-ng-html2js-preprocessor/blob/master/README.md#configuration.
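To give a concrete idea of what that last configuration point involves, a karma.conf.js fragment along the following lines is typical. This is a sketch based on the plugin's README rather than part of the recipe, and the file paths and globs are hypothetical placeholders for your own project layout:

    // karma.conf.js (fragment) -- hypothetical paths; adjust to your layout
    module.exports = function(config) {
      config.set({
        // Templates must be listed in files so Karma serves and preprocesses them
        files: ['app/**/*.js', 'test/**/*.spec.js', 'template.html'],
        preprocessors: {
          '**/*.html': ['ng-html2js']
        }
        // Optionally, ngHtml2JsPreprocessor: { stripPrefix: 'app/' } can be used
        // to make the generated cache ids match the directive's templateUrl
      });
    };

Without a moduleName option, the preprocessor generates one AngularJS module per HTML file, named after the file's path, which is why the test in this recipe loads the 'template.html' module alongside chapter5.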
Finally, you'll need a directive that loads an HTML file using templateUrl; for this example, it applies a scope value to the element in the DOM. Consider the following example:

    angular.module('chapter5', [])
      .directive('emcees', function() {
        return {
          restrict: 'E',
          templateUrl: 'template.html',
          link: function(scope, element) {
            scope.emcee = scope.emcees[0];
          }
        };
      });

An example template could be as simple as what we will use for this example (template.html):

    <h1>{{emcee}}</h1>

How to do it…

1. First, create three variables accessible across all tests:
   - One for the element: var element;
   - One for the scope: var scope;
   - One for some dummy data to assign to a scope value: var emcees = ['Roxanne Shante', 'Mc Lyte'];
2. Next, ensure that you load your module:

    beforeEach(module('chapter5'));

3. We also need to load the actual template. We can do this by simply appending the file name to the beforeEach function we just created in step 2:

    beforeEach(module('chapter5', 'template.html'));

4. Next, create a beforeEach function to inject the necessary dependencies, create a new scope instance, and assign the emcees to the scope:

    beforeEach(inject(function($rootScope, $compile) {
      scope = $rootScope.$new();
      scope.emcees = emcees;
    }));

5. Within the beforeEach function, add the following code to create an Angular element, providing the directive HTML string:

    element = angular.element('<emcees></emcees>');

6. Compile the element, providing our scope object:

    $compile(element)(scope);

7. Call $digest on scope to simulate the scope life cycle:

    scope.$digest();

8. Next, create a basic test to establish that the text contained within the h1 tag is what we expect:

    it('should display the first emcee in the h1 tag', function() {});

9. Now, retrieve a reference to the h1 tag using the find() method on the element, providing the tag name as the selector:

    var h1 = element.find('h1');

10. Finally, add the expectation that the h1 tag text matches the first emcee from the array we provided in step 4:

    expect(h1.text()).toBe(emcees[0]);

When you run Karma, you should see this test passing in your console output.

How it works…

The karma-ng-html2js-preprocessor plugin works by converting HTML files into JavaScript strings and generating AngularJS modules, which we load in step 3. Once loaded, AngularJS makes these templates available by putting them into the $templateCache. There are libraries available to help incorporate this into your project's build process using Grunt or Gulp; a popular example for Gulp is available at https://github.com/miickel/gulp-angular-templatecache.

Now that the template is available, we can access the HTML content through the compiled element we created in step 5. In this recipe, we access the text content of the element using the find() method. Be aware that the jqLite subset of jQuery has certain limitations compared to the full-blown jQuery version; the find() method in particular is limited to lookups by tag name only. To read more about the find() method, visit the jQuery API documentation at http://api.jquery.com/find.

See also

The Starting with testing directives recipe
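If a templateUrl test fails with an unexpected GET request for the HTML file, it usually means the generated template module was not loaded. A quick check you can add to the spec above (not part of the recipe, and assuming the cache id matches the file name) is to assert that the template really is sitting in $templateCache:

    it('should have preloaded template.html into $templateCache', inject(function($templateCache) {
      // The cache id must match the directive's templateUrl exactly; tweak the
      // preprocessor's stripPrefix/prependPrefix options if it does not
      expect($templateCache.get('template.html')).toContain('{{emcee}}');
    }));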
Searching elements using selectors

Directives, as you should know, attach special behavior to a DOM element. When AngularJS compiles and returns the element on which the directive is applied, it is wrapped by either jqLite or jQuery. This exposes an API on the element, offering many useful methods to query the element and its contents. In this recipe, you will learn how to use these methods to retrieve elements using selectors.

Getting ready

Define a beforeEach() function with the relevant logic to set up a directive, as outlined in the Starting with testing directives recipe in this article. For this recipe, you can replicate the template suggested in the first recipe's There's more… section. For the purpose of this recipe, I tested against a property on scope named deejay:

    var deejay = {
      name: 'Shortee',
      style: 'turntablism'
    };

You can replace this with whatever code you have within the directive you're testing.

How to do it…

1. First, create a basic test to establish that an h2 element can be retrieved from the compiled element:

    it('should return an element using find()', function() {});

2. Next, retrieve a reference to the h2 tag using the find() method on the element, providing the tag name as the selector:

    var h2 = element.find('h2');

3. Finally, create an expectation that the element is actually defined:

    expect(h2[0]).toBeDefined();

How it works…

In step 2, we use the find() method with the h2 selector; the result is what we test in step 3's expectation. Remember, the returned element is wrapped by jqLite or jQuery. Therefore, even if no match is found, the returned object will still have jQuery-specific properties, which means we cannot base an expectation on the wrapper alone being defined. A simple way to determine whether the element itself was found is to access it via jQuery's internal array of DOM objects, typically the first entry. This is why we run the expectation against h2[0] as opposed to h2 itself.

There's more…

Here is an example using the querySelector() method. The querySelector() method is available on the actual DOM, so we need to call it on a raw HTML element and not on the jQuery-wrapped element. The following code uses a CSS class selector (note that querySelector() returns null when nothing matches, so asserting not.toBeNull() is more meaningful than toBeDefined()):

    it('should return an element using querySelector and a CSS class selector', function() {
      var elementByClass = element[0].querySelector('.deejay-style');
      expect(elementByClass).not.toBeNull();
    });

Here is another example using the querySelector() method, this time with an id selector:

    it('should return an element using querySelector and an id selector', function() {
      var elementById = element[0].querySelector('#deejay_name');
      expect(elementById).not.toBeNull();
    });

You can read more about the querySelector() method at https://developer.mozilla.org/en-US/docs/Web/API/document.querySelector.

See also

The Starting with testing directives recipe
The Accessing basic HTML content recipe

Accessing basic HTML content

A substantial number of directive tests involve interacting with the HTML content within the rendered template. This recipe will teach you how to test whether a directive's HTML content is as expected.

Getting ready

Define a beforeEach() function with the relevant logic to set up a directive, as outlined in the Starting with testing directives recipe in this article. For this recipe, you can replicate the template suggested in the first recipe's There's more… section. For the purpose of this recipe, I will test against a property on scope named deejay:

    var deejay = {
      name: 'Shortee',
      style: 'turntablism'
    };

You can replace this with whatever code you have within the directive you're testing. A minimal deejay template that satisfies the selectors used in this and the previous recipe is sketched below.
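The book leaves the deejay markup to you, so treat the following purely as an illustrative assumption rather than the book's own code: the element name matches the '<deejay></deejay>' string from the reference beforeEach(), and the markup provides the h2 tag, the deejay-style class, and the deejay_name id queried in these recipes.

    // Illustrative only: a minimal deejay directive compatible with the
    // selectors used in the Searching elements and Accessing basic HTML recipes
    angular.module('chapter5')
      .directive('deejay', function() {
        return {
          restrict: 'E',
          template: '<h2 id="deejay_name">{{deejay.name}}</h2>' +
                    '<p class="deejay-style">{{deejay.style}}</p>'
        };
      });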
How to do it…

1. First, create a basic test to establish that the HTML code contained within an h2 tag is as we expect:

    it('should display correct deejay data in the DOM', function() {});

2. Next, retrieve a reference to the h2 tag using the find() method on the element, providing the tag name as the selector:

    var h2 = element.find('h2');

3. Finally, using the html() method on the element returned in step 2, get the HTML contents within an expectation that the h2 tag's HTML matches our scope's deejay name:

    expect(h2.html()).toBe(deejay.name);

How it works…

We make heavy use of the jQuery (or jqLite) methods available on our element. In step 2, we use the find() method with the h2 selector. This returns a match that we utilize further in step 3, in an expectation where we access the HTML contents of the element, this time using the html() method (http://api.jquery.com/html/).

There's more…

We could also run a similar expectation for the text within our h2 element using the text() method (http://api.jquery.com/text/), for example:

    it('should retrieve text from <h2>', function() {
      var h2 = element.find('h2');
      expect(h2.text()).toBe(deejay.name);
    });

See also

The Starting with testing directives recipe
The Searching elements using selectors recipe

Accessing repeater content

AngularJS facilitates generating repeated content with ease using the ngRepeat directive. In this recipe, we'll learn how to access and test repeated content.

Getting ready

Define a beforeEach() function with the relevant logic to set up a directive, as outlined in the Starting with testing directives recipe in this article. For this recipe, you can replicate the template suggested in the first recipe's There's more… section. For the purpose of this recipe, I tested against a property on scope named breakers:

    var breakers = [{
      name: 'China Doll'
    }, {
      name: 'Crazy Legs'
    }, {
      name: 'Frosty Freeze'
    }];

You can replace this with whatever code you have within the directive you're testing.

How to do it…

1. First, create a basic test to establish that the repeated list displays the breaker names we expect:

    it('should display the correct breaker name', function() {});

2. Next, retrieve a reference to the li tags using the find() method on the element, providing the tag name as the selector:

    var list = element.find('li');

3. Finally, targeting the first element in the list, retrieve its text content and expect it to match the first item in the breakers array:

    expect(list.eq(0).text()).toBe('China Doll');

How it works…

In step 2, the find() method with li as the selector returns all the list items. In step 3, the eq() method (http://api.jquery.com/eq/) on the collection returned in step 2 gives us the element at a specific index, zero in this particular case. As the object returned by eq() is a jQuery object, we can call the text() method on it to get the element's text content. We can then run an expectation that the first li tag's text matches the first breaker within the scope array. A related check on the number of rendered items is sketched after the See also list below.

See also

The Starting with testing directives recipe
The Searching elements using selectors recipe
The Accessing basic HTML content recipe
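Beyond spot-checking individual items, it is often worth asserting that ngRepeat rendered one li per array element. This is not part of the book's recipe, but a minimal sketch using the same breakers data would be:

    it('should render one list item per breaker', function() {
      // find('li') returns a jqLite/jQuery collection, so its length reflects
      // how many items the repeater actually rendered
      var list = element.find('li');
      expect(list.length).toBe(breakers.length);
    });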
Summary

In this article, you learned the rudiments of testing directives: compiling an element against a scope, loading external templates with templateUrl, and querying the rendered DOM with selectors, including repeated content. Directives are one of the important jewels of AngularJS and can range in complexity; they provide the foundation for many aspects of an application and therefore require comprehensive tests.

Resources for Article:

Further resources on this subject:

The First Step [article]
AngularJS Performance [article]
Our App and Tool Stack [article]